


How Can I Calculate Working Hours Between Dates in PostgreSQL, Considering Weekends and Specific Working Hours?
Calculating Working Hours Between Dates in PostgreSQL
Introduction
Determining the number of working hours between two timestamps is essential in areas such as payroll and scheduling. In PostgreSQL, this calculation requires careful handling of weekdays and working-hour boundaries. This article outlines a solution that satisfies the following criteria:
- Weekends (Saturdays and Sundays) are excluded from working hours.
- Working hours are defined as Monday through Friday, 8 am to 3 pm.
- Fractional hours are to be included in the calculation.
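The table-based methods below (Methods 2 through 4) run against a table of timestamp pairs named t, which the queries reference but never define. A minimal sketch of a setup you could use to follow along (table and column names are taken from the queries; the sample rows come from the example input under Method 3):

CREATE TABLE t (
  t_id    int PRIMARY KEY
, t_start timestamp NOT NULL
, t_end   timestamp NOT NULL
);

INSERT INTO t (t_id, t_start, t_end) VALUES
  (1, '2009-12-03 14:00', '2009-12-04 09:00')
, (2, '2009-12-03 15:00', '2009-12-07 08:00')
, (3, '2013-06-24 07:00', '2013-06-24 12:00')
, (4, '2013-06-24 12:00', '2013-06-24 23:00')
, (5, '2013-06-23 13:00', '2013-06-25 11:00')
, (6, '2013-06-23 14:01', '2013-06-24 08:59');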
Solution
Method 1: Rounded Results for Just Two Timestamps
This approach operates on units of 1 hour: it counts only the full hours whose start falls inside the working window, so fractional hours are dropped. It is simple but less precise.
Query:
SELECT count(*) AS work_hours
FROM   generate_series (timestamp '2013-06-24 13:30'
                      , timestamp '2013-06-24 15:29' - interval '1h'
                      , interval '1h') h
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:00';
Example Input:
2013-06-24 13:30, 2013-06-24 15:29
Output:
1
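To see which hours the rounded method actually counts, you can select the generated series itself instead of counting it. For the example input, only the hour starting at 13:30 qualifies; the next step, 14:30, falls outside both the series bound and the 08:00 to 14:00 time filter, which is why the rounded result is 1 even though 1.5 working hours elapse:

SELECT h
FROM   generate_series (timestamp '2013-06-24 13:30'
                      , timestamp '2013-06-24 15:29' - interval '1h'
                      , interval '1h') h
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:00';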
Method 2: Rounded Results for a Table of Timestamps
This approach extends the previous method to handle a table of timestamp pairs.
Query:
SELECT t_id, count(*) AS work_hours
FROM  (
   SELECT t_id, generate_series (t_start, t_end - interval '1h', interval '1h') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:00'
GROUP  BY 1
ORDER  BY 1;
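One caveat with this grouped form: a t_id whose range contains no qualifying hour produces no series rows that survive the WHERE clause, so it is simply missing from the result rather than reported as 0. If zero rows should still appear, a left join against the base table is one way to do it (a sketch, keeping the same filter):

SELECT t.t_id, count(sub.h) AS work_hours
FROM   t
LEFT   JOIN (
   SELECT t_id, generate_series (t_start, t_end - interval '1h', interval '1h') AS h
   FROM   t
   ) sub ON sub.t_id = t.t_id
        AND EXTRACT(ISODOW FROM sub.h) < 6
        AND sub.h::time >= '08:00'
        AND sub.h::time <= '14:00'
GROUP  BY t.t_id
ORDER  BY t.t_id;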
Method 3: More Precise Calculation
For a finer-grained calculation, smaller time units can be considered.
Query:
SELECT t_id, count(*) * interval '5 min' AS work_interval
FROM  (
   SELECT t_id, generate_series (t_start, t_end - interval '5 min', interval '5 min') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:55'
GROUP  BY 1
ORDER  BY 1;
Example Input:
| t_id | t_start             | t_end               |
|------|---------------------|---------------------|
| 1    | 2009-12-03 14:00:00 | 2009-12-04 09:00:00 |
| 2    | 2009-12-03 15:00:00 | 2009-12-07 08:00:00 |
| 3    | 2013-06-24 07:00:00 | 2013-06-24 12:00:00 |
| 4    | 2013-06-24 12:00:00 | 2013-06-24 23:00:00 |
| 5    | 2013-06-23 13:00:00 | 2013-06-25 11:00:00 |
| 6    | 2013-06-23 14:01:00 | 2013-06-24 08:59:00 |
Output:
| t_id | work_interval |
|------|---------------|
| 1    | 02:00:00      |
| 2    | 07:00:00      |
| 3    | 04:00:00      |
| 4    | 03:00:00      |
| 5    | 10:00:00      |
| 6    | 00:55:00      |
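The 5-minute grid rounds each row down to full 5-minute slots (see t_id 6, where 59 minutes of working time show up as 00:55:00). If that is still too coarse, the same query can run on a 1-minute step at the cost of generating five times as many rows, for example:

SELECT t_id, count(*) * interval '1 min' AS work_interval
FROM  (
   SELECT t_id, generate_series (t_start, t_end - interval '1 min', interval '1 min') AS h
   FROM   t
   ) sub
WHERE  EXTRACT(ISODOW FROM h) < 6
AND    h::time >= '08:00'
AND    h::time <= '14:59'
GROUP  BY 1
ORDER  BY 1;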
Method 4: Exact Results
This approach provides exact results with microsecond precision. It is more complex, but it still generates only one series row per full hour and adjusts the boundary fractions afterwards, so it is more efficient than the 5-minute method.
Query:
WITH var AS (SELECT '08:00'::time AS v_start, '15:00'::time AS v_end)
SELECT t_id
     , COALESCE(h.h, '0')  -- add / subtract fractions
     - CASE WHEN EXTRACT(ISODOW FROM t_start) < 6
             AND t_start::time > v_start
             AND t_start::time < v_end
            THEN t_start - date_trunc('hour', t_start)
            ELSE '0'::interval END
     + CASE WHEN EXTRACT(ISODOW FROM t_end) < 6
             AND t_end::time > v_start
             AND t_end::time < v_end
            THEN t_end - date_trunc('hour', t_end)
            ELSE '0'::interval END AS work_interval
FROM   t
CROSS  JOIN var
LEFT   JOIN (  -- count full hours, similar to above solutions
   SELECT t_id, count(*)::int * interval '1h' AS h
   FROM  (
      SELECT t_id, v_start, v_end
           , generate_series (date_trunc('hour', t_start)
                            , date_trunc('hour', t_end) - interval '1h'
                            , interval '1h') AS h
      FROM   t, var
      ) sub
   WHERE  EXTRACT(ISODOW FROM h) < 6
   AND    h::time >= v_start
   AND    h::time <= v_end - interval '1h'
   GROUP  BY 1
   ) h USING (t_id)
ORDER  BY 1;
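If this calculation is needed in more than one place, the same logic can be packaged as a SQL function. The sketch below is a hypothetical wrapper (the function name work_interval and its parameters are not from the original article) that takes the two timestamps plus the working window and mirrors the query above for a single pair:

CREATE OR REPLACE FUNCTION work_interval(
  _start   timestamp
, _end     timestamp
, _v_start time DEFAULT '08:00'
, _v_end   time DEFAULT '15:00'
) RETURNS interval
LANGUAGE sql STABLE AS
$func$
SELECT h.h
     - CASE WHEN EXTRACT(ISODOW FROM _start) < 6
             AND _start::time > _v_start
             AND _start::time < _v_end
            THEN _start - date_trunc('hour', _start)
            ELSE interval '0' END
     + CASE WHEN EXTRACT(ISODOW FROM _end) < 6
             AND _end::time > _v_start
             AND _end::time < _v_end
            THEN _end - date_trunc('hour', _end)
            ELSE interval '0' END
FROM  (  -- count full hours, as in the query above
   SELECT count(*)::int * interval '1h' AS h
   FROM   generate_series (date_trunc('hour', _start)
                         , date_trunc('hour', _end) - interval '1h'
                         , interval '1h') g
   WHERE  EXTRACT(ISODOW FROM g) < 6
   AND    g::time >= _v_start
   AND    g::time <= _v_end - interval '1h'
   ) h;
$func$;

-- usage: SELECT t_id, work_interval(t_start, t_end) AS work_interval FROM t ORDER BY t_id;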
Together, these methods let you trade off simplicity, precision, and performance when calculating working hours in PostgreSQL.