Analyzing MySQL slow query log for performance bottlenecks
Jul 02, 2025 04:46 PM

The MySQL slow query log is used to locate database performance bottlenecks. Enable it (slow_query_log = 1), set the log file path and the time threshold (long_query_time), and MySQL records every SQL statement that runs longer than the threshold. When reading the log, pay attention to query time, rows examined, and rows returned. Common problems include full table scans caused by missing indexes, unnecessary sorting or grouping, and inefficient join queries. Optimization suggestions: 1. use EXPLAIN to analyze the execution plan and add appropriate indexes; 2. make sure sorted fields are indexed and avoid deep paging; 3. make sure join columns are indexed and simplify JOIN logic. Tools such as mysqldumpslow, pt-query-digest, or third-party monitoring platforms help you prioritize the SQL with the highest resource consumption.
MySQL's slow query log is an essential tool for locating database performance bottlenecks. If the system responds slowly, or a particular endpoint's query takes unusually long, enabling and analyzing the slow query log is the fastest way to find the problem SQL.

Turn on the slow query log
Before analyzing anything, first make sure the slow query log is enabled. By default, most MySQL instances have it turned off.

You can check whether it is enabled by:
SHOW VARIABLES LIKE 'slow_query_log';
If the result is OFF, enable it manually. Edit the configuration file my.cnf (or my.ini on Windows) and add or adjust the following:

- slow_query_log = 1
- slow_query_log_file = /var/log/mysql/slow.log
- long_query_time = 1 (unit: seconds; adjust as needed)
The settings take effect after restarting MySQL. The log can also be turned on online without a restart, but dynamic changes are lost when the server restarts, so for long-term production use they should be persisted in the configuration file.
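A minimal sketch of enabling it dynamically (the file path is illustrative; SET GLOBAL changes do not survive a server restart):

```sql
-- Enable the slow query log for all sessions without restarting.
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1;  -- seconds

-- Confirm the settings took effect.
SHOW VARIABLES LIKE 'slow_query%';
SHOW VARIABLES LIKE 'long_query_time';
```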
Note: the smaller long_query_time is, the more SQL gets recorded. That helps with troubleshooting, but it also increases log volume and overhead.
Analyze log content
Once enabled, the slow query log records every SQL statement whose execution time exceeds the threshold. Each entry usually includes:
- Query time
- Lock wait time
- Number of rows examined
- Number of rows returned
- The SQL statement itself
For example, a typical log:
# Time: 2025-04-05T10:00:00.123456Z
# Query_time: 2.34  Lock_time: 0.00  Rows_sent: 1  Rows_examined: 100000
SET timestamp=1234567890;
SELECT * FROM orders WHERE user_id = 123;
From this entry we can see:
- The query took 2.34 seconds
- It examined 100,000 rows but returned only 1, which suggests no usable index was hit
- Adding an index on the user_id field is worth considering
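As a sketch, assuming the orders table and user_id column from the log above, the check and fix could look like:

```sql
-- Inspect the execution plan; type=ALL means a full table scan.
EXPLAIN SELECT * FROM orders WHERE user_id = 123;

-- Add an index on the filter column, then re-check the plan.
ALTER TABLE orders ADD INDEX idx_user_id (user_id);
```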
Common problems and optimization suggestions
1. Missing index causes full table scan
This is one of the most common performance issues. If a statement shows a large Rows_examined while Rows_sent is very small, it is almost certainly missing a suitable index.
Suggested practices:
- Use EXPLAIN to analyze the SQL execution plan
- Add indexes to fields that are frequently used as query conditions
- Mind the column order in composite indexes, and use covering indexes where possible
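A sketch of the composite and covering index points (the orders table and its user_id and status columns are illustrative):

```sql
-- Composite index: column order matters; queries must filter on the
-- leftmost column(s) for the index to be usable.
ALTER TABLE orders ADD INDEX idx_user_status (user_id, status);

-- Covering index: the query touches only indexed columns, so MySQL can
-- answer from the index alone (EXPLAIN shows "Using index").
EXPLAIN SELECT user_id, status FROM orders WHERE user_id = 123;
```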
2. Unnecessary sorting or grouping
Some SQL statements include ORDER BY or GROUP BY, but when these operations cannot use an index, MySQL falls back to temporary tables and filesort, which is very resource-intensive.
Solutions:
- Make sure the sorted fields are indexed
- Avoid SELECT *
- Limit paging depth; performance drops sharply on the last pages of deep OFFSET paging
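A common fix for deep paging is keyset ("cursor") pagination. A sketch, assuming an orders table with an indexed auto-increment id:

```sql
-- Deep OFFSET paging: MySQL reads and discards 100,000 rows first.
SELECT id, user_id FROM orders ORDER BY id LIMIT 10 OFFSET 100000;

-- Keyset paging: remember the last id from the previous page and seek past it.
SELECT id, user_id FROM orders WHERE id > 100010 ORDER BY id LIMIT 10;
```

This works well for "next page" navigation but not for random jumps to an arbitrary page.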
3. Inefficient join queries
If a JOIN does not use indexes correctly, or too many tables are joined, the query can also slow down badly.
Optimization directions:
- Make sure the join columns are indexed
- Avoid cross-database JOINs and deeply nested subqueries
- Consider whether the business logic can be split into several simple queries
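A sketch, assuming hypothetical orders and order_items tables joined on order_id:

```sql
-- Index the join column on the driven table so the JOIN can use ref access
-- instead of scanning it once per outer row.
ALTER TABLE order_items ADD INDEX idx_order_id (order_id);

-- Verify with EXPLAIN that both tables use indexes.
EXPLAIN SELECT o.id, i.product_id
FROM orders o
JOIN order_items i ON i.order_id = o.id
WHERE o.user_id = 123;
```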
Tool-assisted analysis
You can read the log directly, but that is very inefficient with large volumes. Several tools can aggregate and analyze it for you:
- mysqldumpslow : the analysis tool that ships with MySQL, good for basic summaries
- pt-query-digest (Percona Toolkit): powerful, supports aggregation analysis and report generation
- Third-party monitoring platforms : such as Prometheus + Grafana or Alibaba Cloud DAS, which can display slow query trends graphically
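For a quick look with the built-in tool (-s t sorts by query time, -t 10 keeps the top 10; the log path is illustrative):

```shell
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
```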
An example with pt-query-digest :
pt-query-digest /var/log/mysql/slow.log > slow_report.txt
The output groups similar SQL statements together and reports average time, total execution count, and so on, so you can prioritize the "most painful" batch of SQL first.
That's basically it. Analyzing slow query logs may look tedious, but once you grasp a few key points you can locate most performance problems quickly. Focus on SQL that examines many rows but returns few, and on frequently executed slow statements.