
MySQL Enterprise Backup: Improved Compression Algorithms for 3.10

Jun 01, 2016, 01:13 PM

Background:

Prior to version 3.10, MySQL Enterprise Backup (MEB) used zlib for in-memory compression of data files. The compression worked by splitting the InnoDB data files into fixed-size blocks and compressing each block independently. A survey of available compression algorithms suggested several promising alternatives, which triggered the idea of benchmarking them: if the benchmarks showed improved performance, backup and/or restore could be made faster by adding a new compression algorithm to MEB.
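To make the block-wise scheme concrete, here is a minimal Python sketch of the general idea: the input file is read in fixed-size blocks and each block is compressed independently with zlib. The block size and helper name are illustrative assumptions, not MEB's actual implementation.

```python
import zlib

BLOCK_SIZE = 1024 * 1024  # illustrative 1 MiB block; not MEB's actual block size

def compress_file_blockwise(src_path, dst_path, level=1):
    """Compress a file block by block, so each block can later be
    decompressed (or processed) independently of the others."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(BLOCK_SIZE)
            if not block:
                break
            compressed = zlib.compress(block, level)
            # Store the compressed length first so a reader knows where
            # one block ends and the next begins.
            dst.write(len(compressed).to_bytes(4, "big"))
            dst.write(compressed)
```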

Implementation:

The evaluation of the algorithms proceeded as follows.

1. Select a "long list" of algorithms based on literature and what Google and other databases are using.
2. Create a prototype of MEB supporting the algorithms in the long list.
3. Run comparison tests of algorithms with the MEB prototype.
4. Select a "short list" of algorithms that will be added to MEB 3.10.

Criteria for Selecting the Algorithm:

The following criteria were used in comparing compression algorithms.

1. Compression speed
2. Decompression speed
3. Compression ratio
4. CPU-usage
5. Licensing model

These criteria differ in importance: compression speed and compression ratio are probably more important to most users than decompression speed.
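As a rough illustration of how compression speed and ratio can be compared (this is not the MEB test harness), the following Python sketch times the standard-library zlib and lzma codecs on a sample buffer; the input data is an arbitrary stand-in, whereas the MEB tests used a 441 GB database.

```python
import time
import zlib
import lzma

def benchmark(name, compress, data):
    """Measure compression time and compression ratio for one codec."""
    start = time.perf_counter()
    compressed = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"{name:>16}: {elapsed:6.2f} s, {ratio:6.1%} of original size")

if __name__ == "__main__":
    # Illustrative input: half hard to compress, half trivially compressible.
    data = open("/dev/urandom", "rb").read(16 * 1024 * 1024)
    data += bytes(16 * 1024 * 1024)
    benchmark("zlib (level=1)", lambda d: zlib.compress(d, 1), data)
    benchmark("zlib (level=9)", lambda d: zlib.compress(d, 9), data)
    benchmark("lzma (preset=1)", lambda d: lzma.compress(d, preset=1), data)
```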

Performance Test:

We have now completed the performance tests of the new compression algorithms for MEB. The table below lists the compression algorithms that were evaluated in the test.

Machine and OS Configurations:

OS: Oracle Linux 6 (x86_64)
Memory: 29 GB RAM
CPU: 8 vCPUs (2 quad-core processors, no HT)
Read speed of the source directory (data directory): 600 MB/s
Write speed of the destination directory (backup directory): 300 MB/s

A backup was taken of a 441 GB database (generated with the TPC-H datagen tool) while the mysqld process was not running.

| Compression Algorithm | Time [min] | Compr. size [GB] | Compr. / Orig. size | Avg. CPU usage | Avg. CPU idle | Reads [MB/s] | Writes [MB/s] | Source disk busy |
|---|---|---|---|---|---|---|---|---|
| Uncompressed / normal backup to directory | 31 | N/A | 100% | 20% | 65% | 250 | 250 | 100% |
| Zlib (level=1) | 34 | 165 | 37% | 82% | 15% | 220 | 90 | 70% |
| Zlib (level=9) | 720 | 120 | 27% | - | - | - | - | - |
| LZF | 27 | 222 | 50% | 45% | 50% | 270 | 140 | 100% |
| LZO | 27 | 224 | 51% | 40% | 55% | 270 | 140 | 100% |
| Snappy | 31 | 221 | 50% | 55% | 40% | 260 | 130 | 80% |
| QuickLZ | 26 | 203 | 46% | 35% | 55% | 280 | 120 | 100% |
| LZ4 | 26 | 215 | 49% | 35% | 55% | 280 | 130 | 100% |
| LZMA (level=1) | 90 | 110 | 25% | 78% | 20% | 80 | 22 | 25% |
| LZMA (level=9) | 360 | 88 | 20% | - | - | - | - | - |

A Few Important Notes:

Some columns are blank because those tests ran for a long time, so it was not feasible to collect monitoring statistics.

"Source disk busy" is the number of I/O operations per second, expressed as a percentage of what the device can execute. It is not related to the device throughput (MB/s).

MEB has an internal work queue, managed by separate read, process, and write threads. Read threads place data in the process queue, processing threads process it, and once processing is complete the data is placed in the write queue, from which it is written out to storage. Due to this design, if writes are slower than reads (as they often are), the reads are effectively throttled by the write speed, which is typically the limiting factor.
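The Python sketch below illustrates this kind of bounded read/process/write pipeline; it is an illustration of the general pattern rather than MEB's actual code (which uses multiple threads per stage). The bounded queues are what make a slow writer throttle the reader.

```python
import queue
import threading
import zlib

BLOCK_SIZE = 1024 * 1024   # illustrative block size
QUEUE_DEPTH = 8            # bounded queues throttle the faster stages

def reader(src, process_q):
    while True:
        block = src.read(BLOCK_SIZE)
        if not block:
            break
        process_q.put(block)        # blocks when the processor falls behind
    process_q.put(None)             # sentinel: no more input

def processor(process_q, write_q):
    while True:
        block = process_q.get()
        if block is None:
            write_q.put(None)
            break
        write_q.put(zlib.compress(block, 1))  # blocks when the writer falls behind

def writer(dst, write_q):
    while True:
        block = write_q.get()
        if block is None:
            break
        dst.write(block)

def backup(src, dst):
    process_q = queue.Queue(maxsize=QUEUE_DEPTH)
    write_q = queue.Queue(maxsize=QUEUE_DEPTH)
    threads = [
        threading.Thread(target=reader, args=(src, process_q)),
        threading.Thread(target=processor, args=(process_q, write_q)),
        threading.Thread(target=writer, args=(dst, write_q)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```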

Analysis of the Compression Tests:

LZ4 and QuickLZ were the fastest algorithms, while Zlib (level=9) was by far the slowest. For compression ratio, LZMA (level=9) reduced the data to 20% of its original size, whereas QuickLZ reached only 46% and LZ4 49%. This illustrates the trade-off between backup speed and the reduction in data size. Nevertheless, we can say that algorithm A is better than algorithm B if A is faster than B and produces a backup that is not larger than B's, or if A produces a smaller backup than B and is not slower than B. Using this criterion, QuickLZ is a better compression algorithm than LZ4, Snappy, LZO, or LZF. Similarly, LZMA (level=1) is superior to Zlib (level=9).

The summary table shows two limiting factors for backup speed. The I/O speed of the disk on which the database resides (the source disk) is the limiting factor for the uncompressed backup and for compressed backups made with LZF, LZO, QuickLZ and LZ4. For Zlib (level=1), Snappy and LZMA (level=1) the limiting factor is the CPU.

After removing the worst-performing algorithms, we have four remaining that we can organize into a line where you get higher speeds as you move to the left, and better compression as you move to the right.

BEST SPEED --- QuickLZ --- LZ4 --- Zlib (level=1) --- LZMA (level=1) --- LZMA (level=9) --- BEST COMPRESSION
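The "A is better than B" rule described above is a simple dominance check on the measured time and compressed size. As a small illustration (the function name is my own, and the numbers come from the summary table), it can be written as:

```python
def dominates(a, b):
    """True if algorithm a is at least as good as b on both time and size,
    and strictly better on at least one of them (the rule used above)."""
    time_a, size_a = a
    time_b, size_b = b
    return (time_a <= time_b and size_a <= size_b
            and (time_a < time_b or size_a < size_b))

# (time in minutes, compressed size in GB) from the summary table
results = {
    "QuickLZ": (26, 203),
    "LZ4": (26, 215),
    "Snappy": (31, 221),
    "LZO": (27, 224),
    "LZF": (27, 222),
}

print(dominates(results["QuickLZ"], results["LZ4"]))  # True: same time, smaller backup
```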

Restore Speed:

The restore speed was almost the same for all the algorithms. Restoring the uncompressed backup and the Zlib-compressed backup took 28 minutes; for all the other algorithms the restore took 29 minutes.

Conclusion:

For licensing reasons QuickLZ cannot be used with MEB, so it was replaced with LZ4. Thus, the new compression algorithms are LZ4 (for fast compression) and LZMA (for a high compression ratio).
