Do MySQL instances with the same specifications (e.g., 4 vCPU and 16 GB) from different cloud providers offer the same performance? This series aims to explore and answer this question through Sysbench benchmarks.
Summary
This article presents the results of the author’s recent testing of managed MySQL services from multiple cloud providers using Sysbench. It also provides a detailed guide on how to reproduce the test.
The benchmark uses consistent sysbench parameters and MySQL instance specifications, focusing on instances with 4 vCPU, 16 GB of memory, and 100 GB of storage, with IOPS set to 3000 where applicable. The test covers services from Alibaba Cloud, Huawei Cloud, Tencent Cloud, AWS, Azure, Oracle Cloud, and Google Cloud, all with high availability, high data reliability, and dedicated computing resources.
The benchmark was executed using sysbench’s oltp_read_write_with_hooks.lua script and covers a range of thread counts. While the regions used for testing were chosen somewhat randomly, performance is assumed to be consistent across regions for each provider. Limitations include differences in minor MySQL versions, hardware configurations, and default parameter templates across cloud providers, so the comparison isn’t perfectly equal.
The goal of the benchmark is to understand MySQL performance differences across cloud providers to help developers adapt when migrating between them.
Latest benchmark results
| threads/qps | aws | azure | google | oracle |
|---|---|---|---|---|
| 4 | 1639 | 2025 | 723 | 3551 |
| 8 | 3313 | 3654 | 1341 | 5936 |
| 16 | 6427 | 6548 | 2502 | 8054 |
| 32 | 12157 | 10363 | 4857 | 8317 |
| 48 | 16516 | 11973 | 6745 | 8130 |
| 64 | 18118 | 12761 | 8071 | 7838 |
| 96 | 20782 | 13300 | 9675 | 8504 |
| 128 | 22446 | 13388 | 10620 | 8198 |
| 192 | 22590 | 13478 | 11507 | 8043 |
| 256 | 22323 | 12985 | 11872 | 7907 |
| 384 | 21902 | 12904 | 12131 | 8209 |
| 512 | 21591 | 12930 | 12106 | 8386 |
| parameter | aws | azure | google | oracle |
|---|---|---|---|---|
| have_ssl | YES | YES | YES | YES |
| innodb_buffer_pool_size | 11GB | 12GB | 11GB | 17GB |
| innodb_doublewrite | OFF | OFF | ON | ON |
| innodb_flush_log_at_trx_commit | 1 | 1 | 1 | 1 |
| innodb_flush_method | O_DIRECT | fsync | O_DIRECT | O_DIRECT |
| innodb_io_capacity | 200 | 200 | 5000 | 1250 |
| innodb_read_io_threads | 4 | NA | 4 | 2 |
| innodb_write_io_threads | 4 | NA | 4 | 4 |
| log_bin | OFF | ON | ON | ON |
| performance_schema | OFF | ON | ON | ON |
| sync_binlog | 1 | 1 | 1 | 1 |
| version | 8.0.35 | 8.0.37-azure | 8.0.31-google | 8.0.39-cloud |
| cpu_capacity | 110.9 | 56.3 | 49.9 | 114.7 |
| thread_pool_size | NA | 4 | NA | 16 |
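The table does not state how the cpu_capacity figures were obtained. If you want a comparable raw-CPU number for your own instances, sysbench’s built-in cpu test is one straightforward option; this is an assumption about methodology, not necessarily how the row above was measured:

```bash
# Rough per-instance CPU throughput check (methodology is an assumption, not
# necessarily how the cpu_capacity row above was produced). Compare the
# "events per second" value reported in the output across providers.
sysbench cpu --cpu-max-prime=20000 --threads=4 --time=60 run
```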
Why is there such a large performance difference?
This test is CPU-intensive and primarily reflects the CPU and basic transaction-processing capabilities of MySQL across different cloud providers. After some analysis, the following observations can be made:
- Although the specifications appear the same across providers, the actual CPU computing power provided varies significantly.
- Different high-availability (“log synchronization”) architectures have a major impact on performance. For instance, AWS, Google Cloud, and Azure use similar cross-availability zone “shared storage” architectures, while Alibaba Cloud, Tencent Cloud, and Baidu Cloud adopt semi-sync for high availability. Oracle Cloud, on the other hand, uses a three-node MGR architecture. This leads to significant performance differences despite seemingly identical specifications.
Additionally, other factors contribute to performance differences, such as whether SSL is enabled, whether performance_schema is active, whether log_bin is turned on, and whether a thread pool is used.
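To see how a given RDS instance is configured in these respects, the relevant server variables can be queried directly; the endpoint and user below are placeholders:

```bash
# Check the server settings discussed above (<rds-endpoint> and <user> are placeholders).
mysql -h <rds-endpoint> -u <user> -p -e "
SHOW GLOBAL VARIABLES
 WHERE Variable_name IN ('have_ssl', 'performance_schema', 'log_bin',
                         'sync_binlog', 'innodb_flush_log_at_trx_commit',
                         'thread_pool_size');"
```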
How to repeat the benchmark?
You can repeat the benchmark by using the same sysbench parameters and the same instance shapes in each cloud provider, as detailed below.
MySQL instance specifications
In this test, we selected the managed MySQL services (RDS MySQL) from Alibaba Cloud, Huawei Cloud, Tencent Cloud, AWS, Azure, Oracle Cloud, and Google Cloud. The specifications for the test instances meet the following conditions:
- 4 vCPU and 16 GB memory, with 100 GB of storage
- If IOPS needs to be specified, it is set to 3000
- High availability across multiple availability zones
- Very high data reliability, using synchronous replication, semi-synchronous replication, MGR replication, or synchronous replication at the storage layer
- Excellent performance consistency (dedicated computing resources)
| Cloud Provider | MySQL specifications |
|---|---|
| Amazon RDS | db.m6i.xlarge |
| Oracle Cloud | MySQL.4 |
| Google Cloud | db-custom-4-16384 |
| Alibaba Cloud | mysql.x4.large.2c |
| Tencent Cloud | cpu = 4; mem_size = 16000; device_type = EXCLUSIVE |
| Azure | GP_Standard_D4ds_v4 |
| Huawei Cloud | rds.mysql.x1.xlarge.4.ha |
| Baidu Cloud | cpu_count = 4; memory_capacity = 16 |
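As a concrete illustration, an instance matching the Amazon RDS shape above could be created with the AWS CLI roughly as follows. The identifier, credentials, and network settings are placeholders, and minor flags may need adjusting for your account; this is a sketch, not the exact command used in the tests:

```bash
# Hypothetical sketch: create a Multi-AZ RDS MySQL instance with the shape used above.
# Identifier, credentials, and any VPC/security-group options are placeholders.
aws rds create-db-instance \
  --db-instance-identifier sysbench-mysql-test \
  --db-instance-class db.m6i.xlarge \
  --engine mysql --engine-version 8.0.35 \
  --allocated-storage 100 \
  --storage-type gp3 \
  --multi-az \
  --master-username admin \
  --master-user-password '<password>'
# gp3 storage provides a 3000 IOPS baseline at this volume size.
```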
Sysbench CMD and parameters
The latest Sysbench is used, which is in fact the master branch on GitHub.
- lua benchmark file: oltp_read_write_with_hooks.lua (which is almost the same as the sysbench default oltp_read_write.lua)
--time=300
--table_size=10000 --tables=10
--skip_trx=on --db-ps-mode=disable
--rand-type=uniform
--threads=[4|8|16|32|48|64|96|128|192|256|384|512]
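For reference, the thread counts can be driven by a simple shell loop; this is a hypothetical driver, not the author’s actual script, and it only sets the $conthread and $run_file variables consumed by the full sysbench command shown below:

```bash
# Hypothetical driver loop: one 300-second pass per concurrency level.
# $sysb_mysql_conn, $ssl_param, $table_size, and $tables are assumed to be set elsewhere.
run_time=300
for conthread in 4 8 16 32 48 64 96 128 192 256 384 512; do
    run_file="oltp_rw_${conthread}threads.log"
    # ... invoke the sysbench command shown below with these variables ...
done
```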
sysbench oltp_read_write_with_hooks --threads=$conthread --time=$run_time \
--report-interval=3 --percentile=95 --histogram=on --db-driver=mysql \
$sysb_mysql_conn \
--skip_trx=on --db-ps-mode=disable --rand-type=uniform $ssl_param \
--table_size=$table_size --tables=$tables run >> $run_file 2>&1
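Note that the run phase above assumes the test tables already exist; they can be created beforehand (and dropped afterwards) with the same script’s prepare and cleanup commands, for example:

```bash
# Create the 10 test tables of 10,000 rows each before the first run
# ($sysb_mysql_conn holds the connection options, as in the run command above).
sysbench oltp_read_write_with_hooks --db-driver=mysql $sysb_mysql_conn \
    --table_size=10000 --tables=10 prepare

# Drop the test tables after the last run.
sysbench oltp_read_write_with_hooks --db-driver=mysql $sysb_mysql_conn \
    --tables=10 cleanup
```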
Limitations of the benchmark
This section provides additional notes on some limitations of the benchmark.
- In this series of benchmarks, the region selection was relatively random. For example, Alibaba Cloud was tested in Hangzhou, Baidu Cloud in Beijing, AWS and Google Cloud in Tokyo, and Azure in the Eastern US, Hong Kong, and Tokyo. The underlying assumption is that RDS MySQL performance across different regions of the same cloud provider should be the same or similar.
- Due to personal time and resource constraints, only the commonly used 4 vCPU / 16 GB instance type was tested, with each run at a given concurrency lasting 300 seconds.
- Although MySQL 8.0 was used in all cases, the minor database versions differed among the cloud providers.
- The CPU, disk types, and pricing vary across providers, so this is not a perfectly equal comparison, nor could it be.
- The default parameter templates of RDS instances also vary between providers, and even a single provider may adjust these parameters at different stages.
- Some cloud providers offer multiple disk storage options. For instance, Alibaba Cloud supports ESSD PL1/2/3, and AWS offers gp3/io1 storage options. These different storage choices result in different performance outcomes.
- Regarding the choice of availability zones: In the tests, I tried to place the ECS/EC2/VM instances and the database master nodes in the same availability zone whenever possible. However, there were cases where this wasn’t feasible. For example, during the tests, GCP’s availability zone b in the Tokyo region allowed the creation of the database but had no resources available to create VM nodes (“A n2-highcpu-8 VM instance is currently unavailable in the asia-northeast1-b zone.”).
More about the benchmark
- The main goal of the test is to understand the performance differences of MySQL across different cloud providers with the same specifications. This helps developers adapt to the architecture and performance of RDS when migrating between different cloud providers.
- About the author: orczhou, Oracle ACE (MySQL focused), Co-founder of NineData, and former Senior Database Expert at Alibaba.