MySQL Performance Benchmark on the Cloud

Do MySQL instances with the same specifications (e.g., 4 vCPU and 16 GB) from different cloud providers offer the same performance? This series aims to explore and answer this question using Sysbench benchmarks.

Summary

This article presents the results of the author’s recent testing of managed MySQL services from multiple cloud providers using Sysbench. It also provides a detailed guide on how to reproduce the test.

The benchmark uses consistent sysbench parameters and MySQL instance specifications, focusing on instances with 4 vCPUs, 16 GB of memory, and 100 GB of storage, with IOPS set to 3000 where applicable. The test includes services from Alibaba Cloud, Huawei Cloud, Tencent Cloud, Baidu Cloud, AWS, Azure, Oracle Cloud, and Google Cloud, all with high availability, high data reliability, and dedicated computing resources.

The benchmark was executed using sysbench’s oltp_read_write_with_hooks.lua script and covers a range of thread counts. While the regions used for testing were chosen somewhat arbitrarily, performance is assumed to be consistent across regions for each provider. Limitations include differences in minor MySQL versions, hardware configurations, and default parameter templates across cloud providers, so the comparison is not perfectly equal.

The goal of the benchmark is to understand MySQL performance differences across cloud providers to help developers adapt when migrating between them.

Latest benchmark result

| threads / qps | aliyun | tencent | huawei | baidu | aws | azure | google | oracle |
|---|---|---|---|---|---|---|---|---|
| 4 | 7102 | 5592 | 2557 | 2206 | 1639 | 2025 | 723 | 3551 |
| 8 | 9702 | 9936 | 4674 | 4101 | 3313 | 3654 | 1341 | 5936 |
| 16 | 14660 | 16141 | 8229 | 7298 | 6427 | 6548 | 2502 | 8054 |
| 32 | 22155 | 22336 | 13520 | 12022 | 12157 | 10363 | 4857 | 8317 |
| 48 | 27905 | 24770 | 17849 | 16448 | 16516 | 11973 | 6745 | 8130 |
| 64 | 32704 | 26495 | 20114 | 18187 | 18118 | 12761 | 8071 | 7838 |
| 96 | 36846 | 29077 | 20883 | 21007 | 20782 | 13300 | 9675 | 8504 |
| 128 | 39697 | 29918 | 20128 | 21029 | 22446 | 13388 | 10620 | 8198 |
| 192 | 38999 | 30610 | 20521 | 22091 | 22590 | 13478 | 11507 | 8043 |
| 256 | 38356 | 31052 | 21187 | 21665 | 22323 | 12985 | 11872 | 7907 |
| 384 | 39679 | 31224 | 21729 | 21167 | 21902 | 12904 | 12131 | 8209 |
| 512 | 40333 | 31805 | 22647 | 21627 | 21591 | 12930 | 12106 | 8386 |

| parameter | aliyun | tencent | huawei | baidu | aws | azure | google | oracle |
|---|---|---|---|---|---|---|---|---|
| have_ssl | DISABLED | DISABLED | DISABLED | DISABLED | YES | YES | YES | YES |
| innodb_buffer_pool_size | 9.75GB | 12GB | 9GB | 12GB | 11GB | 12GB | 11GB | 17GB |
| innodb_doublewrite | ON | ON | ON | ON | OFF | OFF | ON | ON |
| innodb_flush_log_at_trx_commit | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| innodb_flush_method | O_DIRECT | O_DIRECT | O_DIRECT | fsync | O_DIRECT | fsync | O_DIRECT | O_DIRECT |
| innodb_io_capacity | 20000 | 20000 | 12000 | 2000 | 200 | 200 | 5000 | 1250 |
| innodb_read_io_threads | 4 | 4 | 4 | 8 | 4 | NA | 4 | 2 |
| innodb_write_io_threads | 4 | 4 | 4 | 8 | 4 | NA | 4 | 4 |
| log_bin | ON | ON | ON | ON | OFF | ON | ON | ON |
| performance_schema | OFF | OFF | OFF | OFF | OFF | ON | ON | ON |
| rpl_semi_sync_master_enabled | ON | ON | ON | ON | NA | NA | NA | NA |
| rpl_semi_sync_master_timeout | 1000 | 10000 | 10000 | 10000 | NA | NA | NA | NA |
| sync_binlog | 1 | 1 | 1 | 1000 | 1 | 1 | 1 | 1 |
| thread_pool_size | 8 | 4 | NA | NA | NA | 4 | NA | 16 |
| version | 8.0.36 | 8.0.30-txsql | 8.0.28-231003 | 8.0.32-2.0.0.2 | 8.0.35 | 8.0.37-azure | 8.0.31-google | 8.0.39-cloud |
| cpu_capacity | 80.4 | 93.3 | 163.6 | 73.9 | 110.9 | 56.3 | 49.9 | 114.7 |

Why is there such a large performance difference?

This test is CPU-intensive and primarily reflects the CPU and basic transaction processing capabilities of MySQL across different cloud providers. After some analysis, the following observations stand out:

  1. Although the specifications appear the same across providers, the actual CPU computing power provided varies significantly.
  2. Different high-availability (“log synchronization”) architectures have a major impact on performance. For instance, AWS, Google Cloud, and Azure use similar cross-availability-zone “shared storage” architectures, while Alibaba Cloud, Tencent Cloud, and Baidu Cloud adopt semi-synchronous replication for high availability. Oracle Cloud, on the other hand, uses a three-node MGR architecture. This leads to significant performance differences despite seemingly identical specifications.

Additionally, other factors contribute to performance differences, such as whether SSL is enabled, performance_schema is active, log_bin is turned on, or a thread pool is used.
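You can check these settings directly on your own instance. A minimal sketch that builds the inspection SQL for the variables listed in the table above (the host and user in the commented line are placeholders):

```shell
#!/bin/sh
# Build the SQL that inspects the server variables compared above.
SQL="SHOW GLOBAL VARIABLES WHERE Variable_name IN (
  'have_ssl','innodb_doublewrite','innodb_flush_method','log_bin',
  'performance_schema','sync_binlog','thread_pool_size');"
echo "$SQL"
# Run it against a real instance (DB_HOST/DB_USER are placeholders):
# mysql -h "$DB_HOST" -u "$DB_USER" -p -e "$SQL"
```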

How to repeat the benchmark?

You can repeat the benchmark by using the same sysbench parameters and the same instance shape at each cloud provider.

MySQL instance specifications

In this test, we selected the managed MySQL services (RDS MySQL) from Alibaba Cloud, Huawei Cloud, Tencent Cloud, Baidu Cloud, AWS, Azure, Oracle Cloud, and Google Cloud. The specifications for the test instances meet the following conditions:

  • 4 vCPU and 16 GB memory, with 100 GB of storage
  • If IOPS needs to be specified, it is set to 3000
  • High availability across multiple availability zones
  • Very high data reliability, using synchronous replication, semi-synchronous replication, MGR replication, or synchronous replication at the storage layer
  • Excellent performance consistency (dedicated computing resources)
| Cloud Provider | MySQL specifications |
|---|---|
| Amazon RDS | db.m6i.xlarge |
| Oracle Cloud | MySQL.4 |
| Google Cloud | db-custom-4-16384 |
| Alibaba Cloud | mysql.x4.large.2c |
| Tencent Cloud | cpu = 4; mem_size = 16000; device_type = EXCLUSIVE |
| Azure | GP_Standard_D4ds_v4 |
| Huawei Cloud | rds.mysql.x1.xlarge.4.ha |
| Baidu Cloud | cpu_count = 4; memory_capacity = 16 |
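As an illustration, the AWS shape above could be created with the CLI; this is a sketch only, with a placeholder identifier and credentials, and the region and storage type may need adjusting for your account:

```shell
#!/bin/sh
# Sketch: create an RDS MySQL instance matching the benchmark shape.
# bench-mysql, admin, and CHANGE_ME are placeholders.
CMD="aws rds create-db-instance \
  --db-instance-identifier bench-mysql \
  --db-instance-class db.m6i.xlarge \
  --engine mysql \
  --allocated-storage 100 \
  --storage-type io1 --iops 3000 \
  --multi-az \
  --master-username admin \
  --master-user-password CHANGE_ME"
echo "$CMD"   # dry run: prints the command; execute it manually when ready
```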

Sysbench CMD and parameters

The latest Sysbench was used, which in practice is the master branch on GitHub.

  • lua benchmark file: oltp_read_write_with_hooks.lua (which is almost the same as the sysbench default oltp_read_write.lua)
  • --time=300
  • --table_size=10000 --tables=10
  • --skip_trx=on --db-ps-mode=disable
  • --rand-type=uniform
  • --threads=[4|8|16|32|48|64|96|128|192|256|384|512]
sysbench oltp_read_write_with_hooks --threads=$conthread --time=$run_time \
 --report-interval=3 --percentile=95 --histogram=on --db-driver=mysql \
 $sysb_mysql_conn \
 --skip_trx=on --db-ps-mode=disable --rand-type=uniform $ssl_param \
 --table_size=$table_size --tables=$tables run >> $run_file 2>&1
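The command above assumes the test tables already exist. A sketch of the full flow, with the one-time `prepare` step followed by the same thread ladder; connection values are placeholders, and the loop only prints each command rather than executing it:

```shell
#!/bin/sh
# Placeholders for the connection and test shape used above
sysb_mysql_conn="--mysql-host=HOST --mysql-user=USER --mysql-password=PASS --mysql-db=sbtest"
table_size=10000; tables=10; run_time=300

# One-time data load (the hooks script shares the oltp_read_write schema)
echo "sysbench oltp_read_write $sysb_mysql_conn --table_size=$table_size --tables=$tables prepare"

# One 300-second run per concurrency level
for conthread in 4 8 16 32 48 64 96 128 192 256 384 512; do
  echo "sysbench oltp_read_write_with_hooks --threads=$conthread --time=$run_time $sysb_mysql_conn --skip_trx=on --db-ps-mode=disable --rand-type=uniform --table_size=$table_size --tables=$tables run"
done
```

Remove the `echo`s to run the load and the twelve benchmark passes for real.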

Limitations of the benchmark

This section provides additional notes on some limitations of the benchmark.

  • In this series of benchmarks, the region selection was relatively random. For example, Alibaba Cloud was tested in Hangzhou, Baidu Cloud in Beijing, AWS/Google Cloud in Tokyo, and Azure in the Eastern US, Hong Kong, and Tokyo. The underlying assumption is that RDS MySQL performance across different regions of each cloud provider should be the same or similar.
  • Due to personal time and resource constraints, only the commonly used 4 vCPU 16 GB instance type was tested, with each single concurrent test lasting 300 seconds.
  • Although MySQL 8.0 was used in all cases, the minor database versions differed among the cloud providers.
  • The CPU, disk types, and pricing vary across providers, so this is not a perfectly equal comparison, nor could it be.
  • The default parameter templates of RDS instances also vary between providers, and even a single provider may adjust these parameters at different stages.
  • Some cloud providers offer multiple disk storage options. For instance, Alibaba Cloud supports ESSD PL1/2/3, and AWS offers gp3/io1 storage options. These different storage choices result in different performance outcomes.
  • Regarding the choice of availability zones: In the tests, I tried to place the ECS/EC2/VM instances and the database master nodes in the same availability zone whenever possible. However, there were cases where this wasn’t feasible. For example, during the tests, GCP’s availability zone b in the Tokyo region allowed the creation of the database but had no resources available to create VM nodes (“A n2-highcpu-8 VM instance is currently unavailable in the asia-northeast1-b zone.”).

More about the benchmark

  • The main goal of the test is to understand the performance differences of MySQL across different cloud providers with the same specifications. This helps developers adapt to the architecture and performance of RDS when migrating between different cloud providers.
  • About the Author "orczhou" : Oracle ACE (MySQL focused), Co-founder of NineData, and former Senior Database Expert at Alibaba.
