Glenn Engstrand

In my last blog, I compared two functionally identical micro-services, one written in Scala and the other in Clojure, in terms of performance under load in AWS. In this blog, I compare the performance of the Scala micro-service under load in AWS running with MySql, running with PostGreSql, and running both with and without Docker. For a non-trivial, heterogeneous data source micro-service with only one thing different per test run (MySql vs PostGreSql, no container vs Docker), I was able to get some good insight into how these popular technology choices stack up against each other in terms of performance.

MySql Throughput

MySql Throughput Per Minute

Let's review the environment in which these performance tests were conducted. I would always use a development-configured db.m3.large RDS instance with 100 GB of SSD storage. I would always use 7 m3.large EC2 instances for Cassandra, Redis, Kafka, Solr, Elasticsearch with Kibana, the news feed micro-service, and the load test application. Each of these instances had only 8 GB of disk space.

MySql Mean

MySql Mean Latency in Milliseconds Per Minute

I did the same thing with every test run. I would let it run until the Cassandra instance ran out of disk space, then I would look at the last hour of the test run and compare it with the corresponding hour from the other test runs. I would always release all instances after a test run was completed, so the next test run would always start with fresh instances.

MySql Median

MySql Median Latency in Milliseconds Per Minute

MySql 95th Percentile

MySql Latency 95th Percentile

The load test application has not changed. It runs 3 concurrent threads, each of which keeps running the same batch of operations. In each batch, 10 participants get created. Each participant gets from 2 to 4 friends. Then each participant broadcasts 10 stories of 150 words each. It is this broadcast operation that gets measured, compared, and reported on.
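The batch described above can be sketched as follows. This is a minimal simulation of the batch's shape, not the actual load test code; the counts come from the description above, and the random friend count is an assumption about how "from 2 to 4 friends" is chosen.

```scala
import scala.util.Random

// Shape of one load-test batch: 10 participants, 2 to 4 friends each,
// and 10 broadcast stories per participant (the measured operation).
final case class BatchCounts(participants: Int, friendships: Int, broadcasts: Int)

def runBatch(rng: Random): BatchCounts = {
  val participants = 10
  // each participant gets from 2 to 4 friends (uniformly, as an assumption)
  val friendships = (1 to participants).map(_ => 2 + rng.nextInt(3)).sum
  // each participant broadcasts 10 stories; these are the timed requests
  val broadcasts = participants * 10
  BatchCounts(participants, friendships, broadcasts)
}
```

So every batch contributes exactly 100 measured broadcast operations, and with 3 threads looping over batches the service sees a steady stream of them.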

PostGreSql Throughput

PostGreSql Throughput Per Minute

PostGreSql Mean

PostGreSql Per Minute Average Latency in Milliseconds

The micro-service logs the latency of each request to Kafka. The perf3 application consumes these messages, aggregates them on a per-entity and per-operation basis, then updates Elasticsearch with the per-minute throughput, mean, median, and 95th percentile. The screenshots you see here come from a Kibana dashboard that I set up over this data.
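The per-minute aggregation that perf3 performs can be sketched like this. This is not the actual perf3 code, just an illustration of the four statistics it reports, using a simple nearest-rank percentile over one minute's worth of latency samples for a single entity and operation.

```scala
// Per-minute statistics for one entity/operation pair.
final case class MinuteStats(throughput: Int, mean: Double, median: Double, p95: Double)

// Nearest-rank percentile over an already sorted, non-empty sample.
def percentile(sorted: Vector[Double], p: Double): Double = {
  val idx = math.ceil(p * sorted.size).toInt - 1
  sorted(math.max(idx, 0))
}

// Fold one minute of latency samples (in milliseconds) into the
// throughput, mean, median, and 95th percentile that get indexed.
def aggregate(latenciesMs: Seq[Double]): MinuteStats = {
  val sorted = latenciesMs.toVector.sorted
  MinuteStats(
    throughput = sorted.size,
    mean = sorted.sum / sorted.size,
    median = percentile(sorted, 0.50),
    p95 = percentile(sorted, 0.95)
  )
}
```

Indexing these four numbers per minute is what makes the Kibana time-series charts in this post possible.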

PostGreSql Median

PostGreSql Per Minute 50th Percentile Latency

PostGreSql 95th Percentile

PostGreSql 95th Percentile Latency

The first test used MySql for the participant and friends information, and the micro-service ran without any container. The throughput was very consistent and averaged out to 13,961 requests per minute. Average per-minute latency was 51 ms, but it started at a little over 40 ms and ended at about 60 ms, so latency would climb over time.

Docker Bridge Mode Throughput

Docker Bridge Mode Throughput Per Minute

Docker Bridge Mode Mean

Docker Bridge Mode Per Minute Average Latency

In the next test, I replaced MySql with PostGreSql. Throughput was also very stable and about 11% better than MySql, with an average of 15,502 requests per minute. Latency was very similar to the MySql test run, both in the number of milliseconds and in the rate at which the duration would climb over time.

Docker Bridge Mode Median

Docker Bridge Mode 50th Percentile Latency

Docker Bridge Mode 95th Percentile

Docker Bridge Mode 95th Percentile Latency

The test run after that also used PostGreSql, but this time the micro-service ran in Docker in bridge mode. Bridge mode means that the container runs with its own IP stack and inbound requests go through a NAT layer. This is the default mode, and it is supposed to be more secure. Throughput was again very consistent and averaged out to 15,132 requests per minute, about 2% lower than the no-container test run. Latency was 30% higher, and while it still climbed over time, it climbed more slowly than in the no-container run.

Docker Host Mode Throughput

Docker Host Mode Throughput Per Minute

Docker Host Mode Mean

Docker Host Mode Per Minute Average Latency

The final test run again used PostGreSql and Docker, but this time the micro-service ran with Docker host mode networking. Host mode means that the container runs with the same IP stack as the host machine. In this run, both throughput and latency were very similar to the test run with PostGreSql and no container.
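For reference, the two networking modes compared in these test runs correspond to the following kinds of docker run invocations. The image name and port here are illustrative assumptions, not the actual commands used in the tests.

```shell
# Bridge mode (the default): the container gets its own IP stack,
# and inbound traffic is NAT-ed through a published port mapping.
docker run -d --name newsfeed -p 8080:8080 newsfeed:latest

# Host mode: the container shares the host's IP stack, so there is
# no NAT layer and no -p port mapping is needed.
docker run -d --name newsfeed --network host newsfeed:latest
```

The only difference between the two runs is that NAT layer, which is consistent with bridge mode showing higher latency while host mode matched the no-container numbers.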

Docker Host Mode Median

Docker Host Mode 50th Percentile Latency

Docker Host Mode 95th Percentile

Docker Host Mode 95th Percentile Latency

In conclusion, I would have to say that the AWS RDS version of PostGreSql is now just as good as the AWS RDS version of MySql when it comes to latency. Furthermore, PostGreSql has overtaken MySql when it comes to throughput.

Throughput Summary

There are a lot of reasons for running micro-services in Docker. Based on these results, there should be no performance penalty if you run your micro-service in Docker with host mode networking configured.

Latency Summary