Effect of filesystems on MySQL performance
I started asking questions about this topic on this forum, and later posted similar questions elsewhere.
When migrating from CentOS 5.8 to CentOS 6.4, I found that MySQL was running 80 times slower.
To test the difference in performance, I did the following:
- Installed CentOS using a URL install on a physical (not VPS) computer, selecting all default options.
- Made no server configuration changes.
- Installed MySQL using yum install mysql-server, which uses CentOS's native repository (note that the CentOS 5 and 6 repositories include different versions of MySQL).
- Executed mysql_secure_installation.
- Created a very simple script called test.sql which creates 40 tables and inserts 5 rows into each table.
- Executed time mysql -uroot -pmyPassword < test.sql.
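For reference, a minimal sketch of how such a test.sql could be generated (the table and column names here are illustrative assumptions; the original script isn't shown in the post):

```shell
# Generate a script that creates 40 tables and inserts 5 rows into each.
# Table/column names (t1..t40, id, val) are made up for illustration.
gen_test_sql() {
  for t in $(seq 1 40); do
    echo "CREATE TABLE t${t} (id INT NOT NULL, val VARCHAR(20)) ENGINE=InnoDB;"
    for r in $(seq 1 5); do
      echo "INSERT INTO t${t} VALUES (${r}, 'row${r}');"
    done
  done
}
gen_test_sql > test.sql
```

With autocommit on, each of the 200 INSERTs is its own transaction, so a run like this performs roughly 200 log flushes, which is why flush and barrier behavior dominates the timing.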
I've done these tests on two computers, one a 7-year-old laptop and the other a brand-new desktop, and obtained similar results. I've tried CentOS 6.3 and found that it behaves the same as CentOS 6.4, and tried CentOS 5.8 and found that it behaves the same as CentOS 5.9. In addition to installing MySQL using yum with CentOS's repository, I also tried MySQL 5.5 using yum with IUS's repository, as well as MySQL 5.6 using RPM directly from MySQL, but saw no change in performance.
So, I started messing around with filesystems, and found huge differences in performance. I found very little documented on this subject, but did find MySQL Bug #46959. See below for the times it took to perform the test under various combinations of CentOS, MySQL version, and filesystem configuration.
Has anyone experienced similar behavior? Am I missing any critical configuration steps? I've obviously spent significant time researching this; at least I now know more about Linux than I ever did. Please comment.
Centos 5.9, MySQL 5.0.95, filesystem: ext4, barrier=1*
Centos 6.4, MySQL 5.1.69, filesystem: ext4, barrier=1
Centos 6.4, MySQL 5.1.69, filesystem: ext4, barrier=0
Centos 6.4, MySQL 5.1.69, filesystem: ext3, barrier=1
Centos 6.4, MySQL 5.1.69, filesystem: ext3, barrier=0
* I believe CentOS 5.x defaults to ext4 and barrier=1, but I no longer have CentOS 5.x installed and haven't verified.
Any thoughts or opinions?
While I don't think so, maybe I'm just installing CentOS incorrectly? If so, what settings should I check to confirm?
If I am setting CentOS correctly, isn't this kind of a big deal, and should it not be better documented?
July 12th, 2013, 03:05 PM
What table type are you creating?
MyISAM or InnoDB?
Or haven't you set an ENGINE=xxx on your create table statement?
My reason for asking is that usually the filesystem you use doesn't have that much impact on the overall speed of a database. Some filesystems are faster at certain marginal activities than others, but in the end the data has to be written to or read from the disks, and that is where the bottleneck usually is.
But one thing I thought of that could cause a scenario like yours is if you haven't set the storage engine type on the tables.
Because somewhere around 5.1, IIRC, the default storage engine switched from MyISAM to InnoDB, and InnoDB's default setting for ACID compliance requires it to flush every insert to disk (the transaction log) before continuing.
Which means you can only make about 100-150 inserts/updates/deletes per second.
MyISAM doesn't have this and would handle all your small inserts in memory at a fraction of the time (at the cost of less reliability if the power to the server went away).
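The 100-150 per-second figure above falls out of simple rotation arithmetic, a sketch assuming a 7200 rpm disk where each fsync waits roughly one platter rotation:

```shell
# 7200 rotations per minute / 60 seconds = 120 rotations, and thus
# roughly 120 durable commits, per second on a single disk.
rpm=7200
echo "$(( rpm / 60 )) commits/sec"   # prints: 120 commits/sec
```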
If this is the case and you can live with the possibility of losing data in case of a power outage, then you can set, in the MySQL configuration:
innodb_flush_log_at_trx_commit = 2
Which usually solves one of the more common performance issues with InnoDB.
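On a stock CentOS install this setting would typically go in /etc/my.cnf under the [mysqld] section (a sketch; restart mysqld after editing):

```ini
[mysqld]
# Flush the InnoDB log to the OS once per second instead of at every
# commit; up to about one second of transactions may be lost on a crash.
innodb_flush_log_at_trx_commit = 2
```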
July 13th, 2013, 09:06 AM
Thank you Stefan,
I have specified InnoDB for the tests on both machines. I also had previously tried making changes to innodb_flush_log_at_trx_commit, but only witnessed marginal improvements.
The huge improvements came from filesystem changes by setting barrier=0 and secondarily changing from ext3 to ext4.
After writing my original post, I have since googled "MySQL barrier bugs", and found several posts confirming this behavior.
I just cannot believe this has been going on so long, but have performed multiple tests, and don't know what else to think. Barriers provide data integrity and thus provide value, however, given the performance hit, they are totally unsuitable for use with MySQL.
Has anyone experienced similar behavior? I very much appreciate your reply Stefan, however, would like several others to provide insight.
Last edited by NotionCommotion; July 13th, 2013 at 09:19 AM.
July 13th, 2013, 06:04 PM
If you google EXT4 and barriers in general, you get more hits about setting the nobarrier mount option.
And this article about EXT4 performance tests shows that PostgreSQL behaves the same way.
The thing is that the performance hit from things like these is usually only seen on desktops/laptops with a single-disk configuration.
On servers you basically always have a RAID controller with a battery-backed cache set to write-back mode, which means it "fakes" the fsyncs and tells the OS that the data has been written to disk although it has only been stored in the cache.
1. You haven't done anything wrong at installation.
2. EXT4's developers have chosen robustness over performance as the default, requiring people to make a conscious decision if they want more performance at a higher risk level.
3. This is not just a MySQL problem but a general one, it's just that it's more noticeable with database write patterns.
4. If you want more performance, you will actively have to decide to use the nobarrier mount option (and possibly some others), the same way you need to set innodb_flush_log_at_trx_commit = 2 for MySQL.
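For reference, a sketch of what that looks like (the device and mount point below are placeholders; only do this if you can tolerate losing recently written data on a power failure):

```shell
# Remount the filesystem holding the MySQL datadir without barriers
# (ext4 accepts nobarrier; ext3 uses barrier=0 instead).
mount -o remount,nobarrier /var/lib/mysql

# To make it persistent, add nobarrier to the options in /etc/fstab, e.g.:
# /dev/sda3  /var/lib/mysql  ext4  defaults,nobarrier  1 2
```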
July 14th, 2013, 11:42 AM
I don't know if you are right or not, but your reasoning makes sense, and I accept it. Before seeing your latest reply, I posted a bug on the MySQL site, and I now believe it wasn't a MySQL bug after all.
I do, in fact, not have a RAID controller, which explains why I experience these problems while the 90% of people who do have RAID do not.
I have looked at the previous articles that I posted, as well as the latest one you did, and none of them mention RAID.
I find it totally amazing that this behavior isn't better documented. At least now I have an answer, and can let it go.