September 25th, 2012, 03:28 AM
MySQL query takes too long and Apache hangs
I have a query that sometimes takes a long time to execute. I know I have to tune that query and fix some MySQL problems, but my question is about something else.
When this query is "locking" other queries in MySQL, the Apache web request that generated it hangs (that's expected), but it also prevents other web requests from executing, since those requests generate their own SQL queries and MySQL is "locked" on that first one.
How can I prevent this? I would like one slow SQL query from one Apache request not to affect the queries of other web requests.
user1 in test1.php runs a query against table1 ... and it takes too long
user2 in test2.php runs a query against table2 ... and has to wait for the first one
user3 in test3.php runs a query against table3 ... and has to wait for the first one
and so on ...
I would understand this for a MyISAM table, since it uses table-level locking and would block other queries on the same table, but the thing is that a single slow query is preventing queries on any table from being executed, and sometimes I get a long tail of queued queries and my website seems to be down for some seconds.
September 25th, 2012, 03:04 PM
Since you say that you are accessing different tables, your "lock" from the bad query is probably high CPU and/or high I/O. And since the other queries are running on the same machine and competing for the same resources, they are held up/delayed during the execution of the first heavy query.
You can't prioritize one query over another in MySQL, so you really should fix the flawed query/database design/server configuration to get all queries running smoothly.
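Before fixing anything, it helps to watch the server while the stall is happening; a minimal sketch using standard MySQL commands:

```sql
-- Show what every client connection is doing right now,
-- including threads stuck in states like "Locked" or "Sending data":
SHOW FULL PROCESSLIST;

-- For InnoDB tables, this also reports lock waits and the blocking transaction:
SHOW ENGINE INNODB STATUS;
```

The TIME column of the process list tells you how long the heavy query has been running, and the State column of the waiting threads tells you what they are blocked on.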
September 26th, 2012, 02:39 AM
So a long & bad query can leave a tail of 200 small & simple queries waiting to be executed?
I mean ... other Apache processes (other users) should not be affected by a single user visiting a page with a slow query ...
Doesn't MySQL have any way to separate these, so that one batch of queries (belonging to one web server process/PHP script) has nothing to do with the batch from another request?
September 26th, 2012, 05:26 AM
Correct, it's like when you're at work and your boss tells you to fix 200 small bugs and at the _same_ time he wants you to code a new Apache webserver.
The fixing of the 200 small bugs will not go so fast if you are bogged down with the code for the new webserver.
September 26th, 2012, 05:39 AM
I can't believe MySQL can't handle this ...
In the Apache web server, a user can request some things, and if those things take too much time to be served, it does NOT affect other users, since there are different child threads serving each user.
September 26th, 2012, 10:13 AM
No, you are missing the point.
MySQL is also threaded, and each client connection has its own thread to minimize the effect of other threads.
And a web server usually opens a lot of connections (connection pooling) to the database in which the queries are executed. So as long as the queries are behaving as normal, several of them will be executed in parallel on different cores etc and everything is fast.
BUT since they are still executed on the same machine, if one query does something hugely abnormal it means that the other queries will be starved of _shared_ resources like RAM or disk I/O, and all the cores in the world will not save you from that.
It's the same way with your Apache: if you have a bug in your application which causes a particular thread to consume 4GB of RAM on your 2GB box, that machine will start to swap, and all user threads will be affected, since the machine will spend more time swapping data in and out than actually performing something useful; all threads will be starved of execution time and everything will go in slow motion.
As long as you run on the same machine you will always have shared resources to compete over (depending on whether you are near max throughput/capacity for each resource), which will slow down the overall performance of all processes/threads dependent on that resource.
September 26th, 2012, 11:59 AM
Thank you for your explanation ... but in this case MySQL is on one host and the web server on another.
In the Apache web server you can limit memory and resources per thread, so even if a single request needs a lot of resources it does not affect the others ... that request will take 4 minutes (or whatever) and will "block" that child, but other clients can carry on without suffering at all.
I would like the same in MySQL ... so that queries from one client (in this case the PHP script through Apache) would not affect others, even if they are badly written and need a lot of the host's resources.
Or even some kind of execution limit (like PHP's execution timeout).
Something that prevents a single bad query from affecting the others.
September 28th, 2012, 12:00 PM
I meant competing about shared resources between the threads within MySQL, not that MySQL will compete with Apache (which is a whole other ballgame).
Originally Posted by emdo82
That's only partly true; last time I checked you can limit the maximum number of threads, how much memory you allow for each thread's stack, set a maximum PHP execution time, etc.
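For reference, the PHP-side limits mentioned above live in php.ini (the values below are just examples, not recommendations):

```ini
; php.ini fragment - per-request caps (example values)
max_execution_time = 30      ; seconds of script execution per request
memory_limit       = 128M    ; memory cap per request
```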
Originally Posted by emdo82
But if you write a script that runs in an infinite loop and consumes 100% cpu then it will try to consume all cpu time of one core until it is kicked out due to exceeding the execution time. And if you have a high load on the server then other clients _will_ be affected by this excessive CPU if it puts you over the top of the servers capacity and their responses will be delayed.
The big problem with trying to use limitations like this can be summarized in the question: How can you separate a legitimate execution from a rogue one?
Maybe the script that had taken 30 seconds of execution time needed just 5 seconds more to finish the report the board of directors is desperately waiting for when it was kicked out and they got the "execution time exceeded" notice.
Which means that you will have to set the limit so high that the heaviest legitimate task can still be performed.
This problem is accentuated with databases and a declarative language like SQL, where the execution time of the same query can vary by a factor of 10000 depending on the amount of data in your database, what indexes you have and how you write the query.
And at the same time you want each query to be executed as fast as possible. So if there are no other queries at the moment, you want to give all the machine's resources to this query, and at other times you want it to share the resources with other queries.
My point is that there are a lot of different attempts in the software world to compartmentalize and minimize the effect of one process/thread gone rogue. But the problem is, and always has been: how do you know what is legitimate and what is not? Which thread should you penalize when you have reached a collective limit on the server? And at the same time, if you don't have any other threads running, you want to give all resources to the one that is, if it needs them.
Which brings us to most of today's solutions, which are basically just best-effort: try to do as much work as possible, juggling all tasks at the same time.
So fix that query!
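To that end, EXPLAIN is the quickest way to see why a query is slow (the table and column names below are made up for illustration):

```sql
-- Show the execution plan; type = ALL in the output means a full table scan:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If no index is used, adding one usually turns the scan into a lookup:
CREATE INDEX idx_orders_customer ON orders (customer_id);
```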
There is no such thing built into MySQL.
Originally Posted by emdo82
You have an innodb_lock_wait_timeout, but that is only for handling lock-wait/deadlock situations.
The best you can do is to set up a script that runs in parallel with MySQL, queries it, and kills client threads that have been running for a certain amount of time, for example: mysql-genocide.
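A minimal sketch of what such a watchdog does, using only standard MySQL statements (the 30-second threshold and the thread ID are assumptions for illustration):

```sql
-- List client threads that have been executing a query for over 30 s
-- (information_schema.PROCESSLIST is available since MySQL 5.1):
SELECT ID, USER, TIME, INFO
FROM information_schema.PROCESSLIST
WHERE COMMAND = 'Query' AND TIME > 30;

-- Terminate an offending thread by the ID reported above:
KILL 12345;
```

Be careful with this in production: killing a thread mid-write on MyISAM tables can leave a table needing repair, whereas InnoDB will roll the transaction back.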
October 11th, 2012, 03:19 AM
Thanx a lot for your reply !
October 11th, 2012, 06:20 AM
Part of the problem is that you are using MyISAM and not InnoDB. Change your tables to use InnoDB (unfortunately you will have to mysqldump and reload), but it might very well solve your problem.
InnoDB has several BIG advantages over MyISAM, not least of which is its row-level locking mechanism (table-level locking is what you are suffering from). It is also transactional (START TRANSACTION ... COMMIT), ensuring database consistency.
MySQL recognised this, and 5.5 makes InnoDB the default engine. Read the MySQL documentation for details.
You should also enable the slow query log in my.cnf and run mysqltuner to see how you can speed up the query (you should most likely also redesign it, using things like LIMIT and OFFSET to reduce result set sizes, and ensure you have the correct indexes).
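A minimal my.cnf fragment for the slow query log (option names are valid for MySQL 5.1+; the path and threshold are examples):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2    # log every query that takes longer than 2 seconds
```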
BTW your lockup could be caused by the user table that is presumably used by all queries.
October 11th, 2012, 08:46 AM
That isn't necessary.
Originally Posted by shumifan50
you can just alter the table:
ALTER TABLE myisamtable ENGINE=InnoDB;
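If there are many tables, a sketch for first listing the ones still on MyISAM (the schema name 'mydb' is an assumption):

```sql
-- Find every table in the schema that still uses the MyISAM engine:
SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
  AND ENGINE = 'MyISAM';
```

Each table can then be converted with its own ALTER TABLE statement; note the conversion takes a write lock on the table while it runs.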