February 24th, 2004, 01:50 AM
Slow Query in PHP Script, Fast in phpMyAdmin: Why?
When I run a query in my php script it takes from 5 to 10 seconds, but only about 0.0064 sec in phpMyAdmin. The result contains about 140K rows. I am using MySQL 4.0.16. I timed the script right before and after the query itself and determined that only that part takes up from 5 to 10 seconds. I tried this several times with the same result. Why would this query run so slowly in the php script yet so fast when I run it directly in phpMyAdmin?
The query is:
$query = "SELECT * FROM table1 AS t1, table2 AS t2 WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id";
$result = mysql_query($query) or die ('Invalid query');
Thanks for any info.
Last edited by yoyo; February 24th, 2004 at 02:16 AM.
Reason: More Research
February 24th, 2004, 09:31 AM
I assume that you have an index on t1.field_id and one on t2.cstatus, so that MySQL offers optimal performance.
But apart from that.
If I remember correctly, phpMyAdmin always adds a LIMIT clause to the query, and what could take time in your script is reading all the rows from MySQL into the PHP environment. MySQL clients usually read the entire result set before you can start working with the rows in your environment. The thinking is that since you selected all the rows, you are presumably interested in all of them; otherwise you would have written a different query.
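To illustrate the difference (a sketch — phpMyAdmin's exact paging query may vary slightly): your script pulls the whole result set, while phpMyAdmin only transfers one page:

```sql
-- Your script: every one of the ~140K matching rows is sent to PHP
SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id;

-- What phpMyAdmin effectively runs: only the first 30 rows are transferred
SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id
LIMIT 0, 30;
```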
February 24th, 2004, 10:49 AM
I do have field_id indexed, but not cstatus, because it is an ENUM with only 2 values, so I thought it was better not to index it.
OK, I see how it's adding a LIMIT 30 to the query, but it still calculates the sum of the total rows because it displays right above:
"Showing rows 0 - 29 (142000 total, Query took 0.0005 sec)"
So I am assuming it uses SQL_CALC_FOUND_ROWS to get this value, but that doesn't explain why the other query is slower.
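For reference, the SQL_CALC_FOUND_ROWS pattern looks like this (sketched against the tables from the first post; exact behavior depends on your MySQL 4.0.x version):

```sql
-- Fetch one page, but have the server also count the full match
SELECT SQL_CALC_FOUND_ROWS * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id
LIMIT 0, 30;

-- Returns the total the previous query would have matched without the LIMIT
SELECT FOUND_ROWS();
```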
Is there any other explanation why this query is slower through a PHP script?
February 24th, 2004, 11:22 AM
Right thinking, with one small but: if almost all rows are Inactive and you regularly query only for Active ones, then you can still benefit from an index.
But that is probably not what takes time here.
What probably takes time here is PHP's reading and conversion of all rows from MySQL.
The steps for getting values from mysqld until you can use them in PHP:
1. The query (SELECT ...) is sent to mysqld.
2. mysqld returns the result of the query. This alone can take quite some time, depending on the amount of data transferred.
3. Each value must be processed and placed in a PHP variable or array. This is the step I'm guessing takes the time: it has to allocate memory, parse the value mysqld returned, and create a dynamic variable of a PHP type. Doing that 140,000 x [nr of columns] times adds up, whereas in the phpMyAdmin case you only do it 30 x [nr of columns] times. And since the LIMIT is used, the step above returns less data as well.
4. Only then can you access the value through PHP.
So if you aren't going to use all 140 000 rows that your query returns then either change your where or use LIMIT.
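A paging sketch using the query from the first post (the offsets are illustrative):

```sql
-- Rows 0-29 (page 1)
SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id
LIMIT 0, 30;

-- Rows 30-59 (page 2): LIMIT offset, row_count
SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id
LIMIT 30, 30;
```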
February 25th, 2004, 02:10 AM
Almost all posts are Active. Should it still be indexed if I do WHERE t2.cstatus != 'Inactive'?
OK, so I upgraded to 4.0.18 and used LIMIT with SQL_CALC_FOUND_ROWS to get the total rows. The query takes 10 seconds. Since this is way too long, I then tried a COUNT(*) instead and the query time has been reduced to 1-2 seconds. (I was using mysql_num_rows to get the total rows)
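For reference, the two-query approach sketched against the tables from the first post (my real query has more joins and columns):

```sql
-- Query 1: only the page of data actually displayed
SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id
LIMIT 0, 30;

-- Query 2: a lean count that can skip the display-only columns
SELECT COUNT(*) FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id;
```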
I think it's strange that SQL_CALC_FOUND_ROWS is the exact thing I should be using but gives WAY slower results. Pretty pointless it seems.
Anyway, thanks for your feedback and help.
February 25th, 2004, 08:38 AM
You will probably not benefit from an index, since != 'Inactive' in this case means = 'Active', and most rows are Active. The more selective the value you are searching for, the more you will benefit from an index. Think of the two extremes:
1. all values are the same
2. all values are unique (primary key).
Besides, you should be aware that writing != is usually slower than = [constant], since with != an index can't be used in the normal manner. Instead the server has to do an index scan and compare each value to see whether it is unequal to your criterion.
I agree with you that it is a bit strange that SQL_CALC_FOUND_ROWS combined with LIMIT still takes so long.
But even the count(*) sounds a bit slow.
What is the output if you try EXPLAIN SELECT ... ?
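That is, just prefix the query with EXPLAIN (shown here against the tables from the first post). Look especially at the `type` column (ALL means a full table scan), the `key` column (which index, if any, is used) and the `rows` estimate:

```sql
EXPLAIN SELECT * FROM table1 AS t1, table2 AS t2
WHERE t2.cstatus = 'Active' AND t2.field_id = t1.field_id;
```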
And how big is your KEY_BUFFER_SIZE compared to how much RAM you got on the machine. Can you increase KEY_BUFFER_SIZE and give mysqld more memory?
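A sketch of where that lives in my.cnf (the value is only an example — tune it to your RAM and workload; key_buffer is accepted as a synonym for key_buffer_size in option files):

```ini
[mysqld]
key_buffer_size = 256M
```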
February 25th, 2004, 11:13 AM
It seems the reason it takes so long is that the query contains several LEFT JOINs and I'm selecting a lot more fields for display. With the COUNT I take out all these unnecessary fields/tables, so it's a lot faster overall even with 2 separate queries.
The 1-2 sec result was on my local machine. On the main server it's much faster. But since you mention this KEY_BUFFER variable, I'd like to know if I should increase the value anyway. I don't have a KEY_BUFFER_SIZE in my.cnf but I see that KEY_BUFFER is set to 256M. I have 2G RAM shared with Apache etc. Do you think this value should be increased? Thanks.
February 25th, 2004, 11:26 AM
If it's already set to 256M then it probably won't do any good to increase it further. I just wanted to make sure you weren't running with the default 8M, which is too common.
Also, if you are using GROUP BY or ORDER BY on large result sets, you might check out the SORT_BUFFER_SIZE variable. It defaults to 2M; beyond that, MySQL starts writing a temporary file to disk, which degrades performance a lot.
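In my.cnf that would look something like this (the size is illustrative, not a recommendation):

```ini
[mysqld]
sort_buffer_size = 4M
```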
February 25th, 2004, 12:19 PM
Right. I just read on the MySQL site (http://www.mysql.com/information/pre...719/index.html) that one should check for 'Opened_tables' value and if it's large then it might be a good idea to increase table_cache. On my site MySQL has been up about a half day and Opened Tables is 60743. Is this a large value such that it would be good to increase table_cache to 512 given the circumstances?
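For the record, both numbers can be checked at runtime (table_cache is the MySQL 4.0 name of the variable):

```sql
SHOW STATUS LIKE 'Opened_tables';
SHOW VARIABLES LIKE 'table_cache';
```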
If I only use ORDER BY together with LIMIT, then this shouldn't matter, right?
Hmm, even my-huge.cnf has sort_buffer_size set to 2M. But I imagine since I use LIMIT for such ORDER BY requests that I don't need to increase this. Thanks again.