MySQL Users Conference

Daniel Berlin dberlin at
Fri Apr 18 06:50:31 UTC 2003

>> Optimizing MySQL
>> (Jeremy Zawodny)
>>    * MySQL is tuned for small-medium data sets and systems by default.
>>      If you have lots of memory you need to configure it to use it.
> Is there tuning we can do? bmo has massive amounts of RAM, right - it's
> just CPU bound.
Well, if you are CPU bound, and are using 4.x, among other things, you  
should turn on the query cache (it's disabled by default).

Particularly since cache invalidations (i.e. inserts) are much rarer  
than searches (i.e. selects), I would imagine, and there are probably some  
common queries that people always run that the cache would help with.

Add "set-variable = query_cache_size=32M" (or whatever size you like)  
to your my.cnf under [mysqld].
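
Once it's on, you can sanity-check that the cache is actually earning  
its keep from the server's status counters (a rough sketch; interpret  
the numbers against your own workload):

```sql
-- Confirm the cache is enabled and sized as expected
SHOW VARIABLES LIKE 'query_cache_size';

-- Qcache_hits growing much faster than Qcache_inserts means the cache
-- is paying off; lots of Qcache_lowmem_prunes means it's too small.
SHOW STATUS LIKE 'Qcache%';
```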

If you have multiple CPUs, you probably also want to add:
set-variable    = thread_cache=8
# Try number of CPU's*2 for thread_concurrency
set-variable    = thread_concurrency=8

I dunno if anyone ever tuned the MySQL on b.m.o. If not, you probably  
want to look at the my-(huge, large).cnf files that ship with MySQL  
(they're in /usr/share/mysql on my machine).
They are generally the settings you should be using as a starting point  
to tune from (huge is for machines with 1G-2G of RAM running mainly  
MySQL, large is for 512M machines running mainly MySQL).
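
For reference, the huge file sets its [mysqld] values roughly along  
these lines (the exact numbers vary by MySQL version, so check your  
own copy rather than trusting these):

```ini
# Approximate [mysqld] settings from my-huge.cnf (illustrative only)
[mysqld]
set-variable    = key_buffer=384M
set-variable    = table_cache=512
set-variable    = sort_buffer=2M
set-variable    = record_buffer=2M
set-variable    = thread_cache=8
set-variable    = thread_concurrency=8
```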

>>    * Compress data if possible.  gzip is quick, cheap, and makes  
>> things
>>      faster, but you can't use it if you want to search the data.
> That doesn't apply to us, except for attachments, and I don't think  
> that would help anything we do.
For GCC, our attachments are mainly huge preprocessed source files, so  
it helps quite a bit.
We get about a 70% reduction in attachment size.

If attachments are mainly text, compression might help. The changes to  
do it are trivial.
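
If you're on 4.1 or later you could even measure the likely win per  
attachment in SQL (COMPRESS() doesn't exist in 4.0, and I'm guessing  
at Bugzilla-style table and column names here):

```sql
-- Hypothetical: assumes MySQL 4.1+ COMPRESS() and an attachments
-- table with the raw data in a column called thedata.
SELECT attach_id,
       LENGTH(thedata)           AS raw_bytes,
       LENGTH(COMPRESS(thedata)) AS gz_bytes
FROM   attachments
ORDER  BY raw_bytes DESC
LIMIT  10;
```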

>>    * InnoDB can be slower with large data than MyISAM, but it's
>>      generally about as fast.
> I have that test case from a while back - someone want to point the
> mysql folks to it?
> mj_wwwusr?list=developers&brief=on&func=archive-get-part&extra=200302/118
> I am very interested to see if this can be tuned, or if 4.0 improves
> that.

It might; there were some problems in the range optimizer of 3.x with  
InnoDB, I believe.

At least, for me, on my bug db of 10500 bugs and 44463 comments, the  
query takes 12 seconds on InnoDB, and 4 seconds on MyISAM.
Explain tells me the *InnoDB* one uses the assigned_to index, while the  
MyISAM one doesn't, which is the opposite of what I would expect.

| table     | type  | possible_keys | key         | key_len | ref              | rows  | Extra                        |
| bugs      | index | assigned_to   | assigned_to |       3 | NULL             | 10148 | Using index; Using temporary |
| longdescs | ref   | bug_id        | bug_id      |       3 | bugs.bugs.bug_id |     2 | Using where; Distinct        |
2 rows in set (0.02 sec)


| table     | type | possible_keys | key    | key_len | ref              | rows  | Extra                 |
| bugs      | ALL  | assigned_to   | NULL   |    NULL | NULL             | 10417 | Using temporary       |
| longdescs | ref  | bug_id        | bug_id |       3 | bugs.bugs.bug_id |     4 | Using where; Distinct |
2 rows in set (0.00 sec)

Maybe the MyISAM query decides it's not worth using the index. Maybe  
it's right.

I can tell you it's consistently *not* using an index in the MyISAM one.

I'll also note that if I create a bug_id, who index, like so:

create index bug_id_who on longdescs(bug_id, who);

then both table types use the new index for the query, and the query  
takes 0.1 seconds on either MyISAM or InnoDB.

If this query is common, we should probably add this index.
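
It's also easy to double-check that the optimizer actually picks the  
new index up; the query shape below is my guess at the one under  
discussion, so adjust it to the real thing:

```sql
-- Rerun EXPLAIN after creating the index and look for
-- key = bug_id_who in the longdescs row of the plan.
EXPLAIN
SELECT DISTINCT bugs.bug_id
FROM   bugs, longdescs
WHERE  bugs.bug_id = longdescs.bug_id;
```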
