List:       mysql
Subject:    Re: MySQL Closing/Opening tables
From:       Baron Schwartz <baron@xaprb.com>
Date:       2009-02-28 15:24:59
Message-ID: 4cfa0b030902280724m717b55cbw9ffc1882b5f0999e@mail.gmail.com

Hi,

On Fri, Feb 27, 2009 at 2:51 PM,  <dbrb2002-sql@yahoo.com> wrote:
> Thanks for the quick followup Baron..
>
>  vmstat
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
>  3  0    100 499380 139256 5604000    0    0   190   693   11   11 20  2 70  8  0

The first line of vmstat output shows averages since boot; you need
to let it run for several iterations to see what's happening NOW.
But even on average you're spending 8% of CPU time waiting for I/O,
which may be significant.  If you have, for example, a 4-core system
and all of that wait falls on one core, that core is spending 32% of
its time waiting.  And that's an average, which means the peaks are
higher.  If you run "mpstat -P ALL 5" you will be able to see the
iowait for each CPU or core.  But based on the iostat output you
pasted, I can pretty much predict you're going to see high I/O wait.
Looking at iostat, I can see your await (the average wait time per
I/O request) is pretty long.  I like to see await in the
low-single-digit ms range.  And your utilization percent is
reasonably high too.  All this while not doing many writes per
second, with a short disk queue, at a non-peak time.  Look at iostat
during the times of high stalls and I bet you'll see the problem clearly.
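
For example, something like this will show you the live numbers (just
a sketch; the exact columns vary a bit between procps and sysstat
versions):

    # report every 5 seconds; ignore the first sample, which is the since-boot average
    vmstat 5

    # per-CPU utilization, including %iowait for each core
    mpstat -P ALL 5

    # extended per-device statistics: await, avgqu-sz, %util
    iostat -x 5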

I think the answer is probably that you have slow disks.  Get more and
faster disks.  If you need high performance, upgrade to a RAID array
with a battery-backed write cache on the controller, set to writeback
policy.  Get 10k or 15k RPM disks.  You'll see *much* higher
performance.  A typical Percona client has an LSI MegaRAID card (the
Dell PERC is basically the same thing) with 4 or 6 15k RPM 2.5"
server-grade SAS drives in a RAID 10 array.  You don't need to go this
high-end -- maybe you can get perfectly fine performance with three
7200RPM or 10kRPM disks in RAID 5, I don't know, that's up to you.  It
would be cheaper and if it's good enough, that's great.
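
If you already have an LSI MegaRAID or Dell PERC controller, it's
worth checking what policy it is actually using.  Roughly like this
with MegaCli (a sketch -- the binary may be called MegaCli, MegaCli64
or megacli depending on how it was packaged, and the flag spellings
vary a bit between versions):

    # show logical drive info, including the "Current Cache Policy" line
    MegaCli64 -LDInfo -Lall -aALL

    # switch to WriteBack -- only safe with a healthy battery-backed cache
    MegaCli64 -LDSetProp WB -LALL -aALL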

We always benchmark drives to make sure they are installed correctly.
Here's a set of benchmarks with iozone from a recent job that has this
setup.  You may need to view this in fixed-width font:

./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D

        Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 4096 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                             random  random    bkwd   record  stride
              KB  reclen   write rewrite     read   reread     read   write    read  rewrite    read
          131072      16     175    3184    73943    72501    73101     932   74864     3232   72611
          262144      16     173    3214    74557    73412    73468    1120   74673     3280   73020
          524288      16     181    3266    75108    72978    72991     912   74291     3268   72524
         1048576      16     186    3267    74741    73103    72578     769   74096     3271   73487
         2097152      16     184    3267    74730    73474    72316     645   38541     3035   73862

Look ma, only 175 writes per second!  Slow as a dog!  So I checked the
RAID configuration and found out that the hosting provider had
mistakenly set the controller to WriteThrough policy... after fixing
that, look at the difference:

        Command line used: ./iozone -c -n 128M -g 32G -r 16k -k -l -o -O -P -T -f -x -S 4096 -a C 1 +D
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 4096 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                             random  random    bkwd   record  stride
              KB  reclen   write rewrite     read   reread     read   write    read  rewrite    read
          131072      16    3260    7215    76574    75236    74495    1396   75960     7299   74788
          262144      16    3122    7342    76255    75272    73661    1259   75304     7422   74755
          524288      16    3118    7346    76156    75214    73629    1114   75390     7400   74256
         1048576      16    3112    7454    75981    74478    73206    1029   75029     7571   73901
         2097152      16    3110    7468    76100    74780    73218     926   75292     7573   74316

That's more like it.  Over 3000 synchronous writes per second (TO
DURABLE STORAGE, not just the OS cache), while keeping await in the
1 to 3 ms range (as reported by iostat; not shown here).  This is
what I consider to be a "real" disk subsystem for a database server
:-)  You might consider benchmarking your own disks the same way to
see what level of performance they can achieve.  It is always worth
doing IMO.
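
If you don't have iozone handy, sysbench's fileio test gives you
similar numbers.  Roughly like this (a sketch -- option names differ
a little between sysbench versions, so check --help on yours):

    # lay down 4GB of test files (use more than your RAM so the OS cache doesn't help)
    sysbench --test=fileio --file-total-size=4G prepare

    # random synchronous writes, fsync after every request
    sysbench --test=fileio --file-total-size=4G --file-test-mode=rndwr \
        --file-fsync-freq=1 --max-time=60 --max-requests=0 run

    sysbench --test=fileio --file-total-size=4G cleanup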

Dan Nelson's advice is also right on target.  And Eric Bergen's too,
hi Eric :)  I disagree with Mr. Musatto -- just because you're doing
more writes than reads doesn't mean your write load is high.  70, 80,
even 126 writes per second is trivial unless you have a single slow
disk, and the await you showed us is high, not low.  You've got some
5-second periods where the average wait is nearly 1/5th of a second,
and that's an average, so the spikes are even higher.  How many
writes per second do you expect your disks to do at that latency?  A
database server needs to do a lot more than 5 or 10 writes per second.
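
To put a number on it: if each write waits roughly 200 ms at the
disk, a strictly serialized writer can only complete about
1000 / 200 = 5 writes per second; even with some concurrency, that is
nowhere near the 3000+ synchronous writes per second the benchmark
above shows a decent array can sustain.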

I would also install some monitoring/trending system like Cacti or
Munin so you have historical data to look back at when you're trying
to work out what's happening on your systems.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql

