
BLU Discuss list archive



[Discuss] zpool iostat - Does this make sense?



I'm investigating a performance issue and trying to determine if something might be funny with ZFS.  I have two pools, each made up of six 300GB SCSI disks in a raidz1 config, at zpool version 15.  Each pool has a single filesystem (so two filesystems total).  'zpool iostat -v 5' shows the following pretty consistently:

                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
fast         1.11T   529G    113      0   472K      0
  raidz1     1.11T   529G    113      0   472K      0
    c4t0d0       -      -     83      0  4.10M      0
    c4t1d0       -      -     80      0  4.21M      0
    c4t2d0       -      -     86      0  4.40M      0
    c4t3d0       -      -     81      0  4.27M      0
    c4t4d0       -      -     81      0  4.06M      0
    c4t5d0       -      -     78      0  4.22M      0
-----------  -----  -----  -----  -----  -----  -----
fast2        1.05T   586G    105      0   509K      0
  raidz1     1.05T   586G    105      0   509K      0
    c4t8d0       -      -     81      0  3.90M      0
    c4t9d0       -      -     77      0  3.76M      0
    c4t10d0      -      -     83      0  3.90M      0
    c4t11d0      -      -     78      0  3.79M      0
    c4t12d0      -      -     79      0  3.72M      0
    c4t13d0      -      -     77      0  3.76M      0
-----------  -----  -----  -----  -----  -----  -----

Why would the per-disk bandwidth be so much higher than the bandwidth reported for the top-level device?  Is this expected, and how can I investigate further?
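For what it's worth, my back-of-the-envelope math: each disk is reading roughly 4MB/s, so around 25MB/s across the six spindles, yet the pool line only reports about 470K/s - call it a 50x gap.  The next things I was going to watch alongside zpool iostat, on the guess that they're the right tools for cross-checking this, are:

# read/write activity as seen at the ZFS filesystem layer
fsstat zfs 5

# raw per-device throughput, to compare against the per-disk numbers above
iostat -xn 5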

I have atime disabled on both pools, zpool status shows no errors, and prtpicl shows the sync speed for all disks to be 320k (as expected for SCSI).

CPU load is very low, and the network link appears to be fine.
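For the record, those checks amounted to something like the following (paraphrasing from memory; happy to post the actual output if it would help):

# confirm atime and pool health
zfs get atime fast fast2
zpool status -x

# CPU and network sanity checks
vmstat 5
netstat -i 5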

My user reports that this system is processing content at about 80 docs/sec, while a different box with all 12 disks in a single raidz1 is doing 200 docs/sec.  Additionally, the faster system has only 4GB of memory whereas the slower one has 16GB.  (I know, "remove the extra memory!")
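Since the memory difference is so large, one thing I was planning to compare between the two boxes - just a guess that caching is part of the story - is ARC size and hit/miss counters:

# ARC size plus hit/miss counters, to compare between the two boxes
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses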

Any help is appreciated - I only know enough Solaris to be dangerous.  :)

Thanks,
Dan



