I've read that selecting the correct number of drives when building a raidz ZFS array is important if you are trying to maximize performance.
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-September/044701.html
Of course the best way to maximize performance is to use a straight stripe or a mirror instead of a raidz, but that isn't always an option when you have price, power, or physical space restrictions.
Since I'm perpetually curious, I set up tests to compare the write speed of a 3, 4, 5, 6, and 7 drive raidz array.
I used my saturate.c program to put the arrays under heavy write load, repeated each test 6 times, and took the average and standard deviation.
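For anyone who wants to run a similar experiment, here is a minimal sketch of the kind of write-saturation loop involved. This is not my saturate.c (which isn't posted here); it just streams large, non-compressible sequential writes to a file on the pool under test and reports throughput, which is enough to keep a raidz vdev busy during a measurement.

```c
/* Minimal write-saturation sketch (not the original saturate.c).
 * Usage: ./saturate /tank/testfile <gigabytes> */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLOCK_SIZE (1024 * 1024)   /* 1 MiB per write() */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <path> <gigabytes>\n", argv[0]);
        return 1;
    }

    long long total = atoll(argv[2]) * 1024LL * 1024LL * 1024LL;
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char *buf = malloc(BLOCK_SIZE);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 0xA5, BLOCK_SIZE);  /* non-zero so compression can't cheat */

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    long long written = 0;
    while (written < total) {
        ssize_t n = write(fd, buf, BLOCK_SIZE);
        if (n < 0) {
            perror("write");
            return 1;
        }
        written += n;
    }
    fsync(fd);   /* make sure the data actually reached the disks */
    close(fd);

    clock_gettime(CLOCK_MONOTONIC, &end);
    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("wrote %.1f MiB in %.2f s (%.1f MiB/s)\n",
           written / 1048576.0, secs, written / 1048576.0 / secs);

    free(buf);
    return 0;
}
```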
My results are not clean enough to post, but they do suggest that the number of drives is important. Follow the rules below:
RAIDZ1 vdevs should have 3, 5, or 9 devices in each vdev
RAIDZ2 vdevs should have 4, 6, or 10 devices in each vdev
RAIDZ3 vdevs should have 5, 7, or 11 devices in each vdev
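As I understand the reasoning in the linked thread, the pattern behind these numbers is that the data disks (total minus parity) should be a power of two, so that records divide evenly across them. The tiny C snippet below is just an illustration of that arithmetic; it doesn't touch ZFS at all.

```c
/* Illustrates the pattern behind the rules above: a power-of-two
 * number of data disks plus the parity level. */
#include <stdio.h>

int main(void)
{
    for (int parity = 1; parity <= 3; parity++) {
        printf("RAIDZ%d:", parity);
        for (int data = 2; data <= 8; data *= 2)
            printf(" %d", data + parity);
        printf(" devices per vdev\n");
    }
    return 0;
}
```

Running it prints 3, 5, 9 for RAIDZ1; 4, 6, 10 for RAIDZ2; and 5, 7, 11 for RAIDZ3, matching the lists above.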
With some luck, I hope to have the time to revisit the tests and obtain results that are postable.