While building and testing what will be a 24-drive SAN running FreeBSD 8.1, ZFS, and NFS/iSCSI, I discovered a problem with hot-swapping the SATA drives.
It seems FreeBSD 8.1 AMD64 was running the SATA drives in ATA mode, so they really didn't hot-swap. If I pulled a drive and reinserted it, I couldn't make the system understand that the drive was back, even if I fooled with atacontrol's attach and detach. I had to reboot, which is no way to run a SAN, no matter how ghetto it may be.
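For reference, the sort of atacontrol dance I tried looked roughly like this (the ata2 channel is just an example; 'atacontrol list' will show yours):

atacontrol list          # find the channel the drive hangs off, e.g. ata2
atacontrol detach ata2   # detach the devices on that channel
atacontrol attach ata2   # re-probe the channel after reinserting the drive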
A bit of poking showed that while my BIOS was set to AHCI (Intel ICH7R chipset), FreeBSD was still running in ATA mode. (Type 'atacontrol list'; if you see your drives listed, you are too.) camcontrol is the program you use once you're in CAM mode.
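A quick way to tell which driver owns your disks (device names are just examples):

atacontrol list       # old ata(4) driver: drives show up as ad4, ad6, ...
camcontrol devlist    # CAM/ahci(4) driver: drives show up as ada0, ada1, ...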
The answer was to put ahci_enable="YES" in /boot/loader.conf.
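If the ahci(4) driver isn't compiled into your kernel, it also needs to be loaded as a module; the ahci(4) man page gives the loader.conf line for that:

# /boot/loader.conf
# load the ahci(4) CAM driver at boot
ahci_load="YES"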
This changes your drive names from ad's to ada's, which causes a boot problem (the old ad device names in /etc/fstab no longer exist), but that's easily fixed.
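Here's what the /etc/fstab change looks like (slice and partition names are examples; yours will differ):

# old entry, ata(4) naming:
/dev/ad4s1a    /    ufs    rw    1    1
# becomes, under CAM/ahci(4):
/dev/ada0s1a   /    ufs    rw    1    1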
I did a quick and dirty test with 'raidtest' to show the speed difference. Here's my raidtest creation command:
raidtest genfile -s 128 -S 512 -n 50000
Here are the results:
With CAM: (ahci_enable="YES" in /boot/loader.conf)
iscsi# raidtest test -d /dev/zvol/tank/vol
Read 50000 requests from raidtest.data.
Number of READ requests: 24831.
Number of WRITE requests: 25169.
Number of bytes to transmit: 3286713344.
Number of processes: 1.
Bytes per second: 37162958
Requests per second: 565
Without CAM:
iscsi# raidtest test -d /dev/zvol/tank/vol
Read 50000 requests from raidtest.data.
Number of READ requests: 24831.
Number of WRITE requests: 25169.
Number of bytes to transmit: 3286713344.
Number of processes: 1.
Bytes per second: 6069384
Requests per second: 92
Big difference, eh? It's night and day. Turning CAM on then led to some ZFS issues, because ZFS likes to hit write/read stalls when you move a lot of data, but I cleared that up with some further ZFS tweaks that I will detail another day.
If you're not running with ahci_enable="yes" in your loader.conf, you may want to look at enabling it.
I"ll be doing more tests over the new few days on a few different FreeBSD machines, and we'll see what the general results are.