Sunday, January 9, 2011

Solaris 11 Express - Faster SAN/NFS/VM Performance than FreeBSD

I love FreeBSD. It's been my go-to OS since I first switched to it from BSDi many many years ago. However, I hate to stick with something just because I've always used it. Technology changes too quickly to make lifelong bonds. While I prefer FreeBSD, I ultimately will choose what gets the job done the best.

ZFS under FreeBSD has always progressed nicely, but it obviously trails Solaris, seeing how Sun/Oracle are the creators of ZFS. The ZFS v28 patch is currently being tested against FreeBSD 8.2 and 9, but hasn't been released yet.

While I realize these are pre-releases, I wanted to test the performance I could expect from FreeBSD vs Solaris for the SAN I'm building. Once I put this SAN into regular use, I won't easily be able to switch to another OS or do a lot of tweaking on it.

This is not just a test of ZFS implementations - it exercises the hard disk subsystem, memory, NFS, the ethernet driver, and ZFS together, giving one combined result, which is what I really care about: how fast my SAN can accept and serve up data to my ESXi boxes.

I've separately been benchmarking FreeBSD 8.2-PRE and 9.0-CURRENT with both the v28 ZFS patch and their native v15 ZFS implementations. I'll make that a separate blog entry a bit later on, but for now, I'm choosing the fastest performer from the v28 portion of those tests, which was FreeBSD 9.0-CURRENT.

My test environment is as follows:

ESX Box: Dell PowerEdge T710 w/96GB RAM
SAN Box: Dell PowerEdge R710 w/24GB RAM
Network: Intel X520-DA2 10GbE adapter in each box, direct attached
SAS Card: Areca 1880ix w/4GB cache
Enclosure: SuperMicro SC847-JBOD2 (dual 6Gbps backplanes)
Drives: Seagate 1.5TB SATA, 5-disk raidz, no log or cache
Test Software: Performance Test 6.1 run on a Windows 2003 VM
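
For reference, a pool like the one above is just a single 5-disk raidz vdev with no extras. A rough sketch of how it's built (the "tank" pool name and da0-da4 device names are placeholders, not my actual layout):

    # Create a 5-disk raidz pool with no separate log or cache device,
    # matching the test configuration listed above
    zpool create tank raidz da0 da1 da2 da3 da4

    # Verify the vdev layout
    zpool status tank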


Solaris 11 Express snv_151a

903 MB/s - Fileserver
466 MB/s - Webserver
53 MB/s - Workstation
201 MB/s - Database

FreeBSD 9.0-CURRENT @ Dec 12th 2010 w/v28 patch, all debugging off

95 MB/s - Fileserver
60 MB/s - Webserver
30 MB/s - Workstation
32 MB/s - Database


Wow. Notice the difference? I didn't believe it myself at first, but I tried it again and again. I watched the packet traffic across the Intel card with iftop, and then I ran separate benchmarks with Iometer just to make sure it wasn't something silly in Performance Test. I always received similar results - Solaris 11 Express can move some serious data in comparison to FreeBSD 9.0.
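
If you want to sanity-check the link the same way, iftop pointed at the 10GbE interface does the job (the interface name is a guess - the Intel card typically shows up as ix0 on FreeBSD, ixgbe0 on Solaris):

    # Watch live traffic on the 10GbE link during a benchmark run
    iftop -i ix0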

At this stage, I don't know whether it's the Intel driver holding FreeBSD back or not. You'll notice that the Workstation test is the only one that is comparable between FreeBSD and Solaris - it's made up of a lot of sync random reads/writes, which bring any ZFS implementation to its knees.
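
For what it's worth, the usual way to take the edge off those sync writes is a dedicated log device (SLOG), which this test pool deliberately omits. Something like the below, with ada4 standing in for a hypothetical fast SSD:

    # Add a dedicated ZIL log device to absorb sync writes
    # (ada4 is a placeholder SSD device name)
    zpool add tank log ada4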

The thing is: I don't care if it's the Intel driver, the memory subsystem, or whatever - in the end, I need the best performance delivered to my ESXi boxes, and Solaris 11 Express can deliver where FreeBSD isn't ready just yet.

I do fully expect FreeBSD to catch up, but for now, I'm strongly considering spending the $$ on a Solaris 11 Express license so I can run it as my SAN OS. Solaris' easy CIFS implementation is another bonus for my SAN - it was delivering 78 MB/s when my FreeBSD/Samba setup was at 35 MB/s.
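
For the curious, "easy" really is the right word for CIFS on Solaris - the SMB server lives in the kernel and shares hang right off the dataset. Roughly like this (the pool and share names are made up):

    # Enable the in-kernel SMB service plus its dependencies
    svcadm enable -r smb/server

    # Share a dataset directly via its ZFS property
    zfs set sharesmb=name=media tank/media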

3 comments:

  1. Hi Chris,

    I am wondering about your setup with Solaris 11 Express using the supermicro mb and chassis + areca 1880.

    Have you run into any issues with that particular setup?

We run a setup just like yours (same mb and hba), but have the entire zpool lose/drop its connection from time to time.

    do you have any weird areca warnings? or issues with the backplane?

    thanks!

  2. Hi NCMIR,

    It went poorly, to say the least. I've not had much time to post lately, but I'm back on a 100% FreeBSD setup, and I'm only using LSI 2008-based SAS controllers now - my Areca 1880 will be going up on eBay shortly.

    BTW, I also had problems with my Areca 1880 and the SuperMicro 847 chassis with SAS backplanes - I could never get it working just right, but now that I'm on LSI 2008 controllers, it's fine. I know that's vague; I hope to have time to expand on my problems there soon.

    Back to Solaris: I could not get the Areca to be stable under Solaris, even while working with tech support. They released a few beta versions for me to test, but how often do you want to lose your pool before you just stop using the hardware?

    I transitioned to LSI 2008-based cards, and then I _still_ had a pool go corrupt on me (!!!). That wasn't the fault of the controller (I was used to the Areca hangs by that point), but I couldn't take another loss.

    I went back to FreeBSD 9-CURRENT, and have been running it with _far_ fewer problems since then.

    One of the first things I did was hack the NFS server in FreeBSD to only do async writes - I run ESX over NFS on ZFS, and that's a bad combination. With that hack in place, my FreeBSD performance is close enough to Solaris that I don't mind it.
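
    (For anyone who can't wait on the hack itself, two stock knobs give a similar unsafe-but-fast trade-off; the dataset name below is made up.)

        # With the v28 ZFS bits, drop sync semantics on the datastore:
        zfs set sync=disabled tank/vmstore

        # Or, with the old FreeBSD NFS server, use its async sysctl to
        # acknowledge stable writes before they hit disk:
        sysctl vfs.nfsrv.async=1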

    I really should release that hack sometime soon; it's pretty easy, and it got me thinking about rebuilding the NFS server for FreeBSD to be more friendly to ESX - following symlinks and ZFS mountpoints, for instance, so we're not stuck with the 64-NFS-mount maximum... but I digress.

    So you need to decide, but you can't have the Areca 1880 and Solaris together. Solaris and LSI is fine, FreeBSD 9 and Areca is fine, but not the two together... not if you care about your data.

    I will post some followups on this shortly..

  3. I experienced the same results as Christopher using 8.2-RELEASE, 8.2-STABLE and 9.0-CURRENT (all with debugging disabled). I tried stock settings and "evil tuning guide" settings.

    All tests were significantly faster on Solaris 11 Express (completely stock settings), especially iSCSI, which was 95 MB/s vs 70 MB/s on FreeBSD. The only exception was zfs send and receive over ssh, which was faster on FreeBSD using the openssh-portable port and its patches.
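
    (For context, the send/receive path I timed is the usual pipe over ssh - host, pool, and snapshot names below are placeholders:)

        # Snapshot, then replicate the stream to another box over ssh
        zfs snapshot tank/data@nightly
        zfs send tank/data@nightly | ssh backuphost zfs receive -F backup/data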
