I've just upgraded my 4.0.0 ESXi box to 4.1.
...Well, let's not call it upgrading. The process I used was to pull my existing boot drive, install a new 8 gig USB stick inside my Dell T710, and install a fresh copy of the Dell Installable 4.1 ESXi. This gives me a clean install, and all I have to do is reattach my storage, import my .VMX files, and I'm back in business.
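For anyone doing the same thing, the "import" step is just re-registering each VM once the datastore is visible again. A rough sketch from the Tech Support Mode console (the datastore and VM names below are made up, so substitute your own paths); you can do the same thing from the vSphere Client by browsing the datastore and picking "Add to Inventory":

    # check that the VMFS datastore came back after reattaching storage
    esxcfg-scsidevs -m
    # register an existing VM by pointing at its .vmx file
    vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
    # confirm it shows up in the inventory
    vim-cmd vmsvc/getallvms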
I once had a bit of a mess with the upgrade process from 3.0 to 3.5, so I settled on this method as the safest. I even pull my drives when I do the upgrade, so there is no chance the installer will become confused and format my main datastore. Am I paranoid? It's happened before, and you only have to burn me once.
The process isn't much more time consuming than an in-place upgrade. When your ESXi files are all on one drive/flash stick and your datastores are all on another, you've got plenty of flexibility. I was able to do it within 30 minutes, because I really don't change my ESXi configuration much from the defaults.
I'm very interested in 4.1 because it has some neat power management interfaces. Look here: I can now track the wattage this server is burning:
This is for a Dell T710 with 24 gig of DDR3 memory, two Xeon 5660s, four 600 GB SAS drives, and four 1 TB nearline SAS drives. I've yet to check how accurate the reading is with a Kill A Watt or similar meter, but it sounds about right.
I'm also interested in memory compression. When you're building redundancy with two ESX servers, you need very similar processor and memory configurations on your backup server, or you take a large performance hit, and that's not always easy to budget for. If I can get away with my Exchange servers still running for a couple of hours on a server with only 1/2 or 1/4 the memory of my main ESX box, then I'll be happy. It's got to be quicker to compress memory than to swap it to disk. We'll see; I'll be testing that later on.
Console access (Tech Support Mode) is now no longer an unsupported hack.
But really, the bit that interests me the most is that VMware is now putting its full weight behind ESXi: there won't be any more ESX! And to make the transition easier, you can now access vMotion from an ESXi 4.1 server. You still need licenses, but it's now nearly $10k cheaper to get at this technology.
With my Ghetto-SAN coming online any week now, I'm very excited about this development.
BUT - There seems to be a bit of a trade-off for the new things that 4.1 brings: it's just a tad slower than 4.0.0.
Once again, I do quick and dirty benchmarks to get a feel for things. PerformanceTest 6.1 isn't the best tool, but it's quick and makes nice, easy graphs to compare. After poring over reams of iozone stats in Excel, I sometimes like quick, easy, and pretty.
My process was simple: take PerformanceTest results from a running 2008 R2 server before the upgrade to 4.1, and again after (with the new VMware Tools loaded).
While 4.1 had better graphics performance (who cares!), it was around 2% slower on the memory and CPU tests.
That's a small price to pay for new features, and I'm hoping it's just a result of new technology being focused on stability first, performance second.
Anyone else run benchmarks that can confirm or deny this?
Tuesday, September 14, 2010
ZFS and NFS performance, with the ZIL disabled and/or cache flushes disabled
I'm building my new FreeBSD 8.1 SAN, and one of the first tasks is pulling decent NFS performance out of the box.
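For context, the VMs live on a ZFS filesystem that's exported over NFS to ESXi. Something along these lines, using the stock FreeBSD NFS server (the pool, dataset, and IP are placeholders, not my actual layout):

    # /etc/rc.conf - turn on the NFS server bits
    rpcbind_enable="YES"
    nfs_server_enable="YES"
    mountd_enable="YES"

    # /etc/exports - export the ZFS mountpoint to the ESXi host
    /tank/vmstore -maproot=root 192.168.1.10

After editing /etc/exports, reloading mountd (a SIGHUP) picks up the change; setting the ZFS sharenfs property on the dataset is another way to get to the same place.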
As you may be aware, NFS and ZFS don't mix well: NFS asks for a flush of the ZFS ZIL after each write, which is incredibly slow. It destroys all the caching and other speed enhancements that ZFS can bring to the table.
The quick and dirty response is to disable the ZIL. While this won't lead to pool corruption, it does remove one of ZFS's protection mechanisms, and as I'm building a very large SAN, I really don't want to hamper ZFS's ability to save my ass when data corruption issues come up.
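On FreeBSD 8.1 the switch is a loader tunable; if I'm remembering the knob correctly it's the zil_disable one, so treat this as a sketch rather than gospel:

    # /boot/loader.conf - turn the ZIL off entirely (sync write guarantees go away)
    vfs.zfs.zil_disable="1"

It's read at boot, so a reboot is needed before it takes effect.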
This is basically running NFS in async mode, which is what iSCSI effectively gives you. I want iSCSI performance with the stability of NFS sync.
I thought I'd play around with disabling the ZIL (makes me nervous), using loader.conf tunables, and adding an SSD as the ZIL.
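Adding the SSD as a dedicated ZIL (a separate log device) is a one-liner; the pool and device names below are hypothetical:

    # give the pool a dedicated log device on the SSD
    zpool add tank log da4
    # confirm the new layout
    zpool status tank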
NFS Tests on a ZFS RAID10
Notes:
All numbers are MB/sec, tests run twice (thus 2 numbers) then rebooted.
Tests are from PerformanceTest 6.1, which is easy for quick-n-dirty testing.
Tests are from a 32-bit Windows Server 2003 VM, going to FreeBSD 8.1, but who cares? They are only valid as a comparison on my own machine.
Without ZIL
Fileserver: 64.25, 65.97 MB/sec
Workstation: 9.52, 12.99
Database: 56.31, 56.98
Decent speed for a RAID-10 ZFS pool on SATA drives without any tweaking. It beats my C: drive, which is a SAS RAID-10 under ESXi, and is around the same speed as I was getting from iSCSI.
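For reference, a RAID-10 style pool in ZFS is just striped mirrors, built roughly like this (device names are placeholders, not my actual layout):

    # four SATA drives arranged as two mirrored pairs, striped together
    zpool create tank mirror da0 da1 mirror da2 da3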
With ZIL
Fileserver: 8.37, 6.53
Workstation: 2.51
Database:
Basically: much, much slower. I gave up after a few tests; it was so tedious I didn't want to continue. I've run these tests before, so I know it's slower across the board. At this speed iSCSI kicks NFS's ass.
NOTE: I did try the on-disk ZIL with vfs.zfs.cache_flush_disable=1, but the speed is basically just as bad. Besides, why wouldn't you want your ZIL on an SSD?
With ZIL on Intel X25-M SSD (32 Gig)
Fileserver: 61.38, 62.08
Workstation: 8.05, 7.66
Database: 23.07, 23.05
Hmm, this is faster. I wouldn't be too unhappy with this kind of performance. Database still suffers, though.
With ZIL on Intel X25-M SSD (32 Gig), vfs.zfs.cache_flush_disable=1
Fileserver: 54.69, 62.57
Workstation: 12.43, 9.54
Database: 54.2, 54.69
Hey - that's pretty good. Just a tad under ZIL-less operation.
Notes for the SSD tests: the SSD serving as the ZIL stayed around 50% busy, so the ZIL really is doing work.
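If you want to watch this yourself, gstat or zpool iostat will show the log device taking the beating during sync-heavy tests; the device name here is just an example:

    # watch the SSD's %busy while a test runs
    gstat -f '^da4$'
    # or watch per-vdev traffic, including the log device, every 5 seconds
    zpool iostat -v tank 5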
So all we have to do is tell ZFS to stop sending cache flush commands down to the drive. Edit your /boot/loader.conf to include vfs.zfs.cache_flush_disable=1, and you're off and running.
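In loader.conf syntax that's simply:

    # /boot/loader.conf - don't issue cache flush commands to the devices
    vfs.zfs.cache_flush_disable="1"

Like the other loader tunables, it takes effect on the next reboot.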
I believe newer ZFS pool versions behave this way anyway, so I'm really not too worried about it. If the data is on the ZIL, why do we need to flush it out to the drive? A crash at this point will still have the transactions recorded on the ZIL, so we're not losing anything.
BTW - it looks like ZFS v23 is coming to FreeBSD sooner than we expected, so this may all be moot; I seem to recall this behaviour was included around v18.
Final thoughts: never, never run a ZIL that isn't mirrored. If it dies, lots of bad things can happen... although I was able to shut down the system, turn off the ZIL in loader.conf, and boot without the SSD, so I think you could recover. I'll be testing how nasty things get with a ZIL destroyed mid-transaction tomorrow.
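When I do this for real, the log will go in as a mirror rather than a single device, roughly like this (hypothetical device names again):

    # add the ZIL as a mirrored pair of SSDs instead of a single device
    zpool add tank log mirror da4 da5
    # a single log device that's already attached should also be convertible into a mirror
    zpool attach tank da4 da5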
Tuesday, September 7, 2010
FreeBSD, pf, and fwanalog
I think I've fixed this, in case anyone was trying to run it and getting nonsense out: there were some issues with the tcpdump date format, a couple of small bugs, and so on.
I'll eventually post a patch, but for now just ask if you're having the same problems, I'll send you the updated fwanalog script.