Wednesday, March 27, 2013

Storage - software RAID performance test II

My new low-cost server arrived yesterday, and as promised earlier, I am taking it for a RAID!

The server will be used as an iSCSI target in a test environment, serving 20+ iSCSI-booting clients. It is a Fujitsu Primergy TX100 S3p, which is perfect for the job considering the price.

I must say, it was love at first sight with this little server. It is amazingly inexpensive, and almost silent despite having 3 fans (not counting the PSU fan), 2 of which are cooling my disks. It's got a 4-core Xeon processor (E3-1220V2 at 3.1GHz), 8GB memory (32GB max) and an Intel C202 chipset. It's got dual Gbit Intel LAN and 6 x 6Gbit/s SATA connectors, which matters because I need the highest read speed from my disks that I can get. The server comes with 2 disks installed; I added a Seagate ST3250312CS as my system disk. We'll see if the love persists through my tests.

There are a few differences from my previous software RAID test. My new server does not offer any Intel Rapid Storage Technology; instead it has LSI MegaRAID onboard. Also, considering my test results from last time, I will use CentOS for my ZFS tests as well. It seems the Linux drivers get better raw speed out of each disk, and I want to take advantage of that.
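For reference, the Linux software RAID arrays in tests like these are typically built with mdadm. A minimal sketch, assuming the two data disks show up as /dev/sdb and /dev/sdc (the device names and mount point are my assumptions, not from the actual setup):

```shell
# Striped (RAID-0) array across the two data disks -- device names assumed
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Or a mirrored (RAID-1) array instead
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Format and mount as usual
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid
```

These commands require root and real (empty!) disks, so treat them as a sketch rather than something to paste in blindly.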

Unfortunately, I don't have 3 identical disks for testing a RAID-5 setup this time. The test results wouldn't be comparable if I put a random third disk in there. It would have been interesting to see how a RAID-5 array performs, but that will have to wait for another time.

Also, this time I put the new Windows 2012 Storage Pool to the test, as it's another way of creating software RAID setups on Windows. It has some nice new features that are not relevant here; I just wanted to see if it behaves differently from native Windows RAID when it comes to performance. I was kind of hoping MS had read some ZFS test reports and managed to implement simultaneous reading from mirrored drives.

To get a better reference, I put in an old hardware RAID controller that I had earlier pulled from one of my ESXi servers due to terrible performance. It's an HP Smart Array E200 controller, which I hope will give me some good numbers with Windows drivers. I won't be using this controller in the end; it's just to add some numbers to the list.
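The numbers below are sequential read throughput. I haven't stated the exact tool here, so purely as a hypothetical example, a simple sequential-read test on Linux can be run with dd against the raw device (bypassing the page cache), or with hdparm's built-in timing test:

```shell
# Sequential read from the raw device, bypassing the page cache.
# /dev/md0 is assumed to be the array under test.
dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct

# Quick alternative: hdparm's built-in read timing
hdparm -t /dev/md0
```

Both need root, and dd with iflag=direct is the more honest of the two since it rules out caching effects.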

Results on a single disk (sequential read, MB/s):

Setup                   Controller    FS    MB/s
Win2012                 AHCI          ntfs  142.1
Win2012 LSI MegaRAID    RAID          ntfs  140.8
CentOS 6.4              AHCI          ext4  141.3
CentOS 6.4 ZFS          AHCI          zfs   139.5
Win2012 E200            -             ntfs  130.0

Results on striped disks (sequential read, MB/s):

Setup                   Controller    FS    MB/s
Win2012 native RAID     AHCI          ntfs  279.0
Win2012 Storage Pool    AHCI          ntfs  274.0
Win2012 LSI MegaRAID    RAID          ntfs  274.0
CentOS 6.4 Linux RAID   AHCI          ext4  255.7
CentOS 6.4 ZFS          AHCI          zfs   243.5
Win2012 E200            -             ntfs  239.7

Results on mirrored disks (sequential read, MB/s):

Setup                   Controller    FS    MB/s
Win2012 native RAID     AHCI          ntfs  139.5
Win2012 Storage Pool    AHCI          ntfs  140.8
Win2012 LSI MegaRAID    RAID          ntfs  113.6
CentOS 6.4 Linux RAID   AHCI          ext4  140.5
CentOS 6.4 ZFS          AHCI          zfs   189.4
Win2012 E200            -             ntfs  90.3

It should be obvious that the HP Smart Array E200 controller is crap. Not only is the raw read speed bad, it even loses 30% speed on a mirror array compared to a single disk. That's even worse than the onboard MegaRAID, which loses about 20%.

Again, like in my first test, ZFS is the only RAID software that increases read speed on mirrored disks, and it does so by about 35%.
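That gain comes from ZFS distributing reads across both sides of the mirror. For reference, a ZFS mirror like the one tested is created with a single zpool command; a minimal sketch, where the pool name and device names are my assumptions:

```shell
# Create a mirrored pool from the two data disks -- names are assumed
zpool create tank mirror /dev/sdb /dev/sdc

# Verify the layout and health of the pool
zpool status tank
```

Again, this needs root and real disks, so it's a sketch of the setup rather than a runnable snippet.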

For striped disks, Windows seems to be the clear winner on this server. I am kind of lost as to why my Linux test fell behind here, as it performed well against Windows both on a single disk and on the mirrored disks.

It is interesting that the new Windows 2012 Storage Pool performs on par with Windows native RAID setups, both on mirrored drives and on a stripe. It is definitely something to look into for the future.

For my iSCSI target server, I will choose Windows with striped disks. I need as much throughput as I can get to be able to serve data over a dual Gbps LAN connection, which could possibly reach up to about 235 MB/s.
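As a sanity check on that 235 MB/s figure: two 1 Gbit/s links carry 250 MB/s of raw bandwidth, and about 235 MB/s is what remains after Ethernet/TCP/iSCSI framing overhead (the roughly 6% overhead figure is my assumption):

```shell
# Raw bandwidth of a dual 1 Gbit/s LAN connection, in MB/s
raw=$(( 2 * 1000000000 / 8 / 1000000 ))
echo "$raw MB/s raw"          # 250 MB/s

# Subtract roughly 6% for Ethernet/TCP/iSCSI framing overhead (assumed figure)
usable=$(( raw * 94 / 100 ))
echo "$usable MB/s usable"    # 235 MB/s
```

So the striped array's 274-279 MB/s is comfortably above what the network can actually move.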