Monday, March 25, 2013

Storage - software RAID performance test I

This weekend I set out to find the best storage solution for a low-cost iSCSI server. I ended up analyzing the read performance of mirrors and stripes provided by software RAID on SATA disks, as this would be the most important property of my new server.

I would more easily get the results I need by investing in a proper hardware RAID controller, but that is not an option for this server. A low-cost server system has been ordered, but for this test I am limited to what I could find in the workshop: an early i7 system with the Intel P55 Express chipset, a small disk suitable as a system disk, and 2 Seagate ST3250312CS 5900 rpm drives to use for my iSCSI target LUNs.


I am also somewhat limited when it comes to OS choices. As I want to be able to snapshot and clone my iSCSI target disks, my options are either Microsoft iSCSI Software Target on Windows 2008R2/2012, or any target software running on top of a storage layer that supports snapshots and cloning. The latter could be LVM on Linux, or ZFS on Linux or on any Solaris-compatible OS.
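
To illustrate what I mean by snapshots and cloning: with ZFS an iSCSI LUN can be backed by a zvol that is snapshotted and cloned in seconds, and LVM offers snapshot logical volumes. A minimal sketch, with pool, volume group and volume names that are just examples and not my actual setup:

# ZFS: create a zvol to back an iSCSI LUN, snapshot it, clone the snapshot
zfs create -V 100G tank/lun0
zfs snapshot tank/lun0@golden
zfs clone tank/lun0@golden tank/lun0-clone

# LVM on Linux: a snapshot logical volume of an existing LUN volume
lvcreate --snapshot --size 10G --name lun0-snap /dev/vg0/lun0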

I was kind of hoping that my results would point in the direction of an open source solution, as I have already tested Microsoft iSCSI Software Target and would love to jump into something different.

As most disk IO on an iSCSI server is read, not write, I have tested only this side of the story. I was hoping that the theory about higher read performance from a mirrored set of disks would hold true, giving me the option of trading the higher write performance of a striped set for the redundancy of mirrored disks. If mirrored disks do not give me the performance I need, it becomes a question of which software RAID gives the highest read performance from a striped set of 2 drives.

I tested by reading a ~15GB file into memory. On Windows, I used readfile from Winimage; on *nix I used 'time cat filename > /dev/null'. From the total time taken to read the file, I calculated throughput, and repeated the test 3 times. The numbers below are averages. The file was read fully sequentially, and the disk cache was cleared before each read.
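
For the Linux runs, the whole procedure can be sketched roughly like this (the file name is an example; a remount or reboot also does the job of clearing the cache):

# Flush dirty pages and drop the page cache so the read actually hits the disks
sync
echo 3 > /proc/sys/vm/drop_caches

# Time a fully sequential read of the test file
time cat /mnt/test/bigfile > /dev/null

# Throughput in MB/s = file size in MB divided by the elapsed (real) time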

Results on a single disk:

OS            BIOS mode   FS     MB/s
Win2012       AHCI        NTFS   118.9
CentOS 6.4    AHCI        ext4   120.8
OmniOS 5.11   AHCI        ZFS    117.1

Somewhat disappointing speed here on the illumos-based OpenSolaris fork. The Windows test was performed using Intel's disk controller drivers.
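
For the record, striped and mirrored sets like the ones in the following tests can be created roughly like this with Linux md and ZFS (device names are examples, and only one layout exists on the disks at a time):

# Linux md: a striped (RAID0) or mirrored (RAID1) set of the two Seagate drives
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# ZFS on OmniOS: a striped or mirrored pool of the same two drives
zpool create tank c2t1d0 c2t2d0
zpool create tank mirror c2t1d0 c2t2d0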

Results on striped disks:

OS / RAID software       BIOS mode   FS     MB/s
Win2012, native RAID     AHCI        NTFS   199.3
Win2012, Intel RAID      RAID        NTFS   196.7
CentOS 6.4, Linux RAID   AHCI        ext4   199.3
OmniOS 5.11, ZFS         AHCI        ZFS    184.9

It is interesting to note that the native RAID system in Windows performed better than Intel Matrix Storage Technology. However, I was disappointed by the result for ZFS.

Results on mirrored disks:

OS / RAID software       BIOS mode   FS     MB/s
Win2012, native RAID     AHCI        NTFS   114.5
Win2012, Intel RAID      RAID        NTFS   104.4
CentOS 6.4, Linux RAID   AHCI        ext4   117.1
OmniOS 5.11, ZFS         AHCI        ZFS    149.0

Here lies the big surprise. ZFS was the only RAID system to read faster from mirrored disks than from a single disk. Again, Intel's software performed terribly.

The results for my mirrored setup were not what I expected. I am aware that Intel has newer chipsets that perform better, but why can't the other software RAID solutions manage to read from 2 drives at the same time, when ZFS obviously does?
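
One thing I want to check on the next round is whether both mirror members actually service reads during a single sequential stream. Watching per-device activity while the test runs should answer that; a sketch, assuming the standard iostat tools on both platforms:

# Linux: per-device utilization, updated every second, while the read is running
iostat -x 1

# OmniOS / illumos: same idea
iostat -xn 1

If only one of the two drives shows read activity, the RAID layer is serving the whole stream from a single disk.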

I realize I will have to test this again on the actual server I will use, before I can make a choice. Perhaps I will throw in a 3rd disk and add RAID5 to the list. We'll see. Stay tuned.