An All SSD Rig and FreeNAS


jpaetzel

Guest
A few weeks ago HGST dropped off a box of SSDs at iXsystems. In the box of goodies were 16 STEC S842E800M2 6 Gbps SAS SSDs. We built up a rig to give them a try.

The system consisted of:

2x E5-2650 v2 CPUs
32 GB ECC RAM
2x Chelsio T4 dual-port 10GbE NICs
16x STEC S842E800M2 800 GB SAS SSDs
1x 32 GB SATA boot device
Running FreeNAS 9.2.1.3

The pool config was a 14-drive stripe: unusable in production, but great for testing.
Notice we burned two drives for the ZIL, because we want to do some NFS testing as well.

[root@truenas] /mnt/tank/iozone# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        tank                                          ONLINE       0     0     0
          gptid/b38a6eef-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b3aa2683-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b3caa6e3-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b3eab1a1-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b409e586-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b42970f6-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b44acbfd-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b469bc46-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b48a07b7-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b4aa7822-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b4cb95f9-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b4ec193a-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b50cc42b-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/b52de704-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
        logs
          gptid/f0b5fa20-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
          gptid/f5c624a8-c99f-11e3-9f9c-0025902db07c  ONLINE       0     0     0
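
For reference, a layout like this could be built by hand with something along the lines of the following. This is just a sketch with placeholder da device names; the pool above was created through the FreeNAS GUI, which labels the members by gptid.

# 14-drive stripe plus a striped (two-device) log vdev -- test config only
zpool create tank da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 \
    log da14 da15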

The pool was configured with compression off, primarycache set to metadata, and checksums off (also, don't try that at home).
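
For anyone wanting to reproduce those (test-only!) settings, the CLI equivalent is roughly:

zfs set compression=off tank
zfs set primarycache=metadata tank
zfs set checksum=off tank    # never do this on data you care about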

The purpose of my testing was to explore the differences between various controllers, so the settings were chosen to expose those differences.

First iteration consisted of 2 LSI 9217-8i HBAs with IT firmware, direct-wired to the 24 2.5" drive slots in the chassis. This gave every drive its own discrete 6 Gbps controller port, so 16 controller ports and 96 Gbps of total controller bandwidth.
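
As a sanity check before benchmarking, it's worth confirming which HBA each SSD landed on. On FreeBSD something along these lines does it (the 9217-8i is handled by the mps(4) driver, so the two HBAs show up as mps0 and mps1):

# List every CAM-attached device and the scbus it hangs off
camcontrol devlist -v
# Map those buses back to the two HBAs
dmesg | grep mps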


I used iozone for all of the tests; the command line was:

iozone -r 128 -s 40g -t 8 -i 0 -i 1 -i 2

This selected the sequential read and write tests and the random read and write tests, using 8 threads of 40GB each.
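
Broken out, the flags do the following (the comments are my reading of the iozone options; the actual run was exactly the one-liner above):

# -r 128 : 128 KB record size
# -s 40g : 40 GB file per thread (8 x 40 GB = 320 GB, far more than the 32 GB of RAM)
# -t 8   : throughput mode with 8 parallel threads
# -i 0   : write/rewrite test
# -i 1   : read/reread test
# -i 2   : random read / random write test
iozone -r 128 -s 40g -t 8 -i 0 -i 1 -i 2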

Sequential Write: 4398 MB/sec
Sequential Read: 2152 MB/sec
Random Write: 4074 MB/sec
Random Read: 1395 MB/sec

Notice write performance is higher than read performance. That's a definite WTF that will need more exploration, maybe related to checksums being off.

Next the chassis was switched to a 12 Gbps single-expander backplane, wide-ported to an LSI 9300-8i 12 Gbps HBA. This provided 8 controller ports and 96 Gbps of bandwidth to the drives, so the controller port count dropped by half while total bandwidth stayed the same. LSI claims this controller has a faster CPU than the 9207, but I wasn't sure whether that was enough to make up for a 50% drop in controller ports. (For local tests the ZILs aren't used, so 14 drives.)

It turns out the faster controller was not able to make up for the drop in port count:

Sequential Write: 3763 MB/sec
Sequential Read: 2222 MB/sec
Random Write: 3522 MB/sec
Random Read: 1469 MB/sec

The roughly 500-600 MB/sec drop in write performance was unsurprising. The slight increase in read performance is a bit more interesting.

I'm going to move the system to a 6Gbps expander tomorrow and wide port a 9207-8i to it. This will give 48 Gbps of bandwidth and 8 controller ports. Since my highest bandwidth readings so far have been 4398 MB/sec I shouldn't be bandwidth limited. (48 Gbps of controller bandwidth provides 4800 MB/sec of I/O bandwidth). I most likely will be CPU limited on the controller and will see the lowest results in this configuration.
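
For anyone checking the math on that parenthetical: SAS uses 8b/10b encoding, so the rough conversion from raw link rate to payload bandwidth works out like this (real-world protocol overhead pushes usable throughput a bit lower):

# 8 lanes x 6 Gbps = 48 Gbps raw; 8b/10b puts 10 bits on the wire per data byte
echo $(( 8 * 6000 / 10 ))    # -> 4800 MB/sec of payload bandwidth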

I took a look at per-drive utilization and was seeing about 290 MB/sec per drive, well below the drives' 6 Gbps (600 MB/sec) link speed. It's clear to me that to expose the differences in controller bandwidth I'm going to have to get more drives.
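
Per-drive throughput like this can be watched live on FreeBSD with something along these lines (a sketch, not necessarily the exact invocation used here):

# Live per-provider throughput, refreshed every second
gstat -I 1s
# Or extended per-device statistics
iostat -x 1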
 

aufalien

Patron
That's interesting. Some guys on the Illumos forum have experienced very poor all-SSD pool performance. A ZIL was suggested to boost I/O, along with possibly tuning sync_write_max_active for an SSD pool. At any rate, thanks much for the post.
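
(If that tunable is relevant here: on FreeBSD-based FreeNAS the illumos zfs_vdev_sync_write_max_active knob would surface as a sysctl, but I'm not sure it exists at all on the 9.2.1.x kernel, so treat the name and value below purely as an illustration.)

# Check whether the knob exists on this build before touching it
sysctl -a | grep sync_write
# Illustrative value only; the default on systems that have it is 10
sysctl vfs.zfs.vdev.sync_write_max_active=32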

Funny thing: Mr. Guru Richard Elling suggested striping the ZIL for better performance, which I found curious. I had stated earlier on this forum that the nature of the ZIL makes it a single-threaded operation, so it would not benefit from such a thing. What do you think?
 

joeschmuck

Old Man
Moderator
Will you be doing any benchmark testing on a single SSD to see what its throughput is? I was reading that the maximum read is up to 500 MB/sec while the write speed is up to 300 MB/sec, and I'd assume those figures are for sequential read/write.
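
If it helps frame the question, a single-drive number could come from the same iozone workload pointed at a one-disk pool with a single thread. Pool name, mountpoint, and device name below are hypothetical:

zpool create -m /mnt/single single da0    # one SSD, placeholder device name
cd /mnt/single
iozone -r 128 -s 40g -t 1 -i 0 -i 1 -i 2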
 

Rand

Guru
Very interesting, especially the part about writes outperforming reads. I'm really looking forward to an explanation for that; it has come up on the forums various times for non-SSD pools and has never been properly investigated.
 