What performance increase should be expected from a striped pool?

Status
Not open for further replies.

resuni

Cadet
Joined
Aug 4, 2017
Messages
4
I'm preparing to use an all-SSD FreeNAS server I just built in a QEMU/KVM cluster for use as iSCSI storage. Before I do so, I've been running some performance tests on various types of ZFS pools.

The SSDs I'm using are 1 TB Crucial MX300s. According to Newegg, the listed read/write spec is 530/510 MB/s.

Here's how I'm testing read performance:
Code:
dd if=/dev/random of=ddtest bs=4K count=16M   # create a 64 GiB test file
dd if=ddtest of=/dev/null bs=4K               # read it back

Here's how I'm testing write performance:
Code:
dd if=/dev/zero of=ddtest bs=4K count=16M   # write 64 GiB of zeros (compression is off)

When I create the volumes, I make sure that compression and atime are disabled. I disable atime because this NAS will be used for iSCSI when put in production, and based on my reading, we won't need atime if we're using iSCSI. During my tests, disabling atime also showed an increase in performance. I disable compression simply because I don't need it.
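
For reference, the same settings can also be applied from a shell instead of the web UI; something like this (the dataset name tank/test_stripe is just an example):

Code:
zfs set compression=off tank/test_stripe
zfs set atime=off tank/test_stripe
zfs get compression,atime tank/test_stripe   # verify both are off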

I run these commands in the directory under /mnt that's created when I make the volume/dataset in the FreeNAS web UI. For example, if I create a volume/dataset called "test_stripe", I run the dd commands in "/mnt/test_stripe". I run each test three times and average the results.

I write a file larger than 64 GB (twice my 32 GB of total RAM) to make sure I exceed the ARC.
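
In case anyone wants to reproduce this, the write test boils down to a loop like the following (the mount point is just my example path):

Code:
#!/bin/sh
cd /mnt/test_stripe
for run in 1 2 3; do
    # 64 GiB of zeros; compression is off, so the zeros aren't collapsed
    dd if=/dev/zero of=ddtest bs=4K count=16M 2>&1 | tail -1
    rm -f ddtest
done
# note the "bytes/sec" figure dd prints for each run and average the three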

Here are my test results:

Standalone SSD:
  • Read: 513.8 MB/s
  • Write: 420.5 MB/s
2 mirrored SSDs:
  • Read: 907.4 MB/s
  • Write: 415.1 MB/s
2 striped SSDs:
  • Read: 897.3 MB/s
  • Write: 433.2 MB/s
These are all acceptable speeds given the spec listed on Newegg, except for the write speed on the striped array. I would expect the write speed to nearly double, just like the read speed did. Instead, the striped array's write speed barely changes from the standalone SSD result.

Two questions:
  • What kind of performance should I expect from disks in a striped array? Are my current expectations wrong?
  • What can be done to improve write performance?
Thanks in advance.
 

resuni

Cadet
Joined
Aug 4, 2017
Messages
4
I performed an additional test with a RAID 10 pool of 4 SSDs (two mirrored pairs, striped), and these numbers still don't make sense.

Read: 850.7 MB/s
Write: 429.0 MB/s

The write speed still doesn't change, but the read speed is roughly the same as both the mirror pool and the stripe pool in my last post. Shouldn't I get nearly 4 times the read speed of a single SSD?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Have you tried using a larger than 4KiB block size for your tests?

What are your hardware specs?
 

resuni

Cadet
Joined
Aug 4, 2017
Messages
4
Thanks for the suggestion. I modified my write test to the following:

Code:
dd if=/dev/zero of=ddtest bs=16K count=4M   # still 64 GiB total


And now I'm getting 649.3 MB/s.

This is drastically better, but I was hoping to see over 700 MB/s. Is there a recommendation for what block size to use in these tests? I read matthewowen01's thread on benchmarking and performance, but he recommends using a very large block size. If I drastically increase the block size to 16M, the results stay roughly the same as the 16K results.
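
For anyone repeating this, a quick block-size sweep looks something like the script below; the 64 GiB total (to stay larger than ARC) and running it from the dataset's mount point are assumptions from my setup:

Code:
#!/bin/sh
TOTAL=68719476736                        # 64 GiB in bytes
for BS in 4096 16384 131072 1048576; do  # 4K, 16K, 128K, 1M
    COUNT=$((TOTAL / BS))                # keep the total file size constant
    echo "bs=${BS} bytes:"
    dd if=/dev/zero of=ddtest bs=${BS} count=${COUNT} 2>&1 | tail -1
    rm -f ddtest
done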

Here are my hardware specs:
 

websmith

Dabbler
Joined
Sep 20, 2018
Messages
38
I know this thread is kind of old, but you have to remember that all specs given by manufacturers assume optimal conditions. I am sure you can run a test on those drives and only get 10-20 MB/s write speeds; it all depends on whether the workload is sequential or random, how many concurrent threads there are, and the block size.

I am no ZFS expert, but I would assume that if you created your pool with a block size of 128k, you will get the fastest speed by writing in blocks of 128k. I am not totally sure about this, though. There is also something ZFS does called transaction groups, where it bundles your writes into one transaction and then flushes that transaction to disk to reduce the number of I/O operations it has to perform against the physical disks.
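
If I remember correctly, you can check and change that per dataset; roughly like this (the dataset name is just an example, and a new recordsize only applies to files written after the change):

Code:
zfs get recordsize tank/test_stripe       # default is 128K
zfs set recordsize=16K tank/test_stripe   # only affects files written from now on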

In terms of iSCSI, I guess it depends on what file system you put on the block device and what block size you use on that. Again, if you can use the same size as ZFS, I think you get the best performance, but if you have a filesystem with a 128k block size, then storing tiny files is not very efficient from the OS's point of view.
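
For a zvol-backed iSCSI extent, I believe the corresponding property is volblocksize, and it can only be set when the zvol is created; roughly like this (names and sizes are just examples):

Code:
zfs create -s -V 100G -o volblocksize=16K tank/iscsi_vol   # sparse 100 GB zvol with 16K blocks
zfs get volblocksize tank/iscsi_vol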

So unless you have already solved your problem, I would redo the testing with different block sizes on the pool and different block sizes in the dd test, and then make sure the OS uses the same block size if you are tuning your OS install for maximum write performance.
 