resuni
I'm preparing to use an all-SSD FreeNAS server I just built in a QEMU/KVM cluster for use as iSCSI storage. Before I do so, I've been running some performance tests on various types of ZFS pools.
The SSDs I'm using are 1 TB Crucial MX300s. According to Newegg, the listed read/write spec is 530/510 MB/s.
Here's how I'm testing read performance:
Code:
dd if=/dev/random of=ddtest bs=4K count=16M
dd if=ddtest of=/dev/null bs=4K
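For reference, bs=4K count=16M works out to a 64 GiB test file. A larger block size is usually a better fit for measuring sequential throughput, since it means far fewer syscalls per GiB; a possible variant (same file size, just a sketch I haven't benchmarked here) would be:
Code:
dd if=/dev/random of=ddtest bs=1M count=64K
dd if=ddtest of=/dev/null bs=1M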
Here's how I'm testing write performance:
Code:
dd if=/dev/zero of=ddtest bs=4K count=16M
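One caveat with /dev/zero: the numbers only mean anything while compression is off, because zeroes compress away to nearly nothing and would inflate the results. A variant that sidesteps that entirely is to write random data instead, after first checking that /dev/random itself isn't the bottleneck (again, just a sketch, not something I've run for the results below):
Code:
dd if=/dev/random of=/dev/null bs=1M count=1K
dd if=/dev/random of=ddtest bs=1M count=64K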
When I create the volumes, I make sure that compression and atime are disabled. I disable atime because this NAS will be serving iSCSI once it's in production, and based on my reading, atime isn't needed for iSCSI. During my tests, disabling atime also showed an increase in performance. I disable compression simply because I don't need it.
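In case it helps anyone reproduce this, the same settings can be applied or double-checked from the shell; the pool name "tank" below is just a placeholder for whatever your pool is called:
Code:
zfs set compression=off tank/test_stripe
zfs set atime=off tank/test_stripe
zfs get compression,atime tank/test_stripe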
I run these commands in the directory under /mnt that's created after I create the volume/dataset in the FreeNAS web UI. For example, if I create a volume/dataset called "test_stripe", I run the dd commands in "/mnt/test_stripe". I run each test three times and average the results.
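A rough sketch of how the three write runs could be scripted (the loop and filename are purely illustrative; each dd prints its own throughput to stderr and the averaging is manual):
Code:
#!/bin/sh
# Illustrative only: three write runs from the dataset's mountpoint
cd /mnt/test_stripe || exit 1
for i in 1 2 3; do
    dd if=/dev/zero of=ddtest bs=4K count=16M
    rm ddtest
done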
I write a >64 GB file (twice my total RAM of 32 GB) to make sure I exceed the ARC.
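The arithmetic: bs=4K times count=16M is 64 GiB, double the 32 GB of RAM, so the whole file can't be cached in ARC. If anyone wants to sanity-check ARC size against physical memory during a run, these FreeBSD sysctls should do it (values are in bytes):
Code:
sysctl kstat.zfs.misc.arcstats.size
sysctl hw.physmem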
Here are my test results:
Standalone SSD:
- Read: 513.8 MB/s
- Write: 420.5 MB/s
- Read: 907.4 MB/s
- Write: 415.1 MB/s
- Read: 897.3 MB/s
- Write: 433.2 MB/s
Two questions:
- What kind of performance should I expect from disks in a striped array? Are my current expectations wrong?
- What can be done to improve write performance?