Notes on Performance, Benchmarks and Cache.

X7JAY7X

Dabbler
Joined
Mar 5, 2015
Messages
20
Robert,

Here you go:


Test 5 - Stripe (2) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 369.684998 secs (290447768 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 355.911251 secs (301688081 bytes/sec)


Test 6 - Mirror Extended (Raid 10 configuration) (4) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data0/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 374.395255 secs (286793652 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data0/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 224.078697 secs (479180680 bytes/sec)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Thanks. So striped mirrors did give you the best streaming read performance, but not the best streaming write. However, your streaming write for a pair of mirrors is just about 2x your streaming write for a single mirror. All your numbers make perfect sense when you consider the work ZFS has to do and the resources it has available.

For many workloads, IOPS are more important than streaming performance, but I'm not suggesting you do another batch of tests :)
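
(If you ever do want a rough IOPS number, iozone can do a random-I/O pass with small records. A minimal sketch; the size, record length, and path here are just examples:)

Code:
# -i 0 creates the file, -i 2 does random read/write, -O reports ops/sec
iozone -i 0 -i 2 -r 4k -s 4g -O -f /mnt/Data/iozone.tmp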
 

X7JAY7X

Dabbler
Joined
Mar 5, 2015
Messages
20
Robert,

I can certainly do more tests. I would rather do them now, before I put data on the pool. I am curious about the performance of all the configurations so I can pick the best one for my needs. What tests do you want me to run?
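
(If it helps, a quick shell loop keeps multiple sizes consistent; the pool path is a placeholder, and at bs=2048k counts of 5k/25k/75k give 10/50/150GB test files:)

Code:
for n in 5k 25k 75k; do
  dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=$n
  dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=$n
  rm /mnt/Data/temp.dat
done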


Also, here are some other boxes at work for some more comparison:

Storinator - 45Drives.com
i3-3240 3.4GHz
32GB RAM
(45) - WD RED 4TB 5400RPM NAS Drives
Volume Compression - None


Test 1 - RaidZ3 (11) disks
[root@ESCFreeNASBack] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 198.100998 secs (542017373 bytes/sec)
[root@ESCFreeNASBack] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 110.350905 secs (973024936 bytes/sec)




Dell R720
(2)E5-2630L 2GHz
196GB RAM
(8) - Dell 4TB 7200RPM Enterprise Drives
Volume Compression - None


Test 1 - RaidZ1 Extended (4 disks extended with 4 more disks) (8) disks
[root@ESCFreeNAS] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 410.829819 secs (261359272 bytes/sec)
[root@ESCFreeNAS] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 37.414948 secs (2869820438 bytes/sec)


Are these results what you would expect with these configurations?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I don't really have any expectations. You already ran the tests I was curious about. I didn't mean to suggest that I could offer an analysis that would tell you the 'best' configuration. That depends entirely on the workload and my expertise doesn't go beyond the widely known rules of thumb, e.g. wider stripes for higher capacity and faster streaming, narrower stripes or striped mirrors and more vdevs for better IOPS.
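
(To make those rules of thumb concrete, the same six disks can be laid out either way. Device names are hypothetical, and on FreeNAS you would build the pool from the GUI rather than raw zpool commands:)

Code:
# one wide RAIDZ2 vdev: most usable capacity, good streaming
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# three striped mirrors: less capacity, roughly 3x the IOPS of a single vdev
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5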
 

jtonthebike

Dabbler
Joined
Apr 17, 2015
Messages
10
Hi,

I'm not sure if this is correct.
FreeNAS 9.3

Supermicro X9, G2030 CPU, 16GB RAM, 12x 3TB Toshiba disks, 3x LSI cards in IT mode in a Supermicro 846 chassis.

[xxxxx@freenas ~]# dd if=/dev/zero of=/mnt/Skywalker/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 31.175248 secs (3444212598 bytes/sec)

&

[root@freenas ~]# dd if=/mnt/Skywalker/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 13.595353 secs (7897859164 bytes/sec)

This can't be right, surely. Any help establishing an actual benchmark would be appreciated, so I can start getting to the bottom of why I saw a transfer at only 39-40MB/s.

thanks ;)
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Turn off compression during the test :)
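
(From the shell that's something like the following, assuming the tests hit the top-level Skywalker dataset; turn compression back on afterwards, since lz4 is normally worth having for real data:)

Code:
zfs get compression Skywalker
zfs set compression=off Skywalker
# run the dd write/read pair, then restore:
zfs set compression=lz4 Skywalker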
 

jtonthebike

Dabbler
Joined
Apr 17, 2015
Messages
10
Cool, thanks! Just done that ;)

That seems a bit more like it... edit... crikey... it seems like it's a fairly brisk bit of kit.

write
[root@freenas ~]# dd if=/dev/zero of=/mnt/Skywalker/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 122.771681 secs (874584281 bytes/sec)

read
[root@freenas ~]# dd if=/mnt/Skywalker/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 121.666627 secs (882527813 bytes/sec)

iperf is running at 890-900Mb/s (Gb LAN)... time to explore 10G ;)
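
(For anyone reproducing the network check, iperf runs in server mode on one end and client mode on the other; the address and duration here are placeholders:)

Code:
iperf -s                       # on the NAS
iperf -c 192.168.1.10 -t 30    # on the client, pointed at the NAS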
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
Saw the update email for this thread and figured I'd add this in here, since I was sitting here relaxing after a nice turkey dinner this evening.

[root@FileServ] /mnt/garage# dd if=/dev/zero of=/mnt/garage/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 105.864436 secs (1014261130 bytes/sec)

[root@FileServ] /mnt/garage# dd if=/mnt/garage/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 93.633825 secs (1146745659 bytes/sec)

* I should add: RAIDZ2 - zpool of 18 4TB Reds in 3 x 6-drive vdevs giving 63.1TB; usable is 40.7TB.

System Config from sig and current Stable:

* My Build Thread *
SUPERMICRO CSE-846E16-R1200B 1200W PSUs <|> SUPERMICRO MBD-X10SRL-F <|> Intel Xeon E5-1650 v3 Haswell-EP 3.5GHz
Noctua NH-U9DX i4 Cooler <|> 64GB SAMSUNG SDRAM ECC Reg DDR4 M393A2G40DB0-CPB <|> 18x4TB WD Reds WD40EFRX RAIDz2
2 Mirrored SUPERMICRO SSD-DM064-PHI SATA DOM <|> IBM ServeRAID M1015 <|> Intel 10GbE Network Adapter X540-T1
APC Smart-UPS SUA2200RM2U
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
It sounds like we need a benchmark button in the GUI: one for the individual disks before we create a ZFS pool, storing the results in the local DB, and another for the ZFS pool itself.
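
(Until such a button exists, a rough per-disk baseline is easy to get from the shell before the pool is built. Read-only, and the device names are just examples:)

Code:
for d in da0 da1 da2 da3; do
  echo "== $d =="
  dd if=/dev/$d of=/dev/null bs=1m count=10k
done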
 

beatmix01

Cadet
Joined
Aug 25, 2015
Messages
4
[root@xxl] /mnt/data# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 880.468381 secs (121951208 bytes/sec)
[root@xxl] /mnt/data# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 916.852838 secs (117111687 bytes/sec)

Slow and steady....
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@beatmix01

Your other post mentions a Celeron with 4GB of RAM. We know that Celerons perform poorly with ZFS (their limited L1 and L2 cache is a performance killer for ZFS), and we know that 4GB of RAM is below the minimum for all versions of FreeNAS released in the last 2 years. Even if you were at 8GB of RAM, we don't really entertain arguments that performance is slow, since that's just the minimum. Generally, if you want good performance, you should consider having more than the minimum RAM. :P

So yeah.. I think this is a "get better hardware for better performance" type of scenario.

If you're just posting a data point and not actually having a problem, ignore my post. ;)
 

beatmix01

Cadet
Joined
Aug 25, 2015
Messages
4

My FreeNAS needs are far from production. I have a Drobo on USB 2.0 and it sucks; I needed some space for cold storage of data, and maybe a VM or two over iSCSI for testing. I'm not complaining, but I am really enjoying FreeNAS far more than I thought I would. Never ended up pulling the trigger on the additional HBA card I mentioned in the other thread, though. Good bit of info about the Celeron. I have an old Core 2 Duo E7500 lying around, which has bigger L1 and L2 caches. Is it worth going through the upgrade? I already swapped CPUs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526

Eh, FSB CPUs aren't really recommended because the FSB is a limitation. Of course, if you replaced the Celeron with the C2D then it's probably an 'upgrade'. :P
 

helloha

Contributor
Joined
Jul 6, 2014
Messages
109
Did a lot of testing yesterday, including some older disks.

SEAGATE ST4000DM000

Code:
1 DISK SEAGATE 4TB
10GB    10737418240 bytes transferred in 79.170663 secs     (135623700 bytes/sec)
50GB    53687091200 bytes transferred in 416.791412 secs     (128810454 bytes/sec)

2 DISK SEAGATE 4TB - STRIPE
10GB    10737418240 bytes transferred in 41.380148 secs     (259482355 bytes/sec)
50GB    53687091200 bytes transferred in 220.159250 secs     (243855714 bytes/sec)


3 DISK SEAGATE 4TB - STRIPE
10GB     10737418240 bytes transferred in 33.149625 secs     (323907683 bytes/sec)
50GB     53687091200 bytes transferred in 161.944467 secs     (331515440 bytes/sec)
150GB     161061273600 bytes transferred in 471.708476 secs     (341442399 bytes/sec)

3 DISK SEAGATE 4TB - RAIDZ1
10GB    10737418240 bytes transferred in 42.420869 secs     (253116413 bytes/sec)
50GB    53687091200 bytes transferred in 279.777942 secs     (191891794 bytes/sec)
150GB    161061273600 bytes transferred in 791.528986 secs     (203481207 bytes/sec)


HITACHI H3IKNAS40003272SE

Code:
1 DISK HITACHI 4TB
10GB    10737418240 bytes transferred in 54.512020 secs     (196973406 bytes/sec)
50GB    53687091200 bytes transferred in 402.581019 secs     (133357234 bytes/sec)
150GB    161061273600 bytes transferred in 1356.690919 secs     (118716261 bytes/sec)

2 DISK HITACHI 4TB - STRIPE
10GB    10737418240 bytes transferred in 64.549780 secs     (166343220 bytes/sec)
50GB    53687091200 bytes transferred in 361.722559 secs     (148420633 bytes/sec)
150GB    161061273600 bytes transferred in 1114.905597 secs     (144461804 bytes/sec)


SEAGATE ST4000DM000 + HITACHI H3IKNAS40003272SE

Code:
5 DISK 4TB - RAIDZ1
10GB    10737418240 bytes transferred in 41.917293 secs     (256157243 bytes/sec)
50GB    53687091200 bytes transferred in 320.151640 secs     (167692695 bytes/sec)
150GB    161061273600 bytes transferred in 952.982059 secs     (169007666 bytes/sec)

5 DISK 4TB - STRIPE
10GB    10737418240 bytes transferred in 31.463739 secs     (341263261 bytes/sec)
50GB    53687091200 bytes transferred in 221.270983 secs     (242630509 bytes/sec)
150GB    161061273600 bytes transferred in 687.978145 secs     (234108125 bytes/sec)


When mixing the two types of disks, performance started to do funny things. Looking at iostat, you could tell that the two Hitachi disks were faster and the Seagates had trouble keeping up. Write throughput on the Hitachis was consistently well over 100 MB/s, while the Seagate disks hovered around 60 MB/s. Writes to the Hitachi disks would then come to a stop so the Seagate disks could catch up.

This was interesting, since the individual performance of the two disk models was about the same. Not sure what is going on, but I would like to know if anyone can comment.

Code:
HITACHI-SEAGATE-5x4TB-16TB              16.4G  18.1T      0  1.93K      0   244M
  raidz1                                16.4G  18.1T      0  1.93K      0   245M
    gptid/bfb4a1f9-5fb8-11e5-accd-001f2954783c      -      -      0  1.19K      0   152M
    gptid/c0ab27bf-5fb8-11e5-accd-001f2954783c      -      -      0  1.25K      0   160M
    gptid/c190c1dc-5fb8-11e5-accd-001f2954783c      -      -      0    494      0  61.6M
    gptid/c34c1f25-5fb8-11e5-accd-001f2954783c      -      -      0    491      0  61.2M
    gptid/c582bf8e-5fb8-11e5-accd-001f2954783c      -      -      0    491      0  61.2M

HITACHI-SEAGATE-5x4TB-16TB              49.7G  18.1T      0  1.92K      0   244M
  raidz1                                49.7G  18.1T      0  1.92K      0   244M
    gptid/bfb4a1f9-5fb8-11e5-accd-001f2954783c      -      -      0      0      0      0
    gptid/c0ab27bf-5fb8-11e5-accd-001f2954783c      -      -      0      0      0      0
    gptid/c190c1dc-5fb8-11e5-accd-001f2954783c      -      -      0    483      0  60.3M
    gptid/c34c1f25-5fb8-11e5-accd-001f2954783c      -      -      0    496      0  61.9M
    gptid/c582bf8e-5fb8-11e5-accd-001f2954783c      -      -      0    495      0  61.8M
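
(For reference, per-disk numbers like the above come from watching the pool live with the verbose flag and a short interval, e.g.:)

Code:
zpool iostat -v HITACHI-SEAGATE-5x4TB-16TB 1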


HITACHI 3TB - HDS723030ALA640

Code:
6 DISK HITACHI 3TB - RAIDZ1
10GB    10737418240 bytes transferred in 55.990810 secs     (191771082 bytes/sec)
50GB    53687091200 bytes transferred in 257.676067 secs     (208351097 bytes/sec)
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143

In the testing phase of a new Storinator Q30
SuperMicro X10DRL motherboard
Dual Xeon E5-2620 v3 @ 2.4GHz
256GB RAM
2x 125GB SSD "backup" drives
3 x dual Intel X540T2BLK 10Gbe NICs
28 x 4TB WD Re drives (it holds 30, but one drive died and I'm waiting on a replacement)

RAIDZ2 (2 x 14 drives)

Write
[root@Q30] ~# dd if=/dev/zero of=/mnt/Q30/temp.dat bs=2048k count=1024k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 2345.738209 secs (937454677 bytes/sec)

Read
[root@Q30] ~# dd if=/mnt/Q30/temp.dat of=/dev/null bs=2048k count=1024k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 1883.265874 secs (1167664792 bytes/sec)



Will try RAIDZ2 (3 x 10) when the replacement drive arrives in the next few days. Then maybe create a new volume of 3 x 10 mirrors.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Have you disabled compression on your test dataset? It looks too fast...
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143

Compression is turned off on the volume/dataset


The NAS does have 2x125GB SSDs for backup/boot. Maybe those are getting used as cache, but I doubt it.
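
(That part is easy to rule out: zpool status lists any cache or log devices attached to the pool.)

Code:
zpool status Q30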

Although, I've got an intermittent SATA cable or Rocket 750 HBA card problem that causes a single drive slot to go offline and drop the pool into degraded mode. Switching cable ports on the card seems to have solved it (for now).

Getting back to your point about speeds being too high... I created a new 2x15 vdev RAID10 pool, since the 30th drive shows up now. I wanted to get some benchmark results to compare other RAID configs against. RAID10 should be faster than RAIDZ2, right? And yet the write speed is significantly slower than above. But look at the read speed - 7+GB/sec. That can't be accurate.
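
(One hedged guess: a 50GB iozone file fits entirely in 256GB of RAM, so the 7+GB/sec "read" is likely served from ARC rather than the disks. Sizing the test file well past RAM, as the 2TB dd run further down does, takes the cache out of the picture, e.g.:)

Code:
cd /mnt/Q30 && iozone -t 1 -i 0 -i 1 -r 1M -s 512g    # file >> RAM, so reads must hit disk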



Run began: Tue Dec 29 10:48:23 2015

Record Size 1024 KB
File size set to 52428800 KB
Command line used: iozone -t 1 -i 0 -i 1 -r 1M -s 50G
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 1 process
Each process writes a 52428800 Kbyte file in 1024 Kbyte records

Children see throughput for 1 initial writers = 775523.56 KB/sec
Parent sees throughput for 1 initial writers = 722689.08 KB/sec
Min throughput per process = 775523.56 KB/sec
Max throughput per process = 775523.56 KB/sec
Avg throughput per process = 775523.56 KB/sec
Min xfer = 52428800.00 KB

Children see throughput for 1 rewriters = 754924.88 KB/sec
Parent sees throughput for 1 rewriters = 687461.70 KB/sec
Min throughput per process = 754924.88 KB/sec
Max throughput per process = 754924.88 KB/sec
Avg throughput per process = 754924.88 KB/sec
Min xfer = 52428800.00 KB

Children see throughput for 1 readers = 7171365.50 KB/sec
Parent sees throughput for 1 readers = 7170736.16 KB/sec
Min throughput per process = 7171365.50 KB/sec
Max throughput per process = 7171365.50 KB/sec
Avg throughput per process = 7171365.50 KB/sec
Min xfer = 52428800.00 KB

Children see throughput for 1 re-readers = 7302568.00 KB/sec
Parent sees throughput for 1 re-readers = 7300859.28 KB/sec
Min throughput per process = 7302568.00 KB/sec
Max throughput per process = 7302568.00 KB/sec
Avg throughput per process = 7302568.00 KB/sec
Min xfer = 52428800.00 KB
iozone test complete.

[root@Q30 /mnt/Q30]# dd if=/dev/zero of=/mnt/Q30/temp.dat bs=2048k count=1024k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 2922.820840 secs (752363342 bytes/sec)

[root@Q30 /mnt/Q30]# dd if=/mnt/Q30/temp.dat of=/dev/null bs=2048k count=1024k
1048576+0 records in
1048576+0 records out
2199023255552 bytes transferred in 1774.721528 secs (1239080735 bytes/sec)
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Write...
Code:
 dd if=/dev/zero of=/ddfile bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 28.827803 secs (3724674502 bytes/sec)


Read...
Code:
dd if=/ddfile of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 18.365531 secs (5846505750 bytes/sec)


That's equal to...
3.72 GB/s or 29.79 Gb/s Write
5.84 GB/s or 46.77 Gb/s Read
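
(Conversion, for anyone checking their own numbers: divide bytes/sec by 10^9 for GB/s, then multiply by 8 for Gb/s:)

Code:
echo "3724674502 / 10^9" | bc -l        # 3.72 GB/s
echo "3724674502 * 8 / 10^9" | bc -l    # 29.79 Gb/s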

System
SuperMicro SC847
SuperMicro X8DTH-iF
Intel Xeon X5690 6 Core w/HT 3.46GHz
48GB of RAM
LSI 9211-8i
2 vDevs X 6 Drives each HGST 4TB NAS
Chelsio T420
 