Disk speed check?

ZippyZoodles

Dabbler
Joined
Jul 8, 2016
Messages
10
So I just set up my FreeNAS 9.10 system, cross-flashed my IBM card to LSI IT firmware, and have P20 running. I created a mirrored pool and everything looks good there. All four 4TB WD Red drives attached to the HBA were added.

I did use some old SAS cables from my previous SAS card, so is there any way in FreeNAS to check whether all the WD Red drives are running in SATA 3 mode (6Gb/s)?

thanks
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
Even if they're not, that's still 3Gb/s (~300MB/s), and you won't get anywhere near that out of a Red drive anyway.
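
If you want to verify anyway, FreeBSD reports the negotiated link speed when each drive attaches, so from a FreeNAS shell something like this should work (a sketch; da0 is a placeholder for whatever your drives enumerate as):

dmesg | grep -i transfers
camcontrol identify da0 | grep -i transfers

A 6Gb/s link is reported as "600.000MB/s transfers", a 3Gb/s link as "300.000MB/s transfers".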


Sent from my iPhone using Tapatalk
 

ZippyZoodles

Dabbler
Joined
Jul 8, 2016
Messages
10
So what drives do you suggest? I am using this for ESXi datastores. I need around 6TB of space. 10K SAS drives or flash?
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
Use mirrors, and more of them, for VM storage.
The other question is how many IOPS you need.
Could you post the rest of your hardware?


Sent from my iPhone using Tapatalk
 

ZippyZoodles

Dabbler
Joined
Jul 8, 2016
Messages
10
This is a homelab; I'm migrating from a Synology 1813+ over 1Gb iSCSI.

FreeNAS System is Supermicro X10SLL-F
FreeNAS-9.10-STABLE-201606270534 (dd17351)
Platform Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz
Memory 32697MB DDR3 ECC
IBM M1015 flashed to P20 IT mode
6 Intel NIC interfaces

Two ESXi hosts - 8Gb FC via iSCSI (all set up and working on the hosts).

I am not sure what kind of IOPS I really need, so it's hard to pick the best bang-for-the-buck drives for my FreeNAS build. Right now I am only using FreeNAS for testing, so if I have to nuke the drives, that's no problem.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Set up your drives in mirrors and do some performance testing. dd is usually my tool of choice.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I posted the thread below looking for better ways to gauge maximum performance, but you can use it to see one possible way to set up for VMware. The screenshots show six VMs all being tested at once.
https://forums.freenas.org/index.php?threads/methods-for-performance-max-out-testing.44610/

I'm currently researching going with a Supermicro chassis that holds 3.5" drives and buying regular 7200RPM multi-TB drives instead of old SAS drives. I'm trying to determine whether the performance will be good enough, with the L2ARC making up for the higher seek times.

Mirrors are the way to go for VMware. Also, you need to keep a significant amount of free space to help prevent fragmentation of data. Research posts from jgreco on that topic.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
dd is a tool that writes and reads data to/from disk. It's good for testing raw streaming performance.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
To use dd, here is an example...

1) Create a dataset with compression turned off. This is important because compression will give you a falsely inflated reading, since dd's test data is all zeros. (See the sketch below for doing this from the shell.)
2) Open a shell window.
3) Type "dd if=/dev/zero of=/mnt/pool/dataset/test.dat bs=2048k count=10000" to test write speed.
4) Note the results.
5) Type "dd of=/dev/null if=/mnt/pool/dataset/test.dat bs=2048k count=10000" to test read speed.
6) Note the results.
7) Lastly, clean up your mess with "rm /mnt/pool/dataset/test.dat" to delete the file you just created.

Note: /mnt/pool/dataset will depend on your specific pool and dataset names.
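
If you'd rather create the throwaway dataset from the shell instead of the GUI, here is a minimal sketch (the names "tank" and "speedtest" are placeholders; substitute your own pool and dataset names):

zfs create tank/speedtest
zfs set compression=off tank/speedtest

Run the dd commands against /mnt/tank/speedtest, and "zfs destroy tank/speedtest" removes the whole dataset when you're done. In each case, dd's last output line reports bytes transferred, elapsed seconds, and a bytes/sec figure; that last number is the one to note in steps 4 and 6.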
 

ZippyZoodles

Dabbler
Joined
Jul 8, 2016
Messages
10
OK, awesome info... I have an Ubuntu server on my VMware host I can try this on. Will give it a go.

Just asking again: should I get WD Black drives for a four-disk striped mirror array (RAID 10), go 10K SAS, or perhaps purchase some SSDs?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Just asking again: should I get WD Black drives for a four-disk striped mirror array (RAID 10), go 10K SAS, or perhaps purchase some SSDs?
That depends on the IOPS you need, the storage size, and how much money you can spend. If you already have some hard drives available, I'd recommend you try those first to get an idea, and then base your purchase on that.

Are you looking for more performance than the Synology 1813+? If so, you may need to figure out the IOPS of your Synology so you have a baseline, and then, using the exact same test, see what FreeNAS tests at. You will likely need to run this from a client computer over the Ethernet; there are several programs out there for it.

Let me also clear something up... The "dd" test measures strictly the internal capability of the FreeNAS system. It has nothing to do with data flowing through the Ethernet port, but it will give you a good idea of whether your hard drives and their configuration are a bottleneck. Unfortunately, it also does not provide an IOPS value.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Just a follow-up on one method to check IOPS directly on a FreeNAS machine... You could install "bonnie++" in a jail, create a temporary place for it to write its files, such as "/test", then run "bonnie++ -u 0:0 -d /test -s 64000" and eventually it will finish. The IOPS figure is the Random Seeks value. My FreeNAS implementation yields 354.0 (with compression turned on, which is the default). However, with compression turned off you will see the true limitations of the hard drives; mine reported 92.5 IOPS on the same system. The WD Red drives are not designed for high IOPS, but depending on how you set up your pool, you can achieve higher values.
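
For reference, the whole sequence inside the jail looks something like this (a sketch; it assumes the jail can install packages, and the -s size should be well beyond your RAM so ARC caching doesn't skew the result):

pkg install bonnie++
mkdir /test
bonnie++ -u 0:0 -d /test -s 64000

The -u 0:0 runs it as root, and -s 64000 uses a 64000MB test file.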
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
You could also take a look at fio for IOPS testing.
I use it for comparing different systems and setups to get an idea of how well they perform.
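
Something like the following reports 4k random read/write IOPS directly (a sketch; fio installs via pkg, the parameters are just a starting point, and /mnt/tank/speedtest is a placeholder for a compression-off dataset):

fio --name=randrw --directory=/mnt/tank/speedtest --rw=randrw --bs=4k --size=4g --ioengine=posixaio --iodepth=16 --runtime=60 --time_based --group_reporting

The read and write IOPS show up in the summary at the end of the run.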


Sent from my iPhone using Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So what drives do you suggest? I am using this for ESXi datastores. I need around 6TB of space. 10K SAS drives or flash?

You need 6TB of usable space but you only have 4 x 4TB drives? In mirror vdevs, that gives you an 8TB pool, and that puts you at ~75% full. You don't want to be there; over time it will become painfully slow as fragmentation pwns your pool. Add at least two more 4TB drives (another mirrored vdev), if not another two beyond that... or larger drives... or both.
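
Spelling out the arithmetic: four 4TB drives as two 2-way mirror vdevs = 2 x 4TB = 8TB usable, and 6TB / 8TB = 75% full. A third 4TB mirror vdev takes the pool to 12TB, dropping the same 6TB of data to 50% full.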
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The WD Red drives are not designed for high IOPS, but depending on how you set up your pool, you can achieve higher values.

It's worth pointing out that this is the understatement of the year. If you play by ZFS rules, in most cases you can design for a workload and get a pool that delivers a lot more IOPS than the underlying hardware is theoretically capable of, or than artificial benchmarks report. Lots of free space on the pool, lots of ARC, lots of L2ARC, and you can make the thing fly at 5x-10x what a conventional array would provide on the same raw hard disks.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
It's worth pointing out that this is the understatement of the year. If you play by ZFS rules, in most cases you can design for a workload and get a pool that delivers a lot more IOPS than the underlying hardware is theoretically capable of, or than artificial benchmarks report. Lots of free space on the pool, lots of ARC, lots of L2ARC, and you can make the thing fly at 5x-10x what a conventional array would provide on the same raw hard disks.
I of course completely agree with your statement, but I'd like to point out that I was only focusing on raw hard drive performance. My numbers would have been much larger if I had picked a file size that fit in my ARC, but I specifically wanted to see what my real drive performance was. On my test rig, which has a mirrored pair, the IOPS came in at 198 (no compression); however, the throughput was terrible and the test took forever to run. So it's a balancing act between IOPS and throughput. Like my friend said, lots of ARC (RAM) and L2ARC can make it fly, but in my opinion they are no substitute for a well-designed pool in the first place.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I of course completely agree with your statement, but I'd like to point out that I was only focusing on raw hard drive performance. My numbers would have been much larger if I had picked a file size that fit in my ARC, but I specifically wanted to see what my real drive performance was. On my test rig, which has a mirrored pair, the IOPS came in at 198 (no compression); however, the throughput was terrible and the test took forever to run. So it's a balancing act between IOPS and throughput. Like my friend said, lots of ARC (RAM) and L2ARC can make it fly, but in my opinion they are no substitute for a well-designed pool in the first place.

While I agree that it is nice to understand what your pool may be capable of, I think it is somewhat deceptive, because it might suggest, especially to a beginner, that there is a way to draw a parallel between normal "IOPS" calculations and ZFS behaviours, which are freakishly different in many cases. For example, someone used to the conventional behaviours might understand that IOPS implies seeking, which in turn implies random I/O; but in ZFS, what is conventionally thought of as sequential I/O may also cause seeking, especially on a highly fragmented filesystem. That turns out to be a subject of much confusion for many people who can't understand why their sequential accesses are so damn slow.

The underlying pool IOPS are only one facet of a complex set of factors that work together to deliver real-world ZFS performance, and it turns out that it is wicked hard to quantify this. Not a big fan of the benchmarks. :-/
 