My FreeNAS performance review


Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Hello FreeNAS forum. I've just assembled my first FreeNAS box "on a budget," and I was hoping to review the specifications and current performance metrics to see if there is any obvious small or inexpensive tweak that could increase performance. Here is a bit about the hardware:

Hardware:
OS boot volume: mirrored desktop-class 60 GB Patriot SSDs ($30 each)
SLOG volume: mirrored desktop-class 128 GB Samsung EVO SSDs ($94 each)
I know, I know, I should use enterprise-class SSDs... I'll get there, but this was in the budget for now.
8× WD Red drives, the 5400 RPM model (about $80 each, $640 total)

16 GB DDR3 RAM ($120)
i5 2nd-gen processor ($40)
Supermicro C7Q67 motherboard ($45)
LSI 9207-8i controller (only the Red drives are connected to this; all the SSDs are connected to SATA on the motherboard, with the SLOG disks on the board's only two SATA III ports) ($122)
800 W Corsair PSU ($85)
8-bay case with trays ($121)

Total estimated cost: $1,421.00, or $781 without the Red drives. I actually had a few of these components on hand, so I only ended up shelling out about $473 in total, which also influenced some of the hardware choices.



ZFS Config:
8× WD Red drives (the 5400 RPM model), configured in a single ZFS pool as four two-way mirror vdevs, with a mirrored SLOG on the Samsung SSDs.

Code:
  pool: zPool01
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        zPool01                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/9144804f-2d70-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
            gptid/926ecc51-2d70-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/3336bb1e-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
            gptid/34124ad5-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/6f4654c3-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
            gptid/701c8b6f-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/7adbb06f-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
            gptid/7bbe348a-2d74-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
        logs
          mirror-4                                      ONLINE       0     0     0
            gptid/811b011e-2d7c-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
            gptid/99df32b2-2d7c-11e8-8f45-003048b3d5b8  ONLINE       0     0     0
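
For anyone curious, here is a rough sketch of the equivalent pool layout from the command line. FreeNAS actually builds pools through the GUI and references partitions by gptid (as in the output above), so the raw device names below are illustrative only:

Code:
  # Four two-way mirror vdevs striped together, plus a mirrored log (SLOG).
  # da0-da7 = WD Reds, ada0/ada1 = Samsung EVOs (names assumed from the
  # drive layout listed below).
  zpool create zPool01 \
      mirror da0 da1 \
      mirror da2 da3 \
      mirror da4 da5 \
      mirror da6 da7 \
      log mirror ada0 ada1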



Drive layout for reference:
da0 through da7 are the WD Reds.
ada0 and ada1 are the Samsung SSDs I am using as the log (SLOG) mirror.
ada2 and ada3 are the boot mirror. (Those SATA ports are only 2.0; I wanted to save the SATA 3.0 ports for the SLOG drives.)


So, I finally brought this box up and connected it via iSCSI to a couple of VMware hosts (round robin / multipath I/O over two 1 Gb links). Running a single VM, I transferred a large file back and forth to an external system and captured the attached performance metrics. <See attached PDF>
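
For reference, round-robin multipathing on the ESXi side is typically set per-LUN with something like the following sketch; the naa device ID is a placeholder for the actual FreeNAS LUN:

Code:
  # On each ESXi host: round-robin path policy for the FreeNAS LUN,
  # switching paths every I/O so both 1 Gb links are used evenly.
  # naa.XXXX is a placeholder for the real device identifier.
  esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set \
      --device naa.XXXX --type iops --iops 1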

A lot to go through, I realize. Again, I'm just interested in having someone with more experience than me look at this and let me know if there is one tweak that might unlock big gains.

I was thinking the next best move might be to add another 16 GB of RAM, but I can't tell from these metrics whether memory is even a point of congestion. I also think adding an L2ARC read-cache drive would be helpful at some point, and perhaps enterprise-class drives to replace the SLOG mirror. Any other suggestions? If you were to put these upgrades in priority order, what would you list first?

Regards,
Adam Tyler
 

Attachments

  • Performance.pdf
    1.9 MB

Green750one

Dabbler
Joined
Mar 16, 2015
Messages
36
To me, it's all kind of relative. What are you using the box for? Does it deliver an acceptable real-world end-user experience? Does it meet your expectations?
If the answer is yes, then don't worry; if it's no, then again it kind of depends on what the box is doing. You could be memory-bound (can you ever have too much RAM?) or CPU-bound, and ultimately everything is network-bound.

 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
I would not go spending a ton of money on upgrades that may not help. Put together a test plan that isolates the different operations so you can see where the problem is. The very first thing I would do is test the disks without involving an external system. You can use software to test, something like this: https://www.aja.com/products/aja-system-test
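
(For a quick command-line sanity check alongside that, a rough local sequential-write test can be run from the FreeNAS shell. The dataset name below is hypothetical, and compression has to be off or the zeroes will compress away:)

Code:
  # Rough local sequential-write test; zPool01/test is a hypothetical dataset.
  zfs create -o compression=off zPool01/test
  dd if=/dev/zero of=/mnt/zPool01/test/bigfile bs=1m count=8192
  zfs destroy zPool01/test   # clean up afterwards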

You can also do some SLOG testing to rule that out: https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/
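
(A related A/B test that doesn't need a RAM disk, assuming the pool layout from the first post: remove the log mirror, rerun the benchmark, then re-add it. The ada0/ada1 names are assumed:)

Code:
  # Temporarily drop the mirrored SLOG; sync writes fall back to the pool.
  # "mirror-4" is the log vdev name shown in zpool status above.
  zpool remove zPool01 mirror-4
  # ... rerun the write benchmark ...
  # Re-attach the log mirror (FreeNAS normally does this via the GUI):
  zpool add zPool01 log mirror ada0 ada1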

You may also want to test with a single iSCSI link so you can rule out any problems with multipath.
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Green750one said:
To me, it's all kind of relative. What are you using the box for? Does it deliver an acceptable real-world end-user experience? Does it meet your expectations?
If the answer is yes, then don't worry; if it's no, then again it kind of depends on what the box is doing. You could be memory-bound (can you ever have too much RAM?) or CPU-bound, and ultimately everything is network-bound.

So far my performance testing has been pretty good, really. I am able to completely saturate a 1 Gb link reading from the array, and that is plenty fast for me. I'll be running a handful of VMs on this for personal use, but I'll also be doing lab and training work with VMs. For example, using HD Tune's built-in benchmark against my old AMD-based NAS4Free 6-drive array, I only got a sustained/average transfer speed of 329 MB/s; this new build went all the way up to 1591 MB/s. I am a little disappointed with the write performance, but honestly I haven't put it under any real load yet.

I used a different tool to gauge performance, with a VM running directly on the storage array I just built vs. an i5-based QNAP array. I didn't expect the sequential write numbers to be so different, especially with the SSD SLOG in place on the FreeNAS server. See attached.

QNAP Disk Sequential 64.0 Write: 190 MB/s
FreeNAS Disk Sequential 64.0 Write: 35 MB/s
See the attached screenshots for more info.

The QNAP chassis has 6 drives, but they are the 7200 RPM Reds, whereas the FreeNAS box is full of the 5400 RPM Reds.
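
(For anyone wanting to see where writes are landing during a test like this: watching live per-disk I/O from the FreeNAS shell will show whether the SLOG SSDs or the spinning disks are busy. Device names are the ones from the first post:)

Code:
  # Live per-disk I/O; -p limits output to physical disks.
  # During a sync-heavy write test, high ms/w on ada0/ada1 (the EVOs)
  # with mostly idle da0-da7 points at SLOG latency as the bottleneck.
  gstat -p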
 

Attachments

  • New FreeNAS Build.jpg
    160.1 KB
  • QNAP Compare.jpg
    127.4 KB

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Zredwire said:
I would not go spending a ton of money on upgrades that may not help. Put together a test plan that isolates the different operations so you can see where the problem is. The very first thing I would do is test the disks without involving an external system. You can use software to test, something like this: https://www.aja.com/products/aja-system-test

You can also do some SLOG testing to rule that out: https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

You may also want to test with a single iSCSI link so you can rule out any problems with multipath.


Wow. So I used that AJA tool to do a write test before and after disabling the SSD SLOG, and it went from 39 MB/s with the SLOG to 114 MB/s without it. Yikes. It looks like I have an issue with utilizing the SLOG. I wonder why... I'll need to pull up the model of those Samsung SLOG drives and figure out what their capabilities are supposed to be.
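
(Desktop EVOs are known to be slow at the small synchronous writes a SLOG absorbs, since they have no power-loss-protected write cache. A follow-up check that would separate "SLOG device is slow" from "sync writes are slow in general", with zPool01/iscsi standing in for the real zvol name:)

Code:
  # Compare with sync writes bypassed entirely (unsafe; testing only).
  # zPool01/iscsi is a placeholder for the actual iSCSI zvol/extent.
  zfs set sync=disabled zPool01/iscsi
  # ... rerun the AJA write test; a big jump implicates sync-write latency ...
  zfs set sync=standard zPool01/iscsi   # put it back when done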
 