New NAS for Home/Lab use

Status
Not open for further replies.

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
Been running a small lab on a Xeon E3 machine for a while and ready for a significant upgrade. Plan is to split storage and compute with storage running FreeNAS.

Storage Machine:
Chassis: Rosewill RSV-L4500
Chassis Cooling: Noctua NF-F12 x3
Chassis Cooling: Noctua NF-A8 x2
HDD Bays: Icy Dock FatCage MB155SP-B x3
HDD Cooling: Noctua NF-A9 FLX x3
Motherboard: Supermicro X9SRL-F (better expansion options; originally a re-purposed X9SRI-F)
CPU: Intel Xeon E5-1620 v2 (Re-purposed Hardware)
CPU Cooling: Noctua U9DXi4
RAM: Kingston KVR16R11D4K4/64 (Re-purposed Hardware)
HBA: LSI 9211-8i x2
NIC: Intel X520-DA1
HDD: Toshiba N300 4TB x12
Boot SSD: Samsung 850 Pro 256GB
SLOG SSD: Intel DC P3700 400GB (originally Intel DC S3700 400GB)
PSU: Seasonic SSR-850PX

Compute Machine:
Chassis: Rosewill RSV-L4500
Chassis Cooling: Nidec Servo GentleTyphoon D1225C12B6ZPA-64 x6 (originally Noctua NF-F12 x3)
Chassis Cooling: Noctua NF-A8 x2
Motherboard: Supermicro X9DRI-F
CPU: Intel E5-2667 v2 x2 (Found two of these for $260 each on Amazon! Ebay has similar deals.)
CPU Cooling: Noctua U9DXi4 x2
RAM: Kingston KVR16R11D4K4/64 x4
NIC: Intel X520-DA1
Boot SSD: Samsung 850 Pro 256GB
PSU: Seasonic SSR-850PX

Compute and Storage will be linked directly by the Intel X520-DA1 NICs. The system board gigabit ports will run to a Linksys LGS318, which attaches to the home network.
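
For reference, here's a rough sketch of how the point-to-point 10GbE link could be brought up on the FreeNAS side. The interface name (ix0), subnet, and MTU are assumptions for illustration, not part of the build:

Code:
# Direct storage<->compute 10GbE link (hypothetical addressing)
# Assign a static address to the X520 port; X520s show up as ix(4) devices
ifconfig ix0 inet 10.10.10.1/24 mtu 9000 up

# On FreeNAS this would normally be made persistent through the web UI;
# the plain-FreeBSD equivalent would be:
sysrc ifconfig_ix0="inet 10.10.10.1/24 mtu 9000"

The compute box would get a matching address (e.g. 10.10.10.2) on its X520, with normal traffic still routed over the gigabit ports.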

Storage will provide capacity for both Lab (Docker, LXC, KVM) and Home (Plex, NFS) services. The old E3 system will be re-purposed for build and management services (Kubernetes, Jenkins, Terraform, etc.).

Not 100% sure I need L2ARC. Going to start with 64GB of RAM for Storage and add more if necessary...
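
One way to ground the L2ARC decision later is to watch the ARC hit rate under the real workload first; a minimal check from the FreeNAS shell using the standard ZFS kstat sysctls:

Code:
# ARC hits vs. misses since boot; a consistently high hit ratio with 64GB
# of RAM suggests L2ARC won't buy much, while heavy misses argue for more
# RAM before adding any cache device
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses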

Storage disks will start as 4x striped vdevs under a single volume, with datasets broken out for Plex, Kubernetes, and NFS, possibly more...
Would like to be able to expand up to 12 disks, leaving 3 hot-swap bays for the boot drive, L2ARC (maybe), and SLOG.
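
To make that concrete, here's a hedged sketch of one way such a layout could be built from the CLI. The pool name, vdev widths, and device names are illustrative only (and FreeNAS would normally do this through the volume manager):

Code:
# Hypothetical example: a pool striped across several raidz1 vdevs,
# one possible reading of "4x striped vdevs"
zpool create tank \
    raidz1 da0 da1 da2 \
    raidz1 da3 da4 da5 \
    raidz1 da6 da7 da8 \
    raidz1 da9 da10 da11

# Datasets broken out per service
zfs create tank/plex
zfs create tank/kubernetes
zfs create tank/nfs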

Will update with more info as necessary! Opinions, thoughts, ideas are appreciated!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Chassis Cooling: Noctua NF-F12 x3
On the FreeNAS node:
I don't know where you plan to mount those. This chassis doesn't have a mid-chassis fan wall like some of the other rackmounts. You take the included drive bays out to put your Icy Dock cages in, and the 120mm fan mounts go with them.
For the SAS controller, these are less expensive and give the same functionality; I use them in three servers:
https://www.ebay.com/itm/Dell-H310-...0-IT-Mode-for-ZFS-FreeNAS-unRAID/162834659601
For the chassis, you might want to go with something like this and toss the system board that is in it in favor of the one you have:
https://www.ebay.com/itm/Supermicro...-W-X8DAH-F-BPN-SAS2-846EL1-RAILS/382243380103
I didn't do the math but it might actually be less expensive and you get redundant power supplies that way.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
That would be a terrific option! My only concern would be noise, as this will sit inside my home office, which is near the master bedroom. My personal experience was with a 1U chassis that was intended for colo, and it was quite the noise maker! Would a 3U/4U with redundant PSUs fare better?

Regarding the mid-chassis fan wall, I'm not 100% sure what you mean, but there are 120mm fans behind the cages as well.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That would be a terrific option! My only concern would be noise, as this will sit inside my home office, which is near the master bedroom. My personal experience was with a 1U chassis that was intended for colo, and it was quite the noise maker! Would a 3U/4U with redundant PSUs fare better?
I have two 3U servers and one 4U server in my office and I find it tolerable, but I did make a modification to slow the stock fans down. The stock fans are quite good quality, with high static pressure, so there is no point replacing them, but the ones I got defaulted to 5000 RPM, so I used some of these to slow them down:
https://www.ebay.com/itm/5pcs-4pin-...Noise-Reduction-Cable-Controller/281726136831
That brought the speed down to around 1200 RPM and it keeps the drives cool enough, but I did still need active CPU coolers to keep that temp down. The power supply fans are small 40mm jobs that are about 3 times thicker than what you can normally buy, and you can't change those, but the high-pitched whine from them is easy to baffle with some noise-absorbing material, so I don't even hear them sitting 8 feet away in my office.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
Compute machine is put together!

The used E5-2667 v2 chips are performing in range, posting a 24,422 PassMark CPU score, and system memory shows stable after a 19-hour memtest86 run. Temps are looking OK. CPUs run up to about 75C, but the system memory bank near the PSU is getting a bit hot for my comfort at 78-79C while running Prime95. Suspecting the 120mm fan configuration is not adequate. Currently, the front chassis fans are stock Rosewill and the mid-chassis fans are NF-F12s that cap at 1500 RPM. Going to try swapping all 6 fans out for Nidec Servo GentleTyphoons with a 2150 RPM cap.
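
(For anyone following along, a quick way to watch board temps while Prime95 runs is to poll the BMC; sensor names vary by board, so treat this as a generic sketch:)

Code:
# Poll the Supermicro BMC temperature sensors every 10 seconds during a
# stress run; can also be run remotely with ipmitool -I lanplus -H <bmc-ip>
while true; do
    date
    ipmitool sdr type temperature
    sleep 10
done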

Also swapping the X9SRI-F for an X9SRL-F for better expansion options on the Storage machine. Found one used for $85.

Compute_Server_Small.jpg
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It doesn't look like there are any drives in there yet. The real test is when the drives are in. You need a high static pressure fan to force the air between the drives and get enough airflow to keep them cool. That is where your heat will come from. On FreeNAS, your CPU and RAM will never get as hot as the synthetic tests do. My CPU hardly ever gets above 32C, even when I am transcoding video for Plex.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
It doesn't look like there are any drives in there yet. The real test is when the drives are in. You need a high static pressure fan to force the air between the drives and get enough airflow to keep them cool. That is where your heat will come from. On FreeNAS, your CPU and RAM will never get as hot as the synthetic tests do. My CPU hardly ever gets above 32C, even when I am transcoding video for Plex.
That's just the compute node, which will be linked to the FreeNAS system via 10GbE.

Still waiting for some parts to come in to finish the other box. Might be another week before I can start running tests.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That's just the compute node, which will be linked to the FreeNAS system via 10GbE.
I didn't think that chassis had a way to mount a mid-chassis fan wall. Does that come with the system or is it something you added?
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
I didn't think that chassis had a way to mount a mid-chassis fan wall. Does that come with the system or is it something you added?
That's stock, it comes fully populated with fans as well:
6x BDM12025S
2x BDM8025S

Edit: I was curious where the fan wall confusion was coming from and noticed that the common Rosewill RSV-L4412 with 12 hot-swap bays does not have a mid-chassis fan wall. What I purchased, the Rosewill RSV-L4500, does have one. I'm just swapping out the stock front cages for Icy Dock units.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
So the HDD cages came in and there was a small problem. Rosewill's cages install with guide rails that mount to pre-drilled holes on the cage. The Icy Dock cages I purchased have these holes; however, they are oriented such that the cages could only be mounted "upside down." So I made a few small modifications!

Drilled new 3/32" holes on top and bottom of cage. 7/64" holes only needed on top side.
HDD_Cage_Mod_Small.jpg

Cage guide rail installed:
HDD_Cage_Rail_Small.jpg

All the cages installed!
HDD_Cage_Installed_Small.jpg

Shot of the back side:
HDD_Cage_Back_Plane_Small.jpg
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
Storage Node is put together!

Have four more hard drives coming in thanks to a Newegg sale on those 4TB N300s for $100 a pop.
Memtest passed overnight!
FreeNAS 11.1 installed!

Here's a shot from the top:
Storage_Server_Small.jpg

The boot drive is attached to the system board, while the rest of the cages are striped across the two HBA controllers. If cage 1 or 2 were to fail, the system /should/ keep running. Cage 3 could partially fail as long as the failure does not affect the boot drive. If an HBA controller were to fail, the system would also keep going, though losing HBA controller 0 would drop the SLOG.
Storage_Server_Cage_Diagram.jpg
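
One way to sanity-check that the cages really are split across the two HBAs is to map each da device back to its mps(4) controller instance; a rough sketch from the FreeNAS shell:

Code:
# List disks with the SCSI bus they hang off
camcontrol devlist
# Cross-reference the scbus numbers against the two LSI controllers
dmesg | grep ^mps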

Temps look good while badblocks runs:
Code:
storage# for i in $(seq 0 12); do TEMPC=$(smartctl -A /dev/da$i | grep "Temperature_Celsius"); echo "da$i: $TEMPC"; done
da0: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   41 (Min/Max 21/43)
da1: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   40 (Min/Max 21/43)
da2: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   40 (Min/Max 22/44)
da3: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   42 (Min/Max 20/42)
da4: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   43 (Min/Max 21/43)
da5: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   42 (Min/Max 22/42)
da6: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   41 (Min/Max 21/44)
da7: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   39 (Min/Max 21/42)
da8: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   39 (Min/Max 22/42)
da9:
da10: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   42 (Min/Max 21/42)
da11: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   42 (Min/Max 22/42)
da12: 194 Temperature_Celsius	 0x0022   100   100   000	Old_age   Always	   -	   42 (Min/Max 22/42)
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
Ran a few light tests over the network and can successfully saturate 10GbE at 1.15GB/s over an SMB share in both directions. Forced sync=always on ZFS and ran the write test against the same share, only to get about 200MB/s. Was hoping for something closer to 400MB/s, since the 400GB S3700 is rated up to 460MB/s sequential. Pulled the drive out of the pool and ran CrystalDiskMark. Found it caps at 269.490 MB/s sequential write. This doesn't seem normal. Anyone else running a 400GB+ S3700 seeing similar performance?

If it's a dud then I'll find another to try out. Otherwise I'll be inclined to upgrade to a P-series NVMe unit.
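
For anyone wanting to reproduce the comparison, here's a minimal sketch of the sync-write test, assuming a pool named tank and a throwaway dataset (names are placeholders):

Code:
# Scratch dataset with compression off so the zero-filled test data isn't
# compressed away, then force synchronous writes and time a sequential write
zfs create -o compression=off tank/synctest
zfs set sync=always tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/testfile bs=1048576 count=8192

# Compare against the default behavior (sync only when the client requests it)
zfs set sync=standard tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/testfile bs=1048576 count=8192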
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
The speed rating of the S3700 is kinda meaningless.

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Read the section on "Laaaaaaaaatency" to understand why.

If you want to check your S3700, use the Intel utility to blow it away and then create a small partition at the start of the disk. Betcha you'll see that 460MB/s.

Went ahead and over-provisioned the drive to 20GB after nuking it. Here are the before/after CrystalDiskMark numbers:

400GB:
Code:
-----------------------------------------------------------------------
CrystalDiskMark 6.0.0 x64 (C) 2007-2017 hiyohiyo
						  Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :   282.747 MB/s
  Sequential Write (Q= 32,T= 1) :   269.490 MB/s
  Random Read 4KiB (Q=  8,T= 8) :   201.452 MB/s [  49182.6 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   175.593 MB/s [  42869.4 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :   201.191 MB/s [  49118.9 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   178.493 MB/s [  43577.4 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :	30.476 MB/s [   7440.4 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :	69.102 MB/s [  16870.6 IOPS]

  Test : 8192 MiB [D: 0.0% (0.1/372.5 GiB)] (x5)  [Interval=5 sec]
  Date : 2018/01/22 16:28:54
	OS : Windows 10 Professional [10.0 Build 16299] (x64)


Cleared and OP to 20GB:
Code:
-----------------------------------------------------------------------
CrystalDiskMark 6.0.0 x64 (C) 2007-2017 hiyohiyo
						  Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :   282.779 MB/s
  Sequential Write (Q= 32,T= 1) :   269.992 MB/s
  Random Read 4KiB (Q=  8,T= 8) :   201.908 MB/s [  49293.9 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   173.466 MB/s [  42350.1 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :   201.206 MB/s [  49122.6 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   178.075 MB/s [  43475.3 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :	31.074 MB/s [   7586.4 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :	72.258 MB/s [  17641.1 IOPS]

  Test : 8192 MiB [D: 0.3% (0.1/18.5 GiB)] (x5)  [Interval=5 sec]
  Date : 2018/01/22 19:36:27
	OS : Windows 10 Professional [10.0 Build 16299] (x64)


Since this is my first ZFS build, I'm not sure what I was expecting.

Looking at the S3700's average latency, it appears to hit around 30ms, which didn't seem too bad. Since you brought up how large an impact that measurement has, I took a closer look at the P3700, which measures in the 'ones' of ms vs. the 'tens.'

Over a factor of 10! Now I feel sheepish.

Edit: Just for shits and giggles I assigned the SLOG to a ramdisk, after which it was able to saturate the 10GbE link on writes! Woo! So now I'm on the hunt for a suitable SLOG. It seems the P3700 is about the only reasonable choice. Are there other more cost-effective options?

On the flip side, leaving the ZIL on the pool disks sustains about 100MB/s sync writes, so the S3700 is still a substantial improvement.
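
For completeness, the ramdisk experiment can be reproduced with md(4); a hedged sketch assuming a pool named tank. Obviously not something to leave in place, since a RAM-backed SLOG throws away exactly the durability that sync writes are supposed to provide:

Code:
# Create an 8GB swap-backed memory disk and attach it as a log vdev (test only!)
mdconfig -a -t swap -s 8g -u 0
zpool add tank log md0

# ...run the sync-write test, then tear it down again
zpool remove tank md0
mdconfig -d -u 0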
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Trying to puzzle out the S3700 perf issue: is that attached to the mainboard SATA, or is it on one of the HBAs? Try the mainboard SATA. You should be able to hit max speed.

The HBA performance can vary based on things like slot location, and normally it's maybe as much as 10% slower than mainboard SATA. A standard DC S3500 behind a 9211-8i, without being cleared first:

Code:
[jgreco@storage1] /mnt/storage1# dd if=/dev/zero of=/dev/da15 bs=1048576
^C6681+0 records in
6680+0 records out
7004487680 bytes transferred in 20.786805 secs (336967981 bytes/sec)


337MB/sec on a drive rated at peak 410MB/sec. Reasonable.

The other thing is that Crystal is kinda crappy for many of its benchmarks. Stick the thing directly in the machine and test from UNIX; this exercises the entire device driver and hardware stack without introducing other issues.

You're not going to find an appropriate SLOG that's cheap. The features that make a proper SLOG are power loss protection and low latency. If you don't have the PLP, then the SLOG is meaningless, and if you don't have the low latency, it's real slow. These are generally not consumer-level features, so Intel and others charge big bucks. Intel does make the 750, which works pretty well but doesn't have the endurance. I've talked in the past about abusing the write cache of a RAID card, which works well with certain caveats. You really do need to go to something like the P3700, or possibly one of the Optane devices, for a truly low-latency, high-quality SLOG.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
The CrystalDiskMark scores were taken from the mainboard SATA3 ports on my compute machine.

Here is result from storage machine's HBA:
Code:
storage# dd if=/dev/zero of=/dev/da9 bs=1048576
dd: /dev/da9: short write on character device
dd: /dev/da9: end of device
19074+0 records in
19073+1 records out
20000000512 bytes transferred in 85.405934 secs (234175771 bytes/sec)


And again from storage machine's mainboard SATA3-1 port:
Code:
storage# dd if=/dev/zero of=/dev/ada1 bs=1048576
dd: /dev/ada1: short write on character device
dd: /dev/ada1: end of device
19074+0 records in
19073+1 records out
20000000512 bytes transferred in 83.811000 secs (238632168 bytes/sec)


The boot drives on the SATA3-0 port of each machine are Samsung SSDs, which are performing reasonably within their advertised specs.

Another item to note is that this is a Dell-branded version of the S3700, but I wouldn't expect any significant differences other than needing an LSI controller to update the firmware (already done).

Also, while browsing around for possible replacement SLOG devices, I came across Intel's Optane 900p series. However, the available information on whether Optane counts as power-loss protected is confusing, given its lack of a DRAM buffer.
 

tahoward

Dabbler
Joined
Jan 7, 2018
Messages
24
That seems off.

Ya, I'm not sure how to proceed with this S3700... Intel tools and SMART info look OK as well. Seeing how hungry this system is for a lower-latency SLOG device (per the ramdisk test), I'm going to look at options for a P3700, 900p, or P4800X...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, I'm not sure what to tell you. I don't have any S3700s in stock and the S3710s are all in service. Even the S3500 I posted upstream was doing better than your S3700, and that was on the slightly slower HBA-and-expander setup, so I can only really come back to that same "it's off" opinion.
 