Proposed FreeNAS Hardware

NugentS (MVP)
Hiya,
I am a long-time Synology user (with a brief foray into QNAP), but it's time to upgrade the very ageing hardware in my home lab, most of which is approaching 9+ years old. Most of it was bought in 2010 or 2011, so I guess it doesn't really owe me anything.

Reasons for upgrade:
1. More speed; who can say no to more speed?
2. The VM host servers (and that's a very generous description of two crappy old PCs) are running out of memory and can't take any more: the boards are limited to 32GB and run AMD Bulldozer CPUs.
3. I want 10Gb to my desktop from the filestore. I probably won't get the full 10Gb, but I should get a lot more than 1Gb.

I am sorting out a couple of ESXi hosts, which is easy. The NAS, not so much; that's much more of a challenge.
Use case:
1. A high-speed / low-latency SSD-based iSCSI LUN/pool/whatever it's called. My plan is 4 SSDs of 0.5TB each in RAID5 (or rather RAIDZ1 in ZFS terms).
2. Bulk storage: a number of 6TB drives, probably in RAID6 (RAIDZ2).

Proposed hardware, some new, some refurb:
SuperMicro CSE-GS5A-753K case (Supermicro website)
SuperMicro MBD-X10SRi-F motherboard (Supermicro website)
Intel Xeon E5-2660 v3 (10 cores at 2.6GHz)
32-64GB of DDR4-2133 ECC RAM
10Gb NIC, currently expected to be an Intel X520-DA2 (SFP+)
LSI 9211-8i HBA
A bunch of HDDs and SSDs yet to be defined

I have a 10Gb SFP+ switch already; I just have to run some fibre around the house.
I'll probably eBay the old NASes after decommissioning them and see if I can recoup some of this expense.

Booting probably from USB

Points I have noted already:
1. I may need to flash the LSI card with different (IT-mode) firmware; I haven't done that before.
2. Some of the WD Reds I have as spares and was planning on using may be an issue. The ones currently in use are all EFRX; the two spares are a more recent purchase and are EFAX (which I think means shingled/SMR). Does anyone know if I can return them as unsuitable for a NAS?

Would anyone care to critique my current plan?

For backups I intend to build a second, similar machine and replicate snapshots between them. Losing most of the data wouldn't matter, although it would be a nuisance; the really important stuff is in the cloud as well. The biggest problem is figuring out where to put all this kit in the house.
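For what it's worth, snapshot replication between two FreeNAS boxes is normally set up through the built-in Periodic Snapshot and Replication Tasks, but as a rough illustration of what happens underneath, here is a minimal Python sketch wrapping zfs send/receive over SSH. The dataset, snapshot and host names (tank/data, backupnas, backup/data) are hypothetical placeholders, and it assumes an initial full replication has already been done.

# Minimal sketch of incremental ZFS snapshot replication over SSH.
# Assumes passwordless SSH to the backup box and that an initial full
# replication already exists; all names below are hypothetical.
import subprocess
from datetime import datetime

SRC_DATASET = "tank/data"       # dataset on the primary box (placeholder)
DEST_HOST = "backupnas"         # backup machine (placeholder)
DEST_DATASET = "backup/data"    # dataset on the backup pool (placeholder)

def take_snapshot(dataset):
    """Create a timestamped snapshot and return its full name."""
    snap = f"{dataset}@repl-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)
    return snap

def replicate(prev_snap, new_snap):
    """Pipe an incremental replication stream to the backup host."""
    send = subprocess.Popen(
        ["zfs", "send", "-R", "-i", prev_snap, new_snap],
        stdout=subprocess.PIPE,
    )
    recv = subprocess.Popen(
        ["ssh", DEST_HOST, "zfs", "receive", "-F", DEST_DATASET],
        stdin=send.stdout,
    )
    send.stdout.close()   # so zfs send sees a broken pipe if the receive dies
    if recv.wait() != 0 or send.wait() != 0:
        raise RuntimeError("replication failed")

if __name__ == "__main__":
    last_sent = f"{SRC_DATASET}@repl-20200101-000000"   # previous snapshot (placeholder)
    replicate(last_sent, take_snapshot(SRC_DATASET))

In practice the GUI's scheduled snapshot and replication tasks do exactly this on a timer; the script is only meant to show the moving parts.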
 

Frank Collins (Explorer)
As can be seen, I use a small HP Microserver with 16GB of RAM running 7 active jails, and it works well, with no more than about 20% CPU usage.

"Booting probably from USB"
This, however, is a problem; the recommendation is to boot from an SSD instead. I do this with a 16GB SSD attached to a USB-to-SATA adaptor. I haven't had any problems, but I will try to replace it with a PCIe SATA card, which is on its way if Australia Post hasn't lost it.
 

NugentS (MVP)
Frank,
Thanks for the bit about USB, which I shall take on board. I shall either use a small SSD or get a SATA DOM and use that.
I take it you have no issues with the rest of the hardware?

Sean
 

Frank Collins (Explorer)
Sean,
No, I haven't had any issues with any of the hardware. The Gen8 came with a puny Celeron, so it benefits from the far more powerful Xeon. 16GB of RAM is the maximum it can take and I have never run out; most of the time the RAM is used as cache.
 

NugentS (MVP)
Things have moved on a bit.
I now have a 24-bay 4U case for FreeNAS.
2 LSI HBAs in IT mode (hopefully genuine ones; I'm not sure how to tell)
A twin-port 10Gb NIC (Chelsio):
1 port for ESXi access (I may optionally switch on jumbo frames at some point)
1 port for other access
An Intel Optane 900P 280GB drive
4 x 1TB MX500 SSDs
6 x Seagate 12TB HDDs
Motherboard: X10SRi-F
CPU: E5-2660 v3
128GB of DDR4-2133 memory (8 x 16GB)
FreeNAS 11.3-U2.1

The server is currently running memtest for a while.

I have basically one user - me! This is a home play system.

The reason for the change from two single-port NICs to one twin-port NIC was that I ran out of PCIe 3.0 x8 slots. The motherboard has two of these plus an x16 and an x4; with two NICs that would have been one more x8 card than I had slots for, so adjustments were required. The 900P fits nicely in the x4 slot, with the NIC in the x16 and the HBAs in the x8 slots.

I am planning the SSDs as two 2-drive mirror vdevs striped into a single pool, giving roughly 1.8TB of available space; see the rough capacity sketch below.
The HDDs will probably also be mirrored, but might be RAIDZ2; I haven't decided yet.
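As a back-of-the-envelope comparison of the layouts being considered, here is a quick Python sketch. It does raw parity arithmetic only and ignores the TB-vs-TiB conversion, metadata overhead and the usual advice to keep block-storage pools well below full.

# Rough usable-capacity comparison for the layouts under consideration.
# Raw parity arithmetic only: no TiB conversion, no metadata/padding,
# no free-space headroom for iSCSI/VM workloads.

def striped_mirrors_tb(drive_tb, drives, width=2):
    """Usable TB for N drives arranged as striped mirror vdevs."""
    return (drives // width) * drive_tb

def raidz_tb(drive_tb, drives, parity):
    """Usable TB for a single RAIDZ vdev with the given parity level."""
    return (drives - parity) * drive_tb

print("4 x 1TB SSD, two 2-way mirrors   :", striped_mirrors_tb(1, 4), "TB")   # ~2 TB raw, ~1.8 TiB formatted
print("6 x 12TB HDD, three 2-way mirrors:", striped_mirrors_tb(12, 6), "TB")  # 36 TB
print("6 x 12TB HDD, RAIDZ2             :", raidz_tb(12, 6, 2), "TB")         # 48 TB

The usual trade-off: RAIDZ2 keeps 12TB more raw space here, while mirrors give better IOPS and easier expansion, which tends to matter more for VM block storage.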

I use ESXi as a hypervisor with probably 6 machines running 24x7 and a few more from time to time. Currently these run off an iSCSI target on a QNAP with 4 x 0.5TB in RAID5, and I back up using Archiware Pure to a Synology NAS.

The SSDs are for the VMs.
The HDDs are for data hoarding and a secondary VM store for VMware.
I will have a tertiary iSCSI/NFS store for VMware on a Synology in case I need to reboot or fix the FreeNAS box.
Initially I will back up using Duplicati and Archiware Pure to a Synology NAS, but may later build a much smaller FreeNAS to act as a snapshot target.

It occurred to me to use NFS rather than iSCSI. Why?
1. iSCSI is thick provisioned. The VMs may be thin, but the iSCSI zvol itself seems to be thick.
2. An NFS share is just a normal dataset and is thin provisioned. Also, each server can have its own dataset, whereas with iSCSI it's all one homogenised mess.
3. NFS is readable elsewhere, and if each server has its own dataset then each server can be snapshotted individually, with the associated benefits.

BUT
1. Writes to NFS are, by default, abysmally slow, because ESXi issues sync writes over NFS (see the timing sketch below).
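To make that concrete, here is a small, generic Python timing sketch (nothing FreeNAS-specific; the file path is whatever you point it at). It contrasts buffered writes with writes that are fsync()ed after every block, which approximates what sync NFS traffic forces the pool to do when there is no fast SLOG; run it against a file on the NFS mount and on a local disk to see the gap.

# Quick-and-dirty comparison of buffered vs per-block fsync()ed writes.
# Point it at a file on the NFS datastore (and at a local path for
# comparison); the fsync case approximates sync NFS write behaviour.
import os
import sys
import time

def write_blocks(path, sync_each, count=2000, size=4096):
    buf = os.urandom(size)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())   # force this block to stable storage
    return time.monotonic() - start

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "testfile.bin"
    print(f"buffered (async-ish): {write_blocks(target, sync_each=False):.2f} s")
    print(f"fsync per block     : {write_blocks(target, sync_each=True):.2f} s")
    os.remove(target)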

Now, I have done a lot of reading about the ZIL, SLOG (an off-pool ZIL, as I understand it), ARC and L2ARC, and I happen to have this 900P available...
Interesting viewing: https://www.youtube.com/watch?v=OAe3_8qoCWo

So how about this?
1. Partition the 900P into two small partitions (say 20GB each, subject to change based on calculations; see the sizing sketch after this list), leaving the rest empty.
2. Use partition 1 as a SLOG for the SSD pool and partition 2 as a SLOG for the HDD pool.
3. Optionally use a chunk of the rest for L2ARC; however, I think this is a step too far given that I have 128GB of RAM.
4. Leave sync writes enabled (sync=standard): a SLOG only ever accelerates sync writes, so turning sync off would simply bypass it. With a working SLOG the performance hit shouldn't matter. If the SLOG fails, ZFS falls back to the in-pool ZIL, which only becomes a problem if a crash or power loss happens at the same time, in which case all bets are off for in-flight writes.
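On the "subject to change based on calculations" part of point 1: a common rule of thumb is that a SLOG only needs to hold a couple of transaction groups' worth of incoming sync writes, bounded by how fast data can arrive (ZFS commits a txg roughly every 5 seconds by default). A rough sketch, assuming the single 10Gb link is the ceiling:

# Rule-of-thumb SLOG sizing: a couple of transaction groups' worth of
# data at the maximum ingest rate (here, one 10Gb NIC port). The 5 s
# txg interval is the usual ZFS default; treat all of this as an estimate.

def slog_size_gib(link_gbit_per_s, txg_seconds=5.0, txg_count=2):
    bytes_per_second = link_gbit_per_s * 1e9 / 8    # line rate in bytes/s
    return bytes_per_second * txg_seconds * txg_count / 2**30

print(f"10Gb ingest -> ~{slog_size_gib(10):.1f} GiB of SLOG actually used")   # ~11.6 GiB

So two 20GB partitions leave comfortable headroom even if both pools take sync writes flat out at the same time.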

I also suspect that I am but scratching the surface here.

Thoughts:
Actually, looking at FreeNAS, I think I was wrong when I said iSCSI is thick provisioned. I just set up a 100GB block iSCSI device (a zvol) and my available disk space did not go down, so I guess it was created sparse. It will grow as necessary until it reaches 100GB, but won't shrink even if I delete a machine.
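One way to confirm whether a given zvol is sparse is to look at its refreservation property: a thick zvol reserves roughly its full volsize, while a sparse one has no refreservation. A minimal sketch, with "tank/vmstore" as a purely hypothetical zvol name:

# Check whether a zvol is sparse (thin) or thick provisioned by reading
# its ZFS properties; 'tank/vmstore' is a hypothetical zvol name.
import subprocess

def zvol_provisioning(zvol):
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "property,value", "volsize,refreservation", zvol],
        capture_output=True, text=True, check=True,
    ).stdout
    props = dict(line.split("\t") for line in out.strip().splitlines())
    # A thick zvol reserves roughly its whole volsize up front;
    # a sparse one has no refreservation at all.
    return "thick" if props["refreservation"] not in ("none", "0") else "sparse (thin)"

print(zvol_provisioning("tank/vmstore"))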

I guess I could set up a block iSCSI device for each VM guest on VMware. Whilst it's possible, it doesn't really scale.

No iSCSI, no VAAI.
 