Moving Away from Synology 30TB setup -> Rackmount Build Confirmation

Status
Not open for further replies.

kars85

Cadet
Joined
Dec 20, 2012
Messages
8
Hello! First post (I've lurked on and off for years, wanting to move in this direction eventually!), and while I have enough lab knowledge to be dangerous, I would like to run my thoughts past the community for ideas/confirmation. I have a DS1515+ with two expansion units: (13) 3TB drives and (2) 100GB Intel S3500 SSDs for R/W cache. I plan on decommissioning an old DS213+ I have offsite as part of this move.

I'm tearing down my old lab in order to simplify, so I have some decent equipment to work with (aside from capacity drives).

With file storage/serving (NFS/SMB) as my primary use case, here's my proposed build:

Case: NORCO RPC-4308 4U Shortdepth case
Motherboard: Supermicro MBD-X10SLH-F-O
CPU: Intel Xeon E3-1241v3
Memory: 32GB ECC RAM (4× 8GB DIMMs)
Spinning drives (plan to buy): (6) or (8) 8TB or 10TB Seagate IronWolf or WD Red
SSDs (I have these available): (3) 200GB Intel S3700, (3) Samsung 250GB 850 EVO, (3) Samsung 120GB 850 EVO
Networking: Mellanox ConnectX-3 SFP+ PCIe card
HBA (available if needed): Dell PERC H310 (will confirm it's flashed to IT mode; I think it is)

Current storage usage (backup size will be reduced dramatically with my new lab setup, so I'd estimate an additional 3TB free):
[attached screenshot: current storage usage by folder]


How does this build sound, and how can I optimize my drive setup? ZIL on the S3700s? My main desire is to use what I have on hand, but if anything falls obviously short/wrong I will spend the money to do it right. If I went with a RackStation, I'd be out $1100 just for the NAS, then another $1000 for drives :( With this proposed build, I should have a respectable FreeNAS box (albeit on the older socket 1150 platform) with money left over to correct any issues with my build.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I see you have a VM Template folder. Will any VMs be running from this array? I also see you have a few media folders. Do you have any plans for jails like Emby or Plex?
 

kars85

Cadet
Joined
Dec 20, 2012
Messages
8
I see you have a VM Template folder. Will any VMs be running from this array? I also see you have a few media folders. Do you have any plans for jails like Emby or Plex?

Hello @kdragon75 - thank you for your reply. I have no plans to virtualize anything anymore, nor do I have any plans for jails. Part of the reason I have parts available is that I decommissioned my vSphere lab with vSAN shared storage - it got to the point where messing around with VIBs for ESXi compatibility was more work than the return was worth. That's not to say that I won't ever run a VM or jails on this FreeNAS build, but at this time this is a pure storage device.

Part of simplifying my lab has been containerizing things in Docker, and I have an SFF Dell OptiPlex with an i7-6700 that I have been quite pleased with.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Looks like more memory than you would need, then. I would start with 16GB and add more as needed. I will say that's an odd choice of 4U case, as it only holds 8 drives and leaves no bays open for the SSDs or HDD expansion. Otherwise, it looks like it will be a nice box.

If you plan to use drives as cache, ZFS works differently than Synology's or other solutions. By default (for async writes), ZFS will buffer them in RAM (as a transaction group) and write the data out in an optimized pattern; essentially, it will take a bunch of small writes and make them sequential. This reduces/eliminates the need for a write cache, especially if you plan to run 8 drives. SSDs can still be used for read cache, but generally that only makes sense for highly loaded, LARGE, or VM storage servers where latency can be an issue.
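
If you want to see this for yourself, a couple of commands from the FreeNAS shell will show the relevant knobs (a minimal sketch; the dataset name is a placeholder):

Code:
# How often ZFS commits a transaction group from RAM to disk (FreeBSD default: 5 seconds)
sysctl vfs.zfs.txg.timeout

# Whether a dataset honors sync write requests (standard), forces them (always),
# or treats everything as async (disabled) -- "tank/media" is a placeholder name
zfs get sync tank/media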
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
First post (I've lurked on and off for years, wanting to move in this direction eventually!), and while I have enough lab knowledge to be dangerous, I would like to run my thoughts past the community for ideas/confirmation.
Some "required reading" for you, if you have not already done it:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Proper Power Supply Sizing Guidance
https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/

Don't be afraid to be SAS-sy
https://forums.freenas.org/index.php?resources/don't-be-afraid-to-be-sas-sy.48/

Building, Burn-In, and Testing your FreeNAS system
https://forums.freenas.org/index.php?resources/building-burn-in-and-testing-your-freenas-system.38/

Github repository for FreeNAS scripts, including disk burnin
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/

solnet-array-test (for drive / array speed) non destructive test
https://forums.freenas.org/index.php?resources/solnet-array-test.1/

Useful Commands
https://forums.freenas.org/index.php?threads/useful-commands.30314/#post-195192

The ZFS ZIL and SLOG Demystified
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

Testing the benefits of SLOG using a RAM disk!
https://forums.freenas.org/index.ph...s-of-slog-using-a-ram-disk.56561/#post-396630
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
With file storage/serving (NFS/SMB) as my primary use case, here's my proposed build:
Were you looking for suggestions or just a sanity check of what you had already picked out?
 

kars85

Cadet
Joined
Dec 20, 2012
Messages
8
Looks like more memory than you would need, then. I would start with 16GB and add more as needed. I will say that's an odd choice of 4U case, as it only holds 8 drives and leaves no bays open for the SSDs or HDD expansion. Otherwise, it looks like it will be a nice box.

If you plan to use drives as cache, ZFS works differently than Synology's or other solutions. By default (for async writes), ZFS will buffer them in RAM (as a transaction group) and write the data out in an optimized pattern; essentially, it will take a bunch of small writes and make them sequential. This reduces/eliminates the need for a write cache, especially if you plan to run 8 drives. SSDs can still be used for read cache, but generally that only makes sense for highly loaded, LARGE, or VM storage servers where latency can be an issue.

Appreciate the tip on the RAM. I have the DIMMs already, though, so I might just throw them in? The case is odd - yes. I would love a full-length chassis with a more elegant/featured drive layout, but I am kind of shoehorned into a short-depth chassis by my Tripp Lite 12U wall-mount rack. I can get a couple of 5.25" -> 3.5" adapters to bring the unit to a total of 10 drives, and by that point I'll have at least one HBA in play already (the board only has 6 SATA3 ports). Beyond that, I'm looking at a DAS, and that is likely some time down the road.

Based on my current utilization, I've got about 10TB of media and another 3TB of "seeding" data. 250GB/month is probably a good growth estimate. Soon I'll get hardlinks figured out between my Docker containers and storage to further optimize my growth.
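
From what I've read so far, hardlinks only work within a single filesystem, and each ZFS dataset is its own filesystem, so the seeding and media folders will need to live on the same dataset - a minimal sketch with placeholder pool/file names:

Code:
# Link the seeded copy into the media library without duplicating the data.
# Both paths must be on the same dataset -- "tank" and the file names are placeholders.
ln /mnt/tank/media/seeding/show.s01e01.mkv /mnt/tank/media/tv/show.s01e01.mkv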

I am still getting a handle on the terminology, but is what you're saying to use the SSDs for ZIL? Also, with 64-80TB of raw capacity (not counting two-drive fault tolerance), how would you go about creating volume(s)? One big one, or spread it out a little?
 

kars85

Cadet
Joined
Dec 20, 2012
Messages
8
Were you looking for suggestions or just a sanity check of what you had already picked out?

Thanks! Suggestions are welcome! Also, I appreciate the guides - I will take some time to read them in lieu of the questions I asked a minute ago. Sorry!

In terms of a sanity check, I'm more concerned about volume layout - with large amounts of data like this, it's difficult (or impossible) to change volume allocation once live data is on it.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
In ZFS there is a read cache (L2ARC) and a write "cache" (SLOG). The SLOG only comes into play with synchronous writes. As of (I think) FreeNAS 11.1-U5, we honor sync write requests from SMB, so anything that asks for them will get them. There is no hard and fast rule anymore. With that said, MOST SMB clients won't ask for sync writes. It's more common on the Mac side, from what I have seen on the forum.

The idea of the SLOG is that if something requests a synchronous write, we can save it to the SSD faster than to the pool of hard drives and move on to the next one. From there, the writes are flushed to the pool from RAM as each transaction group commits. This is why I don't like calling the SLOG a cache; it's not inline. It's more of a backup of the sync writes held in RAM before they hit the disks. In fact, the only time you read from a SLOG is when things go wrong, like a power loss mid sync write. Only then does ZFS read the SLOG and finish writing the data to disk.
Check out the links from @Chris Moore for more/better information.
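
If you do end up trying one of the S3700s as a SLOG, adding (or later removing) it is a one-line operation - a sketch with placeholder pool/device names:

Code:
# Attach a dedicated SLOG device to an existing pool ("tank" and "da8" are placeholders)
zpool add tank log da8

# An L2ARC read cache device is added the same way
zpool add tank cache da9

# Log and cache devices can be removed later without harming the pool
zpool remove tank da8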

As for the pool layout, 8 drives of that size is on the border... Some would say one pool with two raidz2 vdevs; others might say one pool with one wide raidz2.
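
To make those two layouts concrete (a sketch; the pool and disk names are placeholders):

Code:
# Option 1: one pool, two 4-disk raidz2 vdevs
# (4 drives of usable space, 4 lost to parity, but faster resilvers and more IOPS)
zpool create tank raidz2 da0 da1 da2 da3 raidz2 da4 da5 da6 da7

# Option 2: one pool, a single 8-wide raidz2 vdev
# (6 drives of usable space, only 2 lost to parity, slower resilvers)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7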
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
with large amounts of data like this, it's difficult (or impossible) to change volume allocation once live data is on it.
I'm glad to see you're trying to plan this out in a meaningful way.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Appreciate the tip on the RAM. I have the DIMMs already, though, so I might just throw them in?
Use it if you have it. Not going to hurt anything.
I would love a full-length chassis with a more elegant/featured drive layout, but I am kind of shoehorned into a short-depth chassis by my Tripp Lite 12U wall-mount rack.
That will certainly limit your options. You might want to change to something different in the future.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Get a vertical mount for the case ;)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Get a vertical mount for the case ;)
Have you used that before? I have considered something like that, but I was worried about the weight of my servers. The 48-bay Chenbro is very heavy with all the drives installed.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Have you used that before? I have considered something like that, but I was worried about the weight of my servers. The 48-bay Chenbro is very heavy with all the drives installed.
I have not. I was looking at them when I was thinking about downsizing my lab... But then we bought a house... :D
 