First FreeNAS Build – Feedback Please!

Status
Not open for further replies.

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
I am a newbie to FreeNAS and this is my first FreeNAS build. I'm currently using a NAS server I built several years ago with Windows Server 2012 Storage Spaces, serving storage for ESXi datastores as well as file shares for my movies, which are streamed via Kodi. I can provide more details on this setup if needed.

I would like your opinion on the components being used in the FreeNAS build. These are components I acquired over the years that were just lying around the house. Here is the list:

Chassis/Enclosure:
Supermicro SC836 series 16 bay with redundant 1000 watt power supply

Motherboard:
Supermicro X9DRD-7LN4F-JBOD

CPU: 2x E5-2650 v2

Memory: 128GB DDR3 ECC

SAS HBA: Onboard LSI SAS 2308 flashed in IT Mode, PCIe LSI 9211-8i flashed in IT Mode. All 16 drives are connected to the HBAs.

Network Adapters: 1x IPMI, 4x 1G NICs, 1x 10G Intel x520-da2 (two 10G ports)

Boot Drive: 2x Intel SSD 530 120GB (mirrored)

L2ARC Drive: 1x Micron SSD M500DC 800GB

ZIL/SLOG Drive: 1x Intel SSD DC S3700 200GB

Hard Drives: 4x 1TB drives (various manufacturers), 4x 8TB WD white-label drives (256MB cache, 5400RPM, same as the 8TB WD Red), 4x 6TB WD Gold enterprise datacenter drives (128MB cache, 7200RPM)

Please provide some feedback on the components and any suggestions for improvements. As I stated before, I'm not sure what I'm going to use the new NAS for, but I welcome ideas. Additionally, as I learn more about zvols, vdevs, storage pools, iSCSI zvols, RAIDZ, etc., I will probably have some setup questions.
 
Last edited by a moderator:

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
From a quick review, and I am by no means an expert...
The Intel site shows those CPUs support AES-NI, which means encryption can be enabled without a large performance impact. So that's good.

Network Adapters: 1x IPMI, 4x 1G NICs, 1x 10G Intel x520-da2 (two 10G ports)
From what I have read here on the forums, Chelsio 10G adapters are top notch for FreeNAS, as they are natively supported in version 11. Someone else might have additional input on this; I'm unsure whether the Intel you mentioned above will work without issue or whether you should grab a Chelsio. I only know the Chelsio was recommended to me on these forums.

Some quick ballpark math on your PSU: 1000 W available. The CPUs are 80 W TDP each, so take off 160 W; let's round up to 250 W to cover the motherboard and RAM too. That leaves 750 W for drives and fans. Based on the thread I mention below, figure about 35 W per hard drive. You have 12, so 12 * 35 = 420 W. 420 W (drives) + 250 W (CPUs, motherboard, RAM) = 670 W of the 1000 W.

The 1000 W power supply should be fine for powering your hard drives, but that thread is still worth a read for the parts about 12 V rails, just to make sure you don't run into any funky issues.
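The ballpark above can be scripted as a sanity check. The wattage figures below are the same rough estimates from this post, not measured values:

```shell
# Ballpark PSU budget using the rough figures above (estimates, not measurements).
cpu_w=160        # 2x E5-2650 v2 at 80 W TDP each
board_w=90       # rounding allowance for motherboard, RAM, HBAs (250 W total with CPUs)
drives=12
drive_w=35       # worst-case per-drive draw, highest at spin-up
total=$(( cpu_w + board_w + drives * drive_w ))
echo "Estimated peak draw: ${total} W of 1000 W available"
```

That leaves over 300 W of headroom even before the redundant second supply is considered.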

As for how to set up the vdevs, zpools, etc., check out Cyberjock's ZFS guide; it taught me a bunch about ZFS. I can't find the link at the moment, but he is one of the MVPs around here, from what I understand. If you see his name on a post, click the link in his signature where it mentions the guide.

Another thought: with how much RAM you have in your build, you may not actually need an L2ARC, and it could actually decrease performance.

Again, I'm no expert, but my two cents is that it all looks pretty solid and normal to me. Best of luck with the build!
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
From what I have read here on the forums, Chelsio 10G adapters are top notch for FreeNAS, as they are natively supported in version 11.

Thanks for the feedback. Do you know the model number of the Chelsio 10g adapters?
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
2x E5-2650 v2 CPUs
Those chips also support AES-NI for encryption, so you are still good there.

As for the model number, I don't know, to be honest. The card I purchased is listed below; I haven't actually gotten everything running in my lab just yet (still waiting on a PSU, RAM sticks, and WD Reds), but here's what I bought. Once I know 100% whether or not it works, I will post back for you.
Chelsio NetApp Dual Port SFP+ 10GBE PCIe 111-00603-A0
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Your components look fine--as @gamedude9742 mentioned, Chelsio tends to be preferred for 10G NICs, but Intel is pretty well-supported as well. Mirrored boot SSDs are overkill, but won't hurt anything. If iSCSI is going to be a significant part of your workload, you'll probably be wanting to configure your pool in mirrors.
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Thanks for the clarification, @danb35. Given that information, I would stick with the Intel, knowing you have a cheap backup option if required. Give it a try and see what happens xD
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
If iSCSI is going to be a significant part of your workload, you'll probably be wanting to configure your pool in mirrors.

Yes, iSCSI will be used for my ESXi datastores. I also want to create Windows file shares for my movies so I can stream via Kodi. Please provide more detail on the setup of the iSCSI mirrored pool.
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
Chelsio tends to be preferred for 10G NICs, but Intel is pretty well-supported as well.

The Chelsio adapters are so cheap that I can switch if you think it will provide better performance.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Please provide more detail on the setup of the iSCSI mirrored pool.
Striped mirrors are pretty straightforward to set up through the GUI. Basically, you want as many vdevs as possible to give you as many IOPS as possible--that means mirrors.
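To illustrate what a striped-mirror layout looks like under the hood (the FreeNAS GUI builds the same thing for you), here is a sketch using placeholder device names `da0` through `da9` and a hypothetical pool name `tank`; your actual device identifiers will differ:

```shell
# Four 2-way mirror vdevs striped into one pool: roughly 4x the IOPS of a
# single RAIDZ vdev of the same disks, at the cost of 50% usable capacity.
# Device names are placeholders for illustration only.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Growing later is easy: add another mirror pair to the stripe.
zpool add tank mirror da8 da9
```

Each additional mirror vdev adds its IOPS to the pool, which is why mirrors are the usual recommendation for iSCSI/VM workloads.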
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
Striped mirrors are pretty straightforward to set up through the GUI. Basically, you want as many vdevs as possible to give you as many IOPS as possible--that means mirrors.

OK, I'll do some research on how to do this. I will be executing the setup in the next couple of days; if I have questions, I will post on the forum. I am open to any other recommendations as well.
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
Any feedback on the L2ARC and ZIL/SLOG drives? Do I need them? Are the makes and models of the drives OK? Are the sizes OK as well?

L2ARC Drive: 1x Micron SSD M500DC 800GB

ZIL/SLOG Drive: 1x Intel SSD DC S3700 200GB
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Any feedback on the L2ARC and ZIL/SLOG drives? Do I need them? Is the make and model of the drive ok? Is the size of the drive OK as well?

L2ARC Drive: 1x Micron SSD M500DC 800GB

ZIL/SLOG Drive: 1x Intel SSD DC S3700 200GB
With the amount of RAM you have, you will have to wait and see whether an L2ARC benefits your use case. There are CLI commands you can run to determine whether an L2ARC would be beneficial; if you search the forums here for L2ARC, you should find a good read on them.

As for the SLOG drive, you will see a good speed increase with ESXi. ESXi issues all of its writes as sync writes, meaning it requires confirmation from the storage device (NAS, HD, etc.) that the data has been written to disk successfully before it will continue sending additional data. The SLOG device can log those writes and immediately report back "they are on disk" so that ESXi can continue; ZFS then writes the data out to your pool at its set intervals. The drive you have is good for writes and has power-loss protection (you should still grab a UPS, though).

I'm not sure how good the Micron drive is, but for an L2ARC it shouldn't matter as much unless your system's performance is critical. If a SLOG drive dies, you can lose in-flight sync data; if an L2ARC drive dies, you lose no data, since it is only a read cache, and the only impact is performance. Also, if your system's performance is critical, you would likely want to set up 2x SLOG devices in a mirror to protect against drive failure.
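On the CLI-commands point, a minimal sketch of judging whether the ARC is already absorbing your reads: on FreeBSD-based FreeNAS the ARC counters are exposed via sysctl, and the arithmetic is just a hit ratio. The counter reads are shown in comments and the sample numbers below are made up purely to demonstrate the calculation:

```shell
# On a FreeBSD-based FreeNAS box the live counters come from:
#   hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
#   misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)
# Sample numbers below just to show the arithmetic:
hits=980000
misses=20000
echo "ARC hit ratio: $(( 100 * hits / (hits + misses) ))%"
```

A consistently high hit ratio (high 90s) suggests RAM is already servicing most reads and an L2ARC would add little.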
 

gman2017

Dabbler
Joined
Nov 22, 2017
Messages
13
As for the ZIL drive you will see a good speed increase in ESXi.

Thanks for the feedback. As for the ZIL drive, are the make, model, and size appropriate? Please advise.
 

gamedude9742

Dabbler
Joined
Oct 1, 2014
Messages
26
Yeah, the S3700 is a fine SLOG drive. As for the size, there is a calculation; I just can't remember what it is. Somewhere on the forums here it says the ZIL should be some fraction of the available storage, but I don't think it's as much as 1/4, as that would be huge.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It is based on the potential writes to the pool within about 5 seconds (one transaction group), so it doesn't need to be large. There is a good discussion of the setup in a thread by @Stux; I will look for the link.
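A rough version of that calculation, with assumed numbers: the SLOG only ever holds a few in-flight transaction groups' worth of sync writes, so its ceiling is what the network can deliver in those few seconds, not a fraction of the pool:

```shell
# Worst case: the fastest link keeps the SLOG fed for ~2 transaction groups.
# All figures are assumptions for illustration.
link_mb_s=1250     # 10 GbE theoretical maximum, ~1250 MB/s
txg_seconds=5      # default ZFS transaction group flush interval
txgs_in_flight=2   # keep a couple of txgs of headroom
need_mb=$(( link_mb_s * txg_seconds * txgs_in_flight ))
echo "SLOG worst-case need: ~${need_mb} MB"
```

That comes to roughly 12.5 GB, so even after heavy over-provisioning, a 200 GB S3700 is far larger than the SLOG will ever use.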

 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yeah the s3700 is a fine zil drive. As for the size there is a calculation I just can't remember what it is. Somewhere on the forums here it says the zil should be like 1/4 the size of available storage or something like that. Its not 1/4 though I don't think as that would be huge.
Thanks for the feedback. As far as the ZIL drive(s), is the make, model, and size appropriate? Please advise.
Here is the link to the build log I mentioned: https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
@Stux talks about how to over-provision the drive and gives a lot of advice that I think will serve you well if you take the time to read it.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
These are components I acquired over the years that were just lying around the house.
I wish the crap I have around the house was this good...
 

Rick Arman

Dabbler
Joined
Jan 5, 2017
Messages
32
Chassis/Enclosure: Supermicro SC836 series 16 bay with redundant 1000 watt power supply

SAS HBA: Onboard LSI SAS 2308 flashed in IT Mode, PCIe LSI 9211-8i flashed in IT Mode. All 16 drives are connected to the HBAs.


What did you get for a SAS expander?
 
