Building a Premium FreeNAS system - Am I on track?

dvc9

Explorer
Joined
May 2, 2012
Messages
72
I have now built seven FreeNAS-based systems that are running at several video post-production houses.

One thing I have learned is that you should really consider buying a turnkey solution, either from iXsystems or from other vendors like PixitMedia with their PixStor.

Not kidding: you need to be at a master level of storage administration to manage and fully utilize such a system.

But, putting that aside...

As you are using a mixed Windows and Mac environment, you should optimize your build for Samba. Drop NFS and AFP.

If you use Active Directory for security, you also need to be sure about how you use ACLs; ACL inheritance will be a pain, and if you are used to a symlink strategy, that should go too.

Regarding your RAID setup:
Streaming multiple 4K video streams to more than 5 editing stations will be a pain with that few disks, because of IOPS. You should have at least 48 drives, using RAIDZ2 with 12 vdevs of 4 drives each. That might give you something... you will still need around 4TB of L2ARC; use 2 mirrored M.2 drives for that.
Pump it up with RAM, too: more than 256GB.

It's better to have more smaller drives than a few big ones ^^
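
To put rough numbers on the IOPS point, here is a quick back-of-the-envelope sketch in Python (the drive size, throughput, and IOPS figures are assumptions for illustration only, not measurements from any of these builds):

```python
# Back-of-the-envelope arithmetic for the 48-drive layout above (12 RAIDZ2
# vdevs of 4 drives each). The drive size, throughput, and IOPS figures are
# assumptions for illustration.

VDEVS = 12
DRIVES_PER_VDEV = 4
PARITY_PER_VDEV = 2            # RAIDZ2 keeps two parity drives per vdev

drive_tb = 8                   # assumed drive size (TB)
drive_seq_mbps = 180           # assumed sequential throughput per drive (MB/s)
drive_iops = 150               # assumed random IOPS per 7200 rpm drive

data_drives = VDEVS * (DRIVES_PER_VDEV - PARITY_PER_VDEV)

usable_tb = data_drives * drive_tb        # before ZFS overhead and free-space headroom
seq_mbps = data_drives * drive_seq_mbps   # sequential throughput scales with data drives
rand_iops = VDEVS * drive_iops            # random IOPS scale roughly per vdev, not per drive

print(f"~{usable_tb} TB usable, ~{seq_mbps} MB/s sequential, ~{rand_iops} random IOPS")
```

The last line is the point: random IOPS scale with the number of vdevs, which is why many narrow vdevs hold up better for multi-station editing than a few wide ones.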

Network will not be an issue. But if you have a decent budget, get a true Mellanox 40GbE card and an Arista switch with 40GbE uplinks and 48 RJ45 ports. LAG will be a painful experience with video.

Good luck!

But seriously, do consider a fully packaged solution with support, or you might find yourself deep down the rabbit hole in a few years.
 

journich

Cadet
Joined
Mar 25, 2019
Messages
8
My thought process is that I am going to build my server rather than buy one.
I'm looking at either the Supermicro X11SSH-CTF, which has dual onboard 10GBase-T LAN ports (Intel X550) and which, according to this post and this excellent overview, seems to be fully supported in FreeNAS (please correct me if I am wrong).

The (potential) downside is the 64GB RAM limitation. I would start with 8 drives, and I believe 64GB of RAM should cover this, but if I add an external disk enclosure with 10 or more additional drives in the future, will I have to replace the motherboard? I only ask because I've read in some places that 1GB of RAM per 1TB of storage is the general rule, but others have said this is not the case.

Another option is the ASRock Rack EP2C612D8-2T8R motherboard, which has dual 10GBase-T LAN ports (Intel X540) and dual CPU support, and can take up to 256GB or 512GB of RAM (more than I should even need). I can't find out much about it though, other than this post with a vague recommendation of it for FreeNAS (they are talking about the single-CPU model, but that one does not have 10G LAN ports). Obviously it's an ATX board, compared to the Supermicro being a micro-ATX.

Another board that looks good is the Supermicro X11SPH-nCTF, which should work if I am reading this post correctly.

Any thoughts welcome. I am trying to future-proof myself to a degree, to avoid having to upgrade in the near future.

Thanks
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The (potential) downside is the 64GB RAM limitation. I would start with 8 drives, and I believe 64GB of RAM should cover this, but if I add an external disk enclosure with 10 or more additional drives in the future, will I have to replace the motherboard?
The thing about RAM in FreeNAS is that it is used for caching, with something called ARC (Adaptive Replacement Cache), and it is hard to determine how much is enough, but it strongly affects performance. Here is a series of informative articles that talk about it; good reading:

A Complete Guide to FreeNAS Hardware Design, Part I: Purpose and Best Practices
by Josh Paetzel; iXsystems Director of IT; Feb 3, 2015
http://www.freenas.org/blog/a-compl...are-design-part-i-purpose-and-best-practices/

A Complete Guide to FreeNAS Hardware Design, Part II: Hardware Specifics
by Josh Paetzel; iXsystems Director of IT; Feb 5, 2015
http://www.freenas.org/blog/a-complete-guide-to-freenas-hardware-design-part-ii-hardware-specifics/

A Complete Guide to FreeNAS Hardware Design, Part III: Pools, Performance, and Cache
by Josh Paetzel; iXsystems Director of IT; Feb 10, 2015
http://www.freenas.org/blog/a-compl...-design-part-iii-pools-performance-and-cache/

A Complete Guide to FreeNAS Hardware Design, Part IV: Network Notes & Conclusion
by Josh Paetzel; iXsystems Director of IT; Feb 12, 2015
http://www.freenas.org/blog/a-compl...ware-design-part-iv-network-notes-conclusion/
I only ask because I've read in some places that 1GB of RAM per 1TB of storage is the general rule, but others have said this is not the case.
It really depends on how much performance you expect from the system. The more of your working set of data that can be held in cache, the better your performance will be. For some people it is not possible to have enough RAM for this, so they extend the ARC with what is called L2ARC, using SSDs.
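
If it helps to see the rule of thumb as plain arithmetic, here is a rough sketch; the pool size and working-set numbers are made up for illustration, and the real answer still depends on your workload:

```python
# Illustrative arithmetic for the "1 GB of RAM per 1 TB of storage" rule of
# thumb. The pool size and working-set figures are made-up assumptions.

raw_storage_tb = 80            # assumed raw pool size in TB
working_set_gb = 400           # assumed "hot" data touched regularly, in GB

rule_of_thumb_ram_gb = raw_storage_tb * 1   # 1 GB RAM per TB guideline
base_ram_gb = 16                            # rough floor for the OS itself

ram_gb = max(rule_of_thumb_ram_gb, base_ram_gb)

# Whatever part of the working set does not fit in ARC (RAM) is a candidate
# for L2ARC on SSD. Note that L2ARC also consumes some RAM for its index,
# so it is not a free substitute for memory.
l2arc_gb = max(0, working_set_gb - ram_gb)

print(f"guideline RAM: {ram_gb} GB")
print(f"L2ARC to consider for the rest of the working set: ~{l2arc_gb} GB")
```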

So, I can't recall if I shared this with you before, but you should take a look at these guides of known working hardware:

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev. 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

10 Gig Networking Primer
https://forums.freenas.org/index.php?resources/10-gig-networking-primer.42/

40Gb Mellanox card setup - infiniband
https://www.ixsystems.com/community/threads/40gb-mellanox-card-setup.51343/

Fibre Channel on FreeNAS 11.1u4
https://www.ixsystems.com/community/resources/fibre-channel-on-freenas-11-1u4.93/
 

journich

Cadet
Joined
Mar 25, 2019
Messages
8
I have now built seven FreeNAS-based systems that are running at several video post-production houses.

One thing I have learned is that you should really consider buying a turnkey solution, either from iXsystems or from other vendors like PixitMedia with their PixStor.

Not kidding: you need to be at a master level of storage administration to manage and fully utilize such a system.

But, putting that aside...

As you are using a mixed Windows and Mac environment, you should optimize your build for Samba. Drop NFS and AFP.

If you use Active Directory for security, you also need to be sure about how you use ACLs; ACL inheritance will be a pain, and if you are used to a symlink strategy, that should go too.

Regarding your RAID setup:
Streaming multiple 4K video streams to more than 5 editing stations will be a pain with that few disks, because of IOPS. You should have at least 48 drives, using RAIDZ2 with 12 vdevs of 4 drives each. That might give you something... you will still need around 4TB of L2ARC; use 2 mirrored M.2 drives for that.
Pump it up with RAM, too: more than 256GB.

It's better to have more smaller drives than a few big ones ^^

Network will not be an issue. But if you have a decent budget, get a true Mellanox 40GbE card and an Arista switch with 40GbE uplinks and 48 RJ45 ports. LAG will be a painful experience with video.

Good luck!

But seriously, do consider a fully packaged solution with support, or you might find yourself deep down the rabbit hole in a few years.

Thanks for this advice, much appreciated. One thing I may not have made clear is that all access is remote, so there are no local users (other than me). So performance, while important, is not critical.

That's not to say it won't be a problem in a few years' time. I've looked at buying a turnkey solution, but the quotes I have seen just seem way over the top, even though I don't have to work to a specific budget.

Right or wrong, I think I will be building this first server.
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Thanks for this advice, much appreciated. One thing I may not have made clear is that all access is remote, so there are no local users (other than me). So performance, while important, is not critical.

That's not to say it won't be a problem in a few years' time. I've looked at buying a turnkey solution, but the quotes I have seen just seem way over the top, even though I don't have to work to a specific budget.

Right or wrong, I think I will be building this first server.

OK, that changes things.

Then I wouldn't recommend FreeNAS, but a simple 1U CentOS server with a decent RAID card and network card, and a DAS disk shelf with 12 disks.

The trick now is to format it right. I usually use a mirrored RAID 5 setup, similar to what Autodesk does for their Flame family.

More info here :
https://knowledge.autodesk.com/sear...-Volume-for-Flame-Media-Storage-Overview.html

That will give you at least 1.2 GB/s read/write and great IOPS. You can also grow your storage simply by adding another 12-bay DAS shelf, multiple times.
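
As a rough sanity check on that throughput figure, here is some back-of-the-envelope arithmetic in Python (the per-disk speed is an assumption, and this ignores the mirroring layer, controller limits, and filesystem overhead):

```python
# Back-of-the-envelope read-throughput arithmetic for 12-disk DAS shelves,
# each arranged as a RAID 5 style group. Per-disk speed is an assumed figure.

DISKS_PER_SHELF = 12
PARITY_DISKS_PER_SHELF = 1     # RAID 5 keeps one parity disk per group
disk_seq_mbps = 150            # assumed sustained sequential MB/s per disk

def sequential_gbps(shelves: int) -> float:
    data_disks = shelves * (DISKS_PER_SHELF - PARITY_DISKS_PER_SHELF)
    return data_disks * disk_seq_mbps / 1000   # GB/s

for shelves in (1, 2, 3):
    print(f"{shelves} shelf/shelves: ~{sequential_gbps(shelves):.1f} GB/s sequential")
```

With those assumptions a single shelf already lands above the 1.2 GB/s mark, and each additional shelf adds roughly the same again.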

When you share the drive to your machine, use NFS.

This can be used in a single- or dual-seat setup, not an enterprise setup; for that you're back to advanced caching features and everything ZFS stands for.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
I should warn you that I didn't say I'd used them with FreeNAS. They seem great for ESXi though. Apparently @depasseg and @c32767a have used the SFN5162F in the past on FreeNAS. It does appear to be a vendor-provided/supported driver. But really for $30-$40 you can't go wrong.

During the dark days of Intel 10G support, there was a period where we used the Solarflare cards in builds, but we got hit by a bug in the driver that led to kernel panics. It seems to be fixed now, but at the time we just shifted to Chelsio and haven't looked back.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
During the dark days of Intel 10G support, there was a period where we used the Solarflare cards in builds, but we got hit by a bug in the driver that led to kernel panics. It seems to be fixed now, but at the time we just shifted to Chelsio and haven't looked back.

Yeah, Chelsio's still probably the best choice.

The big problem with FreeBSD and networking is that you're putting a substantial strain on the networking subsystem. I spent years having to twiddle NMBCLUSTERS and learning about jumbo clusters, KVA settings, and all sorts of other exciting-and-impactful things. The overall problem is that the problems aren't along the lines of "I put the card in the server and it failed to recognize it". That's easy pass/fail stuff. It's also pretty clear that Chelsio, Intel, Mellanox, Solarflare, and others make stellar hardware. The problem is that networking occasionally turns into an immense juggling act. You have a smoothly operating system. Suddenly one disk hangs on a write for ten seconds. The number of queued mbufs explodes, memory gets tight, and a code path that isn't normally exercised comes into play. Something goes awry. There's a scenario where you could have a kernel panic. It's likely to be a situation where you don't understand the exact dynamics that took you off the deep end.
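
If you want to keep an eye on that kind of mbuf pressure yourself, a minimal watchdog sketch in Python might look something like this; the netstat -m parsing is a best-effort assumption about the output format and may need adjusting on your build:

```python
# A small watchdog sketch in the spirit of the mbuf-exhaustion scenario above:
# compare current mbuf-cluster usage against the kern.ipc.nmbclusters limit on
# a FreeBSD/FreeNAS host.

import re
import subprocess

def nmbclusters_limit() -> int:
    out = subprocess.run(["sysctl", "-n", "kern.ipc.nmbclusters"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def mbuf_clusters_in_use() -> int:
    out = subprocess.run(["netstat", "-m"], capture_output=True, text=True, check=True)
    # Expect a line like "32768/2624/35392 mbuf clusters in use (current/cache/total)"
    match = re.search(r"(\d+)/\d+/\d+ mbuf clusters in use", out.stdout)
    return int(match.group(1)) if match else 0

limit = nmbclusters_limit()
used = mbuf_clusters_in_use()
pct = 100.0 * used / limit
print(f"mbuf clusters in use: {used}/{limit} ({pct:.1f}%)")
if pct > 80:
    print("warning: mbuf clusters are running low; the network stack is under strain")
```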

This is really where having an appliance like FreeNAS excels, because there has been some default tuning work by iXsystems to create a system that works well.

But this is also a good reason to stay within the recommended hardware list, because other people are successfully using those bits of hardware, probably in much more taxing situations, without a problem. This definitely applies to I/O devices, including networking cards and HBAs.
 

ZiggyGT

Contributor
Joined
Sep 25, 2017
Messages
125
I have used Chelsio 10Gb cards in my FreeNAS systems, but I recently was told that these worked well for another forum member:
https://www.ebay.com/itm/Solarflare...rt-10G-Ethernet-10GbE-PCIe-w-SFP/113540380624
I ordered two of them myself to do some testing with, but I have not had a chance to install them yet.
If you can get Chelsio cards at a good price, like these:
https://www.ebay.com/itm/CC2-N320E-...LP-Server-Adapter-Card-w-SFP-QTY/352569766559
It might be more about personal preference.
I could not get FreeNAS to see my Solarflare cards. I am using Mellanox cards and not seeing full 10Gb, but I am still playing around with benchmarks. Anyone have a recommendation for a good network benchmark?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think the jury's still out on Solarflare in FreeNAS. They work great in ESXi hypervisors; we've mostly switched to them there.
 