What Hardware for TrueNAS Server with 10Gbps

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Hello,

I use a QNAP TS-2483XU-RP in my post-production environment, and it is coming close to its capacity limits. Therefore, I would like to think about my options for storage expansion (or rather buying something new). At the moment we have two independent RAID 6 arrays: one consists of 12x 12TB ST12000NM0007 drives and the other of 12x 16TB MG08ACA16TE drives. The NAS unit is connected over 10GbE (SFP+) to our core switch.

We also have a TrueNAS server as a backup target for our QNAP unit.

As you can probably imagine, we need capacity as much as we need speed for our workflow. There are only two editing workstations accessing the storage simultaneously, but it would be desirable for both editing machines to get fast read/write performance (that is already a bottleneck with the QNAP appliance).

I was thinking that instead of going with a JBOD unit for the QNAP, I might as well consider a DIY TrueNAS server as my second production storage unit.
I guess my question basically boils down to: how do I spec a TrueNAS machine in terms of hardware when the goal is high capacity with reasonable performance (at least one workstation should be able to saturate its 10GbE connection to the server)?

To give you an example:
What kind of average read/write performance can I expect from a single RAIDZ2 vdev of 12x 18TB Exos X drives in a TrueNAS server with a decent CPU and at least 64GB of RAM? The pool would consist mostly of large media files. Is that enough to get sequential reads/writes that come close to 10GbE, or far from it?

Thank you in advance for your help. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Be certain to select hardware from the 10 Gig Networking Primer; don't just grab some janky rando 10G card and expect it to work well. Also be sure to look at the 10/25/40G tuning guide in the resources section.

Your target system may be a bit shy on performance; ZFS consumes a lot of resources to do its tricks. A single RAIDZ2 vdev may not offer sufficient I/O capacity; RAIDZ is optimized towards lower-performance, single-consumer sequential access to large files. Write speeds in ZFS are largely tied to how easy it is to find free space on the pool, and having gobs of free space has a dramatic impact on write speeds. If you really want fast writes, keeping usage below maybe 50% is a great performance boost. Seeks kill sequential performance for both reads and writes, so a system with 128-256GB of RAM and four RAIDZ2 vdevs of 6x 18TB drives will be substantially faster, especially if you keep it around that ~112TB usable number that your single RAIDZ2 vdev would have provided. This is because of the substantially greater amount of free space in such a design.
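If it helps to see the free-space argument as numbers, here is a rough back-of-envelope sketch (my own illustration, not a ZFS calculator): it counts raw RAIDZ2 data capacity only and ignores ZFS metadata, padding and the TB-vs-TiB conversion, which is why the figures sit above practical numbers like the ~112TB mentioned above.

```python
# Raw RAIDZ2 data capacity: each vdev gives up 2 drives to parity.
def raidz2_raw_tb(vdevs: int, drives_per_vdev: int, drive_tb: float) -> float:
    return vdevs * (drives_per_vdev - 2) * drive_tb

one_wide_vdev = raidz2_raw_tb(vdevs=1, drives_per_vdev=12, drive_tb=18)  # 180 TB
four_narrow   = raidz2_raw_tb(vdevs=4, drives_per_vdev=6,  drive_tb=18)  # 288 TB

print(f"1 x 12-wide RAIDZ2 of 18TB drives: {one_wide_vdev:.0f} TB raw")
print(f"4 x  6-wide RAIDZ2 of 18TB drives: {four_narrow:.0f} TB raw")

# Store the same amount of data on the larger 4-vdev pool and it only sits
# around 62% full, leaving far more free space (and far fewer seeks) for writes.
print(f"Occupancy when holding {one_wide_vdev:.0f} TB: {one_wide_vdev / four_narrow:.0%}")
```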
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
@jgreco

Thank you for all that information, jgreco.
If I understand you correctly, that practically means we need to scale things up quite a bit if we want good performance out of the system, which increases costs significantly. I hear you, I totally do, but I don't think I can afford building a storage pool with ~280TB of usable space just to get ~120TB of high-performance storage.

Just to have a starting point, and even if I eventually don't go the DIY route: am I correct to assume that I should avoid a system with expander backplanes and instead direct-attach each HBA channel to the backplane, if performance is a priority?
(for example, a 24-port non-expander backplane = 6x Mini-SAS HD SFF-8643)

Does an expander backplane affect performance for systems with 3.5" 7200rpm drives, or is that something that only comes into play if I were to use SATA/NVMe SSDs?

Thank you.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If I understand you correctly, that practically means we need to scale things up quite a bit if we want good performance out of the system,

It's a good possibility. I don't try to predict exactly how a given hardware manifest is going to perform, but I can tell you what the factors involved are. You can try a smaller configuration and be reasonably comfortable that if it is not acceptable, you can remediate it with more hardware. The root issue here is that ZFS can be blazing fast, but it involves compsci trickery to do so -- as an example, if you have two "random" writes (which would involve seeks on a conventional filesystem), ZFS may pick two contiguous ("sequential") blocks to write that data to, because it is a copy-on-write filesystem. Thus when that data is written, maybe it doesn't involve ANY seeks at all.
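A toy sketch of that idea, in case it helps (this is not the real ZFS allocator, just the copy-on-write behaviour in miniature, with made-up block numbers):

```python
# Toy copy-on-write sketch: a rewrite never touches the old on-disk location;
# it is appended at the next free block, so two logically "random" updates
# land on adjacent blocks with no seek between them.
block_map = {"fileA_block7": 120, "fileB_block3": 875}  # logical block -> on-disk block
next_free = 1000                                        # plenty of free space ahead

def cow_write(logical_block: str) -> int:
    global next_free
    block_map[logical_block] = next_free  # remap the pointer to the fresh copy
    next_free += 1
    return block_map[logical_block]

print(cow_write("fileA_block7"))  # 1000
print(cow_write("fileB_block3"))  # 1001 -- contiguous with the previous write
# The old copies at blocks 120 and 875 become free holes; on a full, long-lived
# pool those scattered holes are all that is left to allocate from, which is
# where the write speed goes.
```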

But there's a lot of moving parts and it is hard to know exactly how things will perform.

I don't think I can afford building a storage pool with ~280TB of usable space just to get ~120TB of high-performance storage.

Well, I know it is painful, but on the other hand, look at how much it could cost for a vendor to provide a high grade NAS. And when I say vendor, let's even include iXsystems in there. Get a quote from EMC, Vantara, NetApp, iXsystems, etc., and then once you've changed your pants, come back and we can have a more realistic discussion of what is affordable. The basic fact of the matter is that you can do it on your own for much less than a commercial vendor, but it isn't like you're going to be able to hook up 5 20TB HDD's to a Raspberry Pi and get a high performance 100TB NAS. Either you need what you need and then you should comparison shop what it is going to cost you to get there, or you're not being serious about solving your problem. If that sounds harsh? Sorry. ZFS is all about tradeoffs. We use CPU instead of hardware RAID. We use system RAM rather than expensive cache cards. We waste disk space in order to reduce seeks and increase performance. It's harsh reality.

am I correct to assume that I should avoid a system with expander backplanes and instead direct-attach each HBA channel to the backplane, if performance is a priority?

Expander backplanes are a mild tax when used appropriately. For example, if you have a 12-bay shelf of 12Gbps SAS hard drives, you may have an expander in the mix without it hurting you, because a modern HDD peaks out at about 3Gbps, and 3 x 12 = 36Gbps, which is less than the 48Gbps that an SFF-8643 link is capable of. And realistically you can probably get away with 24 HDD's on that, because you won't be maxing out every drive for 100% sequential I/O all the time. However, if you tried to put a 24-bay shelf of SSD's on there, you'd have a problem.
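The ballpark arithmetic behind that, with assumed figures of ~3Gbps sequential peak per HDD and one SFF-8643 connector carrying 4x 12Gbps SAS3 lanes:

```python
HDD_PEAK_GBPS = 3        # optimistic sequential peak per modern HDD (assumed)
LINK_GBPS = 12 * 4       # one SFF-8643 uplink: 4 lanes of 12Gbps SAS3 = 48Gbps

for drives in (12, 24):
    demand = drives * HDD_PEAK_GBPS
    print(f"{drives} HDDs behind one expander: {demand} vs {LINK_GBPS} Gbps uplink "
          f"({demand / LINK_GBPS:.2f}x)")

# 12 HDDs: 36 vs 48 Gbps -> headroom even in the worst case.
# 24 HDDs: 72 vs 48 Gbps -> only a problem if every drive streams flat out at
# once, which rarely happens with HDDs; a 24-bay SSD shelf would blow past it.
```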

Eventually you run out of PCIe slots in a server. You can probably cram no more than ... five? HBA's into a server, assuming you probably need a slot or two for 100GbE and/or Optane NVMe adapters for SLOG etc. Five HBA's might get you 10 SFF-8643's, which would be ten disk shelves if you don't daisy-chain. This would still give you very high performance I/O to over 100 HDD's.
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Thank you for your thorough explanations. I appreciate the honesty and I understand where you are coming from.
I do want to solve the problem at hand, but I have to be reasonable and may eventually need to make compromises if things won't fit my budget.

With that being said, I did some digging on my own, and maybe buying a refurbished server would be a cost-conscious option.

The server in question is:

Supermicro CSE-847B X11DPH-T 4U server, 36x 3.5" bays
*24-port 4U SAS3 12Gbps single-expander backplane (front) SAS3-846EL Rev1.01
*12-port 2U SAS3 12Gbps single-expander backplane (rear) SAS3-826EL1
Supermicro X11DPH-T motherboard
2x Xeon Gold 6134 (8x 3.2GHz)
128GB (8x 16GB) DDR4 ECC
LSI SAS9300-8i HBA (IT mode)
2x 10GbE Intel X722 (do those work fine under TrueNAS?)
2x Supermicro 1280W PSU

The server sells refurbished from a credible source for €3,200 (after taxes).
Obviously the server offers way more drive bays than I was initially planning for, but it's not bad to have some expansion headroom, I guess.

What really makes me consider going the DIY route became evident over the last couple of days of research. Tell me if I am wrong.
For example: for a similar investment, I could purchase a Synology DS3622xs+ 12-bay NAS with a Xeon D-1531 and 16GB of RAM.
Even under less-than-optimal conditions for ZFS, I should be able to beat the Synology DS3622xs+ in terms of performance, wouldn't you agree?

I was thinking of starting with two RAIDZ2 vdevs, each with 8x 20TB Seagate Exos X20 drives, in the server mentioned above, for roughly 190TiB of usable space.
Maybe I will be able to increase that to three vdevs for around 150TiB of "fast storage", but that would stretch it a little bit.
How do you think that would compare to the Synology NAS mentioned earlier in terms of performance?
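The rough math behind those figures, under my own assumptions (20TB ≈ 18.2TiB per drive, raw RAIDZ2 data capacity only, before ZFS overhead and free-space headroom):

```python
TIB_PER_20TB = 20e12 / 2**40      # one 20TB drive is ~18.19 TiB

def raidz2_raw_tib(vdevs: int, width: int = 8) -> float:
    return vdevs * (width - 2) * TIB_PER_20TB   # 2 parity drives per vdev

print(f"2 x 8-wide vdevs: {raidz2_raw_tib(2):.0f} TiB raw")  # ~218 TiB before overhead
print(f"3 x 8-wide vdevs: {raidz2_raw_tib(3):.0f} TiB raw")  # ~327 TiB before overhead
# Keeping the 3-vdev pool around half full, per the free-space advice earlier
# in the thread, lands a bit above the ~150TiB of "fast storage" I mentioned.
print(f"3 vdevs at ~50% full: {raidz2_raw_tib(3) * 0.5:.0f} TiB")
```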

Thank you. I am grateful for all your insight. :smile:
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
I need to decide this week if I will purchase the server, and I am really leaning towards it.
Do you guys think the comparison I made in my earlier post, between a Synology DS3622xs+ (or any Xeon D/W-based NAS really) that leverages ZFS and the Supermicro server mentioned above, is fair in terms of performance? I just said to myself: wait, if the NAS uses ZFS and I need to pay a good amount of money for a system that uses rather underwhelming hardware (not that it couldn't do the job properly) and gives me a limited amount of expansion capability, why not go the DIY route instead?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
You didn't mention RAM, but that tends to be reasonably cheap and easy to increase anyway (until you run out of slots). ZFS wants lots of RAM.
That should perform well.
As to how it would perform against the DS: well, Synology uses Btrfs, which like ZFS is a copy-on-write (COW) file system. Put loads of memory into the TrueNAS box and I think it will beat the Synology. Just be prepared to add RAM and other hardware tweaks to get the most performance. You are on the right track and using the correct gear.
 
WI_Hedgehog

Joined
Jun 15, 2022
Messages
674
Supermicro CSE-847B X11DPH-T 4U server, 36x 3.5" bays
*24-port 4U SAS3 12Gbps single-expander backplane (front) SAS3-846EL Rev1.01
*12-port 2U SAS3 12Gbps single-expander backplane (rear) SAS3-826EL1
Supermicro X11DPH-T motherboard
2x Xeon Gold 6134 (8x 3.2GHz)
LSI SAS9300-8i HBA (IT mode)
2x 10GbE Intel X722 (do those work fine under TrueNAS?)
2x Supermicro 1280W PSU
  • 36 drives vs Synology's 12: That'll scream with decent drives.
  • I don't have experience with those expanders but given the rest of the system specs I'd guess they should be fine.
  • That's a lot of motherboard/CPU power. If you don't do crazy things with it, it'll be underutilized (I don't know your intended workload, so your mileage will vary, but needless to say I'd expect it to smoke the Synology).
  • Awesome HBA. I run the 16i and they're really fast. There are faster cards, but for most workloads these will outrun the workload by a long shot.
  • If I remember correctly, the Intel X722 is top of the line; do a search here, as @jgreco just posted about Intel cards.
  • That's a great power setup.
Synology will run quieter and draw less power, have more add-ins, and be easier to use.

TrueNAS can, in general and on the system you specced out, eat the Synology for lunch; it'll just require you to learn how to administer it properly for your needs. The TrueNAS box can also outgrow the Synology. Remember this isn't necessarily an easy task depending on your experience level, but in my opinion "easy" is overrated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
  • If I remember correctly, the Intel X722 is top of the line; do a search here, as @jgreco just posted about Intel cards.

The FreeBSD ixl driver supports the X710, XL710, XXV710, and X722. However, I do not believe that the driver supports RDMA. Since RDMA support is the major differentiator between the X710 and X722, due to the X722 being based on the Intel C628 controller, it may be worth noting that selecting the X722 is not really a guarantee of RDMA capability.

If you want RDMA, you might want to consider the E810-C or E810-XXV, where Intel has official FreeBSD driver support.

In TrueNAS, RDMA is not currently supported but is expected to be someday. This will probably have something to do with how much demand there is for this, so be sure to tag in on

 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
@NugentS
You're absolutely right, I forgot to write it down in my post, but I was planning on 128GB of ECC DDR4 (8x 16GB).
The big downside is obviously power draw (acoustics don't bother me much because I have a server cabinet with AC, so that's not much of an issue).
Maybe I can get away with a different Xeon SKU with a lower TDP or lower power draw in general.

The actual shares will be SMB shares only. Samba likes high clock speeds and doesn't care much about core count; does that still apply?
But picking a significantly lower-TDP SKU like the Xeon Silver 4112 would sacrifice a lot of clock rate, and it's only a quad-core SKU.

@WI_Hedgehog
Thank you for your reply. That sounds reassuring for sure. :)

@jgreco
Correct me if I am wrong, but for RDMA to give me any kind of performance benefit, it would need to be supported on both ends of the network connection. So basically the NICs of my workstations would need to support it as well to see any kind of benefit, am I right?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Correct me if I am wrong, but for RDMA to give me any kind of performance benefit, it would need to be supported on both ends of the network connection. So basically the NICs of my workstations would need to support it as well to see any kind of benefit, am I right?

I'm not really on the whole RDMA bandwagon but I can understand wanting not to preclude the possibility of someday doing it. The opportunity to do forklift upgrades of a network may only happen every five or ten years, so I'm mainly warning you about the possibility that the X722 might not do what you're hoping for. Since I primarily need NAS devices in the role of NAS, I'm not currently buying into RDMA as I don't see how it is advantageous in the current state of technology. I've just finished some upgrades from Solarflare to Intel X710-DA4 and am perfectly happy with that, having gained VF capabilities. I couldn't find a chokepoint in the networks here where the quad 10G setup we use was a limiting factor, so I just couldn't justify the jump to 25Gbps at this time. Perhaps something interesting will evolve in the next few years that will convince me RDMA is useful or interesting, at which point hopefully X810 gear will convince me to upgrade to 25Gbps as well. I'm sorry if this is less than optimistic sounding.
 
WI_Hedgehog

Joined
Jun 15, 2022
Messages
674
@NugentS
You're absolutely right, I forgot to write it down in my post, but I was planning on 128GB of ECC DDR4 (8x 16GB).
The big downside is obviously power draw (acoustics don't bother me much because I have a server cabinet with AC, so that's not much of an issue).
Maybe I can get away with a different Xeon SKU with a lower TDP or lower power draw in general.

The actual shares will be SMB shares only. Samba likes high clock speeds and doesn't care much about core count; does that still apply?
But picking a significantly lower-TDP SKU like the Xeon Silver 4112 would sacrifice a lot of clock rate, and it's only a quad-core SKU.

@jgreco
Correct me if I am wrong, but for RDMA to give me any kind of performance benefit, it would need to be supported on both ends of the network connection. So basically the NICs of my workstations would need to support it as well to see any kind of benefit, am I right?
  • RAM: Looks good. If you minimize the number of modules used, there will be room for more RAM if you need it, and ZFS loves, loves, loves RAM.
  • Power consumption: there are power-saving systems out there; the Resources list has links on building them.
  • SMB uses only one CPU core to transfer a file, on both the server and the client. This can be a bottleneck on low-power CPUs; it depends on your use case.
  • RDMA needs support on both ends, though it is inherently insecure.
 
WI_Hedgehog

Joined
Jun 15, 2022
Messages
674
No sarcasm detector, huh. How about "RPC: The Next Generation", or ... I'll think about it and probably come up with some more. Too tired right now.
You didn't include an emote as a signal flag, and people here, while incredibly smart (way smarter than me), are dry as a f@rt.

(No complaints, I'm thankful to hoover up the brain beans.) :grin:
 

Fireball81

Explorer
Joined
Apr 24, 2016
Messages
51
Just to let you fine people know what we ultimately decided on (in case you're interested ^^).
After a lot of talking with colleagues and friends, and looking at other options, we decided against the Supermicro server and purchased a QNAP TVS-h1688X. The main argument was power consumption.
We are located in Germany and, as many of you probably know, energy costs can nowadays become a deciding factor for a NAS that runs 24/7, especially when the difference in power consumption is rather large.

The TVS-h1688X doesn't corner us into a walled garden like the Synology DiskStation DS3622xs+ does
(Synology validates almost exclusively their own, in my opinion overpriced, HDDs for the DS3622xs+).

That decision is hopefully a fair compromise between budget, performance and power consumption.

We have been using TrueNAS on two 4U backup servers for years now and it's such a great product.
I don't know if what I am about to say reaches the ears of those who need to hear it, but thanks for all the hard work, and I am sure the future for TrueNAS Core and Scale will be a bright one. :)

Thank you guys.

Cheers
Dennis
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I don't know if what I am about to say reaches the ears of those who need to hear it, but thanks for all the hard work, and I am sure the future for TrueNAS Core and Scale will be a bright one. :)
I'll make sure it gets to those who've worked hard to help make TrueNAS what it is today.

We're sorry it didn't work out in our favor in this case, but we're happy you've found a place for it on your backup servers!
 