60 x 14TB Drive TrueNAS backup solution

Gagik

Dabbler
Joined
Feb 21, 2023
Messages
27
Hello,

I am new to TrueNAS and would like to ask the experts for the best way to set up and deploy a backup server.

My goal for this setup is to have daily backups to this NAS as the primary backup source. The drives will be placed in a 60-bay chassis and connected via a SAS card to a dual-CPU Xeon server with 128 or 256GB of RAM.

Are there any steps I should follow to ensure that this setup is done correctly? Which RAIDZ configuration should I use? Are there any specific considerations I should be aware of?

This server will not be used for data pulling or streaming; it will strictly serve as a backup until it's needed to retrieve data in the event of a main server failure.

I would like to make it as fail-safe as possible and will have multiple spare drives on hand in case of drive failures. Additionally, I would like to be able to expand the volume by adding more drives in the future, such as adding 10 drives at a time.

Any help would be greatly appreciated as this setup is intended for a high-end client in the media and entertainment industry.

Thank you,
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You have lots of reading to do. I don't have a Resource list or sticky threads list handy.

Here are some hints:
  • 10 drives in a RAID-Z2/3 vDev would be fine. More than 12, not so much. (See the sketch after this list.)
  • If you can't take downtime easily or hot swap failed disks, then a warm or hot spare disk or two might be helpful.
  • With >8 drives on a SAS card, you usually need a disk backplane with a SAS expander, or a separate card with the SAS expander function (then run a mess of cables to the disk backplanes).
  • A backup server with that much storage implies that multiple clients may use it at the same time. Thus, consider 10Gbps Ethernet or even 2 x 10Gbps in LACP. (Or even higher speed.)
  • The suggested rule of 1GB of RAM for every 1TB of disk does not apply. But you do want enough RAM, like 64GB at a minimum (in my opinion). More probably would not help because this is a write-mostly device.
  • If using NFS, consider a proper SLOG (Separate ZFS intent LOG), mirrored. Even though many people get away without mirroring, you are going to burn through SLOG devices because the NAS is write-mostly. That's not a problem as long as the SLOG fails during normal operation; ZFS then falls back to the in-pool ZIL (ZFS Intent Log) on the data vDevs.
  • Should be on a UPS, if only to prevent long boot times.
  • Make sure you understand ZFS scrubbing and its tuning. Large ZFS pools can take quite a while to scrub. There are tuning options to speed it up or slow it down. Thus, if backups are not running during certain days/times, you can raise the scrub speed, then back it down during backup windows.
  • Set up SMART tests.
  • Set up e-mail notifications.
  • Last, test, test & test, both backup and restore functions.
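To make the vDev layout and SLOG hints concrete, here is a minimal sketch of what a pool of 6 x 10-disk RAID-Z2 vDevs with a mirrored SLOG could look like on the command line. The device names (da0 through da59 for the data disks, nvd0/nvd1 for the log mirror) are placeholders for your actual disks, and on TrueNAS you would normally build the same layout through the web UI rather than the shell:

    zpool create backup \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
        raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
        raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29 \
        raidz2 da30 da31 da32 da33 da34 da35 da36 da37 da38 da39 \
        raidz2 da40 da41 da42 da43 da44 da45 da46 da47 da48 da49 \
        raidz2 da50 da51 da52 da53 da54 da55 da56 da57 da58 da59 \
        log mirror nvd0 nvd1

A hot spare, if a bay is free for one, would just be an extra "spare daXX" entry in the same command, or added later.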
Good luck.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
For a production backup system of this size I strongly recommend using TN CORE instead of TN SCALE.
 

Gagik

Dabbler
Joined
Feb 21, 2023
Messages
27
Thanks for the replies. I forgot to mention one thing. The initial setup will consist of 60 x 14TB drives. I will be copying 500TB of data onto this setup, and then I will expand it by adding an additional 60 x 14TB drives from the old NAS. I will clear the data from the old NAS before adding its drives to the new setup. Any thoughts on this?
 
Joined
Jul 3, 2015
Messages
926
Can I ask why?
TrueNAS Core, aka FreeNAS, based on FreeBSD, has been around for more than 10 years with millions of downloads and users, whereas TrueNAS SCALE, based on Linux, is still a baby by comparison, having been around for approximately a year.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Can I ask why?
Without any Linux bashing intended, the memory management of CORE is still noticeably superior and this is no small system you are planning to build. So I recommend the most stable and reliable option.

The downside is that you need to pick your hardware with FreeBSD support in mind. But this is an enterprise class project, anyway, so I guess that will be ok. Regulars on this forum can help you pick components.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
Thanks for the replies. I forgot to mention one thing. The initial setup will consist of 60 x 14TB drives. I will be copying 500TB of data onto this setup, and then I will expand it by adding an additional 60 x 14TB drives from the old NAS. I will clear the data from the old NAS before adding its drives to the new setup. Any thoughts on this?
You need to get 500TB onto those 60 disks? You'll be quite close to recommended limits with 6x 10-disk raidz2, and with raidz3 it'll be quite close to a full pool.
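Rough back-of-the-envelope numbers, ignoring RAID-Z padding and metadata overhead: 6x 10-disk RAID-Z2 leaves 48 data disks, and 48 x 14TB = 672TB raw, which is roughly 611TiB. 500TB of data is about 455TiB, so the pool would sit around 74% full from day one, right up against the usual "stay below 80%" guideline. With RAID-Z3 (42 data disks, roughly 535TiB) the same 455TiB would already be about 85%.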
 

Gagik

Dabbler
Joined
Feb 21, 2023
Messages
27
You need to get 500TB onto those 60 disks? You'll be quite close to recommended limits with 6x 10-disk raidz2, and with raidz3 it'll be quite close to a full pool.
Yes, once the main pool is complete, I would need to copy 500TB of data onto the newly set up TrueNAS. After the data transfer is complete, I would then add the existing 60 disks to the newly created NAS and expand the pool or volume. Does this make sense?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Whatever LSI 3008-based card is of the right form factor (if proprietary chassis with dedicated spot for the HBA) or has the right kind of connectors in the right place for the cables which will feed the backplane (if using a regular PCIe card).

Yes, once the main pool is complete, I would need to copy 500TB of data onto the newly set up TrueNAS. After the data transfer is complete, I would then add the existing 60 disks to the newly created NAS and expand the pool or volume. Does this make sense?
It makes sense. Just be aware that you cannot remove drives or vdevs in a raidz-based pool, so once it's done, your pool will forever require 120 disks—or more.
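If it helps to picture the expansion step: assuming the second shelf's disks show up with placeholder names da60 onwards, growing the pool is just a matter of adding more RAID-Z2 vDevs of the same width, roughly

    zpool add backup raidz2 da60 da61 da62 da63 da64 da65 da66 da67 da68 da69

repeated for each further group of 10 disks (or the equivalent pool extend operation in the UI). Every vDev added this way is permanent, which is exactly the caveat above.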
 

Gagik

Dabbler
Joined
Feb 21, 2023
Messages
27
Whatever LSI 3008-based card is of the right form factor (if proprietary chassis with dedicated spot for the HBA) or has the right kind of connectors in the right place for the cables which will feed the backplane (if using a regular PCIe card).


It makes sense. Just be aware that you cannot remove drives or vdevs in a raidz-based pool, so once it's done, your pool will forever require 120 disks—or more.
Yes, we will not be reducing the number of drives; we will keep adding to the pool as the backup grows. That being said, what's the best configuration for the initial 60 drives, and then for adding the additional 60 drives?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
There is no such thing as "the best" configuration. You need an enterprise grade server. I have come to swear by Supermicro in the last couple of years. We used to be almost exclusively a "Fujitsu/Siemens shop" before.

If you are going for external shelves anyway, one rack unit should be sufficient. For power efficiency as well as to save some money, I would look for 1 RU single-socket systems. Two PCIe slots for HBAs would be a requirement, just to be sure to cover your expansion needs in the future. Since in 1 RU you can rarely have more than two add-on cards, 10 Gbit Ethernet with SFP+ onboard would be another thing I would look for.

Redundant power supplies, ECC memory ... granted, right?

And I would aim for 1 RU single socket specifically to save on budget and have a second spare system on the shelf.

In general this is how we tick: instead of expensive vendor support/replacement contracts, build e.g. an HA system from two Cisco routers and keep a third one of the same model on the shelf.

As always YMMV - and a 120-disk system is honestly larger than anything I have ever built, so others might have a thing or two to add.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
All the recommended hardware supports FreeBSD and Linux equally. This isn't really a problem.
Agree on HBAs and systems in general. With network cards FreeBSD seems to be a bit more picky. And I'm not talking about Realcrap.
 

Gagik

Dabbler
Joined
Feb 21, 2023
Messages
27
We have a 2U Intel server with dual Xeon processors and adequate space for adding cards and RAM. In that regard, we are in good shape. However, my main concern revolves around setting up the initial RAID configuration. I want to ensure that I make the right choices now to avoid any regrets in the future, as it would be nearly impossible to redo the setup once it's done.

Thanks,
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I find a 10-disk-wide RAIDZ2 too wide already. But that is just a gut feeling, nothing I can back up with hard data. I happily run e.g. 24 disks as 4x 6-disk-wide RAIDZ2. I would go to 8; 10 just looks ... like stretching things a bit.

But again there are some people with more experience with large setups here.

I'm just answering because, given the size of your deployment, I want to throw DRAID into the ring and ask what others think of it. It's officially supported and supposedly stable in OpenZFS. The TrueNAS UI does not yet support it, but I run a DRAID pool in my SCALE toy system. No hiccup so far. And I do expect TrueNAS to catch up UI-wise; it's one of the central recent features of OpenZFS. I could outline the steps to create one in a "UI and middleware compatible" manner if needed.
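For the curious, the raw OpenZFS command for such a pool would look roughly like the sketch below, using one 60-disk shelf as an example with placeholder device names. The grouping chosen here (6 data + 2 parity per redundancy group, 4 distributed spares) is just one possible layout, not a recommendation, and this is not the "UI and middleware compatible" procedure mentioned above:

    zpool create backup draid2:6d:60c:4s \
        da0 da1 da2 ... da59    (all 60 device names written out in the real command)

Here draid2 is the parity level, 6d the number of data disks per redundancy group, 60c the total number of children (which has to match the number of devices given), and 4s the number of distributed spares.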
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Agree on HBAs and systems in general. With network cards FreeBSD seems to be a bit more picky. And I'm not talking about Realcrap.

All of the recommended cards work fine in both. If you want to go off and use random cards pulled out of the bottom of the spare parts bin, sometimes Linux will support them better. But if you are buying a system to host 120x HDD, you're probably also willing to pair it with a high-performance card rather than some crappy also-ran B-grade Ethernet card; if not, well, I just don't really understand that, but I guess you could do it. Wrong tool for the job wins the day.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I'm just reading about problems with newer Intel cards over in the OPNsense forums. Then there are - if I'm not mistaken - certain models by Chelsio and Mellanox that are generally considered ok, but the FreeBSD drivers are not quite up to the task. That's all.

I cannot remember when I last put a network *card* in a server. I just use what's on board and Supermicro generally delivers.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I would not use a 1U rack server, as it would likely not have enough PCIe slots. You more or less want a separate 12Gbps SAS HBA with external ports (an -8e model) for each 60-disk external JBOD, so plan ahead for PCIe slots. And if your system board does not have enough Ethernet performance, then you want a PCIe slot for an Ethernet card or two as well.

A 2U server might suffice, depending on its base I/O and the number and speed of its PCIe slots.


And I agree with others, TrueNAS Core would likely serve you better. Unless you need some special local app that is supplied in SCALE via its Apps, Core will likely be more stable and probably slightly faster. TrueNAS SCALE is just too much of a work in progress and will likely remain so for another year.


On the subject of DRAID, there is a quirk in DRAID that involves integrated hot spares. Well, not so much a quirk as its main feature. But with 60 disks in multiple stripe groups plus hot spares, less is known about how to configure DRAID for different uses compared to RAID-Zx.

And questions about DRAID remain (at least for me), like: can DRAID also have traditional, non-integrated hot spares?
 