
Build for Veeam backup repository

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Hi guys,

I came here because I have been looking at enterprise backup solutions and also tried TrueNAS in the lab, which works out much cheaper using high-end DC hardware than what the enterprise solutions offer.

What I have tried:
R710 - 2x 6c @ 3 GHz
90GB DDR3 ECC RDIMM
LSI 9405W Tri-Mode
8x 8TB RAIDZ1
1x 2TB NVMe P4500 for SLOG
2x 960GB S4510 for dedup (I had some spares, so I wanted to try to eliminate the slow parts of the spinners)
10G network.


With this dataset I copied about one third of the data from the main repository (Synology 10x 10TB and 2x 960GB SSD RAID 0) and reached a dedup ratio of about 2-3.
Due to the 10G network and the secondary lab storage I could not get past 700 MB/s of writes. (dd of zeros to a file on disk ran at about 1.2 GB/s, which I believe is the limit of the SLOG drive.)
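That sequential smoke test is nothing more exotic than dd of zeros into a file on the pool; roughly like this (the target path is illustrative, and note that with lz4 compression enabled, zeros compress away and overstate real throughput):

```shell
# Sequential-write smoke test; TARGET is an illustrative path on the pool.
# Zeros compress to nothing under lz4, so prefer real data when possible.
TARGET=${TARGET:-/mnt/tank/ddtest}
dd if=/dev/zero of="$TARGET" bs=1M count=1024
rm -f "$TARGET"
```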

What I need to build (REQUIREMENTS):

2-3 GB/s of stable write throughput.
About 2-4 TB of new data is written daily.
500 TB of deduplicated and compressed data in the end. (Right now I have about 120 TB of storage and I am buying more.) Backups have one-month retention.

4x 4TB NVMe drives with 3 DWPD; not sure whether Samsung or Intel.

A 10 TB special volume for Instant VM Recovery would be enough. (Veeam mounts it as a datastore, so you can run a VM from there and do a live vMotion to the main cluster storage.)

The budget is about 20-30k USD.

I was thinking about:

Supermicro 847BE1C4-R1K23LPB, 36x SATA/SAS with optional NVMe
1x AMD EPYC™ Rome 7542 - 32 cores / 64 threads - 2.9 GHz (boost 3.4 GHz) - 225 W - 128 MB cache
256 GB of RAM
26x 10TB WD SAS drives (HC510 or similar) in RAIDZ3 (three drives can die for sure; I can get a replacement in about a week and swap it in as soon as it arrives)
Broadcom (LSI) HBAs, 9400 or 9500 series, I think.
4x 3-4TB NVMe drives like the Intel P1725B or P4610 (2x in a mirror for SLOG, for sure).

The main question is: what performance should I expect? Will it handle my requirements? What would you watch out for? What would you change for more reliability / performance stability? How do you tune ZFS for large files? (One Veeam backup is around 50 GB and an incremental 5-10 GB, across hundreds of VMs.)

I don't fully trust TrueNAS for production environments yet, so backup is really my first try. I have built two labs (one of them mentioned above), so I want to give it a shot.

Thanks for the help, and sorry for my broken English :)
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
TrueNAS is a really good solution for hosting Veeam backup repositories.
It would be better to have two backup repositories:
  • a main (small and fast) backup repository for Veeam (retention 1 week)
  • a secondary (larger but slower) backup repository for Veeam (retention 1 month)
You can schedule a backup copy job in Veeam to feed the secondary backup repository from the main one.

Buying an AMD EPYC Rome 7542 (32 cores) to host backups on TrueNAS is a waste of money. Pick an 8-core AMD EPYC 7252 and put the savings into larger disks (HGST 14 TB HC530).
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Hello,

sorry for the late answer. But if I want to keep it in one chassis, is that still possible? I have not found a chassis that can take 24 HDDs and more than 4x NVMe.

So how do I get out of this? (Two devices are not an option.)

I agree that the 7542 is overkill. But the minimum I would go for is the 16-core 7302P; the price difference between the 7302P and the 7252 is negligible, while the performance difference is huge.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,526
Hello,

sorry for the late answer. But if I want to keep it in one chassis, is that still possible? I have not found a chassis that can take 24 HDDs and more than 4x NVMe.
Virtually any 24 drive chassis can handle "more than 4 NVMe".

Supermicro SC846. Then find a mainboard that has sufficient capacity to run some NVMe M.2 cards like the ASUS Hyper M.2 X16 NVMe. Put two in. Seriously not-difficult.
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Uhh, I am sorry to tell you this, but that is not a pro solution. Anything that is not hot-swappable is useless when we are talking about enterprise use: less downtime, more reliability. I don't want to explain to my boss why the backup went down while changing a drive. Powering a box off to change a drive is not a good reason for a maintenance window.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,526
Uhh, I am sorry to tell you this, but that is not a pro solution. Anything that is not hot-swappable is useless when we are talking about enterprise use: less downtime, more reliability. I don't want to explain to my boss why the backup went down while changing a drive. Powering a box off to change a drive is not a good reason for a maintenance window.
That's why you put in spares. Some of us who do this professionally spare SSD's at a 50% level in servers. The petty "enterprise" use case where you merely don't want to shut down a server pales in comparison to those of us who operate networks flung across thousands of miles, where the logistical challenges of getting staff on-site are significant. It's easier and cheaper to just have stuff ready to go in place, spare disks and SSD's online, and even spare servers ready to spin up at a moment's notice. This is how the big boys play. You put in racks of servers, and as they fail, they're just dead. Eventually the entire installation is decommissioned and replaced. There's a certain amount of wisdom there, and those of us who run far-flung networks typically embrace the best ideas. One of them is to maintain spare capacity to cope with failures. That is, plan for failures.

Anyways, if you don't want to do that, and you're stuck in the "enterprise" mindset, you can go and pick any NVMe server like this Supermicro 2U 48 bay system and hook a SC846 up to it as a JBOD. This is normally the way a larger NAS is built. It gives you 48 bays of NVMe and then really as many bays of 3.5" SATA/SAS as you'd like to add, easily up to 96, a little less easily beyond that. You can use the 847 JBOD if you don't mind some drives being on the backside of the chassis, something I don't personally care for.

This just feels like you're not trying that hard to me, but I do this stuff professionally, so my opinion may be a bit slanted. If you want a single chassis and you want an unusual set of options such as "lots of NVMe PLUS lots of SAS/SATA" then you will find your selection seriously constrained, maybe to as little as "no options."

That's why those of us who do this professionally end up building hundred-plus-drive NAS units out of multiple chassis, or finding ways not to need such density (using multiple servers). Where there is an edge case such as wanting 24 bays PLUS a bunch of NVMe, and you really want it in a single chassis, if the goal was to have six NVMe, I'd look at two of those four-M.2 cards and just spare in the two M.2's right away. Then you just replace the drive at your next maintenance window.

But what do I know.
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
Unfortunately NVMe hot swap does not work yet on FreeBSD; if you want to hot-swap your SSDs, you have to use SAS or SATA SSDs.
In that case you can have a single 4U chassis with 36 or 60 drives, which would be ideal for a Veeam backup repository.
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Hmm, so now the solution forks :)
1) Drop the NVMe idea for the all-SSD volumes (use NVMe only for SLOG) and use SAS SSDs as the "fast tier". That would not be a bad option, because high-endurance SAS SSDs are cheaper than NVMe, and a big pro is that it fits in one chassis.
2) A full-NVMe chassis plus a JBOD chassis for the spinners. As mentioned, there is a problem with NVMe hot swap. Is there anything on the roadmap for when that feature will become available? I have not studied the problem yet: is it an issue only with direct-attached NVMe, or also through LSI Tri-Mode adapters? The con for me is the extra chassis, but for the spinners I think it would not be that bad.
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
  1. TrueNAS does not support auto-tiering. It only supports Fusion pools, which help when you have millions of files; that is probably useless for a Veeam backup repository, because you will only have a few thousand big files. Nevertheless, you can set up tiering in Veeam itself.
  2. NVMe hot swap is a real challenge. Nobody knows when FreeBSD will support it; it may take years to arrive.
For storage, it is safer to use only mature technologies, to minimize the number of potential issues to solve.
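For context, a Fusion pool is simply a pool that carries a special vdev for metadata (and optionally small blocks) on flash; a sketch of what that looks like, with purely illustrative device names:

```shell
# RAIDZ2 data vdev on spinners plus a mirrored special vdev on NVMe
# (da0..da5 and nvd0/nvd1 are illustrative device names).
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  special mirror nvd0 nvd1

# Optionally route small blocks (not just metadata) to the special vdev:
zfs set special_small_blocks=64K tank
```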
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Fast pool :) I meant a fast pool: one SSD pool and one spinner pool + SSD for SLOG.
But how do I achieve the numbers I mentioned in the first post? Is there a calculator for pool speed in ZFS? The protocols I can use for the Veeam proxy are iSCSI or CIFS.
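(There is no exact calculator; a common back-of-envelope estimate is streaming write throughput ≈ number of data-bearing vdevs × per-disk sequential speed, since ZFS stripes writes across vdevs. A quick sketch with illustrative numbers, assumptions rather than measurements:)

```shell
#!/bin/sh
# Back-of-envelope streaming-write estimate for a mirrored pool:
# each 2-way mirror vdev writes at roughly one disk's sequential speed.
vdevs=10          # assumed: 10 x 2-way mirror vdevs
per_disk_mbs=200  # assumed: ~200 MB/s sustained per SATA SSD
echo "estimated streaming write: $((vdevs * per_disk_mbs)) MB/s"
```

Real numbers land below this once fragmentation, sync writes, and dedup overhead kick in, so treat it as an upper bound.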
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
According to iXsystems, a TrueNAS X20 all-flash array can reach 2 GB/s, so you have to build something similar.

Compact Storage with Unbeatable Value
Designed for small/medium businesses, the TrueNAS X-Series is a compact storage appliance that can deliver speeds over 2 GB/s and scale up to 1 PB of raw capacity in 6RU. With high-availability options and top to bottom data protection, the entry-level X-Series ensures maximum uptime while providing the lowest Total Cost of Ownership (TCO).
Nevertheless, you also need a very fast VMware datastore and one Veeam proxy per ESXi node; otherwise you will have a bottleneck at the source.
What is your VMware datastore, and how many ESXi nodes do you have?
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
I am aware of that. I have two sites where this device will be installed.

The first one has about 20 ESXi nodes: 12 Cisco UCS using SAN transport mode, and 8 that are going to be Dell VxRail all-flash + Dell Compellent all-flash.

The second has 10 ESXi nodes, Dell VxRail all-flash + hybrid Compellent.
The source speeds are higher than specified in the requirements (at least according to what Dell gave in the order specification).
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
That is a very nice ESXi cluster; I understand why you need very serious gear for your backups.
I think there are only a few users on this forum who operate Veeam at such a scale.
Therefore you should try the Veeam forums. Maybe you can find a technical solution with Veeam Scale-Out Backup Repository.
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Thanks, I have contacted Veeam sales and one of their system engineers, and he was rather outraged by TrueNAS even though they list it among the Veeam-verified solutions. So I am trying my POC with this ;).

The Veeam forum is useless when we are trying to build a TrueNAS box, which they have nothing to do with. (What you use as a repository is not their business; guaranteeing the speeds is your problem.)

That is why my question is still unanswered: how do I build the fast pool so backups complete quickly?
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
I think you need to build an all-flash pool + a spinning-disk pool, but it will cost more than your $30K budget.

Fast pool
  • 1x 2U high performance FreeNAS server
  • 24 x 3.84 TB SATA III Samsung SSD 3D-NAND MLC 2.5" (SM883)
  • 1 x Broadcom (LSI/Avago) SAS III HBA 9300-8i for the internal pool
  • 1 x quadport 10G low-profile NIC (Intel or Chelsio) or something faster
  • you configure the fast pool with 10 x 2-way mirror vdevs + 2 hot spares
This POC will cost $40K for 38 TB (to save money on the POC you can start with only 12 SSDs in a stripe)
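Expressed as a ZFS command, that fast-pool layout would look roughly like this (device names are illustrative):

```shell
# 10 two-way mirrors plus 2 hot spares (da0..da21 are illustrative names).
zpool create fast \
  mirror da0  da1  mirror da2  da3  mirror da4  da5 \
  mirror da6  da7  mirror da8  da9  mirror da10 da11 \
  mirror da12 da13 mirror da14 da15 mirror da16 da17 \
  mirror da18 da19 \
  spare  da20 da21
```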

Slow pool
If the POC succeeds, you can buy a large JBOD for the spinning disks and configure a Veeam backup copy job to archive the oldest backup files to the slower pool
  • 1 x Broadcom (LSI/Avago) SAS III HBA 9300-8e to connect an additional JBOD
  • 1 x 60-drives SuperMicro JBOD with a single expander
  • 60 x hard disks 14 TB SATA Western Digital Ultrastar DC HC530
  • you configure a second pool with 10 x 6-wide raidz2 vdevs or 5 x 12-wide raidz3 vdevs
The JBOD and the spinning disks will cost you $20K
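Spelling out the "10 x 6-wide raidz2" variant is tedious by hand, so here is a small sketch that assembles the vdev list programmatically (disk names da22..da81 are illustrative; it only echoes the command, so nothing is created):

```shell
#!/bin/sh
# Build the argument list for 10 x 6-wide raidz2 vdevs (da22..da81).
args=""
d=22
v=0
while [ "$v" -lt 10 ]; do
  args="$args raidz2"
  i=0
  while [ "$i" -lt 6 ]; do
    args="$args da$d"
    d=$((d + 1))
    i=$((i + 1))
  done
  v=$((v + 1))
done
# Remove the leading 'echo' to actually create the pool.
echo zpool create slow$args
```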

The total price is $60K for a high-end hybrid system with 38TB of flash, and 600 TB of spinning disks.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,526
That is why my question is still unanswered: how do I build the fast pool so backups complete quickly?
SSD's and mirrors. This isn't difficult. What's more interesting is how to make a fast HDD pool.

The first thing you do is ditch the RAIDZ, because RAIDZ tanks your IOPS.

The second thing to do is to ditch the 8TB HDD's and replace them with something larger, 12TB's are quite affordable, but you should use the largest disks you can justify, in mirror pairs.

The third thing is not to fill all your space. ZFS write speeds are tightly coupled to how easy it is for ZFS to find free space. ZFS will make a HDD perform almost like SSD, even for totally random data, if you keep occupancy low and have sufficient ARC and L2ARC.

What you're going to find is that all the Veeam FE's and forum users who are hating on FreeNAS/TrueNAS have either directly built, or tried to help someone who has built, a RAIDZ based storage pool with one or MAYBE a handful of vdevs, filled it to the gills, and discovered that it was great for the first few runs and then it began to suck bigtime, worse and worse as time went on, until one day it suffered catastrophic baseball bat failure.

All the stuff I talk about on these forums about block storage and fragmentation apply to backups, though at a somewhat less intense level. ZFS writes hella-fast if it has large chunks of contiguous space available. As a Copy-on-Write filesystem, it does NOT seek to an existing block within a file and overwrite it. It instead allocates new space and writes it there. So if you keep your pool at 30% capacity, you get much better performance than if it is at 60%. See almost any post on the forum where I discuss fragmentation, a bunch of them give you a picture of pool occupancy vs steady state performance.
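The occupancy and fragmentation figures described above are visible directly in `zpool list`, which is an easy way to keep an eye on them on a live pool:

```shell
# CAP is pool occupancy, FRAG is free-space fragmentation (both percentages).
zpool list -o name,size,alloc,free,cap,frag,health
```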
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
Hi guys, reviving this topic because it is time to solve this :)

Supermicro AS-2124US-TNRP
2x EPYC Rome 7302
16x 32GB DDR4
4x P4610 3.2TB for vdevs
2x LSI 9300-8e
2x dual-port 25GbE Mellanox CX4
5x Micron 9300 MAX 12.8TB NVMe (for the fast pool)

JBOD:
847E2C-R1K23JBOD
4x Supermicro CBL-SAST-0573
44x HGST/WD 3.5" 12TB SAS 12Gb/s 7.2K 256M 0F29560 4Kn

The price is around $50k.

I need around 200 TB for Veeam Backup for Office 365 and 130 TB for backups from Veeam for vSphere.

How well does multipath SAS work on TrueNAS Core? I have contacted iXsystems about enterprise pricing on our own hardware, and they don't do that.
BSD can natively handle multipath SAS, but I am not sure whether the appliance software limits that.
 

blanchet

Senior Member
Joined
Apr 17, 2018
Messages
329
According to other threads on this forum, multipath SAS should work on TrueNAS just as it does on FreeBSD,
but if you do not have an HA storage server, multipath SAS just adds complexity for no real benefit.
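On FreeBSD, SAS multipath is handled by GEOM_MULTIPATH, so checking what the OS sees is straightforward (commands shown for illustration; device names will vary per system):

```shell
camcontrol devlist    # each disk appears once per SAS path
gmultipath status     # show multipath devices and the state of each path
```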
 

ChrisRJ

Senior Member
Joined
Oct 23, 2020
Messages
306
With these requirements I would be a bit careful with a solution that is not turn-key. I am definitely not a "you always need official support" person, but we are in a league here where I would feel uncomfortable without a single party being responsible for the entire solution. Alternatively, if my own practical experience covered the performance figures, that would be OK as well. But spending 50k on hardware alone without being 100% certain that it will do the job is quite risky.
 

jerryxlol

Junior Member
Joined
Sep 30, 2020
Messages
13
@ChrisRJ I get it. But the situation is that I need 3 backup devices with a huge amount of data to store. I have labbed TrueNAS on several devices, and apart from the speed I know what to expect. It is still a proof-of-concept design. If it is not feasible, I can always buy a device from Dell for $200k each. (That is not the way I would like to go.)
And on that point: with the hardware I can manage even without support. Support on the software is crucial when things go bad.
 
Top