Build for Veeam backup repository

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
@blanchet yes, that is the old way of doing it. I can always fall back to that by swapping the LSI HBAs for RAID controllers and redefining what it will run ;)
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
154
Here's what I can tell you about TrueNAS NVMe support from our experience so far. For reference, our NVMe server is a Dell PowerEdge R740xd with 17 x 15.36TB Micron 9300 Pro U.2 NVMe drives in a 2 x 8 RAIDZ2 config with a single hot spare.



Do not use anything older than TrueNAS Core 12.0. The nvd driver will error out and give you boot issues if you use more than 12 NVMe devices. That one was a right ugly thing to track down; we thought we had configuration or hardware issues.

As mentioned before, hot-swap support for NVMe devices isn't quite there yet. It's better in 12.0-U1 than in older versions, but even then, while you can hot-swap the drives, you won't be able to perform any storage operations with them, such as creating a new pool or replacing a failed drive. The system will see the drive, and you can query SMART status and pull other information, but you can't do anything else until a reboot.
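For example, after a hot-swap you can still confirm the drive shows up and is healthy from the shell, roughly like this (device names are just examples):

# List the NVMe controllers and namespaces the kernel currently sees (FreeBSD / TrueNAS Core)
nvmecontrol devlist
# Query SMART / health data for one of them
smartctl -a /dev/nvme0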

Since it appears that your NVMe pool is just for quick restore of VM files, you don't actually NEED a RAIDZ implementation. However, I would still suggest using one, with a hot spare, unless you are pushing for maximum performance. Those Micron 9300 drives are so fast that, while you do take a performance hit when using RAIDZ, you likely won't notice the difference unless you are really pushing them. In our testing, we were able to extract 4 million read IOPS and just over 100Gb/sec of bandwidth from a 16-drive zpool set up as 8 mirrored vdevs. That dropped by about a quarter when configured as 2 x 8-drive RAIDZ2. However, even at that speed, we still ended up upgrading from 10Gb to 25Gb NICs because we were able to easily saturate a 10Gb NIC.
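Roughly, the two layouts we compared would be created like this (the pool name and device names here are just placeholders):

# 16 drives as 8 mirrored vdevs (the faster layout in our testing)
zpool create nvmepool \
  mirror nvd0 nvd1  mirror nvd2 nvd3  mirror nvd4 nvd5  mirror nvd6 nvd7 \
  mirror nvd8 nvd9  mirror nvd10 nvd11  mirror nvd12 nvd13  mirror nvd14 nvd15

# The same 16 drives as 2 x 8-drive RAIDZ2, with the 17th drive as a hot spare
zpool create nvmepool \
  raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5 nvd6 nvd7 \
  raidz2 nvd8 nvd9 nvd10 nvd11 nvd12 nvd13 nvd14 nvd15 \
  spare nvd16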

As far as Veeam pools go, I would suggest creating 2 or 3 separate pools and using them as separate backup targets. Veeam loves to run wide, and that's generally how you scale performance for Veeam workloads.
 

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
@firesyde424 Wow, thanks for the great info. I was aware that hot-swap is not working yet. Two or three pools will be used.
Will lab as soon as it arrives :)

I wasn't even expecting this machine to hit more than 75Gb/s, so my rack interconnect is only 100Gbit. (Separate rack on a different floor.)
 

Louise1

Cadet
Joined
Feb 24, 2021
Messages
1
When you create scale-out repositories, Veeam allows you to pick both direct-attached storage and network-attached storage as the performance tier, and object storage as the capacity tier.
 

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
So it finally arrived :)

Now I am a bit confused and thinking about how to design the pools.
The NVMe Microns are for sure going to be RAIDZ1 ... 5 x 12.3TB 3DWPD NVMe.

And how should I carve these pools up for Veeam and Office 365 backups (MinIO)?

How long will a rebuild take if I do RAIDZ3 over 22 drives? (I have good experience with NetApp 24-drive shelves with RAID-TEC.)
I have swapped the P4610s for P5510s -> newer, faster, for the vdevs. So I have 4 x P5510 and 44 x 12TB drives. (And I'm thinking of buying another JBOD because of insufficient space.)

Is it a bad idea to create 2 pools: one with 24 drives plus 2 NVMe for a vdev, and the remaining 20 drives plus 2 NVMe for common Veeam backup?
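Roughly what I have in mind for the first of those pools, sketched from the CLI (pool and device names are just placeholders, and the NVMe pair is shown as a mirrored log vdev, though it could end up as a special vdev instead):

# Pool 1: 24 spinners, shown here as 2 x 12-wide RAIDZ2, plus a mirrored pair of NVMe as SLOG
zpool create veeam1 \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
  log mirror nvd0 nvd1
# Pool 2 would follow the same pattern with the remaining 20 drives and the other NVMe pair.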


What would you recommend? I have never used the Veeam O365 backup, and I have calculated that it needs 200TB for backups.

I have read about Veeam backup scaling, where you can use a backup device and all of its LUNs as one logical unit. (Never tried it; will do a PoC.)


Thanks in advance. (I will post the final configuration after the proof of concept with Veeam.)
 
Joined
Mar 29, 2021
Messages
2
My previous post
"Consider 45drives.com
We are happy with the density/performance/cost/reliability"

Ya, I'm no longer happy. We have been 'resilvering' for weeks from two drive failures.
45Drives had us replace the Molex power cable. Lots of labor.
I'm going back to a RAID controller for the next storage target.
Possibly Dell, as we have several already.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It seems nobody jumped in to comment on the point above about the high number of drives in a VDEV...

The recommended maximum number of drives in a RAIDZ VDEV is 12. Technically there's nothing stopping you from going to 45, but as noted above, resilvering is the limiting factor.

Making 4 VDEVs in a 45drives array would seem the suitable workaround (of course losing more drives to parity, but resilvers would be more manageable).
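For example, a 45-bay chassis could be carved into 4 x 11-wide RAIDZ2 VDEVs in one pool, keeping the last bay for a hot spare (pool and device names are placeholders):

zpool create bigpool \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
  raidz2 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 \
  raidz2 da22 da23 da24 da25 da26 da27 da28 da29 da30 da31 da32 \
  raidz2 da33 da34 da35 da36 da37 da38 da39 da40 da41 da42 da43 \
  spare da44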
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
Who said anything about 45 drives?
Even 45drives.com wouldn't expect to sell that many for a single VDEV :smile:
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Who said anything about 45 drives?
These posts:
Consider 45drives.com
We are happy with the density/performance/cost/reliability
Ya, I'm no longer happy. We have been 'resilvering' for weeks from two drive failures.
45Drives had us replace the molex power cable. lots of labor.

And to a lesser extent:
How long will a rebuild take if I do RAIDZ3 over 22 drives? (I have good experience with NetApp 24-drive shelves with RAID-TEC.)
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
I thought 45drives was the name of the shop.
Anyway, never mind; thanks for your insight about the VDEVs.
 

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
It's interesting: 1 x 24-drive RAIDZ3 is much slower (about half the speed) than 2 x 12-drive RAIDZ2 in one pool. After changing the topology I am able to achieve >1GB/s sustained writes. (A 500GB VM backed up in about 8 minutes.)
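For what it's worth, I'm watching sustained throughput per vdev while a backup job runs with something like this (pool name is a placeholder):

# Per-vdev bandwidth and IOPS, refreshed every 5 seconds
zpool iostat -v backup 5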

Will see how the POC continues.

What really bothers me is that I cannot get more than what is shown in the picture out of that 5 x 12TB NVMe RAIDZ1. (Still solving a problem with the aggregation of the 2 x 25GbE Ethernet card; when things get serious I will swap it for 100GbE - Veeam proxy for direct SAN access.)

Drives are https://media-www.micron.com/-/medi...df?la=en&rev=b6908d03082d4fd7b022a2f40d1b731e

Micron 9300 MAX - 5 x 12.8TB with 3.5GB/s read / 3.5GB/s write. (No special vdevs.)

iSCSI performance:

[screenshot attachment: micronpool.png]


iSCSI vs SMB - I have chosen iSCSI due to the non-working network aggregation and performance that was stuck around 2GB/s (as if it were all going over one network port).
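For reference, I'm checking the LACP state from the shell roughly like this (interface name is an example). Worth keeping in mind that LACP hashes per flow, so a single iSCSI or SMB session still rides one member link:

# Show the lagg protocol (lacp) and which member ports are active
ifconfig lagg0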
 

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
Hmm, I got into the same situation as many ZFS users. When I modified the jobs from Veeam, write performance dropped to 1MB/s, so direct backup to the 2 x RAIDZ2 pool is not possible (dedup on). (One job backed up at 2GB/s, and it has one VM - a large database, ~700GB.)

I have moved all the jobs to the NVMe pool (Micron-pool) and the speeds are usable (1.2-2GB/s) when 4 parallel tasks are running. I can work with that.

Is there any way to improve write speeds on the spinners? I have 2 x P5510 4TB as SLOG on them, but nothing more.

I have noticed one other thing: the ZFS ARC ate 450GB of RAM. Is it possible that the ARC overran the DDT tables and that was why the performance dropped?
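In case it helps anyone else digging into the same thing, these are the commands I'd use to see how big the dedup table actually is and what the ARC is doing (pool name is a placeholder):

# Summary of the dedup table: entry count and size per entry, on disk and in core
zpool status -D tank
# More detailed DDT histogram
zdb -DD tank
# ARC statistics (size, MRU/MFU split, metadata) straight from the kernel
sysctl kstat.zfs.misc.arcstats | head -40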

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Yes, the use of dedup is generally catastrophic unless you design the pool and size ARC specifically for it.
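A rough back-of-the-envelope sketch of why (the ~320 bytes per DDT entry figure is the commonly quoted ballpark, and 100TB of unique data at 128K records is just an illustrative workload):

# ~100TB of unique data / 128KB records = ~800 million unique blocks
# ~800M entries x ~320 bytes each = ~250GB of dedup table that wants to live in ARC
echo $(( 100 * 1024 * 1024 * 1024 / 128 * 320 / 1024 / 1024 / 1024 ))   # prints 250 (GB)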
 

jerryxlol

Dabbler
Joined
Sep 30, 2020
Messages
19
So are you saying that 512GB of RAM and 2 x 12-drive 12TB RAIDZ2 vdevs with 2 x Intel P5510 3.8TB as SLOG is not enough for dedup?

So should I buy more NVMe for a dedup vdev?
 
Last edited:

a.dresner

Explorer
Joined
Dec 10, 2022
Messages
75
SSDs and mirrors. This isn't difficult. What's more interesting is how to make a fast HDD pool.

The first thing you do is ditch the RAIDZ, because RAIDZ tanks your IOPS.

The second thing to do is to ditch the 8TB HDDs and replace them with something larger; 12TB drives are quite affordable, but you should use the largest disks you can justify, in mirror pairs.

The third thing is not to fill all your space. ZFS write speeds are tightly coupled to how easy it is for ZFS to find free space. ZFS will make a HDD perform almost like SSD, even for totally random data, if you keep occupancy low and have sufficient ARC and L2ARC.

What you're going to find is that all the Veeam FE's and forum users who are hating on FreeNAS/TrueNAS have either directly built, or tried to help someone who has built, a RAIDZ based storage pool with one or MAYBE a handful of vdevs, filled it to the gills, and discovered that it was great for the first few runs and then it began to suck bigtime, worse and worse as time went on, until one day it suffered catastrophic baseball bat failure.

All the stuff I talk about on these forums about block storage and fragmentation apply to backups, though at a somewhat less intense level. ZFS writes hella-fast if it has large chunks of contiguous space available. As a Copy-on-Write filesystem, it does NOT seek to an existing block within a file and overwrite it. It instead allocates new space and writes it there. So if you keep your pool at 30% capacity, you get much better performance than if it is at 60%. See almost any post on the forum where I discuss fragmentation, a bunch of them give you a picture of pool occupancy vs steady state performance.
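If you want to see where a pool sits on that curve, the CAP and FRAG columns from something like this (pool name is a placeholder) are the first things to look at:

# Capacity used and free-space fragmentation, per pool and per vdev
zpool list -v tank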
Rather than start a new thread, I found this one... hope it's okay to revive it a bit.

I am backing up 1 Hyper-V host with 8 VMs in my office. For the last decade I have been backing up to a Windows-based repository. It works, speed is around 500 MB/s, restores work fine.

Just took my old parts and put together a TrueNAS system (X9 board, 128GB). It's been running for a month now; ready to start using it. I have 6 x 10TB drives inside, with room for some more and/or some SSDs.

I have offsite hosts that I send copies to, weekly and monthly. I'm focused on my main target. I started testing a backup to an SMB share on a RAIDZ2; it's running well, 800 MB/s, but you mention above that it won't stay that way over time? It's taking up 2TB of 35TB and, based on my other backups, won't grow larger than 10TB.

Given the hardware I have:
1. How to configure my drives (you mention to ditch RAIDZ?)
2. Any VDEVs? (dedupe?)
3. iSCSI to a ZVOL? (ZVOL options?)

Or just leave it like it is...

Thanks!
 