Good read/write performance over 10GbE?

oguruma

Patron
Joined
Jan 2, 2016
Messages
226
I'm thinking it's time to build a NAS for my daughter. She's into photo/video editing. She currently uses a Synology NAS with about 4TB of storage.

She has right around 4TB of data now, and given the large photo/video files, I'll assume she'll need about 40TB of total storage three years from now. What's a good storage configuration? She has a 10GbE connection back to the NAS.

I have a 12-bay chassis and a dual-socket LGA 2011 setup with 2x E5-2640 CPUs I can give her, as well as plenty of RAM and a couple of Intel S35xx SSDs.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
RAIDZ2 or Z3 with either a metadata (special) vdev (be careful - that vdev is pool critical) or an L2ARC set to metadata only.
This assumes you are buying all the disks now.

If you want to buy disks gradually, then use mirrored pairs, so you can add a vdev to the pool every so often as expansion. It's not ideal, but it would work.

However, if you take the long view and assume that the RAIDZ expansion feature will land sometime toward the end of next year, then go back to RAIDZ2 or 3 and start with enough "best bang for the buck" disks for, say, 16TB or so usable in the first instance. Then next year you may be able to expand the pool just by adding disks and a little bit of maintenance.
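
Back of the envelope, the usable space for a few possible layouts works out roughly like this (a hedged sketch - the 8TB drive size and the vdev widths are just example numbers, not a recommendation):

```python
# Rough usable-capacity comparison for a 12-bay chassis (illustrative numbers only).
# Real-world figures will be somewhat lower due to ZFS overhead and TB-vs-TiB,
# so treat these as ballpark values.

def raidz_usable(disks, disk_tb, parity):
    """Approximate usable TB for a single RAIDZ vdev with the given parity level."""
    return (disks - parity) * disk_tb

def mirror_usable(pairs, disk_tb):
    """Approximate usable TB for a pool of 2-way mirror vdevs."""
    return pairs * disk_tb

disk_tb = 8  # assumed "best bang for the buck" drive size

print("8-wide RAIDZ2 of 8TB disks :", raidz_usable(8, disk_tb, 2), "TB usable")   # 48
print("12-wide RAIDZ3 of 8TB disks:", raidz_usable(12, disk_tb, 3), "TB usable")  # 72
print("4 mirrored pairs of 8TB    :", mirror_usable(4, disk_tb), "TB usable")     # 32
```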

You could also add a smallish mirrored (or even striped [RAID0]) pool of SSDs as working space, given the 10GbE - but that probably still won't be as good as copying to the local computer.

Just my random thoughts

Oh and one more thought - how are you backing that stuff up?
 

oguruma

Patron
Joined
Jan 2, 2016
Messages
226

Currently, for backup, basically everything in the house is backed up to a RAIDZ1 pool on a separate NAS. I know RAIDZ1 is typically ill-advised, but it seems reasonable given that it just houses backups, and I keep a cold spare on hand for it. I also have a set of offline HDDs that I back up to periodically, and critical stuff is backed up to S3 as well as to Blu-ray discs we keep offsite.

Her PC has a 2TB NVMe drive that she copies her files to from the camera, but she doesn't always edit them right away, so she'll typically move them off to the NAS to deal with later. For photos she uses Adobe Lightroom, and the photos are typically edited straight off the NAS (with Lightroom, your edits actually go into a local SQLite catalog, not into the photo files themselves). Once photos hit the NAS they typically don't leave, unless she has to move some to an external HDD to take on the road or something. Video is a different story: I think she ends up moving files back and forth to and from the NAS to edit, but I really don't know how video editing works. I'd assume that needs to be done with the files on the workstation itself.

The more I think about her workflow, the more I think something like a 4TB(ish) RAIDZ1 SSD pool for her main "working" space, plus a Z2/Z3 pool for her finished/archive space, might be more appropriate. Maybe back up the "working" pool to a single disk. Supposing I went that route, and assuming I want about 4TB of usable storage, what's a good SSD to use? I currently use Intel S35xx SSDs for boot devices, but other than that I've never used SSDs with ZFS before. Are "consumer" grade SSDs appropriate? The 2TB consumer SSDs tout something like 700 TBW, which seems like it would be sufficient, but I have no idea whether they ever actually achieve the advertised rating.
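
To put that 700 TBW number in perspective, here's the rough math I'm doing in my head (the daily write figure is purely an assumption about her workload, not a measurement):

```python
# How long would a 700 TBW rating last at a given write rate? (Illustrative only.)
# daily_writes_gb is an assumed figure for camera offloads plus scratch writes
# landing on each drive, not a measurement of her actual workload.

tbw_rating = 700          # advertised endurance of a 2TB consumer SSD, in TB written
daily_writes_gb = 100     # assumed average writes per day per drive

years_to_wear_out = (tbw_rating * 1000) / daily_writes_gb / 365
print(f"At {daily_writes_gb} GB/day, {tbw_rating} TBW lasts ~{years_to_wear_out:.0f} years")
# -> roughly 19 years at that rate, so the drives go obsolete long before they wear out
```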

I've never messed with a separate metadata vdev or an L2ARC before. Suppose I added a metadata vdev to a large spinning-rust pool. What happens if the metadata vdev goes down? What would that look like as far as disaster recovery goes?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
The Intel DC S35xx are low-endurance (relatively speaking) drives for web servers and the like, and will be fine for booting TrueNAS - overkill, even. The S3600s are medium endurance, whilst the S3700s are high endurance.
SSDs generally don't go wrong (except Samsung QVOs, for which I have a slightly irrational dislike), so consumer drives are OK. Consumer drives generally don't have PLP (power-loss protection) - but you do have a UPS, don't you?!
I tend to use Crucial MX500 drives and have found they last well for a consumer drive - they even claim to have PLP (maybe).

Of course, if you could add a couple of NVMe drives along with the 10GbE, that would be a speedy touch for the working space.

Keeping the metadata on storage faster than HDDs makes things like directory listings quicker and may improve response times from programs that have to scan the entire disk and find files from time to time. You can do this with either an L2ARC set to metadata only - which is not pool critical; lose the L2ARC and you do not lose the pool - OR a special vdev, which should be SSD and have the same resiliency as the main vdevs. The special vdev is pool critical: lose that vdev and you lose the pool. However, it has (IMHO) more functionality if you choose to use it. [I use mirrored SSDs as a special vdev on my main HDD pool.] Do not put an L2ARC (metadata only) or a special vdev on an SSD pool - it's not worth it.
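
For reference, the two options boil down to something like this at the command line (a minimal sketch - "tank" and the device paths are placeholders, and on TrueNAS you'd normally do this through the web UI rather than by hand):

```python
# Minimal sketch of the two approaches; "tank" and the device paths are placeholders.
# On TrueNAS you would normally configure this via the web UI, but these are the
# underlying zpool/zfs operations (run as root against an existing pool).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Option 1: metadata-only L2ARC -- not pool critical; losing it does not lose the pool.
run(["zpool", "add", "tank", "cache", "/dev/nvme0n1"])
run(["zfs", "set", "secondarycache=metadata", "tank"])  # default is "all"

# Option 2: mirrored special (metadata) vdev -- pool critical, so mirror it and give
# it at least the same resiliency as the data vdevs.
run(["zpool", "add", "tank", "special", "mirror", "/dev/sda", "/dev/sdb"])
```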
 

oguruma

Patron
Joined
Jan 2, 2016
Messages
226

I do have a UPS, a pretty nice Eaton rackmount unit with an external battery module; I get about 90 minutes of runtime at average rack load, which is more than enough to shut down gracefully. For my existing NAS/KVM hosts I connect one PSU to the UPS and the other to wall power, so I'm not too terribly worried about power loss.

3x 2TB MX500s in RAIDZ1 would be something like 2,100 TBW (700 TBW each, per the manufacturer's specs) across the entire vdev, which likely means they'll be obsolete long before they wear out, but I've also never used SSDs in that write-intensive a manner before.
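
Factoring in parity, my rough math looks something like this (a sketch only - the ~1.5x write overhead for a 3-wide RAIDZ1 is an approximation that ignores padding and small blocks):

```python
# Rough effective-endurance sketch for 3x 2TB MX500 in RAIDZ1 (approximation only).
# RAIDZ1 writes parity alongside data, so a 3-wide vdev writes roughly 1.5x the
# user data to the drives (2 data + 1 parity per stripe), ignoring padding.

drives = 3
tbw_per_drive = 700                       # manufacturer rating for the 2TB MX500
raw_endurance_tb = drives * tbw_per_drive # ~2100 TB of raw writes across the vdev
parity_overhead = drives / (drives - 1)   # ~1.5x for 3-wide RAIDZ1

user_data_tb = raw_endurance_tb / parity_overhead
print(f"~{user_data_tb:.0f} TB of user data before hitting the rated TBW")
# -> ~1400 TB of user writes, still far more than a 4TB working pool is likely to see
```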

With L2ARC, what ratio of L2ARC storage to RAM to data storage does a person need? Supposing 40TB of data, how much RAM/L2ARC would I need?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
L2ARC for metadata - I am guessing here, but I suspect a 256GB SSD would do just fine (and it's probably way overkill).
Someone else might have a better view. 100GB would be fine, I suspect - but a larger drive would have more longevity.

Remember, this is L2ARC for metadata only - so it won't use up too much of your memory. I suspect a full L2ARC would be a waste of time here; add more ARC (RAM) first. Note that you will have to set the L2ARC to metadata only, as by default it caches everything.

N.B. I am guessing on the size of the metadata L2ARC - but I like to think it's an educated guess.
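
Rough math on the RAM side, if it helps (ballpark only - the per-record header cost and the average record size below are assumptions, and the real figures depend on the OpenZFS version and your data):

```python
# Ballpark of the RAM consumed by L2ARC headers (assumptions: ~96 bytes of ARC
# header per cached record and a 16KB average record size; both are guesses,
# and smaller records mean proportionally more header overhead).

l2arc_size_gb = 256
avg_record_kb = 16
header_bytes = 96

records = l2arc_size_gb * 1024 * 1024 / avg_record_kb
ram_overhead_mb = records * header_bytes / (1024 * 1024)
print(f"A {l2arc_size_gb}GB L2ARC costs roughly {ram_overhead_mb:.0f} MB of RAM in headers")
# -> ~1.5GB here, which is why "add more ARC (RAM) first" is the usual advice
```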

I used a set of 0.5TB MX500s in a RAID10 for VM storage for 5 years and they are still useful. I have been using a set of 1TB MX500s in a pool of 3 mirrored vdevs for a year now - they are down 10-12% on wear, so I am happy with their lifespan for consumer drives.
 

oguruma

Patron
Joined
Jan 2, 2016
Messages
226

Are there any documented experiences with consumer NVMe drives? The idea of mounting some NVMe drives on a PCIe card seems attractive. IIRC the motherboard I was going to use has PCIe 3.0 x16 slots, which would accommodate a few drives for the "working" storage plus an NVMe drive for the L2ARC.
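
Roughly speaking, the bandwidth math seems to favor that (approximate numbers, ignoring protocol overhead):

```python
# Quick bandwidth sanity check: NVMe on a bifurcated PCIe 3.0 x16 card vs the
# 10GbE link (approximate figures; protocol overhead ignored).

PCIE3_GBPS_PER_LANE = 0.985   # ~1 GB/s usable per PCIe 3.0 lane
lanes_per_drive = 4           # bifurcating x16 into 4x4 gives each drive four lanes

drive_link = PCIE3_GBPS_PER_LANE * lanes_per_drive
ten_gbe = 10 / 8              # 10 Gb/s Ethernet is ~1.25 GB/s before overhead

print(f"One x4 PCIe 3.0 NVMe link: ~{drive_link:.1f} GB/s")
print(f"10GbE:                     ~{ten_gbe:.2f} GB/s")
# -> even a single NVMe drive can saturate the 10GbE link; the network is the bottleneck
```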
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
Can you bifurcate the x16 slots?
I THINK you need to bifurcate, otherwise only one drive will be seen.
 

oguruma

Patron
Joined
Jan 2, 2016
Messages
226

I THINK so. I'd have to boot it up to double-check, but I do remember seeing bifurcation options in the BIOS - unless the bifurcation is just for the x8 slots, but that would seem sort of odd...

It's an X9-something-or-other dual LGA 2011 board.
 