SOLVED vdev design - IOPS questions

whitney

Cadet
Joined
Apr 27, 2020
Messages
7
I'm new to the forums and building a new personal-use system, and I'm looking for insight on how to design its vdev/pool layout to best suit my photo/video editing needs. My current resources are as follows:

4th gen Intel Core i5-4570 quad-core 3.2GHz
Supermicro X10SAE MoBo
4 x 8GB non-ECC MEM-DR380L-HL01-UN16 (waiting on delivery)
LSI 9207-8i HBA + StarTech SFF-8087 adaptor (waiting on delivery)
2 x Crucial BX500 120GB SSD mirrored boot drives (waiting on delivery)
4 x 4TB WD40EFRX drives (waiting on delivery)
2 x 3TB HGST Ultrastar 7200rpm (freebies from a datacenter buddy, 46k hrs per SMART, no issues since I put them in use 6 months ago)
2 x 2TB Seagate 5900rpm (35k+ hrs per SMART, no issues so far)
SeaSonic Focus + 750 (waiting on delivery)
Fractal Design Define R6, 4 extra HDD trays
Stock CPU heatsink/cooler, but willing to upgrade/add fans (something I'll closely monitor once system is finally built/powered up)
With a dedicated APC Smart-UPS 1500

I've been using FreeNAS since the v9 days, but only with legacy components in a cheapo build: a pair of mirrored vdevs in 2 separate pools. I decided enough was enough, budgeted some time & money for a better build, and have been trawling the forums for 9+ months while designing my system. Many of the components above were purchased because they had been suggested here and verified as workable with ZFS.

Now down to the point of this thread: my NAS is used by myself alone. I've accumulated less than 2TB of data in my years of use, primarily RAW photos from my hobby photography, a few videos I've captured with my DSLR & drone, and a handful of movies that I've purchased. I also use Plex & Unifi jails. My biggest complaint with the pair of mirrored-vdev pools in the past was the inability to edit my RAW photos (via Darktable or GIMP) & video (via various Linux-based editors) directly off the NAS.

Given the inflexibility of RAIDZ2 configurations, would a single 8-disk RAIDZ2 pool satisfy the IOPS required for editing, or should I look into a pool of 2 x 6-disk RAIDZ2 vdevs instead? I understand that "build & test" is the ideal way to answer my question, but delays in component shipping, disk burn-in time, & memory testing could put me months out before I can really know. And if extra drives (plus extra fans) are needed for the 2 x 6 RAIDZ2 layout, I'd like to get a jump on ordering.

Thanks, and I appreciate any insight.
 
Joined
Oct 18, 2018
Messages
969
4 x 8GB non-ECC MEM-DR380L-HL01-UN16 (waiting on delivery)
It looks like you already purchased these? I generally recommend against small DIMMs like that. The reason is that FreeNAS is memory hungry, and if you end up needing more while all 4 of your slots are filled with 8GB DIMMs, you'll have to replace DIMMs outright rather than just add another. If possible, you may consider returning those and getting one of the largest DIMMs supported by your board, or two if you can afford it.

Fractal Design Define R6, 4 extra HDD trays
Great case; see my build below. I eventually had a few temperature problems, likely due to an overheated room.

2 x 3TB HGST Ultrastar 7200rpm (freebies from a datacenter buddy, 46k hrs per SMART, no issues since I put them in use 6 months ago)
If you run badblocks and long SMART tests and find no errors, those should be fine to use. 7200rpm drives will increase the heat in your chassis.

2 x 2TB Seagate 5900rpm (35k+ hrs per SMART, no issues so far)
Same as above re: burn-in.
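
For reference, the burn-in boils down to a couple of commands; a rough sketch against a hypothetical /dev/ada2 (note that badblocks -w is destructive, so only run it on disks holding no data):

    smartctl -t long /dev/ada2    # start a long SMART self-test (runs in the background)
    badblocks -ws /dev/ada2       # destructive four-pattern write/verify pass; takes many hours on 3TB
    smartctl -a /dev/ada2         # afterwards, review self-test results and SMART attributes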

I also use Plex & Unifi jails. My biggest complaint with the pair of mirrored-vdev pools in the past was the inability to edit my RAW photos (via Darktable or GIMP) & video (via various Linux-based editors) directly off the NAS.
Do you need to edit directly off of the NAS, or can you stand to copy the files locally, edit, and then move the results back to the NAS? If you really want to work off the NAS you'll want to consider IOPS (striped mirror vdevs and SSDs help a lot here), read caching (memory), and, if you're using a share that does sync writes, a SLOG device. You could consider a smaller "editing" pool focused on performance alongside an archival pool for storing RAW files and results. Ideally you configure your hardware so that your network is the bottleneck.
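
If you do end up needing a SLOG, attaching one to an existing pool is a one-liner; a minimal sketch, assuming a hypothetical pool named tank and a spare SSD at ada6:

    zpool add tank log ada6    # attach ada6 as a dedicated SLOG (log vdev)
    zpool status tank          # confirm the log vdev shows up

Keep in mind a SLOG only helps sync writes; async traffic never touches it.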

Given the inflexibility of RAIDZ2 configurations, would a single 8-disk RAIDZ2 pool satisfy the IOPS required for editing, or should I look into a pool of 2 x 6-disk RAIDZ2 vdevs instead?
If you're looking for IOPS, go with striped mirror vdevs. Though not as space-efficient, they will have higher IOPS.
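
For illustration, here's roughly what that layout looks like at creation time (pool and disk names are placeholders):

    zpool create editpool mirror ada1 ada2 mirror ada3 ada4   # two mirrored pairs, striped
    zpool add editpool mirror ada5 ada6                       # a third pair later widens the stripe

Each mirrored pair you add increases pool IOPS, which is exactly the kind of incremental growth raidz2 vdevs can't do.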
 

whitney

Cadet
Joined
Apr 27, 2020
Messages
7
It looks like you already purchased these? I generally recommend against small DIMMs like that. The reason is that FreeNAS is memory hungry, and if you end up needing more while all 4 of your slots are filled with 8GB DIMMs, you'll have to replace DIMMs outright rather than just add another. If possible, you may consider returning those and getting one of the largest DIMMs supported by your board, or two if you can afford it.
The 4th gen Core processor I'm using (i5-4570) & the Supermicro X10SAE MoBo impose DDR3 memory limitations, with 8GB DIMMs being the largest viable choice. I understand that if I ever decide to upgrade at a future date I'll need to commit to a more modern processor (preferably a Xeon), a new MoBo, much higher memory density, and ECC (a Xeon upgrade would be the catalyst for that change).

Great case; see my build below. I eventually had a few temperature problems, likely due to an overheated room.
I intend to review logs thoroughly during both the burn-in and pool-testing phases to document temperature issues. I already have my eye on a couple of Noctua solutions, pending my understanding of static pressure vs. airflow in that specific case's HDD bay (NF-A12x25 PWM, NF-S12A PWM, or NF-F12 PWM). I understand that 2 x 140mm in front is typical in the Define R6, but my OCD kicks in and points me towards 3 x 120mm in the front panel. That configuration would let me provide additional HDD cooling by relocating the 2 stock 3-pin Fractal 140mm fans: one directly below the HDD rack as intake & the other directly above it as exhaust.
I have also seen a few scripts that monitor HDD temps and drive a PWM curve to handle periodic high-usage times (scrubs, high I/O, replication, etc.). I know little about implementing scripts, so if I can't figure it out I can always fall back to an Arduino (with code for the 25kHz PWM the fans use).
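
The core of those scripts is less scary than it sounds; a rough sketch, with hypothetical drive names and the actual fan-write step stubbed out (how you set the duty cycle depends on what your board exposes, e.g. IPMI or an external controller):

    #!/bin/sh
    # Find the hottest drive via smartctl and map it to a fan duty percentage.
    MAX=0
    for d in ada0 ada1 ada2 ada3; do
      T=$(smartctl -A /dev/$d | awk '/Temperature_Celsius/ {print $10}')
      [ -n "$T" ] && [ "$T" -gt "$MAX" ] && MAX=$T
    done
    if   [ "$MAX" -ge 45 ]; then DUTY=100
    elif [ "$MAX" -ge 40 ]; then DUTY=70
    else                         DUTY=40
    fi
    echo "hottest drive ${MAX}C -> fan duty ${DUTY}%"   # replace with the real fan-control call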

If you run badblocks and long SMART tests and find no errors, those should be fine to use. 7200rpm drives will increase the heat in your chassis.
I failed to run badblocks on the 3TB freebies before putting them into action, but thankfully I currently have 2 mirror vdevs in separate pools backing up to each other. I do run weekly LONG tests on all HDDs at the moment, considering their ages. The excessive redundancy is the main reason I want to put RAIDZ2 parity in play with 8 disks (12TB usable with the 2TB drives in the pool, 18TB once the 3TB drives become the smallest, and ultimately 24TB once the 4 smaller disks are upgraded to 4+TB). No more wasted space than needed moving forward, and a budget that scales as I go.
I love free, and the moment COVID is over my buddy goes back to work and I can pull more 3TB HGSTs out of the decommissioned equipment stack. However, the HGST 7K3000 disks are both hot & noisy, so if I have the $100 in my pocket I'd rather buy another 4TB Red/IronWolf.

Do you need to edit directly off of the NAS, or can you stand to copy the files locally, edit, and then move the results back to the NAS? If you really want to work off the NAS you'll want to consider IOPS (striped mirror vdevs and SSDs help a lot here), read caching (memory), and, if you're using a share that does sync writes, a SLOG device. You could consider a smaller "editing" pool focused on performance alongside an archival pool for storing RAW files and results. Ideally you configure your hardware so that your network is the bottleneck.
Currently I edit locally and move files to the NAS for archive, which I hate. Non-destructive photo and video editing adds extra sidecar file(s) that must accompany the original RAW photo/video. Too many times I've dipped back into the archive to re-edit a shoot (50-300 photos) and had to manually determine which editing file was the most recent. This is time-consuming, and has cost me hours of work when I picked the wrong editing file(s). I can restore a snapshot, but then I get the fun job of manually re-analyzing the batch AGAIN to determine the correct editing file.
I am using SMB for access ATM, so I had not considered adding a SLOG SSD, but switching to NFS & adding one may be worth considering.
I had also not previously considered pairing my 8-disk RAIDZ2 with 2 mirrored pairs of SSDs (like the WD SA500 500GB) in a striped configuration. If WD's specs are even close to accurate (SeqR 560MB/s, SeqW 530MB/s), that would theoretically put me at 2,240MB/s read (all 4 SSDs) & 1,060MB/s write (2 mirrored pairs) for the 4-SSD striped mirror, plus the extra IOPS. That would do very well, and would fill the case completely: 2 cheap boot SSDs, 8 HDDs, 4 mirror/stripe SSDs, and one SLOG SSD. Moving forward, I'd only need to worry about Quality instead of Quantity because "adding" wouldn't be possible any longer.

Thanks again for all your suggestions.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I am using SMB for access ATM, so I had not considered adding a SLOG SSD, but switching to NFS & adding one may be worth considering

Why? Write with sync will never be faster than write without sync. Best case, NFS with SLOG and sync is as fast as SMB without sync.
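
It's easy to demonstrate the gap for yourself; a minimal sketch, assuming a hypothetical dataset tank/test mounted at /mnt/tank/test:

    zfs set sync=always tank/test                              # force every write to be synchronous
    dd if=/dev/zero of=/mnt/tank/test/junk bs=1m count=2048    # time this...
    zfs set sync=disabled tank/test
    dd if=/dev/zero of=/mnt/tank/test/junk bs=1m count=2048    # ...against this
    zfs inherit sync tank/test                                 # restore the default when done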

pairing my 8-disk RAIDZ2 with 2 mirrored pairs of SSDs (like the WD SA500 500GB) in a striped configuration

Just don't take "pairing" too literally; you do want a separate pool for your SSDs. Will 1TB hold your photos, or do you need more space? If more, you might want bigger SSDs.

Also keep in mind the limitations of your Gig link. Unless you are dealing with a lot of small files, your HDD raidz2 will saturate a gig link. You can, of course, move to a 10Gig link for your PC and FreeNAS, and then a striped mirror of SSDs would really start to shine.
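
If you want to know where that ceiling actually sits, iperf3 will tell you; a quick sketch, with a hypothetical NAS address:

    iperf3 -s                 # on the NAS
    iperf3 -c 192.168.1.10    # on the workstation; a healthy gig link reports ~940 Mbit/s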
 

whitney

Cadet
Joined
Apr 27, 2020
Messages
7
Why? Write with sync will never be faster than write without sync. Best case, NFS with SLOG and sync is as fast as SMB without sync.

Just don't take "pairing" too literally; you do want a separate pool for your SSDs. Will 1TB hold your photos, or do you need more space? If more, you might want bigger SSDs.

Also keep in mind the limitations of your Gig link. Unless you are dealing with a lot of small files, your HDD raidz2 will saturate a gig link. You can, of course, move to a 10Gig link for your PC and FreeNAS, and then a striped mirror of SSDs would really start to shine.
I originally picked SMB shares due to their universal adoption across my OS collection, which currently consists of Android, Win10 Pro, Win10 Home x 2, Win8 Enterprise, Win7 Enterprise, Ubuntu 16.04, Ubuntu 18.04, and, in a couple of days, Ubuntu 20.04. I also use Kali, but there are no shares to worry about there. I dual- or triple-boot many of my machines in the field, so VMs aren't particularly useful.

I may have misspoken, but I appreciate the feedback. I meant a single 8-disk RAIDZ2 vdev forming Pool Alpha for archive, and a separate "photo & video editing" pool, Pool Bravo, with 2 vdevs of 2 x SSD-TBD (mirrored) + 2 x SSD-TBD (mirrored). Pool Bravo would only need to hold current projects, replicating back to the archive pool (Alpha), and 1TB would be plenty with most RAW photos at about 20-25MB each & 4K videos at about 1GB each (I currently shoot more 1080p than 4K).
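The Bravo-to-Alpha replication could be periodic incremental snapshots (FreeNAS can schedule this in the GUI); a minimal sketch of the underlying commands, with hypothetical pool/dataset names:

    zfs snapshot bravo/projects@2020-05-01
    zfs send -i bravo/projects@2020-04-30 bravo/projects@2020-05-01 | \
        zfs recv alpha/archive/projects    # receiver must already hold the earlier snapshot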
I've been using a dual-link configuration of 1Gb LAN (QoS/VLAN) & WLAN (Unifi AP HD + Intel AX card + priority QoS/VLAN) on my workstation without coming close to saturation yet. My workstation-to-NAS link bypasses the pfSense filtering/processing completely. But I currently have an SFP+-capable network switch (Ubiquiti), and I previously looked at the Chelsio 10GbE cards for the NAS & workstation. I'm just waiting for the day saturation occurs, when I'll have an excuse to blow some cash, plus it would mean I get to pull some Cat6/Cat6A. Truthfully, I hope the network bottleneck takes a little while to appear so I can budget for the expense.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I've been using a dual-link configuration of 1Gb LAN (QoS/VLAN) & WLAN (Unifi AP HD + Intel AX card + priority QoS/VLAN) on my workstation without coming close to saturation yet

That's what I was getting at: I think your raidz2 pool will likely be able to serve that GBit link just as well as the SSD pool would. That gig link will handle roughly 100MB/sec, though it depends a bit on how IOPS-heavy your workload really is. Would you test and report back to see how editing on HDD raidz2 compares with editing on SSD over that GBit link? I'm curious.
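
When you run that comparison, something like fio gives numbers you can compare apples-to-apples; a minimal sketch, assuming hypothetical mount points and that fio is installed:

    fio --name=editsim --directory=/mnt/alpha/test --rw=randread \
        --bs=4k --size=2g --runtime=60 --time_based
    # rerun with --directory pointed at the SSD pool's dataset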
 

whitney

Cadet
Joined
Apr 27, 2020
Messages
7
That's what I was getting at: I think your raidz2 pool will likely be able to serve that GBit link just as well as the SSD pool would. That gig link will handle roughly 100MB/sec, though it depends a bit on how IOPS-heavy your workload really is. Would you test and report back to see how editing on HDD raidz2 compares with editing on SSD over that GBit link? I'm curious.
Yorick, I've collected almost everything needed to get the build to a working state for testing. One of the RAM modules I recently purchased appears to be wrong, and who would have thought getting a quality PSU would be so difficult (I've ordered from 4 different vendors listing "in stock" only to get a reply 2-5 days later that they're out of stock/backordered). Thankfully I have a reliable lead on a Seasonic GX-650 (for $140 :rolleyes: damn price gougers) and hopefully will soon begin the 5-7 day burn-in process for the new disks.
Unfortunately, I've already overspent my budget, & with the job market being so unpredictable it's currently hard to justify the extra $550 expense for 4 x WD Red SA500s (1TB). However, an SSD mirrored (2 x 2) pool will be a priority for the next upgrade for sure. Plus, the delay will allow me time to configure the RAIDZ2 pool to maximize its efficiency.
 