New Core build advice needed

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Boot: Any small and cheap enough SSD.
SLOG (not "cache"!): Must have Power-Loss-Protection and high endurance. Data Centre write-intensive drive, Optane, or Radian RMS-200/300.
But the key point is that you do NOT need a SLOG unless you have a workload with mandatory sync writes. No sync writes = no SLOG.
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
Boot: Any small and cheap enough SSD.
SLOG (not "cache"!): Must have Power-Loss-Protection and high endurance. Data Centre write-intensive drive, Optane, or Radian RMS-200/300.
But the key point is that you do NOT need a SLOG unless you have a workload with mandatory sync writes. No sync writes = no SLOG.
From what I read, I will need a SLOG.
"Using a SLOG for ZIL is recommended for database applications, NFS environments, virtualization, and data backups. In general, a storage environment with heavy synchronous writes benefits from using a SLOG for the pool ZIL." As I am using NFS, mainly as shared storage for my hypervisors that host VMs, databases and some moderately IO-heavy applications, this seems to apply to me.

Maybe this is a false assumption. Is it possible to add a SLOG down the road, or do I have to restructure the pool?
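For reference, the sync behaviour that determines whether a SLOG would even be used is a per-dataset ZFS property. A minimal sketch, assuming a pool named tank with a dataset vms (both names are placeholders):

```shell
# Show the current sync setting for the dataset (default is "standard")
zfs get sync tank/vms

# Honour every write as synchronous - safest for VM/database data
zfs set sync=always tank/vms

# Ignore sync requests entirely - fast, but risks losing in-flight
# writes on power failure, so not recommended for VM or db storage
zfs set sync=disabled tank/vms
```

With sync=standard, NFS clients requesting sync writes (the common case for hypervisor datastores) will hit the ZIL, which is where a SLOG pays off.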
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
NFS is often sync - although you can override that by setting sync=disabled. For plain file shares that's fine; for db apps etc. I wouldn't.
You can add a SLOG at any time. Note that depending on the pool layout you may not be able to remove it. If the pool is mirrors, then the SLOG is removable. If the pool is RAIDZ then the SLOG cannot be removed.
Edited as it appears I may be wrong
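As a sketch of adding one later (pool and device names below are placeholders, not from this build):

```shell
# Attach a mirrored log vdev (SLOG) to the existing pool "tank"
zpool add tank log mirror /dev/ada4 /dev/ada5

# Confirm the log vdev now appears in the pool layout
zpool status tank
```

No pool restructuring is needed; the log vdev sits alongside the existing data vdevs.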
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
NFS is often sync - although you can override that by setting sync=disabled. For plain file shares that's fine; for db apps etc. I wouldn't.
You can add a SLOG at any time. Note that depending on the pool layout you may not be able to remove it. If the pool is mirrors, then the SLOG is removable. If the pool is RAIDZ then the SLOG cannot be removed.
Cool thanks.

Well the 8 SATA ports went fast on the motherboard, so I opted for an LSI 9207-8i HBA with the 4x SATA breakout cables so I can expand - and if needed add in the SLOG drive pair down the road. As I shove more drives into this thing, the Icy Dock cages are looking more tempting, but boy are they pricey.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Note that depending on the pool layout you may not be able to remove it.
SLOG and L2ARC can always be removed. Data and special vdevs cannot be removed if at least one is raidz#.
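A sketch of the removal (pool and vdev names are placeholders - use whatever "zpool status" shows for the log vdev):

```shell
# Remove a mirrored log vdev by its name from "zpool status"
zpool remove tank mirror-1

# Or, for a single-device SLOG, name the device itself
zpool remove tank /dev/ada4
```

This works regardless of whether the data vdevs are mirrors or RAIDZ; only data and special vdev removal is blocked by a RAIDZ layout.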
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
SLOG and L2ARC can always be removed. Data and special vdevs cannot be removed if at least one is raidz#.
Can you? I thought you couldn't remove a SLOG with RAIDZ data vdevs. It's not something I have tried as I use mirrors anyway. Happy to be wrong though.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
For an SLOG I would usually not use a single drive but a mirror.

More information here:

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Can I just use consumer grade (i.e. small) SSDs for these,

YES! The thing that saves an SSD is the better quality of flash combined with wear leveling. We do not hear many stories of SSDs dying. If they do and it's a concern, I have a Resource in the Resources section about how to make a highly available boot solution.

I didn't find any WD Red drives under 500G, and it seems like a waste to dedicate a 500G drive for SLOG duty.

Not suitable as SLOG. You need a power loss protected SSD or Optane device.
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
Ok, so I will hold off on SLOG selection - if anything to give the wallet a rest until I get things online.

The revised "Updates" now includes the WD SN700/SA500 drive mix, the 9207-8i HBA card, along with a couple of SDP11 adapters.

Motherboard: ASRock Rack X570D4U-2L2T Motherboard
RAM: 64G of Nemix DDR4-3200 ECC UDIMMs

I will go from there once I get it back online with this refresh and 2-way mirrors. Really appreciate the schooling on ZFS/TrueNAS. Quite the rabbit hole :)
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
@dj423 - the HBA card - what are you planning on attaching to that?
It's fine with HDDs but may struggle with a quantity of SSDs.
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
@dj423 - the HBA card - what are you planning on attaching to that?
It's fine with HDDs but may struggle with a quantity of SSDs.

To start off a pair of WD SA500 SATA drives. May expand to more in the future. The others will plug into the motherboard. Is there an HBA model better suited to SSD duty?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Yes - the next range up - the 9300s, which have much greater bandwidth available to the card. Of course the 9300s cost more.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Of course
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Just make sure it's cooled properly. This may involve adding a fan or ensuring adequate airflow.
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
Just make sure it's cooled properly. This may involve adding a fan or ensuring adequate airflow.
It will be right in the path of an 80mm case fan, but I see they get pretty toasty, so I have a 40mm Noctua fan I can mount to the heatsink if that's not enough.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Run it up and when it's in use try the finger test. If swearing ensues - it's too hot.
 

dj423

Dabbler
Joined
Feb 25, 2023
Messages
29
Well it's moving in the right direction. Ran a few tests on the new WD 'Red' SA500 disks (mounted in the SDP11 adapters) with sync on and off just to baseline. This is with the SSDs connected directly to the mobo SATA interface, so this is just a disk test. Since I am not a fan of disabling sync on this setup, I will test again once I get the HBA card in. This is just a pre-build test to see what just changing the SSDs did for performance over the NVMe Patriot P300 consumer disks.


fio tests from within a VM instance, connected to the NAS via 10GbE over NFS:
Sync on
write: IOPS=5392, BW=21.1MiB/s (22.1MB/s)(4030MiB/191268msec); 0 zone resets
read: IOPS=69.0k, BW=270MiB/s (283MB/s)(3210MiB/11902msec)

Sync off
write: IOPS=43.0k, BW=168MiB/s (176MB/s)(3285MiB/19562msec); 0 zone resets
read: IOPS=68.8k, BW=269MiB/s (282MB/s)(3114MiB/11591msec)
Will report my test results on the 9207-8i card. If it is still slow I may need some suggestions for a SLOG drive setup, since Optane disks seem hard to get ahold of.
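For anyone wanting to reproduce numbers in this shape, a fio invocation along these lines would do it. The parameters below are assumptions for illustration, not the exact job used above; adjust --directory to the NFS mount inside the VM:

```shell
# Hypothetical 4k random-write job with an fsync after every write,
# which forces the sync path that a SLOG would accelerate
fio --name=syncwrite --directory=/mnt/nfs --rw=randwrite \
    --bs=4k --size=4g --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=1 --fsync=1
```

Dropping --fsync=1 (or setting sync=disabled on the dataset) gives the async numbers for comparison.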
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
HBA vs motherboard SATA shouldn't make a lot of difference.
Optane disks are easy to get ahold of - it depends on whether you can find the format you want.
M.2 - see NewEgg - don't be tempted by the smaller one. U.2 can be had on AliExpress / eBay.
 