Unfortunately, I haven't bought the drives for the SLOG yet. As indicated in the thread, my use case is simple file backups using Finder/File Explorer, then Time Machine backups, and rendering video directly to the NAS box. So, if it helps accelerate performance, setting up a SLOG sounds fair to me. If not, I won't, at least for my use case. I won't be hosting any VMs or databases.

It may help depending on context, and it shouldn't be hard to test. You can temporarily disable sync either on the dataset or on the SMB share. macOS clients will in various circumstances request an SMB2 FLUSH (which gets mapped to an fsync for the file). Some users have reported performance improvements from having Samba lie about whether it has actually fsynced the data. This of course comes at additional risk of data corruption. Anecdotally, since Samba started honoring FLUSH requests by default, I haven't seen any reports of corrupted Time Machine backups. I don't have a SLOG on my home NAS, but I'm also not concerned about Time Machine backups taking a while (as long as they complete).
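To test it on the dataset side, one sketch (assuming a hypothetical dataset named `tank/tm` — substitute your own) is to toggle the `sync` property, re-run the same Finder/Time Machine copy, and compare throughput:

```shell
# Check the current setting (the default, "standard", honours client fsync/FLUSH)
zfs get sync tank/tm

# Temporarily disable sync writes for testing only -- a crash or power loss
# during this window can lose the last few seconds of writes
zfs set sync=disabled tank/tm

# ...re-run the same copy from the Mac and compare transfer speed...

# Restore the safe default when finished
zfs set sync=standard tank/tm
```

If the async run is dramatically faster, sync writes are what's limiting you, and a SLOG (or accepting the risk of disabling sync) is where to look.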
> As far as I'm aware the sync is disabled by default on TrueNAS, right?

Default behaviour is a mix of sync and async writes, depending on the operation.
> Will enabling it improve the read/write speeds?

No, it will worsen write speed.
> Any light on the metadata for my use case? As Time Machine will have lots of small files and directories, do you think it would be a real benefit to set up a metadata drive? I don't want to wait several minutes to get the info of a directory or when performing a quick search in the directories.

You would be better off increasing the RAM.
> As far as I'm aware the sync is disabled by default on TrueNAS, right? Will enabling it improve the read/write speeds? I haven't tested the speed via NFS. But will do that soon.

Sync will make things slower! By a huge factor. This has been written numerous times now. Async is always faster than sync, because RAM is faster than everything, even Optane. The point of a SLOG is to make sync suck less in case you absolutely need it. It will never be nearly as fast as async.
> Sync will make things slower! By a huge factor. This has been written numerous times now. Async is always faster than sync, because RAM is faster than everything, even Optane. The point of a SLOG is to make sync suck less in case you absolutely need it. It will never be nearly as fast as async.

It's all clear now. So in short: if not using VMs and databases, a SLOG isn't required and won't be much help for normal storage, including Time Machine. In addition, sync is required when using VMs and databases, and to speed up those operations you need the SLOG; otherwise not. Async is already fast, and one does not need sync for normal storage. Did I understand it correctly?
> It's all clear now. So in short: if not using VMs and databases, a SLOG isn't required and won't be much help for normal storage, including Time Machine. In addition, sync is required when using VMs and databases, and to speed up those operations you need the SLOG; otherwise not. Async is already fast, and one does not need sync for normal storage. Did I understand it correctly?

As long as you don't use sync writes there is no need for a SLOG, and sync writes are required by very specific workloads. So yes, you understood correctly and are good to go.
> Default behaviour is a mix of sync and async writes, depending on the operation.

Oh, I see. So it can switch between these two depending on the operation, then?
> No, it will worsen write speed.

Got it, got it.
> You would be better off increasing the RAM.

Umm, are you sure adding a metadata drive will not help in my use case, but mostly the RAM? I know ZFS loves RAM ;)
> As long as you don't use sync writes there is no need for a SLOG, and sync writes are required by very specific workloads. So yes, you understood correctly and are good to go.

Woohoo. Thank you guys. I'm still learning!
> Umm, are you sure adding a metadata drive will not help in my use case, but mostly the RAM? I know ZFS loves RAM ;)

The issue with a metadata (special) vdev is that it's actually striped alongside the other vdevs: if you lose it, you lose your entire pool, and as such you need to match the parity level of the other vdevs (for example, if you have a RAIDZ2 vdev you need a 3-way mirror metadata/special vdev).
> The issue with a metadata (special) vdev is that it's actually striped alongside the other vdevs: if you lose it, you lose your entire pool, and as such you need to match the parity level of the other vdevs (for example, if you have a RAIDZ2 vdev you need a 3-way mirror metadata/special vdev).

Oh, I know this. But wait, OMG. Does it mean that however the data vdev is configured, you need the same configuration for the metadata as well? I mean, if I have RAIDZ2, I would need the same configuration for the metadata too? I have never used it before. Do you have any experience with metadata vdevs? I mean, does it actually help to find directories quickly?
> Adding RAM would allow you to increase performance and, if you get at least 64GB of ARC, the ability to add an L2ARC drive which you could then set to only store metadata (and possibly make it persistent as well): such a drive is not striped into the pool and, being just cache, doesn't result in data loss when it dies.

Having 128GB as of now.
> Oh, I know this. But wait, OMG. Does it mean that however the data vdev is configured, you need the same configuration for the metadata as well? I mean, if I have RAIDZ2, I would need the same configuration for the metadata too? I have never used it before. Do you have any experience with metadata vdevs? I mean, does it actually help to find directories quickly?

It means you should have the same level of redundancy. If you place your data on a RAIDZ2, that means you expect to be able to lose 2 disks and still retain your data. So you should place the metadata on a three-way mirror, because only then can you lose 2 disks of your metadata vdev and still retain your data.
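To make that concrete, a sketch of adding a matching special vdev to a hypothetical pool named `tank` that already has a RAIDZ2 data vdev (device names are placeholders; on TrueNAS you would normally do this through the web UI rather than the CLI):

```shell
# Add the special (metadata) vdev as a 3-way mirror, so it can also
# survive the loss of 2 disks, matching the RAIDZ2 data vdev.
zpool add tank special mirror /dev/sdx /dev/sdy /dev/sdz

# Verify the resulting layout
zpool status tank
```

Note that, like any vdev addition, this is effectively permanent on older OpenZFS versions, so plan the layout before committing.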
> It means you should have the same level of redundancy. If you place your data on a RAIDZ2, that means you expect to be able to lose 2 disks and still retain your data. So you should place the metadata on a three-way mirror, because only then can you lose 2 disks of your metadata vdev and still retain your data.

Got it. That's what I was thinking!
> Lose either your data or your metadata vdev and your entire pool is lost. The metadata vdev is not a cache. It's where the metadata is stored.

Yes, yes. Of course, I know it's not a cache but that it stores pool metadata. xD
> If you place the metadata on a single drive and that fails - pool gone. If you place the metadata on a two-way mirror and two drives die - pool gone. So if you place your data on a RAIDZ2 we just assume there is a reason that you plan to be able to cope with the loss of 2 disks. So we just assume that you would want your metadata vdev to also be able to cope with the loss of 2 disks. That's all. You can build to whatever risk you are willing to tolerate.

Yes, finally cleared it up again.
1. Will adding a metadata vdev help achieve faster access to my pool?
2. I've been testing this NAS (not in production yet) with spare components I had from an old machine. Soon I'll have the correct parts for it. Will consumer hardware (a Core CPU with DDR5) work, or should I really go for the traditional ECC setup paired with a Xeon/EPYC CPU?
> It will probably help directory traversal: in which meaningful way is hard to tell; with that amount of RAM you should consider an L2ARC before a metadata vdev.

Yes, need that only. Having 128GB. Seems like first I'll need to test and then check. Just a quick question: if I set up a drive for metadata, how would I know the difference? Any particular method to test?
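The L2ARC-for-metadata route suggested above can be sketched like this (pool, dataset, and device names are all placeholders; the module parameter path is for Linux-based TrueNAS SCALE):

```shell
# Add an L2ARC (cache) device -- unlike a special vdev, it is not
# striped into the pool, so losing it does not endanger your data.
zpool add tank cache /dev/nvme0n1

# Restrict the L2ARC to caching metadata only for this dataset
zfs set secondarycache=metadata tank/timemachine

# Persistent L2ARC (OpenZFS 2.0+, enabled by default there):
# confirm the rebuild parameter is on so the cache survives reboots
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
```

Afterwards, the L2ARC hit/miss statistics in `arc_summary` give you a before/after comparison for whether it's actually helping.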
> We usually advise against the use of consumer-grade hardware, as well as latest-gen (read: bleeding-edge) technology.

I got your point :)
Depending on your needs you could totally go for a "consumer" CPU with ECC support; for me, ECC is not optional: we spend lots of money and computing power to achieve data integrity and resiliency, so why risk jeopardizing such efforts with non-ECC RAM? Put another way: either you care about your data and use ECC, or you don't.
> Yes, need that only. Having 128GB. Seems like first I'll need to test and then check. Just a quick question: if I set up a drive for metadata, how would I know the difference? Any particular method to test?

Likely exploring the arc_summary data.

> Any mATX board in SuperMicro with the LGA3647 or LGA4189 socket?
> I remember macOS using sync writes with SMB as default though.

Yes, at least for Time Machine. But there's no visible benefit to adding a SLOG to speed up a background task, which simply takes as much time as it needs to complete without the user noticing anything.
> As stated in the post, the NAS will be used mainly for Time Machine backups and video will be rendered directly to the NAS. It has 8x10TB HDDs and a 10GbE network. Will a SLOG be beneficial for my use case? Will it help me accelerate write speeds?

No and no.
Thank you so much. Your help is greatly appreciated.
There are 3 models for each socket that fit these requirements, respectively:
- for LGA3647, the X11SPM-TPF, X11SPM-TF, and X11SPM-F;
- for LGA4189, the X12SPM-TF, X12SPM-LN6TF, and X12SPM-LN4F.

For the other requirements, please see for yourself. You likely won't find both 10Gbps SFP+ and Base-T on the same board.
> Yes, at least for Time Machine. But there's no visible benefit to adding a SLOG to speed up a background task, which simply takes as much time as it needs to complete without the user noticing anything.

Yes, it's all clear now.
> No and no.

Yes, the SLOG thing is already clear. Please see post #26. Now I'm working on the metadata drive question: whether I would need it or not.
For maximal write performance, you want to set sync=disabled on the video dataset to make sure that all writes are asynchronous. Contrary to databases or virtual machines, there's no data integrity issue here: if a rendering is interrupted for whatever reason, you sigh, delete the partial files, and run the rendering again.
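Since `sync` is a per-dataset property, you can do this without touching the backup share. A sketch, assuming hypothetical dataset names `tank/video` and `tank/timemachine`:

```shell
# Async-only for the render target: fastest writes; an interrupted
# render just means deleting partial files and rendering again.
zfs set sync=disabled tank/video

# Keep the default (honour client fsync/FLUSH) on the backup dataset
zfs set sync=standard tank/timemachine
```

This way you get the write speed where it's harmless and keep the safety where it matters.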
> You may also force async on the SMB share for Macs and see if it helps. But I would keep the default behaviour on the Time Machine share, for safety.

Currently it's at the default. The weird thing is that the same SMB setting works much faster on the Windows side, but it's like one fourth of the speed on the Mac side. This is a Mac Pro with an AQC107 NIC, and the Windows system has the same NIC and the same CPU. Both use the same switch and the same cable as well.
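For reference, the "have Samba lie about fsync" knob mentioned earlier in the thread is Samba's `strict sync` parameter (its default changed to `yes` in Samba 4.7, which is when Samba started honoring FLUSH by default). A hypothetical share-level override in `smb.conf` would look like:

```ini
[macshare]
    path = /mnt/tank/macshare
    # Acknowledge SMB2 FLUSH without actually fsyncing. Faster for
    # macOS clients, but risks losing/corrupting recent writes on a
    # crash or power cut -- do not use this on the Time Machine share.
    strict sync = no
```

On TrueNAS you would set this as an auxiliary parameter on the share rather than editing `smb.conf` directly.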
You can run the zilstat command during a write workload to see if you're actually making use of the ZIL (ZFS Intent Log), which will tell you whether you could benefit from a SLOG.
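For example, while the Mac is writing over SMB you could run it with a one-second reporting interval:

```shell
# Print ZIL activity every second; sustained non-zero operation and
# byte counts mean sync writes are hitting the ZIL, so a SLOG could
# help. All zeros means your workload is effectively async already.
zilstat 1
```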