Split vdev, break pool, best performance

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
3.- People here do not want to help, only brag.

Would you care to give us an example of posts where people didn't help but only bragged? I eyeballed all the posts in this thread and only found either help (direct or links), answers (even ones like danb35's firmly negative but nevertheless correct one in #2), or questions designed to pull out information needed in order to help provide answers. We can take care of any useless braggarts (points to mod hat).
 

aseiras

Dabbler
Joined
Jan 4, 2023
Messages
12
Block storage isn't that much different on NFS. RAIDZ3 is still a terrible fit. The linked article does a deep dive explaining why.
So, NFS is not better than iSCSI?

I understand I have to reformat my current setup and convert to mirrors (still trying to understand/learn how that would work with my hardware), but then I have to 'share' the space to my Proxmox. If iSCSI is bad and NFS is not better, how?

[attached screenshot of the current setup]
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Not sure if this might help... I have no idea how to get the RAID (ZFS) info at this point...
Wow.... you have 20 disks and only 1 vdev? Performance must be pretty lousy when compared to striped mirrors.
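For reference, "striped mirrors" just means a pool made of several 2-way mirror vdevs. A rough sketch of what that looks like from the CLI, with made-up pool and disk names (you would normally build this from the TrueNAS UI, and your device names will differ):

# Hypothetical 8-disk layout: four 2-way mirrors striped together
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
# Check the resulting layout
zpool status tank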
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So, NFS is not better than iSCSI?
Well, it's different, but it's not magic. Block storage is a sucky workload that does not benefit from many of the optimizations (aka "cheats") that can be applied to file storage to hide the suckiness of working over the network instead of directly on a local disk.
 

aseiras

Dabbler
Joined
Jan 4, 2023
Messages
12
Wow.... you have 20 disks and only 1 vdev? Performance must be pretty lousy when compared to striped mirrors.
Well, that is the reason for my post... thanks.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Do also note that at 20% utilized space you are in the performance sweet spot for block storage.
[Graph: Delphix performance vs. pool occupancy]
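If you want to see where your pool sits right now, something like this shows allocation and fragmentation (the pool name is just an example):

# Show size, allocated/free space, capacity % and fragmentation
zpool list -o name,size,allocated,free,capacity,fragmentation tank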
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So, NFS is not better than iSCSI?

The protocols are different but many of the underlying problems remain similar. iSCSI is typically limited use, such as if you are using it for ESXi datastores; you can store your datastore and your vmdk on iSCSI, but you cannot easily store your ISO file library on that, because the volume is formatted VMFS. With NFS, you can have a share that can be mounted on both your ESXi hypervisors AND on a VM that you use for downloading or mirroring new ISO's from upstream Internet sources. As an example. I realize your specifics may be different.

iSCSI also optimizes easier for ZFS blocksize/recordsize issues; I believe that the modern TrueNAS takes care of a lot of this for you. NFS users should take care to analyze their VM disk blocksizes and stuff to see how that interacts with ZFS block/recordsizes. NFS probably backs up more easily due to the use of discrete files for vmdk's, which also makes recovery of specific VM's easier, etc. However, iSCSI wins if you are just looking for a SAN-like storage service. iSCSI also multipaths better than NFS.
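If you want to sanity-check what you currently have, something along these lines works (the dataset and zvol names are placeholders):

# Recordsize of a filesystem dataset shared over NFS
zfs get recordsize tank/vmstore
# Block size of a zvol exported over iSCSI
zfs get volblocksize tank/vm-zvol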

No clear winners, sorry, I know that feels kinda useless. And also somewhat ESXi/VMFS-centric.

but then I have to 'share' the space to my Proxmox, if iSCSI is bad and NFS is not better, how?

Pick the lesser of two evils. I've tried to dump various factors that might influence your choice above.

Either way, please, please really do make sure to read my post "The path to success with block storage" linked above. There is MUST KNOW information in there, and further links to additional subtopics. This is a HUGE topic with lots of sharp, pointy, stabby, razor edges.
 

aseiras

Dabbler
Joined
Jan 4, 2023
Messages
12
The speed issue was resolved with zfs sync=disabled. Even after changing the pool to 10 vdevs, all of them mirrors, adding a cache vdev, etc., the performance was still terrible, so I tried zfs sync=disabled and speed went to what it should be on this type of setup. Write speeds are now excellent and I can actually take advantage of the 10 Gb NIC.
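For anyone landing here later, the knob in question is the per-dataset/zvol sync property; something like this, with a placeholder dataset name:

# Check the current setting
zfs get sync tank/vm-zvol
# Turn off synchronous writes (fast, but see the data-safety caveats below)
zfs set sync=disabled tank/vm-zvol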
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
At this point you have a few choices:
1. Use sync=disabled, which is as fast as things can go. However, it isn't strictly data-safe. If you have a sudden unexpected power outage / kernel panic then you might lose up to 5 seconds of write transactions. This can be a disaster with block storage or VMDKs, as it can corrupt the virtual disk.
2. Go back to sync=standard and slow down, a lot.
3. Use a suitable (and I do emphasise that word "suitable") SLOG device to accelerate the sync response. This can speed things up to somewhere between sync=disabled and sync=standard, but will never be as fast as sync=disabled.

Sync writes are slow because the NAS won't report back to the calling application that the write is complete until it's been committed to permanent storage - your pool. With sync=disabled the NAS reports back a successful write as soon as possible - which is essentially after the data has been written to RAM but before it's committed to disk. What a SLOG does is enable the NAS to write the ZIL (ZFS Intent Log) to RAM and to the SLOG device and, as the SLOG is permanent storage, then report back to the writing app that the write has been committed. The NAS will then go on to write the data in RAM to the permanent disks in its own time (never reading from the SLOG). It's only in the event of an outage that the SLOG is read, to ensure that all confirmed writes have been successfully committed to permanent storage.

Thus a SLOG needs:
1. Power Loss Protection - it must hold data in the event of a power outage
2. High Endurance - it's always being written to and, in normal operation, never read
3. Low Latency - as low as possible to speed up the write acknowledgement.
4. High Speed - kinda obvious.

Overall, its performance / response time should be better than that of the pool it's added to, otherwise it's pointless.
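Once you have suitable devices, adding them as a mirrored SLOG is a one-liner; a sketch with placeholder pool and device names:

# Attach two NVMe devices as a mirrored log vdev
zpool add tank log mirror nvd0 nvd1
# Confirm the log vdev appears
zpool status tank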

Do you have room for 1 or 2 NVMe drives in the server you have? How many PCIe slots do you have spare and does the server support bifurcation?
Is this work or home lab?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Nugent is right. You need a SLOG. And mirrored SLOGs.

See if you can find 2 Intel P3700 800 GB drives. They will work excellently for you!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
@kspare - I disagree. The P3700s are not vastly better than his existing pool of SSDs, albeit they are enterprise write-intensive rather than consumer SSDs - so they are better.
I think the OP needs something a bit better, which is why I asked about his slot availability and bifurcation support.
Either an Optane 900P or better, such as the 4801X. Mirrored if it's enterprise work; for a homelab, a single one will be fine.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I've done a ton of testing with the 900P, Radian RMS-200, and 4801X (albeit only the 100 GB model), and I settled on the P3700. The Radian worked, but for my workflow the 8 GB of space was just not enough and I couldn't find any 16 GB models; 8 GB of SLOG space actually became a bottleneck. For what he's doing, a couple of P3700s will be cheaper than a 900P and he'll have his performance.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
A 900P is dirt cheap at this point: $180.00 or £159.00. I also feel that the 8 GB model of the RMS units isn't enough and would be happier with the 16 GB models (hen's teeth / rocking horse waste).
 