Question/Idea for Cache Performance

dashtesla

Explorer | Joined: Mar 8, 2019 | Messages: 75
So here's why I'm even posting this: I just built a new FreeNAS VM on Hyper-V, passing through all the hard drives properly, and it works a treat, no complaints. In fact, I'm running other Hyper-V VMs off iSCSI that's also hosted on FreeNAS (a separate array, this one 3x 860 EVO SSDs) and I get way better performance than anything I could run natively on the physical machine.

There is one thing that bugs me, though, and it has to do with the way FreeNAS, or maybe ZFS, writes data to the drives.

So I have an SMB share being fed from a Samsung T5 2TB USB drive connected to the same physical server (the drive is plenty fast), and I was getting about 300 MB/s write speeds to FreeNAS. I know for a fact the drive can read faster than that, and the network is a Hyper-V switch, so 10 Gbps.

The pool is 8x 5TB Seagate Backup Plus portable drives (shucked and connected to the server's SATA 3 ports through an Icy Box cage), plus a 970 EVO 500GB as L2ARC cache (I already had the hardware, which is why I used it).

So now to the whole point of this. While I was transferring files at around 300 MB/s, I noticed the active times for all the drives (in Windows Task Manager) weren't really at 100%, which is odd; it's as though some drives are writing more data than others, or getting more/less random writes. That might be normal ZFS behaviour, but as soon as a Time Machine backup starts and my Mac sends data to FreeNAS at the same time as I'm moving files over SMB, everything slows to a crawl, like 20-30 MB/s, sometimes even slower. I can see the drives from here, and usually all 8 LEDs are active, but when the pool is doing multiple things at once they kind of interchange, as though the drives have more to do at once. But then why can't the 970 EVO SSD help with that?
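
In case it helps, I figure the per-disk imbalance could be confirmed from the FreeNAS shell rather than Task Manager; something like this should show the write distribution per device (assuming the pool is named tank, which yours probably isn't):

    # Per-vdev/per-disk bandwidth and IOPS, refreshed every second
    zpool iostat -v tank 1

If some disks consistently show more write operations than others during the transfer, that would match what the LEDs suggest.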

Why can't the SSD take all the heavy lifting and the IOPS, and give the hard drives a smooth, sequential stream of data to write? Is this a missing feature, or something that needs to be configured?

Thanks :D
 

sretalla

Powered by Neutrality
Moderator | Joined: Jan 1, 2016 | Messages: 9,700
First thing to understand about SLOG is that there's a limit of about 5 seconds where it can be useful. (https://www.ixsystems.com/community/threads/calculation-of-ssd-size-for-slog-zil-device.17515/)
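
You can see where that figure comes from directly: on FreeBSD-based FreeNAS the transaction group flush interval is a sysctl, and its default is 5 seconds (tunable names can differ between versions, so treat this as a sketch):

    # How often ZFS forces a transaction group to flush to the pool (seconds)
    sysctl vfs.zfs.txg.timeout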

If your SLOG absorbs 5 seconds' worth of pressure and your pool can't take that data from the SLOG, your transfer slows to the speed at which you can get data off the SLOG to allow new data in.

If your pool structure is poor for IOPS, then your overall IOPS won't see a big improvement for synchronous writes, as the data needs to hit either a SLOG or a pool disk to count as written.
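
And note that a SLOG only matters for synchronous writes in the first place; SMB transfers are usually asynchronous by default, in which case the SLOG isn't in the write path at all. You can check and test this per dataset with standard ZFS commands (the dataset name here is just a placeholder):

    # Show the sync policy per dataset (standard / always / disabled)
    zfs get sync tank/share
    # Force every write on a dataset to be synchronous, for testing
    zfs set sync=always tank/share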

You aren't helping a lot by not giving your system hardware specs... perhaps your RAM amount is too small to be running L2ARC, and that's where you're taking a big hit in performance when your Time Machine is doing directory comparisons and requesting a lot of metadata from the pool.
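
Every block cached in L2ARC needs a header held in RAM, so an L2ARC on a small-RAM system can actually shrink the useful ARC. As a rough check from the FreeNAS shell (exact kstat names vary by version):

    # Total ARC size, and the RAM consumed by L2ARC headers
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.l2_hdr_size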

I won't speculate further without more hardware information to go on. Please also provide the output of zpool status -v.
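
That is:

    zpool status -v

Output of zpool list -v wouldn't hurt either, so we can see how full and how fragmented the pool is.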
 

dashtesla

Explorer | Joined: Mar 8, 2019 | Messages: 75
It's an HP Z420 workstation: Xeon E5-2690, 128GB of 1600 MHz ECC RAM, a Quadro P4000, and the aforementioned hard drives plus SSDs for various purposes. The motherboard has 10 SATA ports plus an IT-flashed Dell H310 SAS HBA. I'll be adding 3x 4TB WD Reds for an unrelated array, plus an LTO-6 tape drive, so I'll also add a SAS expander soon since it's running low on available ports.

I will look into the SLOG with a clear head and try to figure out how to take better advantage of the cache.

The FreeNAS VM has 32GB of RAM, which I believe is plenty for what it's doing, but I can increase it to 48/64GB. I do need RAM for other VMs, though, so I'd rather only give FreeNAS what it really needs at this point, as long as that doesn't sacrifice any performance.
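
Before I change anything I'll check what the ARC is actually using inside the VM; if I understand right, these sysctls should show it (names might differ on my version):

    # Current ARC size vs. the configured ceiling, in bytes
    sysctl -n kstat.zfs.misc.arcstats.size
    sysctl -n vfs.zfs.arc_max

If the ARC is already pinned at the ceiling, that would tell me whether 32GB is really enough.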
 