dashtesla
Explorer
- Joined: Mar 8, 2019
- Messages: 75
So here's why I'm even posting this: I just built a new FreeNAS VM under Hyper-V, passing through all the hard drives properly, and it works a treat. No complaints about it; in fact I'm running other Hyper-V VMs off iSCSI that's also hosted on FreeNAS (another array, this one 3x 860 EVO SSDs), and I get far better performance than anything I could run natively off the physical machine.
There is one thing that bugs me, though, and it has to do with the way FreeNAS, or maybe ZFS, writes data to the drives.
I have an SMB share, and copying from a Samsung T5 2TB USB drive connected to the same physical server (which is plenty fast), I was getting about 300 MB/s write speeds to FreeNAS. I know for a fact the drive can read faster than that, and the network is a Hyper-V virtual switch, so effectively 10 Gbps.
The pool is 8x 5TB Seagate Backup Plus portable drives (shucked and connected to the server's SATA 3 ports through an Icy Box cage), plus a 970 EVO 500GB as L2ARC cache (I already had the hardware, which is why I used it).
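In case the layout matters, this is roughly what I mean; just a sketch from the FreeNAS shell, assuming the pool is called tank and the NVMe shows up as nvd0 (placeholder names, not my exact setup):

    zpool status tank          # lists the 8 data disks plus the cache device
    zpool add tank cache nvd0  # how the L2ARC cache device got attached

So the 970 EVO is in there as a cache vdev, not as a log device.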
So now to the whole point of this: while I was transferring files and getting around 300 MB/s, I also noticed the active times for all the drives (in Windows Task Manager) weren't quite at 100%, which is odd; it's as though some drives are writing more data than others, more or fewer random writes. That might be normal ZFS behaviour, but as soon as the Time Machine backup starts and my Mac sends data to FreeNAS at the same time as I'm moving files over SMB, everything slows to a crawl, like 20-30 MB/s, sometimes even slower. I can see the drives from here, and the LEDs are usually all 8 active; when it starts doing multiple things at once they kind of alternate, as though the drives have more to do at once. But then why can't the 970 EVO SSD help with that?
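If it helps to see what I'm looking at, this is how I've been watching the writes spread across the disks (again a sketch, tank is a placeholder pool name):

    zpool iostat -v tank 1    # per-vdev read/write ops and bandwidth, refreshed every second

That shows each disk's write ops and bandwidth side by side, so the uneven activity is easier to see than in Task Manager.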
Why can't the SSD take all the heavy lifting and the IOPS, and give the hard drives a smoother, more sequential stream of data to write? Is this a missing feature, or is it something that needs to be configured?
Thanks :D