Not sure my understanding of RAM Cache is correct


Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
Hey all,

So I'm using iSCSI to connect two ESXi hosts to my FreeNAS system.

I'm using 10Gb connections, and with iperf I can get 9.89 Gb/s between my FreeNAS host and the ESXi systems.
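For reference, this is roughly how I run the test (the IP is a placeholder, not my actual address):

    # On the FreeNAS box, run iperf as a server:
    iperf -s

    # On an ESXi host (or a VM on it), run the client against the
    # FreeNAS address for 30 seconds (10.0.0.10 is a placeholder):
    iperf -c 10.0.0.10 -t 30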

Because iSCSI is being used, sync=always would be the only way to ensure writes are immediately committed to disk. None of this data is mission critical and our UPS is pretty good, so I forced sync=disabled instead.
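Setting that is a one-liner; a sketch of what I mean (the pool/zvol path 'tank/VM-Store-1' is a placeholder, not necessarily my actual layout):

    # Disable synchronous writes on the zvol backing the iSCSI extent
    # ('tank/VM-Store-1' is a placeholder path):
    zfs set sync=disabled tank/VM-Store-1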

So, with the above information and 128GB of RAM, I'd expect to see some pretty decent read/write speeds when running artificial tests like CrystalDiskMark.

However I don't get anything amazing.

Here are results from a couple of days ago, after a reboot of the FreeNAS system:
[Screenshot: CrystalDiskMark results shortly after reboot]


And here are some results from today, after the system had been online for a few days of "normal VM" work:
[Screenshot: CrystalDiskMark results after a few days of normal VM workload]

The Seq Q32T1 write result is basically maxing out the 10Gb link.
The Seq write result is also doing fairly well.
But the others are not great for data that I'd expect to be served from RAM.


This is a pretty powerful server, which should be caching all writes in RAM before flushing them to disk.
Am I crazy in thinking that, at a minimum, the write rates should be higher?

If you have any other tests you'd like me to run, I'm happy to perform them.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The performance you're getting may be 'as good as it gets'. :)

What are the details about your system (per the forum rules)?

How full is your pool? And especially, how full are the zvol datasets you're using for block storage? FreeNAS block storage works best when utilization is low, and performance will degrade if this isn't the case.

Also, how do you have your pool configured: mirrors? RAID-Z'n'? Mirrors are recommended for the best performance when serving out block storage.
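A quick way to check both fullness and fragmentation from the CLI, as a rough sketch ('tank' is a placeholder pool name):

    # Pool fullness and on-disk fragmentation:
    zpool list -o name,size,allocated,free,capacity,fragmentation tank

    # Per-dataset/zvol usage within the pool:
    zfs list -o name,used,avail,refer -r tank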

Here's another forum thread that may interest you:

https://forums.freenas.org/index.ph...res-more-resources-for-the-same-result.28178/

Good luck!
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41

Thanks for the response.

I've got most/all of my specs in my sig; the motherboard etc. is unknown, as it's just a "Cisco UCS" server.
The RAM is DDR4-2400 in a quad-channel config on both CPUs - 8 sticks in total.
Is there anything else specific you wanted? I'll do my best!


I'm set up with 8 disks in a RAID 10 config, i.e. 4 mirrored pairs striped together.
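For clarity, the layout is equivalent to something like this (pool and disk names are placeholders, not my actual devices):

    # 4 mirrored pairs striped together ("RAID 10"):
    zpool create tank \
      mirror da0 da1 \
      mirror da2 da3 \
      mirror da4 da5 \
      mirror da6 da7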

With regards to my zpool, here's a screenshot - it's 25% full. Does this help?

[Screenshot: zpool usage, roughly 25% full]

I guess I'm just expecting the system to cache all of those writes in RAM, which should be super quick, but it doesn't behave like that.
Even with sync=disabled.
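For what it's worth, the setting itself is easy to confirm ('tank/VM-Store-1' is a placeholder path):

    # Confirm sync really is disabled on the zvol:
    zfs get sync tank/VM-Store-1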
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I see two possible factors here:
1) In the 4K cases, a significant part of the performance depends on latency and processing overhead, not on the actual throughput of anything. When the queue depth is sufficient (the 4K Q32T1 case), it mostly covers the latency. When the queue is short (the last case), this benchmark purely measures the latency of the whole stack: the test tool, the guest OS, ESXi, the network, FreeNAS, etc.
2) ZFS uses only part of your 128GB of RAM (IIRC 1/16th of it) for dirty data. That is why writes may still be limited by disk speed even if your whole dataset fits into ARC.
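That dirty data limit is visible as a sysctl on FreeBSD-based FreeNAS; a minimal sketch of inspecting and (cautiously) raising it, where the 8GB value is only an illustration:

    # Current limit for dirty (not yet flushed) data, in bytes:
    sysctl vfs.zfs.dirty_data_max

    # Example only: raise the limit to 8GB at runtime (tune with care):
    sysctl vfs.zfs.dirty_data_max=8589934592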
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41

Hey thanks for the info!

Are there any system tunables for point #1 that I can try out?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Thanks, @Deadringers: it looks like you're doing most everything right, i.e., using mirrors with lotsa RAM. Your pool doesn't appear to be full, but your VM-Store-1 dataset is close to 60% full, and this might be affecting performance.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Dataset usage does not really matter. It is pool usage that should be kept low to reduce on-disk fragmentation, and 25% there is fine.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yes, sir - that's why I said 'might'. :)
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
Thanks for the responses, people!

So it looks like:

1: Lots of RAM does not mean you'll automatically get lots of caching for your data. It can and will happen, just not as often as you'd like.

2: If I want more speed, I'd have to go for something like an all-flash pool, or at least an NVMe PCIe drive.
 