Upgrading to an SSD pool doesn't increase performance

chiccol

Cadet
Joined
Jan 2, 2020
Messages
5
Hi guys,

I know there are a lot of threads with the same question, but I haven't solved mine.

I have FreeNAS installed. Until yesterday the pool was composed of
4 x 4TB WD Red + an Intel SSD D3-S4610 480GB as SLOG and a Samsung 850 EVO 500GB as cache (L2ARC).

I decided to upgrade to 4 x D3-S4610 and remove the SLOG and cache.

FreeNAS is connected via NFS to an HP DL80 Gen9 running ESXi 6.5.

The VMs on it are a Server 2012 as DC and a Server 2016 as RDS.

After upgrading, performance has remained the same; using CrystalDiskMark I always get the same results.
I'm not interested in sequential read and write, because there the bottleneck is the 1Gb NIC (roughly 110-115 MB/s usable).

But getting 8MB/s in RND4K is, I think, not enough.
I tried with sync = standard, = always, and = disabled,

and the results increase, but not as much as I expected.
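
For reference, here is how those settings map to the CLI; "tank/nfs" below is just a placeholder for the actual NFS dataset name:

# Cycle through the sync settings on the dataset
zfs set sync=standard tank/nfs
zfs set sync=always tank/nfs
zfs set sync=disabled tank/nfs
zfs get sync tank/nfs    # check which setting is currently active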
If I run the same test on my Samsung tablet, I get better performance.

Here's my hardware:

FreeNAS

mainboard: Gigabyte B250M-DS3H-CF
processor: i3-7100
RAM: 64GB DDR4 2400MHz
controller: LSI 9211-8i flashed to IT firmware
case: Fantec rack with 8 hot-swap bays

HP DL80 Gen9 server
processor: E5-2603 v3
RAM: 48GB ECC

After many tests I tried installing a test VM directly inside FreeNAS, with the VM disk on the SSD pool.
Sequential read increased to 520 MB/s and write to 450 MB/s,
but random read and write remain at 8-10 MB/s.

Can someone help me understand where the bottleneck is?
I'm so frustrated from testing and re-testing without any difference.

Thank you,
merry Christmas,
and stay safe!
 

chiccol

Cadet
Joined
Jan 2, 2020
Messages
5
result with sync = default
sync on.JPG


result with sync = off

sync off.JPG
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
result with sync = default

result with sync = off

Those results look essentially the same to me.

Much as I don't agree that the tool you're using is a good measure of any real-world performance, what it does indicate is that you're not doing any sync writes with whatever access method you're using for the test.
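
One way to confirm that from the FreeNAS side is to watch ZIL activity while the benchmark runs (assuming the zilstat script is available, as it usually is on a stock FreeNAS install):

# Print ZIL statistics once per second; all zeros means no sync writes are landing
zilstat 1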

You don't mention anywhere what your pool layout is... maybe it would help to show zpool list -v

A RAIDZ2 pool of 4 SSDs would be a bad thing if you want IOPS.
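
For comparison, here's a sketch of the high-IOPS layout; da1 through da4 are placeholder device names, so adjust for your system:

# Two striped mirrors from 4 SSDs: roughly double the write IOPS of one RAIDZ2 vdev
zpool create ssdpool mirror da1 da2 mirror da3 da4
zpool list -v ssdpool    # verify the resulting vdev layout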

Have you looked at this? :
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
My simple advice is to read up on how CrystalDiskMark tests; I think you may be confused about what those results mean. Do the test again, and this time instead of looking at MB/s, look at IOPS. Also, your drives are not speed demons, but they do have high IOPS ratings and very long write endurance. And the pool layout is a factor, as @sretalla commented.
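
As a quick sanity check on what 8MB/s of 4K random I/O actually means in IOPS terms:

# 8 MB/s divided by a 4 KiB transfer size:
echo $(( 8 * 1000000 / 4096 ))    # prints 1953, i.e. roughly 2,000 IOPS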
 

chiccol

Cadet
Joined
Jan 2, 2020
Messages
5
My simple advice is to read up on how CrystalDiskMark tests; I think you may be confused about what those results mean. Do the test again, and this time instead of looking at MB/s, look at IOPS. Also, your drives are not speed demons, but they do have high IOPS ratings and very long write endurance. And the pool layout is a factor, as @sretalla commented.

Thanks for both of your answers, @sretalla and @joeschmuck.

I'm posting the test with IOPS:

iops.JPG


The problem is, I don't see any difference in day-to-day use of the server; my guys in the office say that nothing has changed.

The CPU is also a bottleneck, because it hits 100% too often, and I'm trying to understand whether this is the best we can get or whether something is wrong.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The problem is, I don't see any difference in day-to-day use of the server; my guys in the office say that nothing has changed.
I suspect your pool is not configured properly to meet your needs, or it's possible that you are just expecting too much out of the NAS. You are using ESXi and running FreeNAS (or is it TrueNAS?) on top of that; are you giving it enough resources? (You only have a 2-core/4-thread CPU, not much for ESXi use, so what else are you running since you are using ESXi?) Exactly what is your use case? What are you expecting to achieve? Did you read the link @sretalla provided? Are you implementing those practices in your setup?

I know I'm asking a lot of questions, but it's easier to ask now instead of tossing messages back and forth over a few days.

I'm really curious what your use case is and what your expectations are for the NAS.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
4 x4tb wd red + intel SSD d3-s4610 -480gb SLOG
So from looking at your pool layout, what I can see is that you had a pool of two mirrored pairs (4 disks) with SLOG (and L2ARC, which probably makes no difference in this story).

If that single SLOG SSD was already not the bottleneck (as 45K IOPS probably isn't) in your overall system, then of course nobody notices any difference when you move to a pool layout of mirrored pairs of SSDs with approximately 90K IOPS capacity.

You can't make it seem faster if nobody is demanding more from the disks than before.
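
If you want to create that demand deliberately, a tool like fio can do it from the FreeNAS shell; the dataset path and job parameters below are just a starting point, not tuned values:

# 4K random writes with real queue depth against the SSD pool
fio --name=randwrite --directory=/mnt/ssdpool/test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=posixaio --size=1g --runtime=60 --time_based --group_reporting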

The problem is, I don't see any difference in day-to-day use of the server; my guys in the office say that nothing has changed.

The CPU is also a bottleneck, because it hits 100% too often, and I'm trying to understand whether this is the best we can get or whether something is wrong.
I suspect that you need to look at where this is coming from... is it on your ESXi box or on the FreeNAS box?
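
A quick way to check each side, assuming shell access to both boxes:

# On the FreeNAS box: show system processes, thread view, per-CPU stats
top -SHP

# On the ESXi host: esxtop opens on the CPU panel by default
esxtop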
 