Branislav Kirilov (Dabbler) · Joined Jun 20, 2016 · Messages: 22
I built a FreeNAS server that occasionally serves Windows machines booting from iSCSI zvols hosted on it. The pool has 4 Intel SSDs in RAIDZ1. There is a separate storage pool of 6 mechanical disks, also in RAIDZ1.
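For reference, this is roughly how each boot zvol was created (a sketch; the name and size below are placeholders, since the real ones were set up through the FreeNAS GUI):
Code:
# create a zvol for one Windows iSCSI boot disk (placeholder name/size)
zfs create -V 100G -o volblocksize=16K ssdstorage/winboot1
# confirm the volume size and block size that were actually set
zfs get volblocksize,volsize ssdstorage/winboot1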
My issue is that the performance I get from these SSDs is about what I should be pulling from a single one; at the very least it should be better.
Here are the tests. First, the SSD pool:
Code:
dd if=/dev/zero of=/mnt/ssdstorage/test/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 315.186943 secs (340668244 bytes/sec)

dd if=/mnt/ssdstorage/test/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 248.083857 secs (432814064 bytes/sec)
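If I have the RAIDZ1 math right, that write run works out to about 340 MB/s for the whole pool, i.e. roughly 114 MB/s of physical writes per SSD once parity is counted (340 x 4/3, spread over 4 drives). One caveat I should flag myself: if lz4 compression is enabled on the dataset, zeros from /dev/zero compress to almost nothing, so these numbers can overstate real throughput. A quick way to check (dataset name from the test above):
Code:
# see whether compression is skewing the dd result
zfs get compression,compressratio ssdstorage/test
# rerun with incompressible data (/dev/random is fast and non-blocking on FreeBSD)
dd if=/dev/random of=/mnt/ssdstorage/test/rand.dat bs=2048k count=5k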
Second, the pool of mechanical disks:
Code:
dd if=/dev/zero of=/mnt/storage/test/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 472.189081 secs (227396581 bytes/sec)

dd if=/mnt/storage/test/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 264.581848 secs (405825960 bytes/sec)
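Both dd runs are purely sequential, while the real iSCSI/NFS load is mostly small random writes, so a random-write test is probably more telling. A sketch, assuming fio is installed from packages (it is not part of stock FreeNAS):
Code:
# 4k random writes against the SSD pool for 60 seconds, queue depth 32
fio --name=randwrite --directory=/mnt/ssdstorage/test \
    --rw=randwrite --bs=4k --size=4g --runtime=60 --time_based \
    --iodepth=32 --ioengine=posixaio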
The pool is heavily fragmented, but with the type of usage we have that is inevitable.
Code:
NAME         SIZE  ALLOC   FREE  EXPANDSZ  FRAG   CAP  DEDUP  HEALTH  ALTROOT
ssdstorage  1.73T  1.29T   459G         -   46%   74%  1.00x  ONLINE  /mnt
storage     16.2T  5.45T  10.8T         -   16%   33%  1.00x  ONLINE  /mnt
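Since the SSD pool is at 74% capacity and 46% fragmentation, and block storage is usually said to suffer well before a pool fills up, these two properties can also be pulled directly (pool names from the output above):
Code:
# fragmentation and capacity for both pools
zpool get fragmentation,capacity ssdstorage storage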
Other info
Code:
last pid: 15020;  load averages: 0.25, 0.36, 0.29   up 76+01:01:43  16:02:32
70 processes:  1 running, 69 sleeping
CPU:  1.0% user,  0.0% nice,  3.5% system,  0.1% interrupt, 95.4% idle
Mem: 120M Active, 1118M Inact, 44G Wired, 8980K Cache, 1724M Free
ARC: 36G Total, 8318M MFU, 24G MRU, 11M Anon, 593M Header, 3551M Other
Swap: 20G Total, 80K Used, 20G Free
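The ARC line above comes from top; the same figures are available from the standard FreeBSD sysctl OIDs if anyone wants exact byte counts:
Code:
# ARC current size, target size, and MFU/MRU split (bytes)
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.c \
       kstat.zfs.misc.arcstats.mfu_size \
       kstat.zfs.misc.arcstats.mru_size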
So when I put a heavy write load on the server, this is all I get. See the screenshots from gstat and iostat; the numbers seem very low to me.
What is worse, everything connected to that pool, servers via iSCSI and VMs with qcow drives connected via NFS, gets extremely sluggish and unresponsive.
So I was wondering: could the bottleneck be somewhere other than the drives?
As I see it, it's the drives.
[Screenshots 1-3: gstat and iostat output under heavy write load]
Screen 3 shows the usual disk usage I see under heavy write, but the disks should be able to do around 200 MB/s. At least, around 200 is what I saw when I did the dd tests.
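For anyone who wants the numbers behind the screenshots, this is roughly how I was watching the disks (standard commands, 1-second interval):
Code:
# per-disk busy % and throughput, physical providers only
gstat -p -I 1s
# per-vdev throughput for the SSD pool
zpool iostat -v ssdstorage 1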
What am I missing, or is this all normal?