Herr_Merlin (Patron, joined Oct 25, 2019, 200 messages)
Hi all,
we are running FreeNAS as an NFS datastore for our lab VMware cluster.
Past hardware:
Some Supermicro X9 with 2x Xeon E5 quad-core v0 @ 3.4 GHz, HT disabled
Current hardware:
Some Supermicro X9 with 2x Xeon E5 ten-core v2 @ 2.8 GHz, HT disabled
Hardware which did not change:
256 GB DDR3-1600
3x HP H240 SAS 12G HBAs (one HBA holds all disks of the first pool, another all disks of the second pool, and the third holds all ZIL + L2ARC SSDs)
10G Intel NIC
2 zpools are in place:
First pool:
25x 1.8 TB 2.5" SAS 10k, configured as 12x 2-disk mirrors with 1 spare
2x HGST SAS 12G ZeusIOPS 800 GB as mirrored ZIL
1x Samsung SAS 12G PM 480 GB as L2ARC
Dataset and pool recordsize 16K, dedup on, compression lz4, sync always, atime off
Second pool:
14x 4 TB SAS 7.2k, configured as 6x 2-disk mirrors with 2 spares
2x HGST SAS 12G ZeusIOPS 800 GB as mirrored ZIL
1x Samsung SAS 12G PM 480 GB as L2ARC
Dataset and pool recordsize 16K, dedup off, compression lz4, sync always, atime off
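As a sanity check, the properties above can be confirmed from the FreeNAS shell. A minimal sketch, using `tank1` and `tank2` as placeholder pool names (the real pool names are not stated in the post):

```shell
# tank1/tank2 are placeholders -- substitute the actual pool names.
zpool status -x tank1 tank2          # health summary ("all pools are healthy" if OK)
zpool list -v tank1                  # vdev layout, incl. log (ZIL) and cache (L2ARC) devices
zfs get recordsize,dedup,compression,sync,atime tank1 tank2
```

`zfs get` on the pool name shows the root dataset's properties; child datasets may override them, so checking them per dataset is worthwhile too.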
Performance prior to the hardware change:
First pool:
~280 MB/s write
~1100 MB/s read
Second pool:
~95 MB/s write
~320 MB/s read
Performance after the hardware change:
First pool:
~25 MB/s write
~80 MB/s read
Second pool:
~20 MB/s write
~40 MB/s read
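The post does not state how these numbers were measured. One way to get comparable sequential figures directly on the server, taking the network out of the picture, is a local `fio` run (a sketch, assuming `fio` is installed and `/mnt/tank1` is a placeholder mountpoint for a dataset on the pool under test):

```shell
# /mnt/tank1 is a placeholder -- point --filename at a dataset on the pool under test.
fio --name=seqwrite --rw=write --bs=128k --size=4g --direct=1 \
    --ioengine=psync --filename=/mnt/tank1/fio.tmp
fio --name=seqread --rw=read --bs=128k --size=4g --direct=1 \
    --ioengine=psync --filename=/mnt/tank1/fio.tmp
rm /mnt/tank1/fio.tmp
```

If the local numbers are fine but NFS throughput is not, the problem is in the NFS/network path rather than the pools.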
We had issues with the dual quad-core configuration: under heavy load with dedup, NFS would time out because dedup peaked at 100% CPU.
To work around this we replaced the hardware with the 2x 10-core setup mentioned above. Since the old PSU was tied to the old board, we simply went for another Supermicro server with a new board, CPUs and a newer, more efficient PSU, configured the BIOS identically, added the HBAs, disks etc. and booted the system.
We then reconfigured the NFS service to be allowed to use a maximum of 18 cores, and that's it.
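Whether that thread-count change actually took effect can be verified from the shell (a sketch; the `vfs.nfsd` sysctls exist on newer FreeBSD-based builds and may be absent on older ones, hence the redirect):

```shell
# Count the running NFS server processes/threads:
ps ax | grep '[n]fsd'
# On newer FreeBSD NFS servers the thread limits are also exposed as sysctls:
sysctl vfs.nfsd.minthreads vfs.nfsd.maxthreads 2>/dev/null
```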
I am quite confused about what might be the cause of this issue:
- Pools are not degraded
- SMART for all disks is fine
- Network tests show full 10 GbE transfer rates
- The system has way more compute power now...
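Since all the static checks pass, watching the system live during a slow transfer may narrow things down. A sketch using standard FreeBSD/FreeNAS tools:

```shell
# Per-vdev bandwidth and IOPS every 5 s -- a single slow disk or HBA port stands out:
zpool iostat -v 5
# Per-physical-disk %busy and latency:
gstat -p
# Kernel threads sorted by CPU use -- look for a pegged zfs or nfsd thread:
top -SH
# ARC size vs. target (dedup tables live in ARC metadata):
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
```

Given that the HBAs moved to a new board, it would also be worth confirming the H240s negotiated their full PCIe link width in the new slots.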