Hi,
I have just created a new storage server with the following basic specifications:
CPU: 2x E5-2650v1
RAM: 192GB (24x8GB ECC DDR3 1600)
NIC: Intel X520DA-2
HBA: LSI SAS 9200-8e
JBOD: SC847 E16-RJBOD1
HDD: currently 27x 4TB ST4000DM000
FreeNAS: 9.10.2-U1
zpool (will be adding another raidz2 of 9 drives soon):
Code:
~ zpool status -v
  pool: FloppyD
 state: ONLINE
  scan: scrub repaired 0 in 18h10m with 0 errors on Sun Jan 29 15:15:16 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        FloppyD                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/93108990-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/97d47490-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9a98d29c-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/94808bdc-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9c2e2cf7-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/936061b7-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9f4a75ab-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a6c61557-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a1d94dd6-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/91e97661-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a39d4540-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/908760af-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a11c8a81-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9ee2ec62-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a618af0c-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a3b56e8f-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a5bca058-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a72cba93-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/a517fa6f-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a63ff419-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/95c7c648-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9e76827b-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/9fa4abb1-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a2031236-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a4c14ba4-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a7c82eac-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0
            gptid/a58aa746-e48a-11e6-b1e3-0025907e82ce  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da36p2      ONLINE       0     0     0
So I have a dataset on that pool using lz4 compression, and I'm getting very promising results from dd:
Code:
~ dd if=/dev/zero of=testfile bs=1024k count=1000000
1000000+0 records in
1000000+0 records out
1048576000000 bytes transferred in 357.351477 secs (2934298772 bytes/sec)
If my math is correct, that means it just wrote a terabyte of data at around 23.5 Gbps, or 2.93 GB/s.
I can live with that :D
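For anyone who wants to check that conversion, it is just the bytes/sec figure from the dd output in decimal units:
Code:
~ echo "scale=2; 2934298772 / 10^9" | bc
2.93
~ echo "scale=2; 2934298772 * 8 / 10^9" | bc
23.47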
But of course I want real-life tests, so I fired up a machine with RAID 1 SSDs that should be able to output a constant 600+ MB/s.
The machine is hooked up to the FreeNAS via a 10G NIC, over a network on which I can iperf to the FreeNAS at 8 Gbps.
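For reference, a repeat of that network test would look something like this on my setup (iperf 2.x on both ends; the flags and address here are placeholders, not the exact run):
Code:
# on the FreeNAS box
~ iperf -s
# on the SSD test machine: 4 parallel streams for 60 seconds
~ iperf -c <freenas-ip> -P 4 -t 60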
After the dd and iperf results I was comfortable enough to do the first real test, so I started writing a 100GB test file to the FreeNAS over SMB.
It started at around 300 MB/s, but after writing about 10GB it dropped to 20 MB/s and soon after fell to 0, where it stayed for about 30 seconds before climbing back to 300, then dropping to 0 again, up to 300, down to 0, and so on.
This does not happen when writing from the SSD test system to an SSD in my workstation over the same network.
Does anyone have any idea what could be causing this?
It looks like something can't keep up, but whatever it is does not reveal itself during a 1TB dd or a 10-minute iperf. I also tried other sources that are known to output a steady 200+ MB/s, but they all dipped straight down after a few seconds.
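If it helps narrow things down, this is roughly what I plan to watch on the FreeNAS side during the next SMB copy, to see whether the disks are busy or idle while the transfer sits at 0 (commands as I understand them on FreeBSD; happy to run anything else):
Code:
# per-vdev throughput, refreshed every second
~ zpool iostat -v FloppyD 1
# per-disk busy % and latency, physical providers only
~ gstat -p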
I am new to ZFS, but I have done some research; for starters I enabled autotuning, which helped in the iperf department but not with the irregular transfer speeds.
Would I be correct in saying that SMB is async and therefore a ZIL/SLOG drive would not solve this issue?
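In case it is relevant, I can confirm the dataset settings with something like the following (FloppyD/data is just a placeholder for the actual dataset name):
Code:
# check whether sync writes are forced, plus the other usual suspects
~ zfs get sync,compression,recordsize,atime FloppyD/data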
There is probably something stupid I need to be reminded of but I can't seem to find the answer on the forum.
I have looked around on the forum, but most posts with a comparable issue involve 20+ TB arrays on 16 GB of RAM or less, and as far as I know I'm well within the recommended RAM requirements. I know more is better, but 2GB of RAM per TB should still be fine, should it not?
Besides, a 1TB dd benchmark should go well past my RAM at some point.
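For completeness, I can also keep an eye on how full the ARC gets during the next run with something like this (sysctl names as I understand them on FreeBSD 10.3 / FreeNAS 9.10):
Code:
# current ARC size and configured maximum, in bytes
~ sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max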