What speed can I get from a 4-vdev SSD pool?

mangelot

Dabbler
Joined
May 24, 2016
Messages
11
I'm building my second FreeNAS system, but I'm not happy with the performance (roughly 400 MB/s read / 300 MB/s write) to a Proxmox client (iSCSI LVM or NFS, same result).

iperf3 shows the full 9.89 Gbit/s from server to client (and 9.81 Gbit/s from client to server).
When running Bonnie++ locally on the FreeNAS pool, I get more than 2000 MB/s.
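For anyone reproducing these checks, minimal versions look roughly like this (the IP address, pool path, and Bonnie++ size are placeholders, not the exact commands used):

# network: run "iperf3 -s" on the FreeNAS box, then from the Proxmox host:
iperf3 -c 192.168.10.2 -t 30        # client -> server
iperf3 -c 192.168.10.2 -t 30 -R     # server -> client (reverse)

# local pool throughput from the FreeNAS shell (size ~2x RAM so ARC can't cache it all):
bonnie++ -d /mnt/tank/test -s 256g -u root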


Hardware:
Dell R520, 2x quad-core E5-2407 CPUs
128 GB RAM
8x Samsung SSD 480 GB
Dell PERC H710 Mini flashed to IT mode
Chelsio CC2-S320E-SR 10GbE dual-port (cxgb driver)

Software:
Latest FreeNAS 11.3
Pool of 4 mirror vdevs (4x 2-disk Samsung SSD mirrors), no L2ARC/SLOG

Tunables:
Enabled autotune in FreeNAS and applied the 10GbE tuning advice shown below.

[Screenshot: keithTunes.png, tunables from the 10GbE primer thread]


Does anyone have suggestions about what could be wrong?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Do you have the MTU on your interface set to 9000?
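One quick way to confirm jumbo frames actually pass end to end (interface name and client IP are assumptions):

ifconfig cxgb0 | grep mtu           # check the configured MTU
ping -D -s 8972 192.168.10.20       # don't-fragment ping: 8972 + 28 bytes of headers = 9000

If the large ping fails while a normal ping works, something in the path (switch port, client NIC) is dropping jumbo frames.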

It's also curious that you're taking a screenshot of the tunables from the legacy UI, which isn't present in 11.3. Are you sure about this claim?
Software:
Latest FreeNAS 11.3
 

mangelot

Dabbler
Joined
May 24, 2016
Messages
11
Do you have the MTU on your interface set to 9000?

It's also curious that you're taking a screenshot of the tunables from the legacy UI, which isn't present in 11.3. Are you sure about this claim?

I have the new UI, not the legacy one. I used that image from the 10GbE primer topic on the forum; those settings were optimized for 10GbE.
I have set the MTU to 9000, but also tried 1500 and 4096 (because cxgb is limited to an MTU of 4096, see here).
 

websmith

Dabbler
Joined
Sep 20, 2018
Messages
38
Hi,

I have been in your boat and decided to get rid of my SSDs and go NVMe, but in hindsight I should probably have just kept my SSD pool.

Don't be too focused on single-threaded sequential performance, since that is almost never what a VM is doing. What you should be concerned about is IOPS, which is probably decent on your pool, since you have 4 striped mirrors.

What can skew your results are the sync settings on the pool: normally you should run with sync=always if you value your data and do not have a UPS/battery-backed ZIL.

So test locally on your pool with sync=always and see if that changes your numbers.
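A minimal version of that test from the FreeNAS shell, assuming a dataset named tank/vmtest (the name is a placeholder):

zfs get sync tank/vmtest            # note the current value so you can restore it
zfs set sync=always tank/vmtest     # force all writes to be synchronous
dd if=/dev/zero of=/mnt/tank/vmtest/testfile bs=1M count=10000
zfs set sync=standard tank/vmtest   # put the default back afterwards

Keep in mind dd writes zeros, which compress away if compression is enabled on the dataset, so disable compression there or the numbers will look better than they really are.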

But as I wrote earlier, sequential performance is almost irrelevant; what matters is IOPS, since that controls how many concurrent reads/writes you can have. I mean, who would want a 1 TB/s system with only one IOPS? It's much better to have 300-400 MB/s with sync=always and lots of IOPS; that will give you the most responsive VMs.
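If you want to put a number on that, a fio run along these lines measures 4k random IOPS instead of sequential MB/s (directory, size, and job counts are placeholders to adjust):

fio --name=randwrite --directory=/mnt/tank/vmtest --ioengine=posixaio \
    --rw=randwrite --bs=4k --numjobs=4 --iodepth=16 --size=4g \
    --runtime=60 --time_based --group_reporting

Run it once with sync=standard and once with sync=always to see the gap a SLOG (or the lack of one) creates.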
 

mangelot

Dabbler
Joined
May 24, 2016
Messages
11
I did some testing with iSCSI (dual-port 10GbE Chelsio T3) from a freshly installed Windows Server 2012 system connected directly to the FreeNAS box
(4x striped mirrors of Samsung SSD 480 GB PM883); I added a 100 GiB zvol as the test extent.
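For reference, a test zvol like that can also be created from the shell (pool and name are placeholders; the UI wizard does the same thing):

zfs create -s -V 100G tank/iscsi-test   # sparse 100 GiB zvol to export as the iSCSI extent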

Without MPIO (MB/s):
[Screenshot: 20200627_180017.jpg]

Without MPIO (IOPS):
[Screenshot: 20200627_180005.jpg]

MPIO active (MB/s):
[Screenshot: 20200627_193549.jpg]

MPIO active (IOPS):
[Screenshot: 20200627_193559.jpg]


Are these results okay? I think I should be able to get more MB/s and more IOPS out of this setup...

PM883 specs:

Model:       PM883       Interface:   SATA 6.0 Gbps
Form factor: 2.5 inch    Capacity:    480 GB
Seq. read:   550 MB/s    Seq. write:  520 MB/s
Ran. read:   98K IOPS    Ran. write:  25K IOPS
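Back-of-the-envelope, assuming the stripe scales perfectly: 4 mirror vdevs should top out around 4 x 520 = ~2080 MB/s sequential write and 4 x 25K = ~100K random-write IOPS, while reads can use both sides of each mirror, so up to 8 x 550 = ~4400 MB/s in theory. Anything over iSCSI will land well below that because of sync writes, SATA/HBA overhead, and network round trips, so the gap between these numbers and the local Bonnie++ result isn't surprising by itself.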
 