New build for iSCSI

Status
Not open for further replies.

XenonXTZ

Dabbler
Joined
Jan 13, 2017
Messages
22
Good evening.
I have a box with the following configuration and would like to know what kind of performance I should expect.
I've been experimenting with my setup, but I can't get the results I was hoping for.

Dell R720xd.
2x E5-2640 0 2.5GHz (3.0GHz)
128GB ECC RDIMM DDR3
24x 300GB 10K SAS drives (a single drive does ~170 MB/s read and write in HD Tune)
2x200GB Enterprise Toshiba SSDs (mirrored SLOG thru FreeNAS) (rear flex bay)
1x400GB Intel 750 AIC (L2ARC)
Intel 10Gbit x520 card
LSi 9207-8i on the latest firmware (updated a couple of days ago)
Internal RAID1 SD for OS (9.10.2 Update1)

Dell's N4032F SFP+ switch

Dell R710 with 4x SSDs in RAID 0 and the same Intel x520 card (the client, used for performance testing; 1500/1400 MB/s read/write)

Here's the situation.
I've tried 2 scenarios:

1. RAID 10: 12 mirror vdevs. Gives 500 MB/s write and 1 GB/s read (I assume read is capped by the single, non-aggregated 10Gbit link).
2. RAID-Z2: 4 vdevs of 6 drives each. Gives 400 MB/s write and the same 1 GB/s read.

I'd like to improve the write throughput. How would I achieve this?
If somebody has a similar or close setup, please help.
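As a back-of-envelope check (a sketch, assuming the ~170 MB/s single-drive figure above and ignoring CPU, controller, and sync-write overhead), the streaming ceilings for the RAID 10 layout work out roughly like this:

```shell
# Each mirror vdev writes at about single-disk speed; reads can be
# served from both sides of every mirror. Best-case streaming only.
echo "write ceiling: $(( 12 * 170 )) MB/s"   # 12 mirror vdevs
echo "read ceiling:  $(( 24 * 170 )) MB/s"   # all 24 disks
```

By these numbers the observed 500 MB/s write is well below the raw disk ceiling, which suggests something other than spindle count is the limit.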
 
Joined
Feb 2, 2016
Messages
574
Which Toshiba SSD are you using for SLOG? Looking at the specs on their Enterprise Value Endurance SSD, a write speed of 270 MiB/s is expected for a 200G drive. Jumping to a larger drive will get you 480 MiB/s.

Your read speed is about what I'd expect; reads don't hit the SLOG SSD anyway. Pull the SSD SLOG from the pool and check your write speed again.

What performance are you expecting? Have you done testing locally (using dd) so as to take the network performance out of the loop?
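Pulling the log vdev for that test is non-destructive and reversible. A minimal sketch, with placeholder pool and device names ("tank", da24/da25 - adjust to your setup):

```shell
# Find the name of the log mirror (e.g. "mirror-12") in the pool layout:
zpool status tank

# Remove the SLOG mirror; the pool keeps running, sync writes just
# fall back to the in-pool ZIL:
zpool remove tank mirror-12

# ...re-run the write test, then re-add the log mirror:
zpool add tank log mirror da24 da25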

Cheers,
Matt
 

XenonXTZ

Dabbler
Joined
Jan 13, 2017
Messages
22
This is the one. I have them mirrored.
http://toshiba.semicon-storage.com/...oducts/enterprise-ssd/px04shb-px04shqxxx.html

[Attachment: 20170202_121136.jpg]

I'm not sure how to run dd. Would you please tell me how?
I'm expecting to max out my 10Gbit connection, i.e. roughly double my current 500 MB/s write speed.

Thank you very much.
 
Joined
Feb 2, 2016
Messages
574

Those are SAS 3 drives but you have them connected to a SAS 2 controller dropping the theoretical throughput by half - 6G versus 12G.
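For rough context (my numbers, not from the thread): both SAS generations use 8b/10b encoding, so dividing the line rate by 10 gives the per-lane byte ceiling before protocol overhead:

```shell
# Line rate (Mbit/s) / 10 bits per byte under 8b/10b encoding;
# real-world throughput is lower still due to protocol overhead.
echo "SAS 2: $(( 6000 / 10 )) MB/s per lane"
echo "SAS 3: $(( 12000 / 10 )) MB/s per lane"
```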

I'm not sure how to check dd. Would you please tell me how would I run it?

Log into your FreeNAS server's shell, navigate to the pool you'd like to benchmark. Try writing a fairly large file at different block sizes. This will tell you your disk throughput in a best-case scenario. You'll never get faster than this.

Here's what mine looks like when I write a ~100 MB file using different block sizes...

$ dd if=/dev/zero of=testfile bs=1024 count=100000
102400000 bytes transferred in 0.813839 secs (125823419 bytes/sec)

$ dd if=/dev/zero of=testfile bs=4096 count=25000
102400000 bytes transferred in 0.247519 secs (413705587 bytes/sec)

$ dd if=/dev/zero of=testfile bs=16384 count=6250
102400000 bytes transferred in 0.094521 secs (1083356614 bytes/sec)

$ dd if=/dev/zero of=testfile bs=65536 count=1562
102367232 bytes transferred in 0.055369 secs (1848821838 bytes/sec)

This is going to a six-wide mirror pool using unremarkable 2TB SATA drives. As you can see, block size can affect transfer rate, and so can the compression setting.
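One caveat worth making explicit (a sketch, with a placeholder pool name): /dev/zero compresses almost perfectly, so with lz4 enabled these dd figures overstate what the disks can actually sustain.

```shell
# Placeholder dataset name; adjust to your pool.
zfs get compression tank

# Incompressible data gives a more conservative number, though generating
# random bytes costs CPU and can itself become the bottleneck:
dd if=/dev/urandom of=testfile bs=65536 count=16384   # ~1 GiB
rm testfile
```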

Non-local writes, too, are going to be affected by file size. How are you currently measuring your write rates?

Cheers,
Matt
 

XenonXTZ

Dabbler
Joined
Jan 13, 2017
Messages
22
Here are my results.
They seem a little lower than yours, but dd performance looks fine.
My current pool setup is attached.
[Attachment: Pool.jpg]

[root@freenas ~]# dd if=/dev/zero of=testfile bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes transferred in 0.984253 secs (104038273 bytes/sec)
[root@freenas ~]# dd if=/dev/zero of=testfile bs=4096 count=25000
25000+0 records in
25000+0 records out
102400000 bytes transferred in 0.348730 secs (293636838 bytes/sec)
[root@freenas ~]# dd if=/dev/zero of=testfile bs=16384 count=6250
6250+0 records in
6250+0 records out
102400000 bytes transferred in 0.130736 secs (783257189 bytes/sec)
[root@freenas ~]# dd if=/dev/zero of=testfile bs=65536 count=1562
1562+0 records in
1562+0 records out
102367232 bytes transferred in 0.073189 secs (1398669251 bytes/sec)
 

XenonXTZ

Dabbler
Joined
Jan 13, 2017
Messages
22
I measure performance by setting up iSCSI: I create a zvol, share it as a device extent, and measure the transfer of a large (~25 GiB) image file from the FreeNAS server to the client, which is not a bottleneck.
I also use HD Tune, AS SSD, and similar apps, all with more or less similar results.
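One thing worth checking in this setup (a sketch, with a placeholder zvol name): whether sync writes - and therefore the SLOG - are in the iSCSI write path at all depends on the zvol's sync property.

```shell
# Placeholder zvol name; adjust to your pool.
zfs get sync tank/iscsi-zvol
# sync=standard (the default): most iSCSI writes are async and bypass
#   the SLOG entirely, so the SLOG's speed barely matters.
# sync=always: every write is committed to the SLOG first - safest
#   against power loss, but write throughput is bounded by SLOG speed.
zfs set sync=always tank/iscsi-zvol
```

With sync=standard, the mirrored Toshiba SLOG may not explain the 500 MB/s cap at all; with sync=always, its rated sequential write becomes the hard limit.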
 