Big Hardware - Mediocre Performance

Status
Not open for further replies.

Nick Lutz

Dabbler
Joined
Jul 10, 2014
Messages
21
I'm a bit puzzled as to why our upgraded FreeNAS server performs so poorly.

Hardware: HP DL585 - 32 cores - 256GB RAM (DDR3) - 2x 400GB SSD Intel S3700DC (L2ARC Stripe), 2X HP MAS arrays with 24x 600GB 10K drives, using all "auto tune/auto configure" for our ZFS RAID Z2 groups.

I'm getting about 3.5MB/sec Samba write speeds (?). Our QNAP runs circles around this thing with only 16GB RAM and no SSD caching, as well as fewer processing cores.

Right now I've relegated the FreeNAS to archival duty due to these performance problems.

I've gone through some of the suggested tuning steps found here in the forum, and found that deviating from the "autotune" values either makes little to no difference or makes performance even worse.

Was really hoping to outshine a $5000 QNAP with this system. Any ideas?
 

katco

Cadet
Joined
Jan 29, 2016
Messages
1
First, RAM has very little impact on write speeds. With 24 drives you should get at least 100MB/sec. You didn't say what network you have.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The L2ARC will help with reads, not writes.
I think the latest guidance is not to use auto-tune. I'd remove it and then reboot.
Do you have sync forced for your dataset? (zfs get sync)

And then break the problem apart:
Have you tried a local copy (or dd) from SSH or the console? (Monitor performance while it runs with: zpool iostat -v)
Have you run iperf to test network performance?
Sometimes the specific client has an issue; have you tried another? CIFS is limited to a single CPU core per thread (IIRC), so CPU speed is more important than core count.
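A rough sketch of those checks from the console - the pool name "tank", dataset "tank/test", and <freenas-ip> are placeholders, so substitute your own:

[CODE]
# Is sync being forced on the dataset?
zfs get sync tank/test

# Local sequential write, bypassing Samba and the network entirely
# (turn compression off on the test dataset, or /dev/zero will compress away)
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1M count=10000

# In a second SSH session, watch per-vdev activity while the dd runs
zpool iostat -v tank 1

# Network-only test: server side on the FreeNAS box...
iperf -s
# ...and client side on the workstation
iperf -c <freenas-ip>
[/CODE]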
 


SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Don't just change random tuning parameters until you know what the slow part is. I suspect you've made it worse. Out of the box, FreeNAS can easily saturate a gigabit link.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@Nick Lutz I know you have quite a bit of feedback and I'm going to add to it...

I'm not familiar with the hardware you are trying to run FreeNAS on, but I looked it up on Google and it seems like a fairly advanced piece of hardware. So I have a few questions....

1) Is the hardware, including the drive controller interface cards, on the FreeBSD 10 approved hardware list?
2) Exactly what are you doing to test throughput?
3) Since I suspect your hardware is causing this, I'd like you to try an experiment; if it fails, you can stop using FreeNAS or adjust the hardware....
a) Disconnect all your drives, get back to just the basic server.
b) Using a single USB drive, install FreeNAS 9.10
c) Configure your network connection.
d) Power off and connect a few hard drives (four to six).
e) Power on and via the GUI, create a single pool of all those drives.
f) Create a dataset with compression turned off.
g) Create a CIFS share to the dataset.
h) With a direct physical Ethernet connection to a computer, transfer a few large files and report the transfer rates.
i) If the transfer rates appear slow, there was a link a few posts above where I placed a few commands you can use to test internal throughput (a sketch of that kind of test follows at the end of this post). If those results are poor, you have a hardware compatibility issue. Report the results.

Right now it's in your hands, and I would strongly suggest you do as I ask to keep you from chasing your tail. I have noticed that people are buying these unique servers and trying to put FreeNAS on them. I'm sure it works most of the time, but I suspect the hard drive controllers are causing issues - maybe they aren't supported by drivers, or they are optimized for Windows Server. Who knows.

Good luck; I think in about 20 minutes you could figure out whether the hardware will support FreeNAS.
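Roughly, the kind of internal throughput test referred to in step i) looks like this - "testpool/testset" is just a placeholder for the pool/dataset created in steps e) and f):

[CODE]
# Sequential write straight to the pool (no network, no CIFS);
# compression must be off or /dev/zero will give misleading numbers
dd if=/dev/zero of=/mnt/testpool/testset/tmpfile bs=2M count=5000

# Sequential read of the same file (note: with lots of RAM the read may
# largely come from ARC unless the file is bigger than memory)
dd if=/mnt/testpool/testset/tmpfile of=/dev/null bs=2M

# Clean up
rm /mnt/testpool/testset/tmpfile
[/CODE]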
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I don't really understand how the drives are connected to the server. What's an HP MAS array?

Ideally the drives are accessible without any additional RAID layers getting in the way. If the SAS disks are Seagate disks, for example, you should see Seagate when doing "camcontrol devlist".

If the SAS drives are directly accessible in FreeNAS, check whether write caching is disabled on the disks. The SAS disks that I played with had terrible performance (10 MB/sec or something) until I enabled their write cache.

As anodos mentioned, if you could post a debug file, people could look through it for any trouble indicators.
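Roughly, those checks look like this from the console (da0 is a placeholder device name - pick one of your SAS disks from the devlist output):

[CODE]
# List disks as the CAM layer sees them; individual SAS drives should show
# the drive vendor/model, not a RAID controller's logical volume
camcontrol devlist

# Show the SCSI caching mode page for one disk; WCE: 0 means the drive's
# write cache is disabled
camcontrol modepage da0 -m 8

# Interactively edit the mode page (set WCE to 1 to enable the write cache)
camcontrol modepage da0 -m 8 -e
[/CODE]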
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
My guess is the OP meant HP MSA array.


 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
titan_rw said: "I don't really understand how the drives are connected to the server. What's an HP MAS array? ... if you could post a debug file, people could look through it for any trouble indicators."
My guess is that ZFS is sitting on top of RAID volumes on the MSAs, which might be causing some performance issues. Hence the request for the debug file.
 

Nick Lutz

Dabbler
Joined
Jul 10, 2014
Messages
21
Hi, I've been really busy and a bit removed from FreeNAS over the past few days, but I've done a couple of things:

1) I've removed all traces of "auto-tune"
2) I've moved my SSDs (2x 200GB) from L2ARC to SLOG (ZIL)

And some answers to the above:

ZFS is NOT sitting on top of RAID volumes; the HBA card sees each drive individually. The pool is RAIDZ2, an "optimal" config according to the FreeNAS volume manager.

Yes, it is an HP MSA array. Sorry for that confusion.

Is the hardware, including the drive controller interface cards on the FreeBSD 10 approved hardware list? - Heck - I didn't realize that there was an approved list.

Also, we have a new QNAP TS-EC2480U-RP-US 24-bay 4U storage array on order, along with 28 8TB Seagate enterprise-level drives, two dual-port 10GbE cards, and, finally, two 24-port HP 10Gbit switches.

Once the new QNAP is up and running, I'll be able to move my data off the FreeNAS and play around with configuration, but for now I have to keep changes to a minimum to avoid data interruptions.

I did create a set of debug files, but I'll have to go through and redact any information that may be considered sensitive before I can post them.

Below is a zpool status output:


pool: DRIVE-GROUP-LARGE
state: ONLINE
scan: scrub repaired 0 in 1h21m with 0 errors on Wed Jun 1 13:22:01 2016
config:

NAME STATE READ WRITE CKSUM
DRIVE-GROUP-LARGE ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/521227c9-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/52a01a22-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/532bb5aa-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/53b8de60-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/54461894-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/54d2b8c2-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/555eb2a7-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/55ec9c1f-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/567f3713-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/570d2127-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/579b7c32-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5828cae4-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/58b6ca57-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/594639c4-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/59d5c9a9-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5a6750e6-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5af862c2-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5b87c306-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5c18e451-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
gptid/5caa4ffa-7be8-11e5-b6f4-0012c007bdb4 ONLINE 0 0 0
logs
mirror-1 ONLINE 0 0 0
gptid/35d1cc0d-2dc6-11e6-93c4-0012c007bdb4 ONLINE 0 0 0
gptid/366434c6-2dc6-11e6-93c4-0012c007bdb4 ONLINE 0 0 0

pool: SHARE2
state: ONLINE
scan: scrub repaired 0 in 0h17m with 0 errors on Wed Jun 1 12:17:34 2016
config:

NAME STATE READ WRITE CKSUM
SHARE2 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
gptid/aeff3eeb-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/af7ce6ce-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b00bb2c4-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b0ad58c1-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b14bffae-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b1e80755-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b299fb3f-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/8ce43b06-a2a4-11e5-9668-0012c007bdb4 ONLINE 0 0 0
gptid/b3e2f1f4-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b4822e50-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b520414f-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/00be2eb5-277e-11e6-b7f2-0012c007bdb4 ONLINE 0 0 0
gptid/b664a435-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b70a8ede-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
raidz3-1 ONLINE 0 0 0
gptid/b812a58c-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b8c7566c-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/b96a372d-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/ba0b5669-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bab2210c-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bb55aa98-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bbfe14ef-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bc97db92-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bd72f905-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/be799472-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/bf59e75f-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/c02a8414-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/c1350648-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0
gptid/c206f034-7c33-11e5-83de-0012c007bdb4 ONLINE 0 0 0

pool: freenas-boot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jun 8 16:18:30 2016
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da49p2 ONLINE 0 0 0

errors: No known data errors


Note: The freenas-boot device is really a RAID1 hardware mirror on the HP server. It consists of two 146GB 10K "junk" drives I pulled out of old hardware that was headed for the trash on our loading dock.
Note: The 200GB SLOG/ZIL devices are Intel S3700DC drives. They are high write-endurance drives (can withstand 5 complete over-writes per day over 10 years) with high-QoS, low-variability write speeds.

DRIVE-GROUP-LARGE was intended to serve iSCSI to our Hyper-V farm (I know, Microsoft Hyper-V does suck - but it's free with a server license)
SHARE2 was intended to serve CIFS shares for archival purposes
freenas-boot is the boot volume (see the note above)
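A quick, hedged way to see whether the new SLOG mirror is actually being used is to watch the log vdev while a sync-heavy workload (such as the iSCSI zvol) is writing, using the pool name from the status output above:

[CODE]
# Per-vdev I/O every second; the "logs / mirror-1" rows should show
# write activity during sync writes if the SLOG is being hit
zpool iostat -v DRIVE-GROUP-LARGE 1

# Confirm the sync setting on the dataset/zvol backing the iSCSI share
zfs get sync DRIVE-GROUP-LARGE
[/CODE]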
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Dude, your vdevs are way too big! At most you should make them 11 disks in RAIDZ3; anything wider than that and performance is going to suck. Also, when serving VM storage you need to use mirrors. And stop using hardware RAID on your boot device - FreeNAS takes care of that for you. Use mirrored boot devices.
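For illustration only, a rough sketch of the striped-mirror layout being suggested for VM storage, using hypothetical device names (in practice you would build this from the FreeNAS volume manager rather than the command line):

[CODE]
# 6 two-way mirror vdevs = roughly 6 drives' worth of random write IOPS,
# versus roughly 1 drive's worth for a single wide RAIDZ vdev
zpool create vmpool \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11
[/CODE]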
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
Nick Lutz said: "I'm a bit puzzled as to why our upgraded FreeNAS server performs so poorly."
It's probably because you are using REALLY wide vdevs. The widest vdev people usually recommend is 12 drives, and even that is for low-performance, large-capacity pools used for backups and the like. 1 vdev = 1 drive's worth of IOPS (rough numbers below the links).
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
https://forums.freenas.org/index.php?threads/zfs-primer.38927/
https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/
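To put rough numbers on that (assuming something on the order of ~150 random IOPS per 10K SAS drive, which is only a ballpark figure):

[CODE]
#  2 x 14-wide RAIDZ3 vdevs:  ~2 vdevs x ~150 IOPS  = ~300 random IOPS
# 14 x 2-way mirror vdevs:   ~14 vdevs x ~150 IOPS  = ~2100 random IOPS
# (same 28 disks, very different small-block performance)
[/CODE]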
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
"camcontrol devlist -v"?

Or just post a system debug?

And yes, the super wide vdevs probably aren't helping at all.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Please use Code Tags when you post a lot of output

I swear I have this copied to my clipboard just to paste... Used it like 4 times today already... :P

Code tags hint: [screenshot]
 
