Performance Review (Need more RAM?)


Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
FreeNAS Server Specs:
  • Chassis: Dell PowerEdge C2100 Server
  • Build: FreeNAS-9.3-STABLE-201511040813
  • CPUs: 2 - Intel Xeon E5506 SLBF8 2.13GHz/4MB/4.8GTs/Quad Core LGA1366 CPU
  • Memory: 24 GB (6x4GB - 2Rx4 PC3-10600R - DDR3 - ECC)
  • HBA: Dell H200 Mezzanine - Flashed to LSI 9211-8i IT Mode
  • Network (1 Gb): 2 - Intel 82576 Gigabit Ethernet
    • Running in "Fail-Over" LAGG
    • Used just for Production Access
  • Network (10 Gb): Chelsio 110-1088-30 10Gb 2-Port PCIe Adapter Card with SFPs
    • Running in "Fail-Over" LAGG
    • Used just for Backup Routine Access
    • Directly connected to ESXi Server w/Static IP Assigned (No Switch Involved)
  • OS Hard Disk(s): 2 - 120 GB SSD (were a good deal, so I grabbed them...)
  • Storage Hard Disk(s): 12 - Hitachi Ultrastar HUA723030ALA640 3TB
    • Enterprise Rated 7200RPM 64MB SATAIII (6Gb/s) 3.5"
  • Volume: Composed of 2 RAIDZ2 vdevs w/6 Disks in each

ESXi Server Specs (Houses VM of SME Server):
  • Chassis: Dell PowerEdge T310 Server
  • Build: ESXi/vSphere 5.5 Update 3
  • CPUs: 1 - Intel Xeon X3430 2.4GHz/8MB/2.5GTs/Quad Core LGA1156 CPU
  • Memory: 32 GB (4x 8 GB) 800 MHz DDR3 SDRAM
  • RAID: PERC H700 1GB Cache SAS RAID Controller w/Battery Backup
    • Running RAID 6
  • Network (1 GB): 2 - Broadcom NetXtreme Gigabit Ethernet
    • Running in "Fail-Over" LAGG (in Virtual Network)
    • Used just for Production Access
  • Network (10 Gb): QLogic QLE8152 10Gb Dual-Port PCIe Converged Network Adapter
    • Running in "Fail-Over" LAGG (in Virtual Network)
    • Used just for Backup Routine Access
    • Directly connected to FreeNAS Server w/Static IP Assigned (No Switch Involved)
SME Server (VM) Specs (Will post in a few)


So anyways, with the systems running I ran the following tests:

iperf from SME Server to FreeNAS Server (Got 8+ Gb/sec, so I was happy about that):
Command: iperf -p 5001 -c 172.20.1.6 -w 512k

Results:
------------------------------------------------------------
Client connecting to 172.20.1.6, TCP port 5001
TCP window size: 244 KByte (WARNING: requested 512 KByte)
------------------------------------------------------------
[ 3] local 172.20.1.5 port 41214 connected with 172.20.1.6 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 9.47 GBytes 8.13 Gbits/sec
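
For reference, this assumes an iperf (v2) server was already listening on the FreeNAS side; a minimal invocation would be something like:
Command: iperf -s -p 5001 -w 512k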


dd on FreeNAS (64 GB file - Got ~350 MB/sec)
Command: dd if=/dev/zero of=tmp.bin bs=4096 count=16777216 && sync

Results:
16777216+0 records in
16777216+0 records out
68719476736 bytes transferred in 187.602163 secs (366304288 bytes/sec)
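
As a sanity check, those numbers are self-consistent:
4096 B/block x 16777216 blocks = 68719476736 bytes (64 GiB)
68719476736 bytes / 187.602163 sec = ~366 MB/sec (~349 MiB/sec)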

Now, the main purpose for all of this was that the SME Server used to take 24+ hours to perform an initial Full Backup using DAR. Data on the SME Server is ~400 GB.

So, theoretically I was thinking that with a 10 Gb pipe and ~350 MB/sec write speeds I could drastically cut that time down. Backup is to a CIFS share on the FreeNAS Server.

Now the initial backup ran, and while it was faster, it wasn't near what I thought it would be. I know that other things may come into play, but it only just halved the time...

Report from SME Server (03:13:04 to 14:56:31, so just under 12 hrs):
==================================
DAILY BACKUP TO WORKSTATION REPORT
==================================
Backup of [name removed] started at Wed Nov 11 03:13:04 2015
Destination //172.20.1.6/SMEBackupsCIFs/[name removed]/set1
No existing reference backup, will make full backup
Basename full-20151111031304
Starting the backup with a timeout of 24 hours


--------------------------------------------
354816 inode(s) saved
including 0 hard link(s) treated
0 inode(s) changed at the moment of the backup and could not be saved properly
0 byte(s) have been wasted in the archive to resave changing files
0 inode(s) not saved (no inode/file change)
0 inode(s) failed to be saved (filesystem error)
293 inode(s) ignored (excluded by filters)
0 inode(s) recorded as deleted from reference backup
--------------------------------------------
Total number of inode(s) considered: 355109
--------------------------------------------
EA saved for 0 inode(s)
--------------------------------------------
Destination disk usage 473G, 3% full, 16T available
Backup successfully terminated at Wed Nov 11 14:56:31 2015

Wondering if I am missing something; I plan on at least doubling the RAM, but would appreciate any other suggestions.

Thanks.

 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Doubling RAM probably won't help write performance to your FreeNAS server. Increasing the number of vdevs may help (i.e., mirrors). Switching from CIFS to NFS may help.
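
To illustrate the vdev point: the same 12 disks laid out as striped mirrors would give six vdevs' worth of IOPS instead of two, at the cost of capacity. A rough sketch of the idea (pool and device names are placeholders; FreeNAS would normally build this via the GUI):
Command: zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11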
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yeah, I will consider trying out NFS.

Only thing that has me a little puzzled: even if I was only getting 200 MB/sec, I would think that the "Full Backup" routine (which is ~500 GB) would work out to:

200 MB/Sec * 60 Seconds = 12,000 MB/Minute
12,000 MB/Minute * 60 Minutes = 720,000 MB/Hour
720,000 MB / 1024 = 703 GB/Hour

So with 500 GB, I could safely presume < 1 hour. Heck, even if it was 2 hours, that would be worlds better than the 11+ it currently shows...
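
For contrast, the observed run works out to roughly:
473 GB x 1024 = 484,352 MB over 11 h 43 min (42,207 sec) = ~11.5 MB/sec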

Maybe this is a DAR thing; it does create the backups in 700 MB chunks. Will dig into that a little as well.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
I've found the CIFS implementation on FreeNAS to be pretty slow, and it's single-threaded, which on your E5506 won't be the perkiest thing in the world. Give NFS a shot, but make sure you're connecting from within the VM and tell it to do async writes.

Your raw DD numbers (you disabled compression when testing from /dev/zero, right?) still seem a bit pokey. I'd expect closer to 800MB/s for sequential writes given your disk config/count.
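
A quick way to verify the compression setting from the shell (pool/dataset names here are placeholders) would be:
Command: zfs get compression poolname/dataset
Command: zfs set compression=off poolname/dataset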
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Sorry for the delay, I will double check to be sure. However, there are some VPN issues that need to be corrected before I can get back in. Will post a reply as soon as I get back on.

Thanks.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I have the same pool layout: 12 drives in 2 vdevs of 6 in Z2. They are 7200 RPM Seagates. Currently the server is very busy moving data around, but I still get ~900 MB/sec with a dd test. Try a bigger block size; 4k is pretty small for a sequential write test. I used bs=1m count=64k
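
Spelled out (FreeBSD dd accepts the k/m size suffixes), that test would be:
Command: dd if=/dev/zero of=tmp.bin bs=1m count=64k && sync
(65536 x 1 MiB blocks = 64 GiB written sequentially)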

As for CIFS, with my E5-1650 and Intel 10gig 540s, I usually get around 600-800 MB/sec. Looking in top, it does look like Samba is burning 80% of a core doing this, so I'm pretty close to the single-thread performance limit. This is direct connect (no switch), to a Windows 7 box over copper.

What does iperf give you? I think I got 9-9.5 Gbits/sec last time I checked. Check 'top -SH' and look for the Samba thread (while a CIFS copy is going on). Is it eating a core?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Doubling RAM probably won't help write performance to your FreeNAS server. Increasing the number of vdevs may help (i.e., mirrors). Switching from CIFS to NFS may help.

Doubling RAM can help write performance sometimes; RAM and transaction group sizing are intertwined variables. Larger transaction groups can increase write performance, all other things being equal.
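
On FreeNAS 9.3-era ZFS (the post-2014 OpenZFS write throttle), the dirty-data limit that bounds transaction group size is derived from RAM; assuming those tunable names are present on this build, they can be inspected with:
Command: sysctl vfs.zfs.dirty_data_max vfs.zfs.txg.timeout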
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Doubling RAM can help write performance sometimes; RAM and transaction group sizing are intertwined variables. Larger transaction groups can increase write performance, all other things being equal.
Out of curiosity, has anyone quantified write performance gains due to larger txg / more RAM? Is it significant enough to advise purchasing more RAM to improve write performance?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Out of curiosity, has anyone quantified write performance gains due to larger txg / more RAM? Is it significant enough to advise purchasing more RAM to improve write performance?

Sure. If I have 32GB of RAM on my FreeNAS box with a 30TB pool, the writes are about 3x faster than if I only have 8GB of RAM. The problem is that this isn't a two-way street; there's no promise that buying more RAM *will* increase write speeds.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I've found the CIFS implementation on FreeNAS to be pretty slow, and it's single-threaded, which on your E5506 won't be the perkiest thing in the world. Give NFS a shot, but make sure you're connecting from within the VM and tell it to do async writes.

Your raw DD numbers (you disabled compression when testing from /dev/zero, right?) still seem a bit pokey. I'd expect closer to 800MB/s for sequential writes given your disk config/count.

NFS may not be an option, since they want to be able to easily see the share via Windows, and they are running Windows 7 Pro (I believe you need the Enterprise version to use the NFS client).
  • Side note: if I make a DataSet with "Unix" as the "Share Type" and then create two shares (one NFS and one CIFS) that point to the same DataSet, would this not let them see the same data without needing an NFS client? Meaning I could have SME (Linux) use NFS for the backups, and they can browse to the CIFS share to view the files? (Rough sketch of the idea below.)
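
What I have in mind, roughly (names are placeholders; the actual shares would be created in the FreeNAS GUI):
Command: zfs create DataVolume01/SMEBackups
  • NFS export (for the SME/Linux backup): /mnt/DataVolume01/SMEBackups
  • CIFS share (for Windows browsing): path = /mnt/DataVolume01/SMEBackups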

On to the dd test: this is what I did, and hopefully it is correct:
  1. Created a new DataSet and called it "Transfer"
    • Disabled compression
  2. Created a new CIFS share and called it "TransferCIFs" (this was just for viewing)
  3. Via Putty (over VPN)
    • CD to Transfer: cd /mnt/DataVolume01/Transfer/
    • Ran the dd command: dd if=/dev/zero of=tmp.bin bs=4096 count=16777216 && sync
      • Is my command correct? Not 100% sure if I should be using "/dev/zero" or something else.
      • Got similar results as when done initially (~326 MB/sec)
    • Deleted the "tmp.bin"
    • Ran the revised dd command (per titan_rw): dd if=/dev/zero of=tmp.bin bs=1m count=64k && sync
      • Results were much better (~805 MB/sec)
    • Deleted the "tmp.bin"
So with all that being said, perhaps it was the way I ran the dd command? Even with the much improved result, this still leaves me pondering the time the backups from the SME Server using DAR are taking.

Of course that is not a FreeNAS concern, so I will check the NFS route.

Thanks all for the input. I will post any new information that I get.
 


Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I have the same pool layout: 12 drives in 2 vdevs of 6 in Z2. They are 7200 RPM Seagates. Currently the server is very busy moving data around, but I still get ~900 MB/sec with a dd test. Try a bigger block size; 4k is pretty small for a sequential write test. I used bs=1m count=64k

As for CIFS, with my E5-1650 and Intel 10gig 540s, I usually get around 600-800 MB/sec. Looking in top, it does look like Samba is burning 80% of a core doing this, so I'm pretty close to the single-thread performance limit. This is direct connect (no switch), to a Windows 7 box over copper.

What does iperf give you? I think I got 9-9.5 Gbits/sec last time I checked. Check 'top -SH' and look for the Samba thread (while a CIFS copy is going on). Is it eating a core?

Previously, I got 8.13 Gbits/sec. Just ran the test three (3) times in a row and averaged 5.58 Gbits/sec. While slower than before, I will look into that later; 5.58 Gbits/sec is still a lot better than a 1 Gb connection.
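
When I revisit it, a longer multi-stream run should smooth out single-run variance (iperf v2 flags):
Command: iperf -c 172.20.1.6 -p 5001 -w 512k -t 30 -P 4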


Backups start at 00:01, and I will try to hop on then to check the Samba thread. Out of curiosity, did you want me to check the CPU usage on the FreeNAS Server or the SME Server?

Thanks.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Out of curiosity, what is the speed of an interactive copy via Windows Explorer over to FreeNAS? Maybe this is a backup issue.

Yeah, CPU usage on FreeNAS, using 'top -SH' to see the individual threads.

The write performance of the disks seems fine, though. I'd have expected a bit more from iperf, however.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Sorry, didn't get a chance to hop on in time. The incremental backup completed in 15 minutes. Data backed up was ~3 GB, so for sure it could not be due to the write speeds on the FreeNAS Server or the 10 Gb NIC. More than likely it is the SME Server and/or DAR itself.

No way 3 GB of writes should take 15 minutes...
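
The arithmetic makes the point:
3 GB / (15 min x 60 s/min) = ~3.4 MB/sec, a tiny fraction of what the pool or even a 1 Gb link can sustain.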

** Another update: I checked on a different FreeNAS Server that I loaned out. They have an SME Server as well and only a 1 Gb connection to the FreeNAS Server. Looking at their backups, it appears that under similar circumstances they average about the same time-to-data ratio. This most definitely is due to SME/DAR and/or CIFS.

Will try to do a workstation copy test if I can RDP to a machine there (if one is available). Not thinking that CPU cores would really be an issue, since it has 2 quad-core Xeons; so at the least there are 8 cores available.

BTW, is there a "Point of Diminishing Returns" as far as CPUs? Meaning, I could easily upgrade the CPUs to Hex Core that have Hyper-Threading and VT-D support. This would/could then been seen by FreeNas as 24 CPUs, but would I actually be reaping the benefits of doing so if I am not running Virtual Machines in FreeNas and mainly using it for Shares? Might be best to only use one of those CPUs and use the other in a different FreeNas Server...

** I did read/review the "Hardware recommendations (read this first)" post. Saw the part about CPUs:
Top CPU frequency is important for Samba as Samba is single-threaded on a per-user basis. Generally any 3 GHz+ CPU will be more than capable of handling Samba at Gigabit speeds without a problem.

Loving FreeNAS so far, as well as the great knowledge provided!

Thanks.
 