AOC-SAT2-MV8 slow?

Status
Not open for further replies.

hibble

Dabbler
Joined
Jul 12, 2012
Messages
12
Hi all, and thank you for any time spent helping me out.

I have been using ZFS and FreeNAS for a few years now, but it is time to rebuild my server and I wanted some advice. As it was only in use by me and 2 PCs for backup, performance was not an issue. Now it is going to be used for backups of 5 PCs, 4 users' shared folders, and offsite backups of 3 online servers. Not a large quantity of data each day, but the performance I am getting is inconsistent and seems low. I need to find the bottleneck or rebuild, now that it will be used 24/7. E.g. currently an 8TB scrub takes 2 days; if I go with larger drives in the future, scrubs will take too long if done weekly.

When I set this up on OpenSolaris with just 1 raidz1 at <50% full, I recall a scrub speed of ~350MB/s and faster CIFS transfers, but I have no record of this. Over the years it has had 2 mirrors added for extra storage capacity, plus weekly and monthly snapshots (~5GB held in snapshots).

Now I get:
Scrub ~125MB/s (no network transfers in progress)
From server to main PC:
Max NFS share 40-55MB/s (not often used)
Max CIFS 30-50MB/s; a transfer will start at 50MB/s, then gradually drop to ~30MB/s
A CIFS transfer to multiple PCs totals ~60MB/s; it does not matter if it's 2 or 5 PCs, performance is about the same. No difference between Linux and Windows 7 clients. (A raw iperf network check, independent of the disks, is sketched below.)
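
To separate raw network speed from disk speed, an iperf run between the server and a client should help; a rough sketch, assuming iperf is installed on both ends and 192.168.1.10 stands in for the server's address:

Code:
# On the FreeNAS box: listen for a test connection
iperf -s

# On a client PC: run a 30-second TCP throughput test against the server
iperf -c 192.168.1.10 -t 30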

Server spec:
FreeNAS 8.3, pool v28 (performance was very similar on v15)
AMD X3 processor
Asus workstation motherboard
PCI-X HBA: Supermicro AOC-SAT2-MV8 (should be good for 100MB/s per channel) http://blog.zorinaq.com/?e=10
8GB RAM
2x Icy Box IB-555SSK backplanes with 4 disks in each, connected to the Supermicro HBA
Gigabit network with 2 NICs, both Realtek but different chipsets. No aggregated link; performance is equal on both.
An Intel server NIC gave the same performance as the Realteks; I may put it back in if I can get close to saturating a gigabit link.
Switch: Netgear ProSafe gigabit. I have tried two 3Com/HP 24-port and 48-port rackmount switches; no improvement in transfer times.

Find zpool status -v, top, etc. at the end of this post.

I will update with a dd benchmark when the latest scrub is finished.

Question time:

I would prefer to keep costs low by fixing what I have, if it's capable of, say, 90MB/s transfer speed.

1) Can anyone spot the bottleneck?
If I had to guess I would presume it's the PCI-X link or the AOC-SAT2-MV8, but as Sun Microsystems used them in their original 48-drive Thumper server, it should work well for ZFS. (A per-disk I/O check is sketched after these questions.)

2) Anyone got a benchmark for the iXsystems Mini?
I would like to support FreeNAS development, but import taxes will probably make it too expensive.

3) Hardware recommendations for a new setup?
I am looking to saturate a gigabit link (no aggregated links, etc.). This will be in a home office, so quiet is preferred. 8 drives max: probably 2x 4-drive raidz1 or an 8-drive raidz2, with 2 hot spares on the motherboard SATA ports if I reuse the case and backplanes. The IBM M1015 is often recommended but is getting hard to find; any recommended replacements?
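
Regarding question 1, one way to check whether the disks or the PCI-X bus are the limit is to watch per-disk throughput while the scrub runs; a rough sketch with standard FreeBSD/ZFS tools (pool name as above):

Code:
# Per-vdev and per-disk bandwidth, refreshed every 5 seconds,
# while the scrub or a large transfer is running
zpool iostat -v DeathStar 5

# Per-disk busy % and MB/s (watch the ada devices)
gstat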





Currently my zpool status -v (gptid has been shortened to correct the formatting):

Code:
[root@freenas] ~# zpool status -v
  pool: DeathStar
 state: ONLINE
  scan: scrub in progress since Thu Feb 28 02:30:30 2013
        5.11T scanned out of 7.07T at 125M/s, 4h35m to go
        8K repaired, 72.28% done
config:
        NAME              STATE     READ WRITE CKSUM
        DeathStar         ONLINE       0     0     0
          raidz1-0        ONLINE       0     0     0
            gptid/aa4d3   ONLINE       0     0     0
            gptid/aa998   ONLINE       0     0     0
            gptid/aae0ff  ONLINE       0     0     0
            gptid/ab26e   ONLINE       0     0     0
          mirror-1        ONLINE       0     0     0
            gptid/c4fa0   ONLINE       0     0     0
            gptid/c56a5   ONLINE       0     0     0
          mirror-2        ONLINE       0     0     0
            gptid/f025e1  ONLINE       0     0     0
            gptid/f0ad67  ONLINE       0     0     0
errors: No known data errors


top shows
Code:
load averages:  0.34,  0.43,  0.41
33 processes:  1 running, 32 sleeping
CPU:  0.0% user,  0.0% nice, 14.0% system,  1.5% interrupt, 84.5% idle
Mem: 90M Active, 57M Inact, 536M Wired, 2260K Cache, 205M Buf, 6963M Free
Swap: 16G Total, 16G Free


systat -io
Code:
                    /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
     Load Average   ||| 

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
cpu  user|
     nice|
   system|********* 
interrupt|X
     idle|***************************************

          /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
md0   MB/s 
      	tps| 
md1   MB/s 
      	tps|
md2   MB/s 
      	tps|
ada0  MB/s
      	tps|XX
ada1  MB/s 
      	tps|XX
ada2  MB/s 
      	tps|  
ada3  MB/s 
      	tps| 
ada4  MB/s*********************XX 
      	tps|******************************************XX566.65
ada5  	MB/s*********************XX 
      	tps|******************************************XX644.17
ada6  	MB/s*********************XX
      	tps|******************************************XX571.04
ada7  	MB/s*********************X
      	tps|******************************************XX562.45
 

hibble

Dabbler
Joined
Jul 12, 2012
Messages
12
Results of a dd benchmark (three runs): ~105MB/s
Code:
dd if=/dev/zero of=/mnt/DeathStar/temp bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 207.094960 secs (101265236 bytes/sec)
20971520000 bytes transferred in 192.240066 secs (109090266 bytes/sec)
20971520000 bytes transferred in 189.114526 secs (110893227 bytes/sec)
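
For comparison, a read test against the same file could look like this (a rough sketch; the 20GB file is larger than the 8GB of RAM, so the ARC cache should not skew the numbers too much):

Code:
# Read the test file back and discard the output
dd if=/mnt/DeathStar/temp of=/dev/null bs=2048k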
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your issue is definitely not network related. If a scrub is doing 125MB/sec with no user load, you can't expect to ever get those speeds through the network. I always consider that a scrub should be able to do at least double the speed I want: so if I expect 100MB/sec across my network, then my scrub must be able to do at least 200MB/sec. The same goes for dd. Your dd and scrub results are close enough to prove that your speed issue isn't network related.

Presumably ada0 through ada3 are part of the zpool. Notice how their loading is almost zero compared to ada4 through ada7.
1. How full is your zpool? Once a zpool gets to about 80% full, fragmentation begins to kill performance. If the zpool becomes fragmented enough there is no cure except to destroy and recreate the zpool. I'm not sure if this is actually your problem unless it has been this full for a long period of time. (Commands to check this and the points below are sketched after this list.)
2. Also, your data is not evenly distributed across all drives on the zpool because you added another vdev after the zpool had been in use for quite some time.
3. It's not recommended that you mix vdev types. If you start with a RAIDZ1 you really should only be adding RAIDZ1s to the zpool. Yes, you can mix and match however you want, but it causes uneven loading on the drives which can prevent you from reaching the high speeds you'd likely want.
4. What model are those drives? If they are old and slow that can cause performance issues too.
5. Are you by chance running the x86 version of FreeNAS? You didn't mention your version, but if you are using the 32-bit version you are only using 4GB of RAM. This also means that prefetching is disabled, which really hurts performance. If you aren't using 8.3.0 you should consider upgrading.
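
A rough sketch of those checks (adjust the pool name and device numbers to match your system):

Code:
# 1. Pool size and how full it is
zpool list DeathStar

# 2. How data and I/O are spread across the vdevs
zpool iostat -v DeathStar

# 4. Drive model behind each ada device (repeat for ada0..ada7)
camcontrol identify ada0 | head

# 5. Architecture: should report amd64 for the 64-bit build
uname -m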


With the loading you mentioned above (backups of 5 PCs, 4 users' shared folders and offsite backups of 3 online servers) I would seriously consider rebuilding with a single larger RAIDZ1 with drives big enough to handle your expected storage needs for at least 2 years. Also, 8GB of RAM is a bit low for what you use it for. Ideally you should have 6GB + 1GB for each TB of disk space you have; for example, 16TB of disk would work out to roughly 6 + 16 = 22GB. You didn't mention how big your drives are, but I'd assume that 125MB/sec over 2 days means you have quite a few TB of storage space.
 

hibble

Dabbler
Joined
Jul 12, 2012
Messages
12
1. How full is your zpool? Once a zpool gets to about 80% full, fragmentation begins to kill performance. If the zpool becomes fragmented enough there is no cure except to destroy and recreate the zpool. I'm not sure if this is actually your problem unless it has been this full for a long period of time.
7.07TB out of 8TB, so yes, getting close to full.
2. Also, your data is not evenly distributed across all drives on the zpool because you added another vdev after the zpool had been in use for quite some time.
The raidz1 is 6TB usable; the mirrors are 1TB usable each.
3. It's not recommended that you mix vdev types. If you start with a RAIDZ1 you really should only be adding RAIDZ1s to the zpool. Yes, you can mix and match however you want, but it causes uneven loading on the drives which can prevent you from reaching the high speeds you'd likely want.
I knew at the time it would be a problem, but as a recent graduate money was tight, so reusing older disks made sense. If I replace it, it will be with 4TB disks: depending on rebuild cost, either 2x 4-5 disk raidz1 (better IOPS, I believe) or 1x 8-disk raidz2 (rough zpool layouts for both options are sketched at the end of this post).
4. What model are those drives? If they are old and slow that can cause performance issues too.
It started life as 4x 1TB WD Green in raidz1; these were swapped out one at a time for 2TB 7200rpm drives to grow the raidz1. Two of the original 1TB drives were added back in after about 6 months as the first mirror. Two newer 1TB WD Red drives (a special offer at the time) were added about a year ago.
5. Are you by chance running the x86 version of FreeNAS? You didn't mention your version, but if you are using the 32-bit version you are only using 4GB of RAM. This also means that prefetching is disabled, which really hurts performance. If you aren't using 8.3.0 you should consider upgrading.
64-bit FreeNAS reports all 8GB of RAM, which is the max that old workstation motherboard will take; it's also not ECC.
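
For reference, those two layouts would be created roughly like this (a sketch only, with placeholder pool and device names; in practice FreeNAS builds the pool from the GUI with GPT-labelled disks):

Code:
# Option A: two 4-disk raidz1 vdevs in one pool (more IOPS, one disk of parity per vdev)
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7

# Option B: one 8-disk raidz2 vdev (any two disks can fail)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7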
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, zpools should be kept <80% full at all times. You're closer to 90%.

I'd say it's time to buy bigger drives, make a new zpool, and copy the data to the new zpool. I recommend RAIDZ2 over RAIDZ1 because if 1 disk fails with RAIDZ1 and you have any bad sectors on another drive, you will have corruption. That's not a problem if you have RAIDZ2.

Also, with the size of your zpool, you really need more RAM. Since that machine can't handle more than 8GB of RAM I think your only good option is to build a whole new server and copy the data to the new one.
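
When you do copy the data over, ZFS replication is the usual way to move everything, snapshots included; a rough sketch, assuming the new pool (here called NewPool) already exists and is big enough:

Code:
# Take a recursive snapshot of the old pool
zfs snapshot -r DeathStar@migrate

# Send every dataset and snapshot to the new pool
zfs send -R DeathStar@migrate | zfs receive -F NewPool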
 

hibble

Dabbler
Joined
Jul 12, 2012
Messages
12
New build underway: A4-5300, GA-F2A85X-D3H and 32GB of non-ECC RAM. Speed has improved to 100-110MB/s when reading from the server. The small variation is probably because of the 90%-full issue and the different-sized vdevs.

Plan to rebuild as one 8x 4TB raidz2 when I have collected enough disks.
 