New build - Abysmal RW speeds

Status
Not open for further replies.

CoteRL

Dabbler
Joined
Jun 26, 2015
Messages
12
I have just finished getting all of my data off an Openfiler box I was running and rebuilt the system with FreeNAS. I started going through the [How To] Hard Drive Burn-In Testing tutorial when I noticed that instead of a nice SMART report I saw "Device does not support Self Test logging". I started digging and found the Hardware recommendations (read this first) thread, where I saw that Adaptec RAID cards are a big no-no. I hadn't checked that thread before, since I already had most of the hardware I needed (except ECC RAM, which I ordered) and had seen it mentioned that if you had a RAID card you could just put the disks into JBOD. So I have now ordered the sacred M1015, and it should be here by the middle of next week.
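For reference, the checks from the burn-in tutorial were roughly along these lines (da0 is just an example device name; apparently some RAID cards will only pass SMART through with an explicit device type such as -d sat):
Code:
smartctl -i /dev/da0           # identify the drive and confirm SMART is available
smartctl -t short /dev/da0     # kick off a short self-test
smartctl -l selftest /dev/da0  # read the self-test log - this is where "Device does not support Self Test logging" appears
smartctl -a /dev/da0           # full SMART report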

Now for the reason for the thread title, and my questions. I couldn't do the drive tests without the SMART information, but I decided to try some speed tests while I wait for the new controller. In the Openfiler box I had a RAID10 array of the 4 Seagate disks and another array of disks I have now removed entirely (a ton of older 500GB disks in RAID6). I wanted a comparison, so I ran a test on the RAID10 before installing FreeNAS and got ~279MB/s. I then built a RaidZ2 array in FreeNAS with those same 4 disks and ran another test, which only got ~110MB/s. Finally I built another RaidZ2 array with just 4 of my new WD Red Pro disks (I was trying to keep the tests comparable across the different disks; I could have used all 6). That test took a very, very long time, and when it finally completed the result was something like 11MB/s.
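The speed tests themselves were simple sequential dd runs along these lines (dataset names here are only examples, and compression needs to be off on the test dataset, otherwise a stream from /dev/zero compresses away to nothing and the numbers are meaningless):
Code:
# compression off, or zero-filled writes mostly measure the CPU
zfs set compression=off pool1/test
# sequential write test, ~100GB
dd if=/dev/zero of=/mnt/pool1/test/tmp.dat bs=2048k count=50k
# sequential read test of the same file
dd if=/mnt/pool1/test/tmp.dat of=/dev/null bs=2048k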

My questions are:

  1. Could this be related to the Adaptec card's JBOD mode having some incompatibility with the WD Red Pros?
  2. Is there a way for me to try to narrow down what could be causing the terrible write speed?
  3. If I decided to build my RaidZ2 pool of all 10 disks now and begin the excruciatingly slow process of copying data back onto it, would I be able to just swap the Adaptec for the M1015 and have it properly read the disks and FreeNAS correctly load the pool? Or am I being stupid for even wanting to do that and should just wait for the card to arrive?

Hardware:
CPU: Xeon E5520 2.26GHz
RAM: 24GB (3x 8GB) unbuffered ECC DDR3 1333
Mobo: Supermicro X8ST3-F
RAID/HBA: Adaptec 6805
Backplane: BPN-SAS2-836EL2
Chassis: CSE-836E26-R1200B

Disks:
4x Seagate 3TB NAS HDD - ST3000VN000
6x WD 3TB Red Pro - WD3001FFSX
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Or am I being stupid for even wanting to do that and should just wait for the card to arrive?
ZFS needs *direct access* to the disks (which will only happen with an HBA passing them through), so I'd wait until you have the new HBA installed (after flashing the P16 firmware in IT mode) before configuring your pool.
While the new card is on the way, there are a number of good posts on this subject worth reading.
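The gist of those guides, very roughly (the exact steps and filenames vary, the DOS flasher is called sas2flsh, and the M1015 usually needs its MegaRAID SBR wiped with megarec before sas2flash will accept the IT firmware, so follow a full guide rather than this outline):
Code:
sas2flash -listall                          # confirm the controller is detected
sas2flash -o -e 6                           # erase the existing flash (only with the IT firmware files already in hand)
sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash the P16 IT firmware; -b (the boot ROM) is optional
sas2flash -o -sasadd 500605bxxxxxxxxx       # restore the SAS address from the sticker on the card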
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's probably the Adaptec. I seem to recall a few people having similar issues that were solved with a proper HBA.

Whether you can simply swap the cards will depend on what exactly the Adaptec is doing to the disks, so I wouldn't count on it.
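If you do go ahead, the low-risk way to find out after the swap is to let ZFS tell you whether it can still see its labels; if the Adaptec wrapped the disks in its own metadata, the pool simply won't show up (the pool name below is just an example):
Code:
zpool import          # with no arguments this only lists pools whose labels are visible - nothing is written
zpool import pool1    # only if it shows up with every disk ONLINE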
 

CoteRL

Dabbler
Joined
Jun 26, 2015
Messages
12
Thanks for the responses, I'll just wait it out until the card arrives. So the consensus really is that ZFS not having direct access to the disks can, by itself, make it that slow? Not doubting it, and actually hoping that's the case, but I'm just surprised.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks for the responses, I'll just wait it out until the card arrives. So the consensus really is that ZFS not having direct access to the disks can, by itself, make it that slow? Not doubting it, and actually hoping that's the case, but I'm just surprised.

It's not quite that simple.

My working theory is that Adaptec's firmware/driver stack is designed for hardware RAID workloads, meaning a bunch of requests are sent at once and the controller takes care of it.
This would leave everything very poorly tuned for direct disk access. Since you're now dealing with N times more transactions (as an unrealistically low minimum) across the software stack, performance craters.
Onboard caches may also add latency, further complicating things.

Yet another possibility is a random bug in the aforementioned software that only affects direct disk accesses, which nobody bothered to fix because nobody buys an expensive RAID controller to use as an HBA.
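If you want to poke at it while you wait, a couple of read-only checks will at least show how the Adaptec is presenting the disks and whether a single drive is dragging the whole vdev down:
Code:
camcontrol devlist   # raw WD/Seagate model strings here would suggest real pass-through; an Adaptec logical device would not
iostat -x 1          # (or gstat) watch per-disk throughput and service times while a dd test runs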
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

CoteRL

Dabbler
Joined
Jun 26, 2015
Messages
12

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
There's no way to know for certain, but there is plenty of evidence that the answer is "yes". ;)
 

CoteRL

Dabbler
Joined
Jun 26, 2015
Messages
12
Since I'm sure you are all on the edge of your seats to hear the results...

RaidZ2 of all 10 3TB disks.
Write test
Code:
dd if=/dev/zero of=/mnt/pool1/test/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 251.938074 secs (426192757 bytes/sec)


Read test
Code:
dd of=/dev/zero if=/mnt/pool1/test/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 146.775548 secs (731553613 bytes/sec)
 