BUILD Dell R210 II with QLE2460 FC HBA and Netapp DS14mk2 shelf


Aaron Ryan

Dabbler
Joined
Mar 20, 2015
Messages
20
I built this system for fun, just because I could and had the hardware for it.
  • Dell R210 II with E3-1230v2 and 8 GB RAM
  • QLogic QLE2460 Fibre Channel card, which supports 1, 2, and 4 Gbit FC ($13 on eBay)
  • NetApp DS14mk2 shelf with ESH2 adapter running at 2 Gbit
  • 14x NetApp X274 144 GB FC hard drives
One of the issues with using NetApp drives is that they come formatted with 520-byte blocks and need to be converted to 512-byte blocks, so I used sg_format on each drive, which took about 1.5 hours. I ran it in parallel across all 14 drives:
# sg_format --format --size=512 /dev/da0 &
# sg_format --format --size=512 /dev/da1 &
....
# sg_format --format --size=512 /dev/da14 &
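
Rather than typing out 14 separate commands, a quick loop does the same thing. This is just a sketch; it assumes an sh-compatible shell and that the shelf disks show up as da0 through da13, so adjust the range to match how your disks enumerate:
# for i in $(seq 0 13); do    # range assumes the shelf disks are da0-da13
>   sg_format --format --size=512 /dev/da$i &
> done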

Sample output (shown here for /dev/da14):

NETAPP X274_HJURE146F10 NA14 peripheral_type: disk [0x0]
Mode Sense (block descriptor) data, prior to changes:
Number of blocks=288821760 [0x11371200]
Block size=512 [0x200]

A FORMAT will commence in 5 seconds
ALL data on /dev/da14 will be DESTROYED
Press control-C to abort

Format has started
Format in progress, 0.04% done

....100% done
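
Once the formats finish, the new block size can be double-checked with camcontrol (which ships with FreeBSD/FreeNAS); it should now report a 512-byte block length on each drive:
# camcontrol readcap da0 -h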

After a reboot, I manually created the zpool, since some of the disks had slightly different sizes and the FreeNAS GUI separated them into 3 groups:
# zpool create disk1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13
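
To sanity-check the layout, zpool status should show all 14 drives under a single raidz2 vdev:
# zpool status disk1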

I built a 14-disk RAIDZ2 and ran a dd test to check throughput, getting about 98 MB/s:
# dd if=/dev/zero of=/disk1/test bs=1000000 count=10000
10000000000 bytes transferred in 101.087915 secs (98923793 bytes/sec)
I also tried a 14-disk stripe and got 212 MB/s:
# dd if=/dev/zero of=/disk1/test bs=1000000 count=10000
10000000000 bytes transferred in 47.057993 secs (212503751 bytes/sec)
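
For anyone repeating this, the matching read test (not something I ran here, just the obvious counterpart) would be:
# dd if=/disk1/test of=/dev/null bs=1000000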
The bottleneck is the single 2 Gbit FC connection to all the disks; 2 Gbit FC tops out at roughly 200 MB/s of usable throughput, so the striped pool is essentially saturating the link.

I just wanted to share this in case someone else was curious whether it was possible and wanted to try it too.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I manually created the zpool, since some of the disks had slightly different sizes and the FreeNAS GUI separated them into 3 groups:
# zpool create disk1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13
You would have been better off going into manual mode in the volume manager, which would have allowed you to combine disks of different sizes into the same pool and retained the FreeNAS-isms (gptid identifiers, swap space, etc.) that the GUI sets up. Nonetheless, it looks like you're up and running.
 

Aaron Ryan

Dabbler
Joined
Mar 20, 2015
Messages
20
You would have been better off going into manual mode in the volume manager, which would have allowed you to combine disks of different sizes into the same pool and retained the FreeNAS-isms (gptid identifiers, swap space, etc.) that the GUI sets up. Nonetheless, it looks like you're up and running.

Thanks, I didn't notice the manual mode; thanks for the tip.
I ended up exporting and re-importing the zpool to get it to show up in the volume manager.
I'll use "manual mode" next time.
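
For anyone else who ends up in the same spot, the dance was roughly: export the pool from the command line, then bring it back in with the GUI's volume import so FreeNAS tracks it:
# zpool export disk1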

-Aaron
 