Aaron Ryan · Dabbler · Joined Mar 20, 2015 · Messages: 20
I built this system for fun, just because I could and had the hardware for it.
- Dell R210 II with E3-1230v2 and 8GB RAM
- QLogic QLE2460 Fibre Channel card, which supports 1, 2, and 4Gbit FC ($13 on eBay)
- NetApp DS14mk2 shelf with ESH2 adapter running at 2Gbit
- 14x NetApp X274 144GB FC hard drives
The drives come from NetApp formatted with 520-byte sectors, and they need to be converted to 512-byte blocks, so I used sg_format on each drive, which took about 1.5 hours. I ran it in parallel across all 14 drives:
# sg_format --format --size=512 /dev/da0 &
# sg_format --format --size=512 /dev/da1 &
....
# sg_format --format --size=512 /dev/da14 &
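The per-drive commands above can be collapsed into a loop (a sketch, assuming 14 drives enumerated as da0 through da13; list yours with `camcontrol devlist` and adjust the range to match):

```shell
# Kick off a low-level format on every shelf drive in parallel.
# Device names are an assumption -- check `camcontrol devlist` first.
for n in $(seq 0 13); do
    sg_format --format --size=512 /dev/da$n &
done
wait   # returns once every background format has finished
```

The `wait` at the end keeps the shell open until all fourteen background formats complete, so you know when it is safe to reboot.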
NETAPP X274_HJURE146F10 NA14 peripheral_type: disk [0x0]
Mode Sense (block descriptor) data, prior to changes:
Number of blocks=288821760 [0x11371200]
Block size=512 [0x200]
A FORMAT will commence in 5 seconds
ALL data on /dev/da14 will be DESTROYED
Press control-C to abort
Format has started
Format in progress, 0.04% done
....100% done
After a reboot, I manually created the zpool, since some of the disks had slightly different sizes and the FreeNAS GUI separated them into 3 groups:
# zpool create disk1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13
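As a sanity check after creating the pool, the layout and rough usable capacity can be confirmed (a sketch; `disk1` is the pool name from the command above, and the capacity figure assumes the 144GB drives listed earlier):

```shell
# Show the vdev layout -- expect a single 14-wide raidz2
zpool status disk1
# Rough usable capacity: 14 drives minus 2 parity, at 144 GB each
echo "$(( (14 - 2) * 144 )) GB usable (before ZFS overhead)"
```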
I built a 14-disk RAIDZ2 and ran a dd test to check throughput, getting about 98MB/s:
# dd if=/dev/zero of=/disk1/test bs=1000000 count=10000
10000000000 bytes transferred in 101.087915 secs (98923793 bytes/sec)
I also tried a 14-disk stripe and got 212MB/s:
# dd if=/dev/zero of=/disk1/test bs=1000000 count=10000
10000000000 bytes transferred in 47.057993 secs (212503751 bytes/sec)
The bottleneck is the single 2Gbit FC connection to all of the disks.
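The stripe number lines up almost exactly with what that link can carry: 2Gbit FC actually signals at 2.125 Gbaud and uses 8b/10b encoding (8 data bits per 10 line bits), so the payload ceiling works out to about 212.5 MB/s, which the stripe test saturates. A quick check of the arithmetic:

```shell
# 2G FC line rate: 2.125 Gbaud; 8b/10b carries 8 data bits per 10 line bits.
# Payload ceiling in decimal MB/s:
awk 'BEGIN { printf "%.1f MB/s\n", 2.125e9 * 8 / 10 / 8 / 1e6 }'
```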
I just wanted to share this in case someone else was curious whether it was possible and wanted to try it too.