Phantom HBA card?


Brian1974

Hi


System:

Supermicro SC847BE2C chassis (dual backplanes)
Supermicro X10DRi motherboard
2 x Xeon E5 v3 CPUs
128GB DDR4 memory
Intel 10Gb Ethernet
30 x 6TB HGST drives
2 x Samsung PRO 128GB SSDs, mirrored, for the OS
2 x LSI 9300-8i SAS HBA cards

I had previously configured my system using two HBA cards in the Supermicro chassis.
I had flashed the firmware to the version FreeNAS asked for (v9).
I was able to create a RAIDZ2 volume in FreeNAS, move some data onto it, and play with the system a bit.

However, I noticed that with the two HBAs the drives wouldn't show up under Disks in the FreeNAS GUI's Storage section; they were only visible under Multipaths.
For each drive, one path showed as active and the other as passive, with "optimal" shown above them.

I was worried by the fact that I couldn't see the disks under the Disks tab, so I decided to shut down and remove the second HBA completely.

When I rebooted, nothing had changed apart from a new warning alert that said
"Critical: the following multipaths are not optimal" and listed all the disks.

I decided to shut down and do a fresh install of FreeNAS: I booted from the CD drive, told it to wipe the two mirrored drives I had been using for the FreeNAS OS, and ran an install (not an upgrade).

However, once booted up and into the GUI, I still get the same error.

Any ideas on how I should go about making it see and use just one card, and also being able to actually see the disks?
 

Brian1974

Yes, the issue was that the disks were holding onto the metadata from the old dual-HBA setup (as they are supposed to, after I read a bit more on how ZFS works).
So I needed to wipe all the disks. However, as I mentioned, I didn't have access to them under the Disks tab, possibly because a dual-path setup like this isn't supported with these newer 12Gb HBA cards.
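
In hindsight, the leftover multipath devices themselves can apparently be torn down from the FreeNAS shell, which should make the plain da disks reappear; a minimal sketch, assuming the multipaths got the default disk1, disk2, ... names:

gmultipath status         # note the names of the existing multipath devices
gmultipath destroy disk1  # stop the multipath and clear its on-disk metadata
gmultipath destroy disk2  # ...and so on for each device that status listed
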
Anyway, I tried DBAN, but that wouldn't work as it doesn't see RAIDs, so after some fishing around I found a free product called Active KillDisk, which let me do a one-pass wipe on all the drives.
It wipes all the drives at the same time, which was fortunate as I was using 6TB drives; it took about 14 hours in total.
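
Worth noting for anyone else in the same spot: a full 14-hour wipe shouldn't really be necessary, since ZFS and the GEOM multipath layer only keep their metadata at the very start and end of each disk. A quicker sketch from the FreeNAS shell, assuming the disks appear as da0 through da29:

gpart destroy -F da0          # remove any old partition table
zpool labelclear -f /dev/da0  # clear old ZFS labels (stored at the start and end of the device)
gmultipath clear da0          # clear GEOM multipath metadata (stored in the last sector)
# ...repeat for da1 through da29
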
The next day I reinstalled FreeNAS on my two mirrored OS drives and used the single-HBA setup.
This time I was able to see all the drives in the array.
I built a 30-disk RAIDZ2 pool, which gave me about 130TB with LZ4 compression. I created a folder and set the file size to 512.
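
For reference, the GUI volume manager is doing roughly the equivalent of the following from the command line. The pool name tank, the raw da device names, and the split into three 10-disk RAIDZ2 vdevs are my assumptions here (the GUI actually builds the pool on GPT partitions and may lay the vdevs out differently):

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  raidz2 da20 da21 da22 da23 da24 da25 da26 da27 da28 da29
zfs set compression=lz4 tank   # FreeNAS turns LZ4 on by default for new volumes
zfs list tank                  # available space after parity comes out around the 130TB mark
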
However, upon testing and sharing via NFS, I was not impressed by the read/write speeds I was getting.

I mounted all three NFS shares on my 10GbE Mac: the two CentOS 6.5 storage servers and the ZFS box. The two CentOS machines use 10GbE, are about a year old, and have 96GB of RAM
and somewhat lesser CPUs than my new machine; they both use RAID6 with 24 x 4TB SATA drives.

I copied a folder with 100GB of data in just 2 files (Autodesk Flame archives) from a CentOS 6.5 machine to the ZFS box, and it took over twice as long as the same copy between the two CentOS servers.
I then copied a folder with 200GB in 140,000 files, and ZFS was at least 3x slower.

However, when running iperf tests and dd tests, everything looked normal:


MAC to FAS
[root@fas ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 199.95.x.x port 5001 connected with 199.95.x.x port 51523
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 3.85 GBytes 3.31 Gbits/sec


MAC to COPPERPOT

[ 5] local 199.95.x.x port 5001 connected with 199.95.x.x port 51522
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 3.81 GBytes 3.27 Gbits/sec


MAC to ZFS

Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 6] local 199.95.x.x port 11781 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 4.04 GBytes 3.47 Gbits/sec

[ 8] local 199.95.x.x port 5001 connected with 199.95.x.x port 51568
[ 8] 0.0-10.0 sec 5.79 GBytes 4.97 Gbits/sec


-----------------------------------------------------------------------------

FAS - COPPERPOT
^C[root@fas ~]# iperf -c 199.95.x.x
------------------------------------------------------------
Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 92.6 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 34358 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.4 GBytes 9.76 Gbits/sec


FAS to MAC

[root@fas ~]# iperf -c 199.95.x.x
------------------------------------------------------------
Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 92.6 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 48524 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 6.94 GBytes 5.96 Gbits/sec

FAS to ZFS

[ 8] local 199.95.x.x port 5001 connected with 199.95.x.x port 46086
[ 8] 0.0-10.0 sec 10.3 GBytes 8.82 Gbits/sec
[root@fas ~]# iperf -c 199.95.x.x
------------------------------------------------------------
Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 92.6 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 60332 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.1 GBytes 9.54 Gbits/sec



COPPERPOT - FAS

[ 5] local 199.95.x.x port 5001 connected with 199.95.x.x port 46118
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 11.5 GBytes 9.88 Gbits/sec

COPPERPOT - MAC

^C[root@COPPERPOT Iperf]# iperf -c 199.95.x.x
------------------------------------------------------------
Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 95.8 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 47858 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 7.49 GBytes 6.43 Gbits/sec

COPPERPOT - ZFS

Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 54.1 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 46339 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.0 GBytes 9.41 Gbits/sec

[root@COPPERPOT Iperf]# iperf -c 199.95.x.x
------------------------------------------------------------
Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 95.8 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.x.x port 46256 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.5 GBytes 9.90 Gbits/sec

------------------------------------------------------------------------------

ZFS - COPPERPOT

[root@COPPERPOT Iperf]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 199.95.x.x port 5001 connected with 199.95.x.x port 38539
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 2.76 GBytes 2.37 Gbits/sec


ZFS - FAS

[root@fas ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 199.95.x.x port 5001 connected with 199.95.x.x port 61011
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 3.51 GBytes 3.01 Gbits/sec


ZFS - MAC

Client connecting to 199.95.x.x, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 6] local 199.95.x.x port 11781 connected with 199.95.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 4.04 GBytes 3.47 Gbits/sec


I don't have the dd test output to hand, but the numbers seemed quite normal.
In case the problem was with just dragging and dropping from one machine to the other, I also tried an rsync between them, with the same result.
Not sure if it's because the HBA isn't supported or not, but right about now I'm ready to go back to CentOS and RAID6, which at least gave me reliable speeds over NFS, which is primarily what I use between servers.
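
Before I give up on it completely, the two things I still want to compare side by side are local pool speed with compression taken out of the picture, and the same NFS copy with sync writes temporarily disabled, since NFS can force sync writes on ZFS and that alone could explain a 2-3x gap. A rough sketch, with tank standing in for the actual pool name:

# local sequential write/read, compression off so /dev/zero gives honest numbers
zfs create -o compression=off tank/ddtest
dd if=/dev/zero of=/mnt/tank/ddtest/bigfile bs=1m count=200000   # ~200GB, larger than the 128GB of RAM
dd if=/mnt/tank/ddtest/bigfile of=/dev/null bs=1m                # read it back

# check whether NFS sync writes are the bottleneck (a test only - not safe to leave disabled)
zfs get sync tank
zfs set sync=disabled tank
# ...rerun the NFS copy from the CentOS box; if it now tracks the iperf numbers,
# the long-term fix is a fast SLOG device, not leaving sync off
zfs set sync=standard tank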
 