Multipath Old Data


tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
I have 6 drives that were used in a previous zpool and seem to be holding onto old information (or the FreeNAS box is holding onto old information about them). I have destroyed all of the multipaths, and I have also run dd in several ways: first on the first few hundred sectors, then over the entire disk to completion. I even went into /dev and deleted all of the "da" devices. No matter what method or combination of methods I have used so far, every one of the multipaths still shows up.
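One detail worth knowing here (a sketch; da0 stands in for whichever device is being wiped): gmultipath keeps its metadata in the disk's last sector, so a wipe of only the first few hundred sectors will not remove it.

gmultipath clear da0                 # clear just the multipath label from the last sector
dd if=/dev/zero of=/dev/da0 bs=1m    # or zero the entire disk, which also covers that sector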

What is interesting is that I get 12 da numbers for the 6 disks. I suppose that is because I had previously set up the ZFS equivalent of RAID10 (a stripe of mirrored pairs). What am I doing wrong, or am I doing anything wrong at all? The multipaths do not hinder me from creating any type of vdev that I desire, but they are always there nagging me, saying, "haha, you don't understand me". I truly thought that multipathing had to do with iSCSI and nothing else.
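For reference, a sketch of creating that RAID10-equivalent layout, with placeholder pool and device names:

zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5    # three 2-way mirrors, striped together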

Among other things, I have deleted all iSCSI configuration (portal, initiator, target, extent, and relationships), turned off the iSCSI service, and restarted several times.

Here is a copy of my multipath information (with one disk removed because I was trying to wipe it using alternative methods):
 

[Attachment: fn01Multipaths.png (screenshot of the multipath list)]
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
It appears the drives are still multi-pathed, otherwise their status wouldn't say 'Optimal'. Also, getting two da numbers per drive indicates two different controllers addressing the drives. When you destroy a multi-path and reboot, assuming the drives are still physically multi-pathed, the multi-path is automatically re-created. It would be helpful if you could tell us your exact config so we can better understand the setup.
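A quick way to confirm this from the CLI:

gmultipath status    # lists each multipath device, its state (e.g. OPTIMAL or DEGRADED), and the da devices behind it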
 

tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
I have an iXsystems FreeNAS-4U (FreeNAS Certified 24-bay).

"getting two da numbers per drive indicates two different controllers addressing the drives."

Is this really true? How do I know this is the truth? I mean, I really like this answer; it is an easy answer to swallow. It basically means that I don't need to worry about those multipaths at all.
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
You could run sas2ircu list from the CLI and see how many controllers you have. After that, try sas2ircu 0 display and see which controllers are talking to which drives. Failing that, dig out that old quote from iXsystems and see how many HBAs they fitted.
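Another quick check on a stock FreeBSD/FreeNAS box:

camcontrol devlist    # every path shows up as its own entry, so a dual-pathed SAS drive appears twice, on different scbus numbers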
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
Two connections to one card perhaps? Whenever I think multi-path, I always assume two or more controllers, as that's what I do. Might be worth asking iX to explain, as that's why you went FreeNAS Certified.
 

tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
OK, so I sent an email to iXsystems, and they asked for a debug file. I zipped one up and sent it to them; here is their response after looking at it:

iXsystems said this:
From your debug I can confirm that you have a multipath configuration on this FreeNAS certified unit.

Of course I have asked them to elaborate on this, so we shall see what they say. Thank you for your help so far, Johnny; hopefully there will be a really easy answer to this problem short of resetting the iXsystems box back to defaults and purchasing new hard drives (that would just not be right).
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
It would be interesting to know if iXsystems is actually using "dual linking" (1 HBA ==2 links==> 1 expander) for this FreeNAS certified gear. If this is true, then we can assume dual linking is working, contrary to some reports [1][2][3]. This would be good news, because it might keep some people from jumping to SAS3 due to bandwidth concerns.
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
This config wouldn't help bandwidth, as one connection is passive. This is only to protect against a cable failure. You're thinking of wide porting.
 

tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
I just don't understand. If it is something that just IS, because that is the way the hardware was designed, then why is it that when I insert a new drive that has never been inserted before, it does not show a multipath, but it does appear in the disks section? Here is what I am seeing:
[Screenshot: upload_2016-5-3_14-41-43.png (the disk list)]


da5 is the new disk that I inserted; ada0 and ada1 are SSDs that I have not gotten around to using yet.

[Screenshot: upload_2016-5-3_14-42-49.png (the multipath list)]



multipath/disk1-5 are disks that have been wiped in every way possible and whose multipaths have been destroyed, but upon reboot they appear again, back from the dead.

I can use these disks without any issues at all in a new volume, and then I can detach them and mark them as new (destroy data). But then again, after a reboot they show up with multipaths once more. I know that when they were originally inserted into the FreeNAS box for the first time, they did not have multipaths, and they also showed up in the "View Disks" section of the interface.

So how could this "multipath hardware feature" not work on new disks but somehow work on old disks that have some kind of history? I am more convinced that these multipaths are there not because of hardware but because of previous configuration, which is why I have been all over the web trying to figure out how to get rid of them. Most people say "you need to wipe those drives and reboot"; well, I've been there and done that. No matter what, they keep coming back.
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
This is normal behaviour for a multi-path system. When you insert a new drive, it is single-pathed and appears in View Disks. If you reboot the system, it detects that the drive is multi-pathed and does the right thing. You can get around this by creating the multi-path yourself from the CLI using the gmultipath command:

gmultipath label -v Name /dev/da0 /dev/da1

I do this and name the drives after their position in the chassis, e.g. J1S4, meaning JBOD 1, Slot 4. This way, if a drive fails, it's easy to find its location even in very large arrays with multiple JBODs.
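A sketch of that workflow, with placeholder names (the two da devices must be the two paths to the same physical drive, which you can pair up from camcontrol devlist or sas2ircu output):

gmultipath label -v J1S4 /dev/da0 /dev/da12    # write the label and create multipath/J1S4
gmultipath status                              # verify both paths now sit under the one name
gmultipath destroy J1S4                        # undo it again, clearing the on-disk metadata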
 

tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
So are you saying that some drives are multipathed and others are not? Because when I reboot the system with this new drive (actually an old 500 GB drive), it never shows a multipath.
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
Is that a SATA drive?
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
You can't multi-path SATA drives, only SAS.
 

tbrim

Dabbler
Joined
Apr 29, 2016
Messages
13
Excellent! Good information! First of all, I want to thank you for your time and attention to my thread. My immediate concerns are resolved. I hope that some other curious soul finds it and can learn from it.

Moving on, I am curious about what you were saying about SAS3, and I still don't understand why multiple paths to a single hard drive from a single controller are a good thing.
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
Johnny Fartpants said:
This config wouldn't help bandwidth, as one connection is passive. This is only to protect against a cable failure. You're thinking of wide porting.

Oh, OK. I was going off your assumption that there are 2 connections to one HBA.
Johnny Fartpants said:
Two connections to one card perhaps?

But you are right. If there is multi-pathing for dual-ported SAS drives, then there must be 2 expanders.
So the chassis iXsystems is using is probably an 846BE26 (and not an 846BE16 connected with 2x SFF-8087).
 
Johnny Fartpants

Joined
Jul 3, 2015
Messages
926
No worries.

The SAS3 comment was referring to the potential bottleneck of lots of drives channeling through a single SAS2 interface. Often this is not an issue, as your bottleneck is normally elsewhere, like the network.

Regarding what the benefits are to your config, I'd be interested to hear what iX say. Sure, it provides you with a little resilience, but for me multi-pathing is best when you have two or more HBAs, allowing a whole card to fail while the system carries on.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
bestboy said:
It would be interesting to know if iXsystems is actually using "dual linking" (1 HBA ==2 links==> 1 expander) for this FreeNAS certified gear. If this is true, then we can assume dual linking is working, contrary to some reports [1][2][3]. This would be good news, because it might keep some people from jumping to SAS3 due to bandwidth concerns.

Not true... because...

Johnny Fartpants said:
This config wouldn't help bandwidth, as one connection is passive. This is only to protect against a cable failure. You're thinking of wide porting.

Only SAS is multipath, and yes... your system supports (and is using) multipath.

In your case, you need to stick with SAS drives for that system. You should not mix SAS drives and SATA drives on the same controller at the same time.

If you want to get rid of multipath, you will need to dd the disks to destroy the multipath data AND break the hardware multipath.

There are two "parts" to multipath functioning on a FreeNAS certified system:

1. The hardware must be designed to use multipath (two independent paths from the SAS disk to the system, using either 2 sets of SAS cable paths or 2 independent SAS controllers connected to the same backplane/JBOD).
2. On bootup the ix-multipath service runs (this happens for everyone at bootup). If multipath is determined to be possible for a disk, then special signatures are written to the disk so that if multipath were to somehow fail (such as a cable or controller failure), you will know that multipath should be functioning but only 1 path is found. If the signatures already exist, then they are simply read to determine the state of multipath.

So you'll need to break the multipath hardware as well as zero the disks out. There are plenty of hackier ways to accomplish the same thing, but the easy way is to break the hardware connections so multipath no longer exists, followed by a dd over the entire disk, as sketched below.
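A sketch of that sequence (the multipath name comes from gmultipath status, and da0 is a placeholder; note that if the second hardware path is still connected, the ix-multipath service will simply re-create the node on the next boot):

gmultipath destroy disk1             # stop the multipath node and clear its on-disk metadata
dd if=/dev/zero of=/dev/da0 bs=1m    # then zero the whole underlying disk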

HTH. :D
 