Boot stuck at "Beginning ZFS volume imports"

Status
Not open for further replies.

James Snell

Explorer
Joined
Jul 25, 2013
Messages
50
Most likely, the other SSD has some sort of fault or the interface had some problem. The error would not be caused by the fact that a ZFS mirror was involved.

I'll look into it. Both SSDs came out of their packaging and went straight into my box for my FreeNAS installation, so if one is DOA, that would add up. I'll investigate and report back. I guess that's all aside from this thread's topic, regarding boot hanging on some kind of ZFS volume import function. Hopefully FreeNAS 11.0 will save the day. I'll report on that too. :) Thanks for the help Ericloewe.
 

James Snell

Explorer
Joined
Jul 25, 2013
Messages
50
@Ericloewe

Mirrored OS Install Fails
Regarding this tangent, you were right: one of my two new SSDs was DOA. Thanks for the tip on that. It was just highly counter-intuitive to me that AHCI timeout error messages would point to a hardware fault, as I've never before had a DOA drive. +1 to experience.

Importing ZFS Volumes slow
Back on topic, as per your suggestion, I installed FreeNAS 11.0-U4. The "Importing ZFS Volumes" message came up again, but the system got past it in a matter of minutes. Once the OS was fully up, I found one of my ZFS mirrors had a heavily faulting drive, and that same mirror volume came online in the middle of a scrub. A scrub on this pool would normally take about 4 hours; this one reported a total of 73 hours. It may be that the ~2 days FreeNAS 11.1 sat on the "Importing ZFS Volumes" message were actually spent working through that scrub. I've removed the faulting mirror drive and will replace it. So what I don't know now is whether FreeNAS 11.1 was ever really the issue, or whether the mirror with a faulted drive was simply making the scrub take a profoundly long time.
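For anyone following along, here's a minimal sketch of the drive swap from the shell. The pool name (tank) and device names (ada1, ada2) are assumptions for illustration, not my actual layout:

  # Check pool health, scrub progress, and which device is faulting
  zpool status -v tank

  # Take the faulting drive offline before physically removing it
  zpool offline tank ada1

  # With the new drive installed, resilver it into the mirror
  # (old device first, replacement second)
  zpool replace tank ada1 ada2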

Now that the scrub is done, zpool status reports:

  scan: scrub repaired 2.20M in 73h36m with 0 errors on Wed Jan 17 01:36:40 2018

This is for a 4 TB volume of two mirrored drives.
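That scan line sits inside the usual zpool status block; for a healthy two-drive mirror the surrounding output looks roughly like this (pool and device names here are hypothetical):

    pool: tank
   state: ONLINE
    scan: scrub repaired 2.20M in 73h36m with 0 errors on Wed Jan 17 01:36:40 2018
  config:

          NAME        STATE     READ WRITE CKSUM
          tank        ONLINE       0     0     0
            mirror-0  ONLINE       0     0     0
              ada1    ONLINE       0     0     0
              ada2    ONLINE       0     0     0

  errors: No known data errors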

Closing thoughts
I had actually replaced my OS drive (a Kingston USB drive) because I thought it was failing. But now I think the apparent hangs were always just this very slow scrub on the mirror with the faulting drive.

I've always felt weird about running my OS on a USB drive. While that's worked wonderfully for my VMware ESX hosts (8 units), I seem to run into storage media issues every year or so when using these drives for the FreeNAS OS. Hopefully an SSD with a 2-million-hour MTBF rating will be more reliable. I guess there remains no contingency like having full, independent backups; I'm sure this SSD will eventually fail me too. That just seems to be the way it goes.
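On catching failing boot media early: here's a minimal health-check sketch, assuming smartmontools is installed and a hypothetical device node /dev/ada0 (USB sticks usually don't expose SMART, but SSDs do):

  # Quick overall health verdict (PASSED/FAILED)
  smartctl -H /dev/ada0

  # Full SMART attributes; watch reallocated and pending sector counts
  smartctl -a /dev/ada0

  # Kick off a short self-test (results appear in later smartctl -a output)
  smartctl -t short /dev/ada0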

Thanks for the help. I hope my posts have added something to this conversation.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Mirrored OS Install Fails
Regarding this tangent, you were right: one of my two new SSDs was DOA. Thanks for the tip on that. It was just highly counter-intuitive to me that AHCI timeout error messages would point to a hardware fault, as I've never before had a DOA drive. +1 to experience.
Hardware is the worst thing on earth. It always fails in a way that:
  • Maximizes your confusion
  • Maximizes the time required for troubleshooting
  • Minimizes the reproducibility
 

James Snell

Explorer
Joined
Jul 25, 2013
Messages
50
Hardware is the worst thing on earth.

Lol, yeah. I do like to play in the hardware realm, but more on the side of making my own PCBs and programming embedded systems. PC hardware feels like its own beast.

My recent mirror failure reinforced a policy I started a few years ago: I buy drives from both of the big vendors for my mirrors. The premise is that both have good and bad models, and which is which is unknown until they've been out in the wild for a while. On this occasion, it was a WD Red that died after about a year of modest use; the mirror is still functional thanks to the other drive, a Seagate IronWolf. I've had tons of Seagate drives die on me in the past, but I've also had WD failures. I trust no one, and people who authoritatively declare one vendor better than the other immediately lose some of my respect.

I quite like the drive reliability data posted by Backblaze, but again, that doesn't really help you when buying a new drive, unless it's a 'new' instance of a very well-characterized model. The Backblaze data is still of some value, though, as it at least gives some general trends.
 