FreeNAS won't boot

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Hi All,

First off, I'm a total n00b with FreeNAS, which is why I'm seeking help here. I've had FreeNAS running on an HP micro server for 10 years now and never had any issues with it. I shut the server down for maintenance like I've done many times in the past and blew out all the dust inside. I removed all the cables, the motherboard, and the USB stick to do this, and I never had an issue until now, when I tried to boot the server back up. The server turns on just fine and the BIOS is set to boot off the USB stick, but I'm getting a 400-AHCI Port0 Device Error and FreeNAS won't work. I'm not sure what to do here or what my options are. Any help would be appreciated.
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Is it possible my USB drive with the FreeNAS OS on it died? Is there a way to install a fresh version of FreeNAS, keep the data on the HDDs, and import the pool?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm getting a 400-AHCI Port0 Device Error
Well, that's not related to USB; AHCI is the SATA host controller interface specification. Of course, it could be noise, and the boot device could indeed be dead.
I've had FreeNAS running on an HP micro server for 10 years now
What version are you running, exactly?
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
400-AHCI Port0 Device Error
That's your first HDD and most likely not a bootable device (maybe part of your storage pool). It looks like the system can't boot from your USB stick any more. If you have your config file stored in a safe place, you could reinstall FreeNAS and then restore your configuration after the first boot of the reinstalled system.
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Sadly, I don't have a backup copy of the USB stick, and the more I think about it, the more I think the USB stick failed. Can I do a fresh install and import the existing pool? Would I do a fresh install with HDDs installed or removed? There are 4x HDDs in the micro server.
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Well, that's not related to USB; AHCI is the SATA host controller interface specification. Of course, it could be noise, and the boot device could indeed be dead.

What version are you running, exactly?
I honestly don't even recall the version I was running. I had an issue where I could no longer log into the device and kept getting "Welcome to nginx". It was on my radar to upgrade to the latest version, but now this has happened. Most of the data on the server is movies, family pictures, and videos. I do have it all backed up to another server (a Synology), but I would really like to get this one back up and running without losing any data.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Is it vaguely recent or might we be talking 0.7 or older? Because that's a completely different product and the migration path to something supported would involve reconfiguring from scratch. Of course, if you have a ZFS pool, you can just import it (some tweaks to permissions may be necessary, YMMV).

Would I do a fresh install with HDDs installed or removed?
All you strictly have to do is not install to those disks, but for extra safety you can, of course, disconnect them.
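
If you want to double-check which device is which before you pick an install target, the installer menu should have a Shell option; from there, something like the following will tell you what the system sees (the device names are just the usual pattern, not gospel: the USB stick typically shows up as da0 and the SATA disks as ada0 through ada3):

camcontrol devlist   # lists every drive the system sees, with vendor/model strings
geom disk list       # shows each drive's name, size, and description

That makes it fairly obvious which one is the small boot stick and which ones are your data disks.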
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Is it vaguely recent or might we be talking 0.7 or older? Because that's a completely different product and the migration path to something supported would involve reconfiguring from scratch. Of course, if you have a ZFS pool, you can just import it (some tweaks to permissions may be necessary, YMMV).


All you strictly have to do is not install to those disks, but for extra safety you can, of course, disconnect them.

The install was from back in the 2012 time frame. I want to say it was version 9.x, but I could be wrong. I already have a ZFS pool on the existing 4x HDDs, and I'm hoping to reinstall the OS, simply import that pool, and be back up and running without losing any data on the drives. Is that realistic? And to be clear, when I reinstall the OS, I can keep the HDDs installed but not format them? Will it recognize that these 4x HDDs were already part of a pool? Thanks for the help!
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
The install was from back in the 2012 time frame.

Then you should have at least version 8.x, as that was first released (in beta) in 2010. Most likely it was version 8.2 (released mid-2012) or 8.3 (released late 2012), I think. According to some info I found, the release of 9.1 was announced sometime in August 2013.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
without losing any data on the drives
As I said, no problem there.

back up and running
That's the potentially tricky part. Depends on your configuration, and you're looking at reconfiguring TrueNAS from scratch given the age of your config database.

And to be clear, when I reinstall the OS, I can keep the HDDs installed but not format them?
Yes.

Will it recognize that these 4x HDDs were already part of a pool? Thanks for the help!
I think you get a warning if a disk is not empty, but all you have to do is not choose to install to your data disks.
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
I installed the latest version of FreeNAS. I'm at the point now where the server is online and pingable over the network, and I can log in. I'm trying to import what I think is the original pool, but I keep getting an error that /data/zfs is not a valid directory.
Pool to import: Microserver | (bunch of numbers). I assume this is the original ZFS pool?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Can you show us some screenshots so we can follow along?
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Here is what I'm seeing for the errors when trying to import:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 390, in import_pool
    'Failed to mount datasets after importing "%s" pool: %s', name_or_guid, str(e), exc_info=True
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 383, in import_pool
    zfs.import_pool(found, found.name, options, any_host=any_host)
  File "libzfs.pyx", line 870, in libzfs.ZFS.import_pool
libzfs.ZFSException: '/data/zfs' is not a valid directory
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 385, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1934, in import_pool
    'cachefile': ZPOOL_CACHE_FILE,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1141, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1081, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1101, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1036, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1010, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ("'/data/zfs' is not a valid directory",)
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
(Screenshots of the import attempt attached.)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Do you have any datasets with the mountpoint set to /data/zfs? That might be borking things, since the canonical mountpoint for TrueNAS is /mnt/poolname.

If that's what's going on, this should be recoverable by manually importing the pool from the CLI using the -N (don't mount anything) flag, and then changing the offending mountpoints.

Sort of spitballing here, I'm not sure what libzfs' behavior is supposed to be, exactly. Might be worth a bug report.
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
Do you have any datasets with the mountpoint set to /data/zfs? That might be borking things, since the canonical mountpoint for TrueNAS is /mnt/poolname.

If that's what's going on, this should be recoverable by manually importing the pool from the CLI using the -N (don't mount anything) flag, and then changing the offending mountpoints.

Sort of spitballing here, I'm not sure what libzfs' behavior is supposed to be, exactly. Might be worth a bug report.
If you think it's worth a try to manually import the pool from the CLI, would you mind helping me with those commands? I don't recall if I had anything set to /data/zfs as a mount point. I'd like to try everything possible before starting fresh and wiping the drives.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Wiping is an extreme option that's completely off the table for now, don't worry.

List importable pools with zpool import, then import your pool with zpool import -N poolname. From there, look at the mountpoint column in the output of zfs list.
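
Concretely, assuming the pool really is named Microserver (as your import dialog suggests), it would look roughly like this; treat it as a sketch, since which dataset actually needs its mountpoint fixed depends on what the list shows:

zpool import                                       # show pools that are available for import
zpool import -N Microserver                        # import the pool without mounting any datasets
zfs list -r -o name,mountpoint Microserver         # look for anything with its mountpoint set to /data/zfs
zfs set mountpoint=/mnt/Microserver Microserver    # example only: move the offending mountpoint back under /mnt
zpool export Microserver                           # export again, then re-import through the GUI so the middleware picks it up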
 

ramair2k

Dabbler
Joined
Apr 21, 2021
Messages
10
I have good news! I rebooted the server for giggles and tried to import the pool again... it worked! All my files are there and the server is back up and running. Thanks so much for all the help!
 