"Error getting available space" after Reboot - need to add some delay?

Status
Not open for further replies.

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
I'm using FreeNAS 8.3 in an "all-in-one" ESXi 5.1 environment. I'm passing an Areca ARC1230 PCI-E controller card (configured in JBOD mode) directly through to the FreeNAS guest VM. FreeNAS installed just fine, and I was able to create volumes, datasets, shares, etc. Everything works great UNTIL I reboot FreeNAS. At that point everything stops working. When I go to the GUI and look at the Active Volumes tab, it shows:

Code:
tank1 	/mnt/tank1 	None (Error) 	Error getting available space 	Error getting total space 	HEALTHY 


If I export/import the Volume, then things work great again until the next bootup.

I believe this is what's happening: the FreeNAS VM starts to boot and the Areca controller card begins initializing. While the card is still initializing, FreeNAS finishes booting. Only after boot does the Areca card complete initialization and present the drives to FreeNAS. All of the daemons are "dead" (rsyncd, for example) until I export/import the volume, at which point everything works again.

Is there a way to insert an "artificial delay" after the controller card is initialized but before the rest of the FreeNAS services start? I'm thinking it'll need 30-60 seconds of delay before continuing. Here are the last few lines of dmesg output showing the hotplug events:

Code:
Trying to mount root from ufs:/dev/ufs/FreeNASs1a
ZFS filesystem version 5
ZFS storage pool version 28
VMware memory control driver initialized
arcmsr_dr_handle: Target=0, lun=0, Plug-IN!!!
arcmsr_dr_handle: Target=1, lun=0, Plug-IN!!!
arcmsr_dr_handle: Target=2, lun=0, Plug-IN!!!
arcmsr_dr_handle: Target=3, lun=0, Plug-IN!!!
arcmsr_dr_handle: Target=4, lun=0, Plug-IN!!!
arcmsr_dr_handle: Target=5, lun=0, Plug-IN!!!
[root@freenas] ~#


Thanks in advance.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
You'll probably want to add a "sleep 60" to one of the files in /conf/base/etc/rc.d.

I'll need some time to poke around and figure out which one.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I played with ESXi 5.1 earlier this month with an Areca 1280ML-24 and hit your error at one point. I could never resolve it. Here are some random thoughts that may (or may not) help. I ended up abandoning ESXi completely after determining that PCI passthrough isn't fully functional for my motherboard, the controller, or both.

1. I don't believe you can add a delay in the boot cycle. (Ignore this... protosd looks like he knows how.)
2. I couldn't get my card to function properly despite emails to Areca and my motherboard manufacturer (Gigabyte in my case).
3. FreeNAS is best used with PCIe passthrough. I believe there are two forms of passthrough: PCIe and disk. I didn't play with disk passthrough at all since I decided it was PCIe or bust. In my case it was bust. I could be completely wrong about the disk thing, though.
4. I could never get the system reliable enough with PCIe passthrough to trust it with any data.
5. Emails to Areca are pointless since the card is discontinued and they no longer support it unless you have a support contract (read: BIG $$$$$$$). They'll simply ask you to use the latest driver (the FreeNAS driver is newer than the one on Areca's website!) and the latest BIOS (I was already on it). After that, nothing more except "if you buy our newest line, they are fully supported".
6. I had IRQ storms and all sorts of other errors I couldn't resolve. My thread is at http://forums.freenas.org/showthread.php?10216-Areca-1280ML-24-IRQ-storm-and-CCB-time-out.

jgreco is a wiz with ESXi, and even he couldn't help me. He eventually told me that he thinks the hardware (motherboard, controller, or a combination of both) just won't work with VT-d (the BIOS support for PCIe passthrough). Hopefully jgreco will provide some advice for this thread... maybe he'll know how to fix it for you.

If you do get it working, you should definitely post back your fix. I'd almost be willing to experiment with it just to see if it fixes my problem. I had never used ESXi until this month, but I've had years of experience with virtualization.

Good luck!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Ok,

I don't have any way to test this, but it shouldn't cause any problems if it doesn't work.

First, mount "/" read-write with: mount -uw /

Edit /conf/base/etc/rc.d/ix-zfs

Add sleep 60 (or whatever value you think gives the controller enough time) where you see ** HERE ** in the file below, then save the file and reboot. Adjust the value as necessary; a sketch of the edited lines follows the listing.

Code:
#!/bin/sh
#
# $FreeBSD$
#

# PROVIDE: ix-zfs
# REQUIRE: hostid mountcritlocal
# BEFORE: zfs

. /etc/rc.subr

#
# Generate fstab right before mountlate.
#
import_zpools()
{
        local IFS="|"
        local f="vol_name vol_guid"
        local sf=$(var_to_sf $f)
        local rc=1
        # ** HERE **
        if [ ! -d "/data/zfs" ]; then
                mkdir /data/zfs || true
        fi
        ${FREENAS_SQLITE_CMD} ${FREENAS_CONFIG} "SELECT $sf FROM storage_volume WHERE vol_fstype = 'ZFS'" | \
        while eval read -r $f; do
                if [ -n "${vol_guid}" ]; then
                        /sbin/zpool import -o cachefile=none -R /mnt -f ${vol_guid}
                        rc=$?
                fi
                if [ ${rc} -ne 0 ]; then
                        /sbin/zpool import -o cachefile=none -R /mnt -f ${vol_name}
                fi
                /sbin/zpool set cachefile=/data/zfs/zpool.cache ${vol_name}
                # Fixup mountpoints
                [ -d /mnt/mnt ] && /sbin/zfs inherit -r mountpoint ${vol_name}
        done
}

name="ix-zfs"
start_cmd='import_zpools'
stop_cmd=':'

load_rc_config $name
run_rc_command "$1"
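
For reference, the lines around ** HERE ** would look something like this after the edit (60 seconds is just a starting guess; tune it to however long your controller takes to present its disks):

Code:
        local rc=1
        # Wait for the passed-through controller to finish presenting
        # its disks before the pools get imported (adjust as needed)
        sleep 60
        if [ ! -d "/data/zfs" ]; then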
 

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
Add sleep 60 (or whatever value you think gives the controller enough time) where you see ** HERE ** in the file below, then save the file and reboot. Adjust the value as necessary.

That worked! I needed to do a few more things, but that did the trick.

I tried 60s the first time and it wasn't quite enough. I bumped it to 120s and got a different error message (something about trying to import a pool that already exists). I destroyed that volume and re-created a new one with a different name, just to be sure. This time, with the 120s delay, the volume survived the reboot!!

You're a lifesaver protosd, thanks a ton!
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Glad to hear it! :)

Make a backup copy of that file, since it's likely to be overwritten during an upgrade.
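
Something like this works; I'm assuming your pool is still mounted at /mnt/tank1, but any location that survives an upgrade is fine:

Code:
cp -p /conf/base/etc/rc.d/ix-zfs /mnt/tank1/ix-zfs.with-sleep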
 

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
For the purposes of documentation: in the 9.2.0 release this file has moved to /conf/base/etc/ix.rc.d/ix-zfs.
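
So on 9.2.0 the same edit goes in the new location, something like this (vi is just my editor of choice; any editor works):

Code:
mount -uw /
vi /conf/base/etc/ix.rc.d/ix-zfs    # add the sleep in import_zpools() as before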
 

hidden72

Dabbler
Joined
Aug 8, 2011
Messages
22
That tweak survived many, many upgrades, but it finally got wiped out after today's upgrade (9/29/2015). The same fix still works great!
 