FreeNAS 11 Install "Error: no symbol table."

Status
Not open for further replies.

chipped

Dabbler
Joined
May 2, 2016
Messages
29
Hi,

See attached image.

I get this error when I try to boot the install disk for FreeNAS 11-STABLE.

Gigabyte H77-D3H-MVP running latest BIOS.

Tried UEFI only and Legacy only, same result.

Using USB DVD drive to install, also tried from USB.

Tried disconnecting all drives.

Are there some boot flags I can try? Or BIOS/UEFI settings?

Cheers.
 

Attachments

  • UNADJUSTEDNONRAW_thumb_ea3b.jpg (197.6 KB)

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
This is very strange. Are you sure your hardware is even supported? Even if it is, I'd say don't use it, though you can try. That motherboard just isn't a good option.

 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
After some Googling, I think I might have a 32-bit EFI, even though the CPU is 64-bit and the hardware is otherwise compatible.

Any thoughts?
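For what it's worth, one way to confirm the firmware bitness is from a Linux live USB booted on the same machine. This is just a sketch; the sysfs path is standard on Linux, but it only exists when the system was actually booted via EFI.

```shell
#!/bin/sh
# Report EFI firmware bitness. A 64-bit CPU can still sit on top of a
# 32-bit EFI, which a 64-bit EFI loader can't be launched from.
efi_bitness() {
    if [ -r /sys/firmware/efi/fw_platform_size ]; then
        cat /sys/firmware/efi/fw_platform_size   # prints 32 or 64
    else
        echo "legacy"   # not booted via EFI at all
    fi
}
efi_bitness
```

If this prints 32, a UEFI install is out and Legacy/CSM boot is the only route for this installer.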
 

iammykyl

Explorer
Joined
Apr 10, 2017
Messages
51
Your motherboard has a UEFI DualBIOS™. See page 46 of the user manual, under "PCI ROM Priority": it lets you choose which Option ROM to launch, with options Legacy ROM and EFI Compatible ROM (default: EFI Compatible ROM).
Try setting the BIOS to defaults, use Rufus to create the installer (accept all the default settings), and use only the USB 2.0 ports.
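If Rufus gives the same result, you can also write the image raw from another Unix box to rule out the imaging tool. Sketch only: the image filename and device below are placeholders, and dd will happily overwrite the wrong disk if you point it there.

```shell
#!/bin/sh
# Build the raw-write command for the installer image.
# IMG and DEV are placeholders -- substitute your real file and USB stick
# (double-check the device with lsblk before running anything for real).
IMG="FreeNAS-11.0-RELEASE.iso"
DEV="/dev/sdX"
CMD="dd if=$IMG of=$DEV bs=1M conv=sync"
echo "$CMD"   # printed, not executed, since DEV is a placeholder
```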
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
Thanks for the suggestion.

However it didn’t help, same message.

I’ve decided to jump ship and go to Rockstor. I think BTRFS will suit me better.

Plus their bootloader actually works :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That's part of why I haven't upgraded to 11 yet, though it seems to be much more stable now. But I'd be very hesitant to trust my data to btrfs.
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
That's part of why I haven't upgraded to 11 yet, though it seems to be much more stable now. But I'd be very hesitant to trust my data to btrfs.
I’ve been watching it for years; it’s stable now. Synology even ships it as the default in their NAS units.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
it’s stable now
If you say so. They don't say the same of anything but single-disk volumes, or mirrors where both disks are working. And Synology doesn't use it as a RAID manager, just as a filesystem. But your data, your choice. Personally, if I wanted to use Linux, I'd be looking for something that used ZFS on Linux.
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
If you say so. They don't say the same of anything but single-disk volumes, or mirrors where both disks are working. And Synology doesn't use it as a RAID manager, just as a filesystem. But your data, your choice. Personally, if I wanted to use Linux, I'd be looking for something that used ZFS on Linux.

Fair enough, you're right there. RAID 5/6 are not production-ready. However, I could also argue the opposite for ZFS: it's only production-ready.

You can't even add/remove disks in a vdev or change its RAID-Z level. That is pathetic, and the fact is, the only people who can implement it just don't care about it anymore.

Home users want this feature, and I'd say it is the Achilles heel of ZFS.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
But if RAID5/6 aren't production-ready (in a ten-year-old filesystem/RAID manager), you can't safely add/remove disks or change RAID level with btrfs either.

You're right that there are limitations with ZFS. The inability to add/remove disks from a RAID set or change RAID level is definitely one of them for small (i.e., home) users. For many home users, mirrors are probably a better storage configuration--just whack in another pair of disks (and/or replace a pair with larger disks) when you need to expand. It certainly would be nice to be able to turn a three-disk RAIDZ1 into a four-disk RAIDZ1, particularly for home users who may not have carefully planned their storage before building. But I keep saying "for home users", and there's a reason for that: that's not who ZFS was designed for. It was designed for enterprise use, and in that setting, those "pathetic" limitations you mention really aren't a problem.

the fact is, the only people who can implement it just don't care about it anymore.

That isn't close to being a "fact;" it's your opinion. You can implement redundant ZFS with two disks. You can easily and safely expand it with two more disks, or by replacing your two disks with two larger disks. That isn't a high hurdle even for the modest home user. You can buy a proper turnkey server, new, for US$200. That likewise isn't a high hurdle. You don't have to be Microsoft/Amazon/Google to be able to take advantage of ZFS.
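To put commands to the expansion paths just described (pool name and device names here are hypothetical, and the commands are printed rather than executed):

```shell
#!/bin/sh
# Sketch: the two ways to grow a mirrored pool described above.
# "tank" and the ada* devices are made-up example names.
CMDS="zpool add tank mirror ada2 ada3
zpool set autoexpand=on tank
zpool replace tank ada0 ada4
zpool replace tank ada1 ada5"
# "zpool add" grows the pool with a second mirror vdev; the "replace"
# pair swaps each disk of the existing mirror for a larger one (let each
# resilver finish before starting the next). With autoexpand=on, the
# extra capacity appears once both disks are replaced.
printf '%s\n' "$CMDS"
```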

But this is getting more back-and-forth than I'd really intended. If you know and trust btrfs, and are familiar with its (self-described; my information here is primarily coming from the btrfs wiki) limitations, and want to use that filesystem for your data, go for it. I prefer the known stability of ZFS.
 
Last edited by a moderator:

chipped

Dabbler
Joined
May 2, 2016
Messages
29
For many home users, mirrors are probably a better storage configuration--just whack in another pair of disks (and/or replace a pair with larger disks) when you need to expand. It certainly would be nice to be able to turn a three-disk RAIDZ1 into a four-disk RAIDZ1, particularly for home users who may not have carefully planned their storage before building. But I keep saying "for home users", and there's a reason for that: that's not who ZFS was designed for. It was designed for enterprise use, and in that setting, those "pathetic" limitations you mention really aren't a problem.

Mirroring is wasteful for a home user: they lose 50% of their storage, and the extra performance is wasted on a Gigabit connection. Plus it costs a lot.

That isn't close to being a "fact;" it's your opinion.
Have you read the original blog post where one of the developers talks about it? I have; from memory it was almost ten years ago, maybe more. It will never happen; they don't care about it.

But this is getting more back-and-forth than I'd really intended. If you know and trust btrfs, and are familiar with its (self-described; my information here is primarily coming from the btrfs wiki) limitations, and want to use that filesystem for your data, go for it. I prefer the known stability of ZFS.

Thanks, it was good to have a bit of tech banter. Seriously though, good luck with FreeNAS.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Mirroring is wasteful for a home user,
Any redundancy is wasteful until you need it, then it isn't any more. Mirrors are the simplest and cheapest way to get redundancy with a small install. Of course you can use RAID5/6 (or RAIDZ1/2) with potentially a smaller penalty for redundancy, but then you have other problems: with btrfs it's unstable and poorly designed (e.g., parity isn't checksummed, they developed it with a write hole); with ZFS it's stable, but you're locked into that number of devices in the vdev.
Plus it costs a lot.
Disks are cheap, and if we're talking about a home user with presumably-small storage requirements, they're very cheap indeed. A 4 TB WD Red disk is US$137 today. A pair of them is US$274, which will give 4 TB of redundant storage. A 2 TB WD Red is US$85 today, so three are US$255 to get 4 TB in RAID5/RAIDZ1. The penalty for mirrors is $19. Balance that against ease of expansion and greater flexibility, and I think it's well worth it.
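Spelling out the arithmetic above (prices as quoted; shell used purely as a calculator):

```shell
#!/bin/sh
# Redundant 4 TB two ways, at the prices quoted above.
mirror_cost=$((137 * 2))   # two 4 TB WD Reds, mirrored
raidz1_cost=$((85 * 3))    # three 2 TB WD Reds, RAIDZ1
penalty=$((mirror_cost - raidz1_cost))
echo "mirror: \$$mirror_cost  raidz1: \$$raidz1_cost  penalty: \$$penalty"
# -> mirror: $274  raidz1: $255  penalty: $19
```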

With larger storage demands, mirrors don't make much sense unless there are other reasons (like high IOPS). But if we're talking home users, the "waste", if any, is minimal.
Have you read the original blog post where one of the developers talks about it?
Yes, but I'm now thinking I misread your earlier post. I read "the only people who can implement it just don't care about it anymore" with the "it" referring to a ZFS server--that is, the only people who can deal with its requirements are those who just don't care about the cost. I don't believe that's the case, as I don't think the cost needs to be very high. But now it sounds like the "it" refers to block pointer rewrite, which would be necessary to expand/shrink vdevs and/or change RAID levels--and in that case, I'm inclined to agree that it isn't likely to happen. I don't think it's the case that the devs don't care, necessarily, so much as that it would be too much work, if possible at all, to add it now. Though I understand they are working on the ability to remove vdevs from a pool, which would be a good step (especially for those people who insist on ignoring all the warnings and adding a single disk to their RAIDZ pools).

Of course, none of this has anything to do with your boot issues--those, I expect, are something to do with FreeBSD and your hardware, and have nothing to do with any filesystem.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Though I understand they are working on the ability to remove vdevs from a pool, which would be a good step (especially for those people who insist on ignoring all the warnings and adding a single disk to their RAIDZ pools).
Yeah, that's going to be a nasty hack, though.

The vdev's contents get allocated to a virtual vdev (yes, a virtual virtual device) which lives elsewhere on the pool, and there's a table that has this mapping.

If it happened to me, I'd destroy the pool and restore from backup anyway.
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
Disks are cheap, and if we're talking about a home user with presumably-small storage requirements, they're very cheap indeed. A 4 TB WD Red disk is US$137 today. A pair of them is US$274, which will give 4 TB of redundant storage. A 2 TB WD Red is US$85 today, so three are US$255 to get 4 TB in RAID5/RAIDZ1. The penalty for mirrors is $19. Balance that against ease of expansion and greater flexibility, and I think it's well worth it.

You're conveniently forgetting the other items:
- Bigger case
- Bigger power supply
- More RAM for ZFS
- Two drives to buy, not one, for every storage expansion
- More expensive motherboard or add-in card for more SATA ports

That's all the cost-related cons compared to BTRFS.

But now it sounds like the "it" refers to block pointer rewrite, which would be necessary to expand/shrink vdevs and/or change RAID levels--and in that case, I'm inclined to agree that it isn't likely to happen.

Yes, that's right.
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
The average user here starts with at least four hard drives, right?

Most motherboards have six SATA ports. When it's time to double their storage, they run into trouble with their case or motherboard not being able to accommodate the extra drives.

Mirrors are made for enterprises with drive slots to spare.

They're not ideal for the average user trying to store their media and backups.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That's all the cost related cons compared to BTRFS.
But btrfs doesn't actually avoid any of those right now, and it's doubtful that it will without a major rewrite of the RAID5/6 stuff. Not quite BPR complexity, but it's as complex as writing a new vdev type for ZFS which does RAIDZn but allows for mutability (at the cost of some performance and disk space).
 

chipped

Dabbler
Joined
May 2, 2016
Messages
29
But btrfs doesn't actually avoid any of those right now, and it's doubtful that it will without a major rewrite of the RAID5/6 stuff. Not quite BPR complexity, but it's as complex as writing a new vdev type for ZFS which does RAIDZn but allows for mutability (at the cost of some performance and disk space).
Sorry, I don't think we're on the same wavelength.

How doesn't it avoid those? BTRFS lets you resize your RAID 5/6: you can add/remove disks or even change the RAID scheme.
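For anyone following along, these are the btrfs operations being referred to. The mount point and devices are hypothetical placeholders, and the commands are printed rather than run:

```shell
#!/bin/sh
# Sketch: online reshaping with btrfs -- add a device, convert the
# RAID profile, remove a device. /mnt/pool and /dev/sd* are placeholders.
CMDS="btrfs device add /dev/sdd /mnt/pool
btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt/pool
btrfs device remove /dev/sdb /mnt/pool"
printf '%s\n' "$CMDS"
```

The balance step rewrites existing data into the new profile, which is what makes the reshape possible online.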
 