TrueNAS CORE 13 installation on IBM x3100 M4

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Now I am banging my head: why won't TrueNAS CORE-13.0-U3.1 install?
I already told you: Core uses the upstream FreeBSD BTX bootloader, which dropped support for many legacy PCI bridges. You're not going to have success banging your head against this wall.
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
Thanks for the double-check...
So far, support seems to be dropped ONLY for these specific IBM servers, because I can install on all the other old PCs here: an HP 7700, an ASUS Xeon board, and a Supermicro Xeon board, all older than 10 years. I wonder why Free/TrueNAS drops support for many PCI bridges (vague, not specific) when it's used by so many people globally.
I have two of these IBM servers at home, and since Free/TrueNAS CORE can't be installed on them, I have no choice but to stick with FreeNAS 11.3 for now, or maybe move over to SCALE, which I've noticed is slower than CORE; but performance isn't important to me as a home hobby user.
Thanks again to all for the prompt responses and info
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I wonder why Free/TrueNAS drops support for many PCI bridges (vague, not specific) when it's used by so many people globally.

Simple. Free/TrueNAS didn't drop this support. The iXsystems team does not write their own operating system, and instead they rely on whatever is in the upstream FreeBSD operating system.

I'm not exactly clear on what happened here, but I can tell you that there's enough drift in the world of PCs that it is impossible to remain compatible with everything all the time over the decades.
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
Well, as the name says, FreeNAS (now TrueNAS) is free and open source, and I have to take it for what it offers. Too bad for my two IBM servers where TrueNAS CORE is concerned.
Thanks for your response and your clarity
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I already told you: Core uses the upstream FreeBSD BTX bootloader, which dropped support for many legacy PCI bridges. You're not going to have success banging your head against this wall.
Is this really a thing? PCI bridges of all sorts should really still be considered widely used. And the server doesn't even have anything that would justify something exotic; it's just a few PCIe slots that shouldn't even need muxes, let alone switches or Conventional PCI bridges. The only bridges this thing should have are in the PCH.
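
If anyone wants to see exactly which bridges this box exposes, here's a minimal sketch, assuming you can boot it into SCALE or any Linux live image (the sysfs paths are Linux-only; PCI base class 0x06 covers bridges):

Code:
#!/usr/bin/env python3
# List PCI bridge devices via Linux sysfs.
# Each device directory exposes its config-space class code; a base class
# of 0x06 marks a bridge (0x0604 = PCI-to-PCI, 0x0601 = PCI-to-ISA, etc.).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    cls = int((dev / "class").read_text(), 16)
    if cls >> 16 == 0x06:  # base class 0x06: bridge
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}  class={cls:#08x}  vendor={vendor}  device={device}")

Comparing that list against a known-good Sandy Bridge box would at least tell us whether IBM put anything unusual in here.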
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Is this really a thing?

It could be, though I'd love a less vague reference. There are definitely changes going on in the boot sequence. For example, FreeBSD 13.0R-i386 won't even boot with less than 128 MB of RAM; the loader freaks out.

PCI bridges of all sorts should really still be considered widely used.

Sure, but, on the flip side of that coin, PCIe has been a major interconnection technology for ... what, 20 years now? And PCI before it. PCI really only lasted about 10-15 years before it "wore out", and we learned lots of lessons along the way. It caused some significant internal redesigns inside FreeBSD and Linux to support it, and not all of it was exactly standard. Remember ServerWorks?

I remember needing to get tweaks installed or committed to handle quirks early in the lifecycle of PCI{,e} and I'm not particularly shocked that as some of these changes get deprecated, things break.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I get all that, but we're talking about a Sandy Bridge system, with seemingly no special legacy features you wouldn't find on a Dell, HPE or Supermicro system of the same generation.
Maybe it's a power management setting or other arcane PCIe configuration in the system firmware? I did spend several hours yesterday fighting with Windows on a Haswell machine until it would boot, and it came down to weird power settings (story below).

So, story time. This was a Haswell desktop with an Asus Z97 Pro (WiFi AC) motherboard. It was in the shop, so to say, for dusting and an upgrade to an NVMe SSD.
Sidenote to my sidenote: only after starting this process did I remember/realize how crappy the Xeon E3 v3/v4 platform was in terms of PCIe, especially in the typical desktop boards of 2014. To add to the misery, SATA Express was a thing at this point and competed with M.2, so the M.2 socket is limited to PCIe 2.0 x2, which is barely better than SATA 6Gb/s (rough numbers after the story). Add in typical features like extra SATA controllers, extra USB 3.0 controllers and the WiFi card, and the PCH's pitiful 8 PCIe 2.0 lanes need all sorts of crazy switching and muxing to support all this junk, plus a truckload of PCIe slots mainly intended as performance art. Being used to my X99/Xeon E5 system, it was shocking how bad the equivalent desktop platform was.

Anyway, the SSD upgrade involved manually replicating the old install's partitioning to keep Windows happy. I fumbled this up several times because I did not engage my brain, but eventually got it right. Also had to repeat everything because I had to put the old SSD back in to install the NVMe driver, which Windows annoyingly only installs "as needed". Reinstalling Windows' chain of bootloaders is also a massive pain in the ass that needs to be done manually, but fine, whatever.

I get to the point where the session manager is loaded correctly... But then the system just hangs almost immediately. This was very weird because the old install worked fine with the same hardware (including with the NVMe SSD installed) and firmware settings, and Kubuntu 22.04 loaded up and worked just fine. Some fumbling around showed that Safe Mode with Networking would also hang, but without networking it would work. So, it's either Bluetooth, WiFi or Ethernet. After a bit, it became clear that WiFi was the problem.

That's when I went down the rabbit hole of looking at all the muxing and switching going on to get all the PCIe devices to work. I thought the SSD might be sitting on the same switch as the WiFi NIC, but that was not the case, it just hangs directly off of the PCH, with a mux to select between SATA Express and M.2. So I figured maybe the driver was corrupted or just buggy and dug up a slightly more recent version distributed by Dell, which is a bit of a pain to test because of the time needed to reboot. No luck there.

At this point, the problem was isolated and I was very close to just turning off WiFi. Then I remembered that I'd turned on some of the power management features of the PCIe buses - specifically, to let the OS control the power state, instead of having the system firmware do so. Turning that off fixed the issue! System firmware bug, WiFi driver bug, WiFi hardware bug, I don't know, but I want several hours of my life back.
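
Since the sidenote above leaned on the "barely better than SATA 6Gb/s" claim, here's the back-of-the-envelope math. Both links use 8b/10b encoding (10 line bits per data byte), and this ignores packet and protocol overhead, so treat the figures as upper bounds:

Code:
# PCIe 2.0 x2 vs. SATA 6Gb/s usable bandwidth, ignoring protocol overhead.
def usable_mb_s(gt_per_s: float, lanes: int = 1) -> float:
    # 8b/10b: divide the line rate by 10 to get data bytes per second.
    return gt_per_s * 1e9 / 10 * lanes / 1e6

print(f"PCIe 2.0 x2: {usable_mb_s(5.0, lanes=2):.0f} MB/s")  # -> 1000 MB/s
print(f"SATA 6Gb/s:  {usable_mb_s(6.0):.0f} MB/s")           # -> 600 MB/s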
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
Just to be clear for readers: my issue installing TrueNAS on my IBM x3100 M4 servers is ONLY with CORE TrueNAS-13.0-U3.1; the SCALE version installs successfully. I didn't/can't/won't try the Enterprise version.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just to be clear for readers: my issue installing TrueNAS on my IBM x3100 M4 servers is ONLY with CORE TrueNAS-13.0-U3.1; the SCALE version installs successfully.

Well, yeah, these operating systems have different heritages and different things are going to be supported.

I didn't/can't/won't try the Enterprise version

Which isn't even an option; TrueNAS Enterprise is only available to customers who have purchased TrueNAS Enterprise hardware platforms from iXsystems.
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
In my case there are always options:
0. Keep using FreeNAS 11.3 on my IBMs
1. Use TrueNAS SCALE on my IBMs
2. Buy a new/another server specifically for TrueNAS CORE
3. Purchase an Enterprise machine/license from iXsystems
In my home/hobby case I will personally skip 3.
Glad I have CORE up and running on two other machines here: a Supermicro Xeon board and an ASUS server board.
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
Out of curiosity, I tried CORE 12 (the older version) on my IBM machine, hoping it would install successfully.
It wouldn't even boot. I thought that if it installed successfully, I could perhaps upgrade to TrueNAS CORE-13.0-U3.1.
That was my final attempt with CORE, and I finally decided to go with SCALE on my IBMs.
I am quite satisfied with SCALE and glad that TrueNAS/Linux made it possible in SCALE to support a wider range of hardware, among other things PCI bridges.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That was my final attempt with CORE, and I finally decided to go with SCALE on my IBMs.
I am quite satisfied with SCALE and glad that TrueNAS/Linux made it possible in SCALE to support a wider range of hardware, among other things PCI bridges.

Implying that this is some sort of decision that TrueNAS made misrepresents the apparent situation here. There is no significant re-authorship of OS bits provided by the upstream operating systems. Nobody at iX decided to "support a wider range of hardware".
 

Netfreak

Dabbler
Joined
Jul 22, 2017
Messages
22
Actually, I wasn't implying anything. I'm just satisfied that SCALE can do what CORE can't ("pcib3: failed to allocate initial prefetch windows: 0xc0000000-0xc0ffffff") on my IBMs, which I can still use for NAS.
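For anyone curious what the kernel was actually asking for there, the range decodes to a modest 16 MiB prefetchable window up at the 3 GiB mark; why CORE's kernel can't place it while SCALE's can is the open question. The trivial arithmetic:

Code:
# Decode the window from the pcib3 error message above.
base, limit = 0xC0000000, 0xC0FFFFFF
size = limit - base + 1
print(f"{size // 2**20} MiB prefetch window at {base / 2**30:.0f} GiB")
# -> 16 MiB prefetch window at 3 GiB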
Thanks
 