New User - wow, eyes opened!

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Hi, new user here, based in East Tennessee. I've been very impressed with TrueNAS so far. A few bumps in the road, but that's to be expected.

For a year or two now, I've had a VMware ESXi (free version) installation on the rig that is now my SCALE box. I was running 3D printer control software and using it as an NVR for security cameras at home. The hardware was an HP DL360 G9; I bought two of them on eBay for ridiculously cheap (like $275 ea). I was originally going to sell the other one to recoup some of the money, but then figured it was a good source of spares, at the very least, so kept it. Getting more into photography and video, I realized I needed to up my game on storage, so I thought I'd set up the spare one as a NAS box. Enter FreeNAS, which I then realized was now TrueNAS CORE.

Well... after installing CORE, it was like night and day compared with ESXi for my use case. I was running ESXi 7u3, which, to be honest, looks like something from the 90s. CORE's GUI is just beautiful, responsive and easy to use. I didn't actually realize it had virtualization capability; I spun up a couple of Ubuntu Server test VMs quite easily, although the bhyve framework is a bit Noddy in comparison to ESXi. But it's a NAS box, not really a virtualization platform.

After the epiphany, I thought "why not" - I backed up my VMs on the ESXi platform, blew it away and installed SCALE on it. Wow! Even better feature set, with three different ways to run apps - Kubernetes, Docker and full OS virtualization, as well as the ZFS NAS features of CORE. Holy cow, this is sooo much better than ESXi.

I didn't even bother trying to get my NVR VM (Agent DVR) resurrected, because there is a Kubernetes version of it in the catalog. I had it up and running in 5 minutes, imported my config and was viewing cameras within a minute. Wow again!

For disks, I had 4x 1TB SATA SSDs in the ESXi box in a RAID5 config using the P440ar controller in the DL360, giving me about 2.6TB. I also had a bunch of proper enterprise SAS hard disks left over from an old job, small capacity (300GB and 146GB), that were unused. I split the SSDs, two for each box, and also put in a few of the spinny disks just to test with. The controllers went into HBA mode, so both the SCALE and CORE boxes have SSD and HD storage.

I rapidly realized that the network was the limiting factor: I was getting 118MB/s from clients, saturating the 1Gb links. I tried aggregating a couple of them (the DL360s have 4x 1Gb on-board), but what I had not realized is that with 2x 1Gb in a LAG config, you don't get 2Gb per flow; each flow (for example, sending a 10GB file to the NAS) will only use one 1Gb link. The advantage of the LAG group is that two clients could do that at the same time. That's not my use case - I wanted rapid transfer of large files from one client to the NAS.
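The per-flow cap comes from the LAG transmit hash: every packet of a given flow hashes to the same member link. A toy sketch of the idea (the hash function here is completely made up; real switches and OSes use their own, implementation-specific hashes):

```shell
# Toy model of a LAG transmit hash. Each flow (src/dst pair) maps to exactly
# one member link, so a single large file transfer can never exceed 1Gb/s on
# a 2x1Gb lagg; a second client may (or may not) land on the other link.
pick_link() {
  src=$1; dst=$2; n_links=$3
  # deterministic toy hash over the flow tuple
  sum=$(printf '%s-%s' "$src" "$dst" | cksum | cut -d' ' -f1)
  echo $((sum % n_links))
}

pick_link 192.168.1.10 192.168.1.2 2   # client A: same link every time
pick_link 192.168.1.11 192.168.1.2 2   # client B: may hash to the other link
```

You can see the effect with iperf3: a single stream stays capped at one link's speed, while parallel streams (the `-P` option) can push the aggregate above it.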

Off to eBay again, and I picked up a couple of HP 530 2x10Gb NICs for the princely sum of $40 each, delivered, including SFPs. They're only PCIe Gen 2, but with 8 lanes, that's plenty. My switch had 8x 10Gb ports just sitting there, and I had the SFPs for it and fiber patch cables in a box of "I might need these some day" stuff :) The NICs were brand new, never used, as were the SFPs. These would have been $1000 NICs when they were current HPE models.

Back to the disks... so I took a punt. The SSDs (plain old WD Blue) were obviously faster than the spinny disks, and use way less power. So I bought 16x 2TB cheap SSDs from China on eBay, and I'm waiting for them to be delivered. While I don't expect much from them, I also doubt there is a plant in China with a sign outside that says "we make terrible SSDs". We'll see; as I said, a bit of a punt. I can always re-sell them if they're that bad.

Anyway, there's my introduction. 1x TrueNAS SCALE, 1x TrueNAS CORE.

Looking forward to being part of the community!
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Forgot to mention...

The DL360s install just fine to USB devices - I tried a flash card (internal slot), a USB stick (both internal and external), a USB SSD (external) and NVMe (internal). All installed OK, but none would boot.

I figured out how to do it by accident, though: leave the NVMe detached and boot from the USB installer. At the point where the installer detects the installed SSDs/HDs and asks where you want to install TrueNAS, insert the desired drive (in my case, NVMe on the internal USB3 slot). Then hit "back" on the installer, and "forward" again. Now it detects the NVMe; install there and it will boot.

A 512GB NVMe boot disk is a bit of overkill, but that's what I happened to have on hand from my box of Raspberry Pi bits. I will probably go back and put a smaller NVMe (say 128GB) in, freeing the 512s up for RPi projects. But not today - spent enough :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

Be aware that the P440ar is a RAID controller, regardless of any insipid "HBA mode", and isn't acceptable for use with TrueNAS. Please see


In particular, it uses the spectacularly crappy CCISS driver, which is known for burping, farting, and occasionally corrupting data.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Be aware that the P440ar is a RAID controller, regardless of any insipid "HBA mode", and isn't acceptable for use with TrueNAS. Please see


In particular, it uses the spectacularly crappy CCISS driver, which is known for burping, farting, and occasionally corrupting data.

Yes, I'm aware, thanks. But that's what I had on hand, so as and when I get further along this adventure, I'll figure out what controller will work with the HPs. They are quite particular about what cards they will accept. Some non-HP cards cause the system to spin the fans up to max, and that's completely unacceptable too.

It might be that I end up building a third box using "approved" hardware, and migrate away from the HPs. But they're working pretty darn well for now.

Another limiting factor is the 8x SFF drive bays. If I had set out to build a NAS box (rather than the virtualization platform that I did), I would not have started with a 1U 8x SFF enclosure - more likely a 2U or 4U box with 24 bays, and a faster clock speed on the processors rather than more cores.

Again though, it's what I started with, and I'm very thankful to be where I am, despite the imperfections.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'll figure out what controller will work with the HPs.

You're far from the first HP owner that has appeared in the forums. I believe the normal suggestion is the HP H220, P/N 650933-B21 according to one of the HP experts. You should be able to find them for about $40/ea used.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
You're far from the first HP owner that has appeared in the forums. I believe the normal suggestion is the HP H220, P/N 650933-B21 according to one of the HP experts. You should be able to find them for about $40/ea used.

Thanks for the recommendation. I've been using Proliants (among others) at work for 15-20 years now, through several generations. The one thing that seems consistent is their preference for original HPE parts, and their often outright rejection of third-party hardware.

The fact that there is an HPE card that's known to work with TrueNAS is much appreciated. That they're available for cheap is a bonus! I'll confirm through searching the forum that that's the most appropriate card for my specific models, and get them ahead of the SSDs' arrival.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The fact that there is an HPE card that's known to work with TrueNAS is much appreciated.

Still has to be crossflashed to IT mode and whatever the current "correct" firmware version is. If you run into problems, there are a few people here who know more about current HP gear than I do. I stopped buying them back around 2009, shortly before the firmware download support contract debacle.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Still has to be crossflashed to IT mode and whatever the current "correct" firmware version is. If you run into problems, there are a few people here who know more about current HP gear than I do. I stopped buying them back around 2009, shortly before the firmware download support contract debacle.

"Debacle" is a good word for that! Still, HPE do make some nice hardware; I just don't care for their business practices. Why do I need a support contract to get firmware for a 7-year-old server that I'm using at home? Seems like they could do an "honor" deal and provide the updates either free or for a nominal charge, as long as it's not for commercial use. But no. Meanwhile, their superseded hardware is cheap as chips on the secondary market.

Looking on eBay, those H220s can be had for $37, already flashed to the IT firmware. Given that they're PCIe 3, with the small bracket (the 10Gb NIC takes the long slot), I think those will work just fine. At this point I wish my boxes were DL380s, with more PCIe space and LFF drive bays, but hey ho.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'll confirm through searching the forum that that's the most appropriate card for my specific models
Welcome!

The H220 was originally intended for the HPE Gen8 server series and uses the LSI SAS2308 chipset and driver, which have a very long and proven history of success with TrueNAS (and FreeNAS before it). The card HPE chose to succeed it for Gen9, the H240, uses a different chipset from PMC/MicroSemi and (I believe) the same much-maligned ciss driver, which hasn't had nearly the same track record and billions of run-hours. For that reason, the older card continues to be recommended.

Several enterprising users have created a patch set for iLO4 nicknamed "Silence of the Fans" which you may also find relevant for running HPE gear in a home environment:


It might take some tuning, but let's see if we can make your "Iron Duke" more of a "3800 Series II" - if that username is a car reference as I surmise.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Welcome!

The H220 was originally intended for the HPE Gen8 server series and uses the LSI SAS2308 chipset and driver, which have a very long and proven history of success with TrueNAS (and FreeNAS before it). The card HPE chose to succeed it for Gen9, the H240, uses a different chipset from PMC/MicroSemi and (I believe) the same much-maligned ciss driver, which hasn't had nearly the same track record and billions of run-hours. For that reason, the older card continues to be recommended.

Several enterprising users have created a patch set for iLO4 nicknamed "Silence of the Fans" which you may also find relevant for running HPE gear in a home environment:


It might take some tuning, but let's see if we can make your "Iron Duke" more of a "3800 Series II" - if that username is a car reference as I surmise.

Many thanks - that is great info on the H220s' compatibility, and also "Silence of the Fans" - love it! Will the re-flashed H220s (thus looking like vanilla LSI cards) cause the system to do the "max fan" thing, and thus need the SotF patch, or is that patch for other cards that cause the issue?

As for my username, no, not a car/engine. I am originally British, and ex-military, though I've been in the US 18 years this year (actually quite close to iX's Maryville location). I'm also a Sales Engineer in telecom, which I've been in for a long time, and the switches, servers etc. are colloquially known as "big iron". So the username refers to this guy:

Sir_Arthur_Wellesley,_1st_Duke_of_Wellington.png


Field Marshal His Grace The Duke of Wellington, KG GCB GCH PC FRS. Known as the "Iron Duke". Bit of a play on words between big iron and me being ex-military. Though not a Field Marshal (5* rank)!
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Small update - 2x220s on order! $34 ea, brand new and already flashed to IT mode :)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Many thanks, that is great info on the 220s' compatibility, and also "Silence of the Fans" - love it! Will the re-flashed 220s (thus looking like a vanilla LSI card) cause the system to do the "max fan" thing, and thus need the SotF patch, or is that patch for other cards that cause the issue?

Anecdotally, it seems that the Gen9 systems are a little more tolerant of the reflashed H220s than the Gen8s were - but having that extra ace up your sleeve in the form of the fan adjustment scripts won't hurt.

As for my username, no, not a car/engine. I am originally British, and ex-military, though I've been in the US 18 years this year (actually quite close to IX's Maryville location). I'm also a Sales Engineer in telecom, which I've been in for a long time, and the switches, servers etc are colloquially known as "big iron". So the username refers to this guy:

View attachment 62820

Field Marshal His Grace The Duke of Wellington, KG GCB GCH PC FRS. Known as the "Iron Duke". Bit of a play on words between big iron and me being ex-military. Though not a Field Marshal (5* rank)!
I was going to mention the Maryville office being in TN as well - but despite my years in IT, when I hear "Big Iron" I still think of Marty Robbins first:


Small update - 2x220s on order! $34 ea, brand new and already flashed to IT mode :)

Looking forward to further build updates!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Small update - 2x220s on order! $34 ea, brand new and already flashed to IT mode :)

Thanks for listening. It gets dreary when people want to argue the point. It's kinda silly too, considering how cheap the fix is.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Thanks for listening. It gets dreary when people want to argue the point. It's kinda silly too, considering how cheap the fix is.

I definitely get the point: a hardware RAID controller has its place, but not in a ZFS system. Even in "HBA mode", it still adds a layer of obfuscation that renders some of ZFS's features moot; somewhat pointless. But that's all I had to get going with. Given the rapid info from yourself and @HoneyBadger re the H220s, that was a no-brainer, so thank you.
 

SecCon

Contributor
Joined
Dec 16, 2017
Messages
175
I get the IronDuke, but do you drive a Jaaaag as well?

I drive a Sportbrake.

20191009144042-f64adf20-xx.jpg
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
I get the IronDuke, but do you drive a Jaaaag as well?

I drive a Sportbrake.

20191009144042-f64adf20-xx.jpg

Very nice; a good bit newer than mine...

IMG_0513.jpeg


2006 XK8 Victory.

Full disclosure: I only used the Growler as my avatar because I'm on a different forum which also uses XenForo and has the same circular avatars, so I had one ready to go. I'll be selling the Jag this spring; I just don't drive it enough. Time to let someone else enjoy it.
 

SecCon

Contributor
Joined
Dec 16, 2017
Messages
175
My wife is after a X(K)150, but that one is beautiful... congrats!
 
Last edited:

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Kubernetes, Docker and full OS virtualization
Careful with docker, it’s going away in the next release. https://ixsystems.atlassian.net/plugins/servlet/mobile?originPath=/browse/NAS-109044#issue/NAS-109044

This might make docker and docker compose even easier as it won't step on k3s any longer, and we may be able to install it on startup via apt with a script - but that'd be unsupported and heavily TBD. Careful with native Docker stuff; Docker in a VM all day long, of course.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Careful with docker, it’s going away in the next release. https://ixsystems.atlassian.net/plugins/servlet/mobile?originPath=/browse/NAS-109044#issue/NAS-109044

This might make docker and docker compose even easier as it won’t step on k3s any longer and we may be able to install it on startup via apt with a script - but that’d be unsupported and heavily tbd. Careful with native docker stuff. Docker in a VM all day long of course.

Very interesting, thanks. K8s seems to be working for me, but I have a bit of a blocking issue that's probably down to me not having put enough time into figuring it out: multiple instances of a K8s application.

My use case is that I have two 3D printers. Octoprint, which manages prints for them, is excellent. Officially it doesn't support multiple printers; it's a 1:1 relationship with its own printer. I used to have each one with its own Raspberry Pi running OctoPi (a version of Octoprint packaged specifically for the Pi), and then I did a fairly well-known hack to get two Octoprint instances on the same Pi, thus saving a Raspberry Pi. Then I realized I didn't need the Pi at all - why not use a DL360? So I had two Ubuntu VMs under ESXi, each with Octoprint, and all was good.

Now, moving to SCALE, I can get one Octoprint instance up as a K8s app, but I can't get another one up. I can set it to two replicas, but they don't seem to be separate. I really don't know enough about K8s; it may be a setting somewhere that I'm missing. There is a workaround, of course - I could keep the one K8s app and spin up another Ubuntu VM to run Octoprint within that. But the inner geek in me wants two K8s apps, because it's more elegant and a mental challenge to figure out.
 

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
In a more general sense, I'm in a bit of a holding pattern with my build. The H220s have arrived, and one is fitted. I've made the decision not to proceed with CORE and to stick with SCALE on both boxes. It seems clear that's the future, and although CORE is considerably more mature, SCALE is much better featured for what I want (i.e. NAS and VMs on the same hardware).

What's holding me up is that I'm waiting on 16x 2TB SSDs, which are literally on the slow boat from China. However, before I ordered the SSDs, I acquired a job lot of 900GB enterprise 10K SAS HDDs. I knew these were formatted with 520-byte sectors, not 512, so I used the smaller drives I already had just to play with TrueNAS. Now I need to stand up a larger amount of storage to back up an iMac that I'll be selling, which has 2.5TB of data on it.

Enter the 900GB drives. So now I've got my ex-CORE DL360 with 1x 1TB SSD and 7x 900GB SAS drives. I put Ubuntu on the SSD, and I'm reformatting all 20 of the 900GB drives that I have. It's a fairly slow process, at about 2.5hrs per drive, although I can do 7 in parallel (with 7 ssh windows open!).
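For anyone following along, the 520-to-512-byte reformat is the sort of job sg_format (from the sg3_utils package) handles. A minimal sketch, assuming Linux sg device names like /dev/sg1 (yours will differ), with a DRY_RUN guard so you can check the command before wiping a drive:

```shell
# Reformat a 520-byte-sector SAS drive to 512-byte sectors with sg_format.
# DESTRUCTIVE: erases the drive. Set DRY_RUN=1 to just print the command.
reformat_512() {
  dev=$1
  cmd="sg_format --format --size=512 $dev"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

# Each format takes hours, so kick off several drives in parallel, e.g.:
# for dev in /dev/sg1 /dev/sg2 /dev/sg3; do reformat_512 "$dev" & done; wait
DRY_RUN=1
reformat_512 /dev/sg1   # prints: sg_format --format --size=512 /dev/sg1
```

Backgrounding one sg_format per drive replaces the seven-ssh-windows approach with a single session.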

The drives are now connected through the H220, and the P440ar is removed. However, it was a bit of a trial to get Ubuntu to boot. I was missing a simple piece of information - the H220 needs to have its boot drive set in the LSI Config utility. Even if there is just one drive in the system, the LSI card will not boot it unless it is set as the boot drive. Once I’d researched it and hit “Option-B” in the config utility, all was well.

So now the plan: finish reformatting the SAS drives (should be done today), fully populate the ex-CORE box with them, then put SCALE on it. Back up the iMac to it, then re-format the iMac for sale. When the SSDs turn up, fully populate the other DL360 with them, install SCALE and move the iMac data to this "new" box. Then erase and depopulate the SAS drives, replace them with SSDs, and sell the SAS drives.

At the end of it, I should have two SCALE boxes with the same drive config (8x 2TB SSDs), one with 96GB RAM and the other with 32GB. I'll run VMs and K8s apps on the 96GB box; the other will be mostly a NAS, unless there's some resource clash (I'm thinking USB) that would make it easier to run a particular app there. I'm a little wary of the RAM and might need to up both, as I'm aware ZFS is a bit of a RAM whore, but I've spent enough for now. Perhaps the sale of the SAS drives will yield enough shekels for another 64GB apiece.
 