OS SSD drive - Why?

asheenlevrai

Cadet
Joined
Sep 28, 2013
Messages
9
Hi :)

I have no experience with TrueNAS yet, but I'm currently considering using TrueNAS Scale to set up a NAS (or a couple of them) to store/share files, and maybe another rig for remote backups.

I assumed that TrueNAS would be like many other NAS OSes and would thus run its OS from a USB dongle. Apparently that was the recommendation back in the FreeNAS era of the project... I realize it is now recommended NOT to use a USB dongle but rather an SSD (or a SATA DOM) to increase both performance and reliability.

I understand the concerns but it seems to me that an SSD dedicated for the OS brings its own set of drawbacks:
- First of all, it's a waste of an SSD in terms of space (the OS requires 8-16 GB AFAIU). Even if the price of small SSDs is currently low (anything below 120 GB today is actually more expensive), most of the space will remain unused.
- It's also a waste of a SATA port. This is particularly relevant in rigs where SATA ports are a limiting factor.
- SSDs (especially cheap ones or old repurposed ones) are also prone to unpredictable/unannounced failure. Using a RAID1 SSD array for the OS (I don't even know if this is technically possible in TrueNAS Scale) would waste even more space and SATA ports.

It seems to me that a decent solution would be to implement something like what Synology DSM does, where the OS is located on a partition distributed among all (storage) drives, so a drive failure doesn't result in the loss of the OS. However, I understand that, performance-wise, this won't reach flash-storage levels, except maybe for rigs with many drives, which are probably not representative of most home users (running small two-disk setups).

I'm probably missing something here and I wonder what it is. Maybe something related to how ZFS works since I'm not familiar with it.

Thank you very much in advance for your feedback.
Best,
-a-
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This has been discussed to death.

It isn't going to happen. iXsystems is developing a product that they sell to enterprises, a market where absolutely efficient use of every bit of every resource isn't as important as the architectural cleanness of the current solution. They are selling systems with up to a thousand hard drives, and the "cost" of a pair of SATA ports and 120GB SATA SSD's in order to get reliable boot storage is nothing in such an environment.

You have to remember that they're not developing this OS for you. While they are happy to make nods to home users where convenient to do so, sharing disks means that you would need some mechanism to arbitrate what happened when a device that was a member of several pools failed, etc. From an automation point of view, this is a nightmarish can of worms, and there is no value to iXsystems in dedicating significant developer time to a feature that only benefits the OCD of some free users to "make better use" of a cheap resource.

There's nothing preventing you from manually partitioning the system in the manner you'd like, but it comes with the risk of things not working correctly in degraded situations.

Please refer to previous discussions of the topic and note that the horse has already been flogged to death.
 

mgoulet65

Explorer
Joined
Jun 15, 2021
Messages
95
I am a SOHO user of both Core and Scale. Each of my 2 rigs boots off mirrored SSDs. I use motherboard AHCI SATA ports for the boot drives, so I am not giving up any of my premium/fast ports. I used whatever was cheapest/fastest to ship at the time I built the 2 systems. SSDs are cheap enough that I just don't care that there is wasted space, given that I have a working solution that I likely won't have to worry about for quite some time.

I never built a NAS in the FreeNAS days, booting off USB. I wonder how much of the interest in that approach is simply habit/inertia?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
This has been discussed to death.
Especially lately--it's like someone turned the "complain about the OS architecture" dial up to 11.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
booting off USB. I wonder how much of the interest in that approach is simply habit/inertia?

Thumb drive booting of the NanoBSD style was deprecated the better part of a decade ago. The original theory when ZFS boot became "a thing" was that ZFS could still be used on the thumb drives, but practical realities showed that to be largely false. I suspect this was never an issue for iXsystems because they were using SATA DOM devices for boot, which are much closer to full SSD's than thumb drives.

There is actually nothing that stops you from using high endurance USB thumb drives for boot even today. It works fine. The problem is that cheap SSD's are almost always cheaper. This wasn't such a big deal when "cheap SSD's" were 30GB and 60GB, but now it is getting hard to find SSD's that are "small." Even the larger ones are cheap.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There is actually nothing that stops you from using high endurance USB thumb drives for boot even today. It works fine. The problem is that cheap SSD's are almost always cheaper.
This. A cheap, reputable SSD is almost always cheaper than a known-good USB flash drive.

The only real pain point is the SATA port... But modern motherboards typically have an M.2 slot wired for PCIe, low-end PCIe SSDs are about as cheap as SATA SSDs, and that slot usually wasn't going to be used for much of anything anyway.

Also, SSD failures aren't really that common. Certainly orders of magnitude more rare than USB flash drives failing. Mirroring the boot pool for home use is somewhat excessive, in my opinion. It can be done, it won't hurt, but you'll have a hard time finding anyone who's lost a boot SSD. Even pathological cases like Crucial MX500s with the crazy GC bug that sends write amplification through the roof are unlikely to be a problem in this application.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
you'll have a hard time finding anyone who's lost a boot SSD.
I haven't lost a boot SSD, but I have lost (without warning) the SSD that was running my jails. But as long as you have backups of your config database (and aren't running encryption), loss of a boot device isn't a big deal.
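
If you want to automate those config backups, here's a rough sketch of the kind of script I mean, pulling the config over the TrueNAS REST API from another machine (e.g. via cron). The hostname, API key, and the exact config/save endpoint and its behaviour are assumptions/placeholders here; check them against the API docs for your version before relying on it.

```python
# Rough sketch: fetch a TrueNAS config backup from another machine.
# The endpoint path, hostname, and API key are placeholders/assumptions --
# verify them against your TrueNAS version's API documentation.
import datetime

import requests

NAS_URL = "https://truenas.local"   # placeholder hostname
API_KEY = "your-api-key-here"       # placeholder API key created in the web UI


def save_config(dest_dir="."):
    """Ask the NAS for its config database and write it to a dated file."""
    resp = requests.post(
        f"{NAS_URL}/api/v2.0/config/save",   # assumed endpoint name
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"secretseed": True},           # include the seed if you use encryption
        verify=False,                        # many home boxes use self-signed certs
        timeout=120,
    )
    resp.raise_for_status()
    # File extension may differ (e.g. a tarball when the seed is included).
    path = f"{dest_dir}/truenas-config-{datetime.date.today().isoformat()}.db"
    with open(path, "wb") as fh:
        fh.write(resp.content)
    return path


if __name__ == "__main__":
    print("Saved config to", save_config())
```

Worst case, you reinstall onto a new boot device, upload the saved config, and you're back where you were.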
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've thrown up a first pass of a hopefully succinct summary at

 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
I'm not an expert, but maybe for a home user, USB drives are fine as boot devices.

I didn't want to waste any of the 4 HDD slots of my MicroServer, and I also wanted to use minimal space outside it,
so I bought two tiny SanDisk Ultra Fit 32 GB drives (for less than 15 euros for both).

I have them as a mirrored pool.
I am prepared to lose any of them at any time, and even to lose both of them, so I keep regular config backups.
But they have been fine for a few months now and they are very fast.
If they start to die frequently, I can switch to external SSDs at any time.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I put an M.2 SATA SSD in my parents' MicroServer Gen8 NAS using one of these:

It's been in service since October with no problems--not much of a service life to base a recommendation on, but I'm happy so far.
 

jeremyoha450

Dabbler
Joined
Mar 1, 2022
Messages
19
On my MicroServer Gen8 I have the following:

16 GB RAM
5 x 6 TB IronWolf (RAIDZ setup)
1 x 10 Gb NIC
1 x USB 3 2.5" drive caddy with a 64 GB SSD in it

No issues at all. It's my backup to my main server, which is:

64 GB RAM
8 x 8 TB IronWolf (RAIDZ)
2 x 500 GB SSD (OS mirror)
2 x 250 GB M.2 (mirror for VMs)


My point is: if you want to use USB, just use a USB drive caddy with a small SSD in it. I prefer the reliability of an SSD over a USB stick.

I'm still learning TrueNAS, but it's always good to learn.
 