TrueNAS Core using PCI-e NVME card with two M.2 SSDs in RAID1 as boot drives

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
Hello everyone!

My current configuration:
Motherboard: Lenovo IS8XM mATX
SATA slots: 3xSATA, 1xeSATA
CPU: Core i3-4130 3.40GHz
RAM: 4GB DDR3
Storage HDDs: 4X 10TB WD Gold (connected to the 3xSATA and 1xeSATA slots) in RAID6
Boot Drives: TBD
Main objective of the NAS: Reliability. Data protection for the 20TB available for storage.


I'm looking to buy two SSD drives and use them as boot drives in RAID1 for redundancy and reliability.
As you've noticed from my motherboard, I've got only 4 SATA ports (which are occupied by my 4 storage HDDs) so I need to expand using my PCI-e slot.
At this point I'm not yet settled on 2x2.5" SSDs or 2xM.2 SSDs.

I'm inclined to:
- Use a PCI-e to 2xNVME such as this one and have them both in RAID1 (preferred option, since it frees up my drive bays)
- Use a PCI-e to 4xSATA HBA controller like the LSI SAS9217-4I4E.

Which is the recommended way to go considering reliability is the primary objective?
I want to avoid using RAID cards and hardware RAID.

Also, any general advice is most welcome from a beginner on TrueNAS that wants to get it right on the first go :)

Thanks in advance!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
- Use a PCI-e to 2xNVME such as this one and have them both in RAID1 (preferred option, since it frees up my drive bays)
That does not do what you think it does. It's just an adapter for a single M.2 NVMe SSD plus a single M.2 SATA SSD (notice the SATA port on the card edge).
Generally speaking, you're going to have a hard time doing what you want to do with the boot devices economically. The good news is that dual SSDs for boot are a bit overkill, outside of a serious enterprise deployment, so just use a single NVMe SSD with a cheap M.2 adapter and call it a day.
That said, you have bigger issues:

RAM: 4GB DDR3
That's one quarter of the minimum requirement (16GB) and likely to be a serious problem.
 

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
That does not do what you think it does. It's just an adapter for a single M.2 NVMe SSD plus a single M.2 SATA SSD (notice the SATA port on the card edge).
Generally speaking, you're going to have a hard time doing what you want to do with the boot devices economically. The good news is that dual SSDs for boot are a bit overkill, outside of a serious enterprise deployment, so just use a single NVMe SSD with a cheap M.2 adapter and call it a day.
That said, you have bigger issues:


That's one quarter of the minimum requirement (16GB) and likely to be a serious problem.
So basically, if my boot SSD dies, I can replace it with a brand new one, install FreeNAS on it and problem solved?
If so, then I don't need the RAID1 since I don't need the uptime. I just want to make sure I don't lose the data on the 4 HDDs due to the boot SSD failing.

That's one quarter of the minimum requirement (16GB) and likely to be a serious problem.
I was going to expand with 4GB but that's a fair point. I'll make it 16GB to be sure.
 

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
I've also tested the mSATA port for RAID6 and had no issues with it.
Should I expect any issues with it along the way?
I know it's not meant for internal storage but it seems to be working flawlessly.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So basically, if my boot SSD dies, I can replace it with a brand new one, install FreeNAS on it and problem solved?
Yes, of course. Keep a config backup on hand for faster recovery, but the data is safe either way.
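The config backup can be automated from another machine. A minimal sketch, where the hostname is a placeholder and the assumption is SSH key auth as root (on CORE the config database lives at /data/freenas-v1.db):

```shell
# Sketch: pull the TrueNAS CORE config database off the NAS.
# Assumptions: SSH key auth as root, hostname "truenas.local"; on CORE the
# config lives at /data/freenas-v1.db.
fetch_config() {
  src="$1"                                    # remote (user@host:path) or local path
  dest="truenas-config-$(date +%Y-%m-%d).db"  # one dated copy per day
  scp -q "$src" "$dest" 2>/dev/null || cp "$src" "$dest"
  echo "$dest"
}

# Example (run from another machine, e.g. via cron):
#   fetch_config root@truenas.local:/data/freenas-v1.db
```

Then sync the dated copies to cloud storage, so the backup never lives only on the NAS.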
I've also tested the mSATA port
The eSATA port you mean? Hard to say, SATA has pretty tight margins and even something as minimal as "having the chassis open" can cause noticeable degradation and errors. If you're only using four disks, it's tempting to avoid the extra LSI HBA. If you plan to expand beyond four disks anyway, just pull the trigger now and avoid the headaches.
Do you mean RAIDZ2 or what did you test exactly?
 

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
The eSATA port you mean?
Yes, that's right, apologies for the typo.
It's not actually an external eSATA, it's on the motherboard and I've connected it internally. You can see it in this picture, it's the black SATA port.

If you're only using four disks
I'm using just the four disks + 1 SSD for the OS. No plans to expand this. When storage runs out (10+ years I estimate) I'll create a new NAS with fresh drives and back up the gathered data onto the new one.
it's tempting to avoid the extra LSI HBA

Then I'll steer clear of the SATA PCIe adapter and go with the M.2 PCIe adapter and one M.2 SSD, with the config backup stored in my Google Drive. Does that sound good?

Do you mean RAIDZ2 or what did you test exactly?

Come to think of it, it was RAIDZ (RAID5) not RAIDZ2 (RAID6), since one of the HDDs was holding the OS.
But the final build will be RAIDZ2, with two redundant HDDs.
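As a sanity check on that layout: RAIDZ2 usable space is roughly (disks − 2) × disk size, since two disks' worth of space goes to parity, before ZFS metadata overhead and the TB/TiB conversion.

```shell
# Back-of-envelope RAIDZ2 usable capacity: two disks' worth goes to parity.
disks=4       # drives in the vdev
size_tb=10    # capacity per drive, TB
echo $(( (disks - 2) * size_tb ))   # prints 20 (TB, before ZFS overhead)
```

Which matches the 20TB figure in the opening post.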

The test:
I connected three of the drives to my three SATA ports and the fourth to the eSATA port.
I then installed TrueNAS on one of the SATA drives and put the remaining HDDs in RAIDZ.
It worked. The pool seemed stable, and when I disconnected one of the SATA drives, its status changed to "malfunctioned".
I then reconnected it and removed the eSATA drive instead. Same result: "malfunctioned" (or something along those lines).
Once I reconnected all the drives, the system was healthy again.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's not actually an external eSATA, it's on the motherboard and I've connected it internally. You can see it in this picture, it's the black SATA port.
Ah, that should be fine then.
Then I'll steer clear of the SATA PCIe adapter and go with the M.2 PCIe adapter and one M.2 SSD, with the config backup stored in my Google Drive. Does that sound good?
Yup.
 

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
Ah, that should be fine then.

Yup.

Just as a quick summary for anyone in this situation in the future:
- PCI-e M.2 adapter cards (even dual-M.2 ones) don't always support two NVMe drives or RAID, so if you want a mirrored boot, check the card carefully or go with SATA;
- PCI-e M.2 adapter cards can be used to boot TrueNAS;
- The motherboard's eSATA port works in RAIDZ (tested by myself), provided the drive stays inside the case enclosure;
- A dual-SSD mirror for boot is overkill. A single SSD plus a safely stored config backup (cloud is an option) is fine for most situations where you don't need continuous uptime;

Feel free to add or correct.

I think we can close the thread, since there's now a clear direction for anyone undecided between M.2 and SATA.
Again, you were very helpful and clarified a lot of questions I had which, surprisingly, not many people seem to have encountered before.

I'll make sure to do my part for the TrueNAS community going forward.
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
It is not recommended, but I use two tiny USB sticks as a mirrored boot pool.
The main driver behind this was that I wanted an external boot device that takes up minimal space without additional cables.

It has worked fine for about 15 months now.
I hope that if one USB stick dies before the other, I can replace it and rebuild the mirrored boot pool quickly and easily.
In any case, I am prepared for the total loss of both and keep config backups for that.

The USB sticks are the tiny SanDisk Ultra Fit 32 GB, at about 6 € each.
 

DuskyMender

Cadet
Joined
Jan 19, 2023
Messages
7
It is not recommended, but I use two tiny USB sticks as a mirrored boot pool.
The main driver behind this was that I wanted an external boot device that takes up minimal space without additional cables.

It has worked fine for about 15 months now.
I hope that if one USB stick dies before the other, I can replace it and rebuild the mirrored boot pool quickly and easily.
In any case, I am prepared for the total loss of both and keep config backups for that.

The USB sticks are the tiny SanDisk Ultra Fit 32 GB, at about 6 € each.
I've heard of this method before, but it feels risky. Or rather: affordable but high-maintenance.
How would you know when one of the sticks fails?
Do you have any notifications set up, or do you regularly check the pool health?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
The boot USB sticks will be in a pool. If one of them fails, or has errors during a scrub, ZFS will let you know about it.
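For anyone who wants a belt-and-braces check on top of the built-in alerts, a small sketch; the pool name is "boot-pool" on recent versions ("freenas-boot" on older installs), and the alert command is a placeholder:

```shell
# `zpool status -x POOL` prints "pool 'POOL' is healthy" when all is well;
# any other output means the pool needs attention.
pool_healthy() {
  echo "$1" | grep -q "is healthy"
}

# On the NAS (pool name and alert command are assumptions):
#   pool_healthy "$(zpool status -x boot-pool)" || logger "boot-pool needs attention"
```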
 

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
I've heard of this method before, but it feels risky. Or rather: affordable but high-maintenance.
How would you know when one of the sticks fails?
Do you have any notifications set up, or do you regularly check the pool health?

I have notifications set up.
The mirror pool is scrubbed regularly, even though I don't remember setting that up myself.
It has been zero maintenance for the 15 months I've used it, but I don't know how long the sticks will last.
If they start dying, or another reason emerges, I can switch to an SSD.

All this for a home setup (but with data integrity as my main priority), accepting the risk of losing the boot pool (though that can happen with any boot device).
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
As long as you:
1. keep the system dataset off the USB drives
2. don't use identical USB drives from the same manufacturer (although this can bring other issues with USB drive sizing)
3. keep a regular backup of the config NOT on the NAS
4. keep an eye on things
5. accept the risk
6. don't use the cheapest no-name chinesium USB drives

you should be good.
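Points 3 and 4 can be partly automated. A sketch of root crontab entries, where the pool name, paths, and schedule are all assumptions (TrueNAS also scrubs the boot pool on its own default schedule, configurable in the UI):

```shell
# Weekly scrub so a silently failing USB stick shows up in alerts:
0 3 * * 0  zpool scrub boot-pool
# Daily dated copy of the config DB onto the data pool, to be synced off-box
# afterwards (per point 3, the backup should not live only on the NAS):
0 4 * * *  cp /data/freenas-v1.db /mnt/tank/config/freenas-v1-$(date +\%u).db
```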
 