Changing from CentOS 7 to TrueNAS

Joined
Aug 8, 2022
Messages
4
So I've built my own NAS on CentOS that handles a bunch of other functions, and it's becoming less stable: it sometimes locks up with a kernel panic. I've tried all of the available kernels and none work well, and now the bootloader is corrupted. I'm thinking of switching to TrueNAS, but I'm used to Linux, not Unix.

Specs:
Ryzen 3600
MSI MAG B550M Mortar (Realtek RTL8125B NIC)
16 GB RAM
Nvidia GT 710 (basic video out when needed)
Samsung 980 Pro SSD 500 GB (boot drive)
Samsung 500 GB SSD (write cache)
3x Seagate Ironwolf 6TB (mdadm raid 1)
WD USB 12 TB HDD (offline backup for the raid files)

This thing runs Samba, SSH, Plex, Pi-hole, WordPress, nginx, and a few other services. I was thinking I could run TrueNAS on the bare metal with the Seagate drives in a ZFS pool (with the SSD as a write cache) and set up an Ubuntu VM for any services I can't run on the Unix system.

Any hints? Is this possible? What roadblocks should I expect? Should I go with version 12 or 13? What's the best way to get the files off the mdadm RAID and into the zpool?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First, read up on ZFS. It's radically different from anything that came before. Play with it in a VM if you can. But I would not suggest just jumping into TrueNAS because your existing OS blew up.

There is a new type of TrueNAS, called SCALE, that uses Linux under the hood and is designed for containers and other things. It has been released, though some of the fancier features are not yet enabled / functioning.

Just for your info, Realtek NICs tend to perform worse than server-grade brands like Intel or Chelsio. And in some cases on TrueNAS Core (which is based on FreeBSD under the hood), they may not work at all.

Write cache devices on ZFS don't work the way most people think. A separate log (SLOG) device only helps synchronous writes (like NFS or iSCSI); ordinary asynchronous writes don't benefit from it.
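If you do decide the SSD is worth attaching as a SLOG, it's a one-line operation. A sketch only: the pool name `tank` and the device path `/dev/ada2` are assumptions, not details from this thread.

```shell
# Sketch: pool name "tank" and device path /dev/ada2 are assumed placeholders.
# Attach the SSD as a separate intent log (SLOG) device:
zpool add tank log /dev/ada2

# Verify it appears under the "logs" section of the pool layout:
zpool status tank
```

Note that a SLOG can also be removed later with `zpool remove` if it turns out not to help your workload.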

With only 3 x 6TB drives, you don't have many options for a ZFS pool. RAID-Z1 is not recommended with drives larger than roughly 2TB, due to the risk of hitting an unrecoverable read error on another drive during a replacement resilver. And a 3-way mirror only gives you the space of one disk.
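To make the space trade-off concrete, here is the back-of-the-envelope arithmetic for the 3 x 6TB drives (raw capacity only; ZFS metadata overhead and the usual advice to keep pools below ~80% full reduce the usable number further):

```shell
DRIVES=3
SIZE_TB=6

# 3-way mirror: every disk holds a full copy, so usable space is one disk.
MIRROR_TB=$SIZE_TB

# RAID-Z1: one disk's worth of parity, the rest is usable.
RAIDZ1_TB=$(( (DRIVES - 1) * SIZE_TB ))

echo "3-way mirror: ${MIRROR_TB} TB usable, survives 2 disk failures"
echo "RAID-Z1:      ${RAIDZ1_TB} TB usable, survives 1 disk failure"
```

So the choice with three disks is essentially 6 TB with very strong redundancy, or 12 TB with the single-failure protection already described.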

Always go with the current, stable release of the software.

As for getting the MDADM RAID-1 files into a ZFS pool, there are tricks, especially if you have backups. But many of them involve detailed ZFS knowledge; if you have to ask, they may be more complex than you want to take on right now.
 
Joined
Aug 8, 2022
Messages
4
I've been contemplating this for a while now, since CentOS is a dead platform. I'm familiar with how ZFS works from an "I read the documentation" perspective: I know that three drives gives me one parity drive, and I only get single-drive failure protection.

I was thinking about adding more drives for additional protection, but I'm not sure of the best way to handle 6 TB drives other than mirroring, and I'm not sure what benefits ZFS really gives at that point.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
ZFS does not allow expanding a 3-disk RAID-Z1 into a 4-disk RAID-Z1. It's basically backup, re-create, and restore. (There is a project to change that, but it's more than a year away, and no specific release date is available.)

Nor does ZFS allow going from single-parity RAID-Z1 to dual-parity RAID-Z2. Again: backup, re-create, and restore.

ZFS does allow adding another vDev to a pool, like a second 3-disk RAID-Z1. Though, again, using disks larger than 2TB is not recommended for RAID-Z1.

And ZFS does allow replacing each disk in a vDev with larger disks. Once all are replaced, the pool's free space grows.
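The replace-to-grow path looks roughly like this. A sketch only: the pool name `tank` and the device names are assumed placeholders. With `autoexpand` enabled, the extra space appears automatically once the last small disk has been swapped.

```shell
# Sketch: pool "tank", old disk /dev/ada1, new larger disk /dev/ada4
# are all assumed names, not details from this thread.

# Allow the pool to grow once every disk in the vDev has been replaced:
zpool set autoexpand=on tank

# Replace one disk at a time; wait for the resilver to finish
# (watch "zpool status tank") before starting the next replacement:
zpool replace tank /dev/ada1 /dev/ada4
```

Replacing disks one at a time keeps the vDev's redundancy intact during each resilver.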


So planning a ZFS pool does take some thought. Of course, with just 3 disks your options are limited to 3 way mirror or RAID-Z1.

Now if you can add 1 or more 6TB (or larger) disks, then you can use RAID-Z2. (Any extra space on a larger disk is not immediately useful.)
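With a fourth disk, creating the pool as RAID-Z2 from the start is a single command. A sketch with assumed device names: a 4-disk RAID-Z2 gives two disks' worth of usable space and survives any two disk failures.

```shell
# Sketch: pool name "tank" and device names /dev/ada1..4 are assumptions.
# 4-disk RAID-Z2: ~12 TB usable from 4 x 6TB, any two disks can fail.
zpool create tank raidz2 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
```

Remember that this layout is fixed at creation time; as noted above, you can't convert an existing RAID-Z1 vDev to RAID-Z2 later.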

If more space is required, then replace the smaller disks with larger ones. They don't all have to be replaced at once, or with the same larger size. Just note that vDev growth is limited by the smallest of the new drives. So if you have a 6TB disk failure and buy a 10TB replacement, its extra space is not useful now, only after all the other smaller disks have been replaced.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
So I've built my own NAS on CentOS that handles a bunch of other functions, and it's becoming less stable: it sometimes locks up with a kernel panic. I've tried all of the available kernels and none work well, and now the bootloader is corrupted.
That is a very strong pointer to a hardware issue. I don't know how many systems with CentOS 7 (and also earlier versions) I have set up over the last 12+ years, mostly as VMs on ESXi. Not once have I seen such behavior without hardware being the root cause.
 
Joined
Aug 8, 2022
Messages
4
That is a very strong pointer to a hardware issue. I don't know how many systems with CentOS 7 (and also earlier versions) I have set up over the last 12+ years, mostly as VMs on ESXi. Not once have I seen such behavior without hardware being the root cause.
Any idea what? They're all fairly new parts from a former Windows machine.
 
Joined
Aug 8, 2022
Messages
4
[Attached image: IMG_20220807_201144370.jpg]
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Any idea what? They're all fairly new parts from a former Windows machine.
Your best bet will be to replace various parts one by one. But of course that is only an option if you have those parts available.

The screenshot seems to imply that the problem surfaces at the CPU level. But, if I read it correctly, that would only be a symptom and by no means has to be the root cause. In general, the culprits I have seen most often are RAM modules (including bad seating!), the PSU, and adapter cards with contact problems. But of course the CPU or motherboard could have gone bad as well.
 