Advice on migrating HDDs and data to FreeNAS

Alibuba

Cadet
Joined
Jan 30, 2021
Messages
6
Hi everyone,

I'm planning on retiring my aging CentOS system, and migrating existing data and hard disk drives to a newly built (albeit from old spares) ESXi host.

In addition to the Linux server I'm running a NAS box, used mostly for backups, and a desktop with a Windows Storage Space. My plan is to get rid of these individual storage pools, and use all the drives for FreeNAS.

FreeNAS will be providing NFS shares for DVR storage and a few ESXi virtual machines, SMB shares for Windows and macOS hosts, and preferably Time Machine storage for the Macs as well. I/O load will be minuscule.

The current configuration of the drives is:

CentOS Linux-server: 5*3TB RAID5
NAS box: 4*4TB RAID5
Windows 10 desktop: 4*3TB mirrored Storage Spaces pool
Spare drives: 1*3TB

All the drives are WD Reds with years of power-on hours (between 40,000 and 68,000 hours for the 24/7 drives, significantly less for the Windows drives).

I will be able to shuffle data around so that I can free up disks from two of the three systems.

The ESXi server is running dual X5550 CPUs with 48GB of ECC RAM and two LSI2008 HBAs. ESXi boots from an old, but unused, 120GB SSD drive.

I will be able to purchase additional drives (e.g. SSD for FreeNAS and ESXi, and HDD for FreeNAS) as needed.

My questions thus far are:

1. Even though the SMART statuses for all of the drives show a clean bill of health, would it be risky to run such old drives on ZFS specifically?

I have replaced a single drive in the RAID5 MD array during the past 7 years, so the disks have served me well. I will be migrating to newer and larger drives eventually, but I feel like there's a couple of years left in the current drives.

2. What would be the ideal / optimal layout for the pool and vdevs?

I've been reading up on ZFS documentation and playing around with dozens of virtual disks on my FreeNAS VM, but I'm still having a tough time wrapping my head around all this. My initial, misguided plan was to create a pool with 5 disks in RAIDZ2 and then expand the vdev with more disks.

Since this cannot be done, I've been testing a scenario where I would first create a pool with two 5*3TB RAIDZ2 vdevs, then get one more 4TB drive and add a third 5*4TB RAIDZ2 vdev to the pool.
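For what it's worth, the dry runs I've been doing on the test VM look roughly like this, with small sparse files standing in for disks (names and sizes are placeholders, and on the real hardware I'd do all of this through the FreeNAS GUI):

# stand-in "disks" purely for experimenting, not for real data
for i in 1 2 3 4 5 6 7 8 9 10; do truncate -s 1G /tmp/disk$i; done
# initial pool: one 5-wide RAIDZ2 vdev
zpool create testpool raidz2 /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4 /tmp/disk5
# expansion means adding a whole new RAIDZ2 vdev, not widening the existing one
zpool add testpool raidz2 /tmp/disk6 /tmp/disk7 /tmp/disk8 /tmp/disk9 /tmp/disk10
zpool status testpool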

Is this at all a sensible approach, or should I think things over?

Thanks a bunch in advance for any and all suggestions!
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
RAIDZ2 is the minimum safe design, so for storage your plan seems fine. But virtual machines prefer (striped) mirrors over RAIDZn, so you should rather have at least two pools with different vdev geometries.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Edited, as the outage truncated my original post.

First, make sure your Reds aren't SMR.


For ESXi use, a mirror topology would be better, and you want the pool to be as wide as possible to maximize your IO. So you have 10x 3TB and 4x 4TB; this will support a single pool of 7 mirrors of 2 drives each, with the mirrors of 4TB drives only using 3TB, for a total pool size of 21TB. Alternatively, you could have a 5-wide pool of 2-drive mirrors from the 3TB drives, totaling 15TB, and a 4-drive RAIDZ2 pool of the 4TB drives, totaling 8TB, for a total of 23TB. The second layout is recommended: the mirror pool will be better for random loads (such as the VMs), the RAIDZ2 pool will be better for sequential workloads, and you can move each workload to whichever pool performs better for it.
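If it helps to visualize the second layout, it would look roughly like this from the command line (pool and da* device names are purely illustrative; in practice you'd build the pools through the FreeNAS GUI):

# pool 1: five 2-drive mirrors across the ten 3TB disks (~15TB usable)
zpool create tank3 mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9
# pool 2: a single 4-wide RAIDZ2 vdev across the four 4TB disks (~8TB usable)
zpool create tank4 raidz2 da10 da11 da12 da13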
 

robinmorgan

Dabbler
Joined
Jan 8, 2020
Messages
36
First, make sure your Reds aren't SMR.


For ESXi use, a mirror topology would be better, and you want the pool to be as wide as possible to maximize your IO. So you have 10x 3TB and 4x 4TB; this will support a single pool of 7 mirrors of 2 drives each, with the mirrors of 4TB drives only using 3TB, for a total pool…

Great advice. I'm rebuilding my pool (as you might remember). My use case is handling mostly large media files, and I have a need for speed. Would you recommend the same layout for me? I have 21x 12TB Red drives and 8x 1TB Samsung SSDs.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
My requirements are somewhat similar in that I use FreeNAS mostly for file storage and have a number of small VMs running on ESXi and XCP-ng. My approach is to have a single somewhat decent SSD (e.g. Samsung EVO 860 with SATA) in each VM host. In case of issues, I have daily complete backups and hourly delta backups to FreeNAS.

This combination allows me to have the FreeNAS set up only for reliable storage and sort-of ignore the topic of IOPS.
 

Alibuba

Cadet
Joined
Jan 30, 2021
Messages
6
First of all, thank you for each and every reply. Truly a nice welcome to the forum. =)

Edited, as the outage truncated my original post.

First, make sure your Reds aren't SMR.


Thank you for the heads-up. I would've definitely missed that pitfall. Luckily all of my drives are of the EFRX variety, which are CMR (WDC WD30EFRX-68AX9N0, WD30EFRX-68EUZN0, and WD40EFRX-68WT0N0).

For ESXi use, a mirror topology would be better, and you want the pool to be as wide as possible to maximize your IO. So you have 10x 3TB and 4x 4TB; this will support a single pool of 7 mirrors of 2 drives each, with the mirrors of 4TB drives only using 3TB, for a total pool size of 21TB. Alternatively, you could have a 5-wide pool of 2-drive mirrors from the 3TB drives, totaling 15TB, and a 4-drive RAIDZ2 pool of the 4TB drives, totaling 8TB, for a total of 23TB. The second layout is recommended: the mirror pool will be better for random loads (such as the VMs), the RAIDZ2 pool will be better for sequential workloads, and you can move each workload to whichever pool performs better for it.

This is excellent advice - much appreciated! I was leaning towards two separate pools as well, but would not have come up with the 5-wide mirror pool for the 3TB drives. Having 15+8TB of storage is more than adequate for now, and expanding from here in the future should be fairly painless - even though I will be filling up most of the ports on the LSI HBAs, they are easy enough to come by should the need arise.

Should I go down the route suggested by @ChrisRJ and use an SSD as the VM backend, what type of layout would you suggest if I were to plan the ZFS storage primarily for sequential workloads (DVR recordings, photos, and other backups)?

My requirements are somewhat similar in that I use FreeNAS mostly for file storage and have a number of small VMs running on ESXi and XCP-ng. My approach is to have a single somewhat decent SSD (e.g. Samsung EVO 860 with SATA) in each VM host. In case of issues, I have daily complete backups and hourly delta backups to FreeNAS.

This combination allows me to have the FreeNAS set up only for reliable storage and sort-of ignore the topic of IOPS.

That wouldn't be a bad approach either. SSDs are cheap, and I really just need a few VMs to run a couple of Linux hosts – at least for now. Might as well pick up an EVO 860 in any case.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Unless you're going to get an Optane or some other PCI-E flavor of SSD (e.g., M.2 or NVMe), you'll be limited by the 6Gbps SATA interface on your SSD VM pool. With the 5- or 7-wide mirrors, you'll get 5x/7x 6Gbps IO.

However, only you can say what the projected VM workload is going to be, so a few SSDs may be entirely adequate.
 

Alibuba

Cadet
Joined
Jan 30, 2021
Messages
6
Unless you're going to get an Optane or some other PCI-E flavor of SSD (e.g., M.2 or NVMe), you'll be limited by the 6Gbps SATA interface on your SSD VM pool. With the 5- or 7-wide mirrors, you'll get 5x/7x 6Gbps IO.

Indeed. I have been looking into some suspiciously inexpensive PCI-E SSD cards on eBay to use as a VM datastore on ESXi.

If IOPS are not a factor for the pool, would it make sense to have, for example, two pools with a RAIDZ2 vdev in each, with 7*3TB and 3*3TB + 4*4TB drives respectively? I see no need for a "contiguous" space of more than 10TB, so I'd be happy dividing the drives into different pools.

Would it be possible to replace the three 3TB drives in the second vdev with 4TB drives, and expand it from 15TB to 20TB that way?
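To make sure I'm describing this correctly, what I have in mind is roughly the following (pool and device names are made up, and I understand the GUI is the normal way to do all of this):

# pool 1: 7-wide RAIDZ2 of 3TB disks (~15TB usable)
zpool create pool1 raidz2 da0 da1 da2 da3 da4 da5 da6
# pool 2: 7-wide RAIDZ2 of 3x 3TB + 4x 4TB disks (~15TB usable while the 3TB members remain)
zpool create pool2 raidz2 da7 da8 da9 da10 da11 da12 da13
# later: swap the three 3TB disks for 4TB ones, one resilver at a time
zpool set autoexpand=on pool2
zpool replace pool2 da7 da14   # wait for the resilver to finish before the next swap
zpool replace pool2 da8 da15
zpool replace pool2 da9 da16
# once every member of the vdev is 4TB, usable space should grow to ~20TB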

I'm trying to plan well into the future whilst maintaining a moderately high fault tolerance on the storage.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If your PCI-E slot supports bifurcation, I'd use a 2-way mirror of SSDs for VM image storage. If you're happy with RAIDZ2 for your two pools, I'd go with your suggestion of 7 drives each, but would provision 5 active drives and 2 hot spares in each pool.
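As a rough sketch of one such pool (device names illustrative; the GUI is the usual way to set this up):

# 5-wide RAIDZ2 plus two hot spares
zpool create pool1 raidz2 da0 da1 da2 da3 da4 spare da5 da6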
 