jlw52761
Explorer - Joined: Jan 6, 2020 - Messages: 87
Up until about 4 months ago I had never used ZFS, having come from the "old guard" of SAN configurations using RAID and such. So when I got my new rig, which I will happily call a dumpster-dive treasure, I decided I should probably look into it for my home lab. The reason is I wanted volume-level snapshots and good consistency in the event of a power loss or hardware failure. I've always known ZFS fits both of those bills, while also knowing that ZFS does not really translate to what one might call "traditional storage methods". So my journey begins.
Just for context, I was able to get my hands on a sweet rig, with the following specs:
Dual Xeon(R) CPU E5-2603 0 @ 1.80GHz 4-core (No Hyperthreading)
128GB DDR3 RAM
Dual Intel I350 quad-port Gigabit network adapters
Intel RMS25CB080 RAID Module
(x2) 100GB Micron P400m100-MTFDDAK SATA 6Gbps SSD [d0,d1]
(x12) 2TB Seagate Constellation ES ST2000NM0011 7.2k SATA 6Gbps HDD [d2 ~ d13]
I know, it's interesting to say the least, and it was an old storage appliance that was going into the recycle bin that I was able to intercept into my trunk.
Anyhew, my requirements for a home lab storage solution are that it supports multi-protocol access (NFS, CIFS, iSCSI) at a minimum, has a simple interface (because I'm getting too old and just want something simple), can do snapshots at the volume level, offers a decent amount of storage (> 6TB), and delivers reasonable performance, given that I will only run 1Gbps networking at home, not 10Gbps.
So before going further, I do want to point out that the RMS25CB080 does not have a JBOD mode, so I just made each disk its own virtual disk, as close to JBOD as I could get it.
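As an aside, if the controller comes up on FreeBSD's mfi driver (an assumption on my part; I actually set this up from the controller's boot-time utility), mfiutil can script the same single-disk-RAID0 workaround:
fnas1# mfiutil show drives         # list the physical drives the controller sees
fnas1# mfiutil create jbod 8 9 10  # place each listed drive in its own single-drive volume
(The drive IDs above are placeholders; use whatever IDs "show drives" reports on your box.)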
So, yeah, this dumpster dive was a treasure, but how to use it? Well, I went down the road of FreeNAS, knowing that FreeBSD has a good reputation not only for its network stack but also for its ZFS integration as a first-class citizen. I love me some Linux, Ubuntu mainly, so FreeBSD isn't scary, just different enough to be both amusing and annoying.
So I installed FreeNAS onto a bootable USB drive and decided to use that as my system drive, dedicating all of the internal storage to, well, storage, rather than running the system.
Once FreeNAS was installed, it was time to configure the storage pool. This is where I got lost and had to do some research. I knew I wanted the capacity, but I also wanted resiliency in the event of a power outage or hardware failure, plus the best bang for the buck in terms of performance. What I decided to do was create a single pool using all twelve 2TB drives in a RAID-Z3 configuration. I then added one of the 100GB SSDs as a SLOG and the other as L2ARC.
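For anyone who wants to see what that layout looks like outside the GUI, here is a rough sketch of the equivalent shell commands (the da2 through da13 device names are placeholders for illustration; FreeNAS builds the pool itself and tracks disks by gptid):
fnas1# zpool create LocalPool raidz3 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13
fnas1# zpool add LocalPool log da0    # first 100GB SSD as the SLOG
fnas1# zpool add LocalPool cache da1  # second 100GB SSD as the L2ARC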
My use case is two zvols shared via iSCSI to my ESXi hosts as datastores, plus a couple of NFS datastores, one for Docker container persistent data and one for general use. I have lz4 compression on and dedup off.
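Again just as a sketch, the zvol and dataset side of that would look something like the following from the shell (the names are made up for illustration; in practice I clicked through the FreeNAS UI):
fnas1# zfs create -s -V 2T -o compression=lz4 LocalPool/esxi-ds1    # sparse 2TB zvol backing an iSCSI extent
fnas1# zfs create -o compression=lz4 -o dedup=off LocalPool/docker  # NFS share for container persistent data
fnas1# zfs create -o compression=lz4 LocalPool/general              # general-purpose NFS share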
Things seem to work well; I have a good balance of performance, capacity, and peace of mind. I do have several spare drives sitting in a box, so I'm not too worried about a single drive failure. My choice of RAID-Z3 came from the concern that, with disks this size, a second drive could fail while the resilver onto the first replacement was still running.
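If that day comes, the swap itself should be a one-liner (placeholder device names again), and the resilver can be watched from zpool status:
fnas1# zpool replace LocalPool da5 da14  # swap failed disk da5 for cold spare da14
fnas1# zpool status LocalPool            # shows resilver progress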
My normal running conditions are as follows:
ARC Size: 77GB
L2ARC Size: 47GB
ARC Hit Ratio: 98%
L2ARC Hit Ratio: 0%
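(If anyone wants to pull the same counters by hand, FreeBSD exposes the raw numbers through sysctl; the hit ratio works out to hits / (hits + misses).)
fnas1# sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
fnas1# sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses
And here is what the pool itself looks like at steady state: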
fnas1# zpool iostat
              capacity     operations     bandwidth
pool         alloc   free   read  write   read  write
------------ -----  -----  -----  -----  -----  -----
LocalPool     543G  21.2T     22    250   821K  2.09M
freenas-boot 1.75G  26.7G      0      0  4.76K    727
------------ -----  -----  -----  -----  -----  -----
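(Side note: zpool iostat -v 5 breaks the same numbers out per vdev and refreshes every five seconds, which is handy for watching where the I/O actually lands.)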
So after all of that, I'm interested to hear what the folks who eat and breathe ZFS and FreeNAS think about this, and what they might have done differently. I think that would be a good learning exercise, not only for me but for others who are just getting into ZFS and all of its wonderful oddities.