Everything's here but the RAM. I wound up buying from one of those outfits that you think is in the US, but is actually in China, so I'm dealing with USPS saying "Departed Shipping Partner Facility, USPS Awaiting Item," so who knows when I'll actually get it.
In the meantime, I'm thinking about drive layouts. Looking at SCALE Test (in my signature), iostat says my data VDEVs are 90% read / 10% write. I also have a metadata VDEV that includes files up to 4MB. That VDEV has 260GB of data on mirrored 1TB SSDs and it's the opposite -- 10% read / 90% write. Same for my Applications pool -- 10% read / 90% write.
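For anyone wanting to do the same check: the read/write split falls out of the cumulative operation counters that `zpool iostat -Hp <pool>` prints (columns 4 and 5 are read ops and write ops). A minimal sketch — the `ratio` helper and the sample counter values are mine, not anything from a real pool:

```shell
# ratio.sh -- given cumulative read and write op counts (e.g. columns
# 4 and 5 of `zpool iostat -Hp tank`), print the percentage split.
ratio() {
  local rops=$1
  local wops=$2
  local total=$(( rops + wops ))
  printf 'reads: %d%% / writes: %d%%\n' \
    $(( 100 * rops / total )) $(( 100 * wops / total ))
}

# Made-up counters in the ballpark of my data VDEVs:
ratio 9000000 1000000   # -> reads: 90% / writes: 10%
```

Per-VDEV numbers come from `zpool iostat -v`, which is how I separated the data VDEVs from the metadata VDEV in the first place.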
I doubt any of us using the system would actually notice any performance differences, so this is partially educational for me. I'm thinking about the following configuration:
The X12STH-F motherboard has eight SATA ports. I'll do:
- Mirrored 32GB SATA DOMs for boot
- Striped, three-way mirrored 2TB Samsung 850 Pros for the metadata and "small files." I might include small files up to 8MB. A lot of my thinking here is to reduce writes on the data drives.
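For anyone following along, the knob that steers small files onto the special VDEV is the `special_small_blocks` dataset property. A sketch with made-up pool/disk/dataset names — and note that blocks only land on the special VDEV when they're at or below that size, so thresholds above the default 1M recordsize also mean raising `recordsize` on the dataset (and, on some systems, the `zfs_max_recordsize` module tunable):

```shell
# Hypothetical pool "tank": one RAIDZ2 data VDEV plus a
# three-way mirrored special VDEV for metadata and small blocks.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
    special mirror ada0 ada1 ada2

# Route blocks up to 1M to the special VDEV; larger blocks
# stay on the data VDEVs.
zfs set recordsize=1M tank/media
zfs set special_small_blocks=1M tank/media
```

Going to 4M or 8M thresholds is the same idea with bigger values, just with the recordsize caveat above.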
The 216BE1C-R920LPB chassis has twenty-four SAS3 bays -- I'll go "cheap" on those and fill them with three RAIDZ2 VDEVs of 8TB Samsung 870 QVOs. I bought the drives from four different vendors to maximize my chances of getting different lots.
I'll put the Applications pool on mirrored 2TB Samsung 980 Pro NVMe M.2 drives on a dual-M.2 PCIe adapter card in the motherboard's PCIe 3.0 x4 slot.
That leaves the 360GB Optane on the motherboard's M.2 slot. I have negligible sync writes and I don't see using it for L2ARC. Maybe a scratch or transcoding drive, or take it out and save it for another project.
And then next year, when we move everything back into the new server closet, I'll look at lobotomizing SCALE Test and cascade the new HBA to the drives on it. Seems like that would save quite a bit of electricity by not having that dual Xeon motherboard cooking 24/7.
That's it -- idle thoughts while I wait for RAM...