new setup evaluation + several questions about external JBOD

hello! here is my build currently:

case: cooler master HAF XB EVO on a rack mount shelf (love these cases)
mobo: Huananzhi x79 16D (only thing i could find on newegg that matched my requirements, barely fits in the case btw)
CPU: 2x intel xeon e5-2690V2
RAM: 256GB samsung ECC DDR3
drives: two 1TB WD Blue SSDs

in the PCIE slots:
a 10Gtek internal SFF-8087 to SFF-8088 adapter from https://www.amazon.com/gp/product/B07ZGYXCP6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1
a Dell H310 HBA in IT mode from https://www.amazon.com/gp/product/B07JZ6FYVC/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
a dual gigabit intel NIC which i believe is an 82571

the SFF-8088 converter is connected to a supermicro 847E1C-R1K23JBOD, a 4U JBOD unit with 2 SFF-8088 connectors and 44 3.5" drive bays. I've done a couple of things to it to reduce noise:
- I replaced the power supplies with two SQ-labeled PSUs. I never checked how loud the original units were, but these ones are reasonably quiet, so I'm happy with that
- I also removed the fan wall with the little jet-engine 1U/2U/whatever-they-were fans and replaced it with a 3d-printed fan wall holding five 120mm noctua NF-F12 PWM fans (I would've used six, but it wouldn't fit with how the PSU extends down the side of the case)

the JBOD currently holds a bunch of disks, an as-yet-unknown number of which may have been rendered inoperable by a dastardly inter-tech NAS case containing some low-quality 4-drive horizontal backplane units that I have since learned are extremely unreliable (more on that below). the disks are as follows:

- 3x 16TB seagate exos x18
- 12x 12TB seagate exos x16
- 6x 4TB wd red

for a current raw total of 216TB. that only fills most of the front of the case, so I still have another 20 free bays in the back and I think 3 in the front, and I also plan to upgrade the reds to exos drives in the future.
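for reference, the quick math on raw capacity. the 288TB figure assumes hypothetically swapping the six reds for 16TB exos drives, which is just one option I'm considering:

```python
# raw (pre-redundancy, pre-ZFS-overhead) capacity of the current disk set
exos_x18 = 3 * 16   # TB
exos_x16 = 12 * 12  # TB
wd_red   = 6 * 4    # TB
print(exos_x18 + exos_x16 + wd_red)   # 216 TB raw today

# hypothetical upgrade: the six 4TB reds replaced with 16TB exos drives
print(exos_x18 + exos_x16 + 6 * 16)   # 288 TB raw after that swap
```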

the server rack housing the NAS and the JBOD has a dedicated UPS for those two devices.

my usage will probably be mostly reads, since this is the backing storage for an assortment of self-hosted services that I hope to eventually expand to serve 20-50 users. I intend to self-host as much of the spectrum of practically self-hostable services as possible, and once the system is dialed in I'll roll it out to friends and family and hopefully get them off nonfree cloud services that don't respect their users. I have other servers on my network that will host the services themselves, so I won't be running anything else on this machine, just trueNAS.

my questions are as follows:

- what are your thoughts on this setup? any potential bottlenecks you may foresee? any problematic or incompatible components?
- is the amount of RAM I have enough to see me through a potential upsize, possibly to as much as 500TB? what about a petabyte? (rough back-of-envelope math after this list.) I'm going to check the capacity calculator https://www.truenas.com/docs/references/zfscapacitycalculator/ after I finish this post, but I'm not sure whether the `zfs_overhead` field indicates the RAM overhead for the amount of disk space added or if I'm misinterpreting it
- currently, one of the WD blue SSDs is serving as the boot disk. this seems a bit wasteful considering the size of the drive, so I'm considering adding a cheap NVMe-to-PCIe adapter card with a WD green NVMe drive on it as the boot disk instead, and reallocating the current boot drive and the other blue SSD to SLOG, set up as a mirror since they're identical drives (rough sketch of the attach after this list). is this a good idea? I've read on this forum that it's not advisable to use a random 1TB SSD for SLOG, partly because consumer drives lack power-loss protection, but that seems mitigated in my case because these systems sit behind a fairly substantial UPS. with the power issue out of the way, does this become a good idea, or is it more performant to just let the SLOG/ZIL live wherever it does by default (RAM, I assume, unless you configure it onto an SSD)? if this isn't a good idea, any recs on what drives would work better? opinions on blue SSDs in general?
- my mobo has a slot for an NVMe SSD, so I'm considering getting one and using it for L2ARC. I also have 2 free 3.5" bays at the front of the case, so I might also get some 2.5" SSDs and put those in there as well, assuming that's a smart thing to do. what are the hardware recommendations for L2ARC? specifically, which performance parameters should I prioritize (size/throughput/endurance etc.) when looking for an NVMe drive for L2ARC, and are there any community-recommended devices I should look for? furthermore, considering that L2ARC is striped, can I have two separate L2ARC sets: one with the really good/fast NVMe drive on the mobo, and another with the 2 2.5" SSDs loaded into the front of the case? (sketch of how I assume cache devices get added after this list.) assuming yes, any recs for the 2.5" drives?
- is what I did to the JBOD a bad idea? I am somewhat worried that the changes I made will cause issues with heat buildup. I will monitor drive temperatures (polling sketch after this list) and do performance tests and all that, but if this is a manifestly foolish setup and I should stop immediately before I cause a fire, do please say so
- I anticipate pretty low overall utilization for a while after this system is built, probably the first six months or so. is there some way I can improve the longevity of my spinning drives by taking advantage of periods of low utilization, maybe some kind of scheduled spindown (rough idea of what I mean after this list)? I have read that for enterprise-level drives like the exos ones, spinup/spindown cycling is worse for the drive than just keeping it running. anyone have thoughts on this?
- the no-good terrible horrible inter-tech IPC case and its rotten backplanes somehow broke some of my disks. factory-new disks were randomly not appearing in the trueNAS UI, and after some digging in the forums I learned that the bad backplanes were the likely reason and were potentially killing my disks. this really sucked, because I thought the problem was with the disks themselves and kept swapping them around trying to figure out what was going on, potentially messing up an unknown number of them. does anyone know exactly what happened here? what did those bad backplanes do? the disks still spin up, they just might not show up in trueNAS. does that mean the circuitry on the outside of the disk may have been damaged, and if so, would it be trivial to fix a disk by replacing its PCB? I RMA'd a few disks I was sure weren't working right, but the rest are in an unknown state, and probably about 4 of them are 'broken' in this unknown capacity. what should I do with them? I trust that once I fix the problem described below I'll be able to definitively identify which disks are bad and which aren't (triage sketch after this list), but I'm curious whether there's a way to cheaply repair the bad ones or whether I just need to replace them. anyway, yeah, do not buy those inter-tech cases, they will mess up your drives
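re: the RAM question above, here's the back-of-envelope check I had in mind. it uses the old forum rule of thumb of roughly 1GB of RAM per TB of raw storage, which I know is a loose guideline rather than a hard requirement, so treat it as a sanity check only:

```python
# loose sanity check only: the old "~1GB RAM per TB raw" community rule of thumb,
# which I understand is not a hard ZFS requirement, especially at this scale
ram_gb = 256
for raw_tb in (216, 500, 1000):
    print(f"{raw_tb} TB raw -> rule of thumb suggests ~{raw_tb} GB RAM "
          f"(I have {ram_gb} GB)")
```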
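re: the SLOG question, if the consensus is that repurposing the two blue SSDs is worthwhile, I assume the attach would look roughly like this. the pool name "tank" and the device paths are placeholders, and I'd obviously confirm the device identifiers before running anything:

```python
# hypothetical sketch only: attach the two 1TB WD blues as a mirrored log vdev.
# "tank" and the /dev paths are placeholders for my actual pool and devices.
import subprocess

subprocess.run(
    ["zpool", "add", "tank", "log", "mirror", "/dev/sda", "/dev/sdb"],
    check=True,
)
```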
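re: the L2ARC question, my (possibly wrong) understanding is that cache devices just get added to the pool and are striped together, so "two separate L2ARC sets" may not really be a thing. either way, I was planning to look at ARC hit rates before buying anything. the pool name and device paths below are placeholders, and I'm assuming arc_summary is available on the stock install:

```python
# hypothetical sketch: check how the in-RAM ARC is doing first, then (maybe)
# add cache devices. "tank" and the /dev paths are placeholders.
import subprocess

# ARC size and hit-rate statistics
subprocess.run(["arc_summary"], check=False)

# if L2ARC still looks worthwhile, cache devices are added like this and,
# as I understand it, striped together into a single L2ARC:
subprocess.run(
    ["zpool", "add", "tank", "cache", "/dev/nvme0n1", "/dev/sdc", "/dev/sdd"],
    check=False,
)
```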
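re: the fan wall, this is the kind of temperature polling I was planning to run while load-testing the JBOD, to catch heat buildup early. the drive list is a placeholder; in practice I'd build it from `smartctl --scan`:

```python
# dump whatever temperature lines smartctl reports for each drive
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # placeholder list of JBOD drives

for dev in DRIVES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    # "emperature" matches both "Temperature_Celsius" and "Current Drive Temperature"
    temps = [line for line in out.splitlines() if "emperature" in line]
    print(dev, temps if temps else "no temperature attribute found")
```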
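re: spindown, this is roughly what I meant by "scheduled spindown": setting a standby timeout with hdparm and letting the drives park during idle periods. I genuinely don't know whether this is advisable for exos drives, or whether they'd even honour it behind the expander, which is exactly what I'm asking. device paths are placeholders:

```python
# hypothetical only: set a ~1 hour standby (spindown) timeout on each SATA drive.
# per the hdparm man page, -S values 241-251 are in 30-minute units, so 242
# should mean one hour, if I'm reading it right. device list is a placeholder.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # placeholder list

for dev in DRIVES:
    subprocess.run(["hdparm", "-S", "242", dev], check=False)
```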
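re: the possibly-damaged disks, this is how I was planning to triage them once the JBOD itself is behaving: a short SMART self-test on each suspect that still enumerates, then check the health verdict and self-test log. if there's a better way to conclusively condemn a drive, I'm all ears. the device list is a placeholder:

```python
# triage sketch for the suspect disks: kick off a short SMART self-test, wait,
# then dump the overall health verdict and the self-test log for each one.
import subprocess
import time

SUSPECTS = ["/dev/sde", "/dev/sdf"]   # placeholder list of suspect drives

for dev in SUSPECTS:
    subprocess.run(["smartctl", "-t", "short", dev], check=False)

time.sleep(5 * 60)   # short tests usually finish within a few minutes

for dev in SUSPECTS:
    subprocess.run(["smartctl", "-H", dev], check=False)              # overall health
    subprocess.run(["smartctl", "-l", "selftest", dev], check=False)  # self-test log
```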

lastly, and most importantly, I have a problem: it looks like I messed up somewhere in setting up the JBOD, and none of the disks are showing up in the trueNAS drives panel. part of the issue is that when I got the JBOD, I unplugged most of the stuff inside it to do my modifications and forgot to make a note of how the SAS cabling was set up, so I may have messed up the connections between the front and rear backplanes. there are a lot of potential points of failure/misconfiguration between the motherboard and the drives, so I think it's best to outline the exact path from a drive on the front backplane to the mobo:
drive -> front backplane -> SFF-8087 cable in the RIGHT slot of the front backplane -> BOTTOM slot of the rear backplane -> SFF-8087 cable in the TOP slot of the rear backplane -> SFF-8087 to SFF-8088 adapter (built into the back of the JBOD) -> SFF-8088 cable -> 10Gtek internal SFF-8087 to SFF-8088 adapter in a PCIE bracket inside the NAS case -> short SFF-8087 cable -> Dell H310 HBA in a PCIE slot -> mobo
I did the wiring inside the JBOD based on supermicro's documentation on how the SAS backplanes are meant to be connected, and I think the setup is right. it makes logical sense to daisy-chain the backplanes together and have them both reach the HBA via a single SAS cable, but if that isn't right, I can instead connect the front backplane to a second slot on the SFF-8087 to SFF-8088 adapter built into the back of the JBOD (it has 4 slots) and run a second SAS cable over to the 10Gtek adapter in the NAS, since that one has two ports as well. I believe the other two slots on the JBOD's SFF-8088 plate are for cascading to a second JBOD, which I don't think I'll need in the foreseeable future.
Ultimately I think the issue is somewhere else in the chain, because even if the front backplane is incorrectly connected to the rear backplane, I should still theoretically be able to put a disk in the rear backplane and have it show up in the disks UI. I tested this by powering off both systems, moving a disk from the front backplane to the rear backplane, then powering on the JBOD followed by the NAS, and the disk didn't show up. I have tried this with several different disks to rule out the potentially-bad-disk issue described above, to no avail. what is the most likely failure point here, and how can I test this series of connected components to figure out what is going wrong? I've sketched the checks I was planning to run below; I can provide the output of these or any other terminal commands on request, and can also post pictures of my setup if that would help.
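for reference, here's the set of checks I had in mind to figure out where in the chain things go dark, assuming SCALE (linux); on CORE the equivalents would differ (camcontrol devlist and so on). sas2ircu isn't part of the stock install as far as I know, so that part assumes I've grabbed it from Broadcom. happy to post the real output of any of these:

```python
# diagnostic sketch, assuming TrueNAS SCALE (linux) and that sas2ircu has been
# installed separately. the goal is to narrow down which hop stops seeing drives.
import subprocess

def run(cmd):
    """run a command and print the command line plus whatever it outputs"""
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

# 1. does the OS see the H310 at all?
run(["lspci", "-nn"])                          # look for the LSI/Broadcom SAS2008 entry

# 2. does the HBA firmware see the JBOD expander and the drives behind it?
run(["sas2ircu", "LIST"])                      # list controllers
run(["sas2ircu", "0", "DISPLAY"])              # enumerate enclosures and attached drives

# 3. what does the OS actually expose as block devices?
run(["smartctl", "--scan"])
run(["lsblk", "-o", "NAME,SIZE,MODEL,SERIAL"])
```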

i wrote a lot in this post, and for anyone who has made it to the end, thank you very much for taking the time to listen to me. any and all input on any of the things i mentioned here would be greatly appreciated, and I'm very thankful to this extremely helpful, considerate, experienced, and intelligent community for its willingness to help people with everything trueNAS related. thanks again
 
