This is a continuation of my now read-only post here. I felt that some of the points below might be useful to future readers e.g. the power consumption data.
So, continuing right where we left off:
Storage
Thanks @Davvo! I didn’t have TrueNAS SCALE installed yet, which is why I tried to do it with FreeDOS. I have since installed it (on one of my old 128GB SSDs, a Crucial M4-CT128M4SSD2, mirrored to another, ancient, 128GB Samsung MS5SPA128HMCD).
When moving from cardboard to the case I didn't fully plug one of them in, so the boot pool immediately degraded. At least I now know the mirroring works, yay ;)
I plan to use the final SSD left over from my desktop’s upgrade, a 1TB Crucial MX500 (CT1000MX500SSD1), as a temporary download directory for Deluge and for application data in general. However, I also saw here that Crucial MX500s might be causing issues, so I’ll have to test thoroughly whether this works and get a cheap replacement otherwise. 1TB is ‘too much’ anyway.
I had briefly considered using the M.2 slots for the boot drive(s), but since I have the HBA as well as left-over SSDs, that doesn’t make sense to me right now.
IPMI and fans
With TrueNAS installed and thus access to ipmitool from the Linux console, I reset the IPMI logins, and it has worked great so far. I no longer need to rig up the old VGA monitor - no regrets getting this feature!
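For anyone who lands here with the same locked-out BMC, resetting the IPMI logins from the local shell can be sketched roughly like this. User ID 2 and channel 1 are assumptions - check `ipmitool user list 1` against your own board first. The `DRYRUN` guard only prints the commands so nothing is touched by accident.

```shell
# Sketch of resetting IPMI logins via the local /dev/ipmi0 interface.
# Assumptions: channel 1 is the LAN interface and user ID 2 is the
# admin slot - verify both on your own board before running for real.
DRYRUN=${DRYRUN:-1}
run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "+ $*"        # dry run: only print the command
  else
    "$@"               # live run: actually execute it
  fi
}

run ipmitool user list 1                           # inspect existing users on channel 1
run ipmitool user set name 2 admin                 # (re)name user ID 2
run ipmitool user set password 2 'NewStrongPass!'  # set a fresh password
run ipmitool user enable 2                         # make sure the account is active
run ipmitool channel setaccess 1 2 privilege=4     # grant ADMINISTRATOR on channel 1
```

Set `DRYRUN=0` once the printed commands look right for your board.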
I also set the fan thresholds - thanks for the link! - although they already looked pretty good (likely why the power-cycling issue occurred only rarely). I lowered them a bit more and now all is fine.
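The threshold tweak itself is `ipmitool sensor thresh`; the sensor name `FAN1` and the RPM values below are placeholders for whatever your own `ipmitool sensor` output shows. Same dry-run guard as above:

```shell
# Sketch of lowering fan thresholds so slow, quiet fans don't trip the BMC
# into power-cycling them. FAN1 and the RPM values are assumptions - read
# your real sensor names and current thresholds from `ipmitool sensor`.
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = "1" ] && echo "+ $*" || "$@"; }

run ipmitool sensor                                # current readings and thresholds
run ipmitool sensor thresh FAN1 lower 100 200 300  # LNR LCR LNC, in RPM
```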
I have since also moved everything into the final case with different fans and those haven’t given me any troubles from the start.
Hardware changes
I got a good price on another two of the same 16TB HDDs, and for the RAM I eventually decided to switch to KSM32ED8/32HC, which wasn't any more expensive and gives me the option of 128GB should I ever need it. It doesn’t run at 3200 MHz and the BIOS won’t let me select that speed although the board should support it, but I don’t really care either way.
The updated system configuration now looks like this:
Mobo | Supermicro X12SCZ-TLN4F
CPU | Xeon W-1270E
Cooler | Thermalright True Spirit 140 BW (indeed, no backplate issues in my case)
RAM | 2x Kingston Server Premier DIMM 32GB, DDR4-3200 (KSM32ED8/32HC)
HBA | AOC-S3008L-L8e
Case | Lian Li A75X
PSU | be quiet! Dark Power Pro P7 650W ATX
Boot Drives | Mirror of Crucial M4-CT128M4SSD2 / Samsung MS5SPA128HMCD
I ran Memtest on this and it passed just fine. Since yesterday, the HDDs have been going through disk-burnin.sh.
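As a rough sanity check on how long the burn-in should take, here's the back-of-the-envelope math for one drive. The 230 MB/s average throughput is an assumed figure, not something I measured - real drives are faster on the outer tracks and slower on the inner ones:

```shell
# Back-of-the-envelope burn-in duration for one 16TB drive.
# 230 MB/s average throughput is an assumption, not a measurement.
awk 'BEGIN {
  bytes  = 16e12    # 16 TB drive
  rate   = 230e6    # assumed average throughput, bytes/s
  passes = 8        # badblocks -w: 4 write patterns + 4 read-back passes
  hours_per_pass = bytes / rate / 3600
  printf "%.1f hours per pass, %.1f days total\n",
         hours_per_pass, hours_per_pass * passes / 24
}'
```

So the better part of a week per batch is expected, not a sign that anything is stuck.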
Once that’s done, I’ll play around with layouts, in particular comparing a 3-way mirror, as initially suggested by @sretalla, to a 5-wide RAIDZ2, which I could now also do in terms of disks available.
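To make that comparison concrete, here's the raw-capacity arithmetic for the two candidates with 16TB disks. These numbers ignore ZFS metadata/slop overhead and the TB-vs-TiB gap, so real usable space will be noticeably lower:

```shell
# Raw usable capacity of the two candidate layouts with 16TB disks.
# Ignores ZFS overhead, padding, and TB-vs-TiB, so treat as upper bounds.
awk 'BEGIN {
  disk = 16   # TB per drive
  printf "5-wide RAIDZ2: %d TB usable, survives any 2 disk failures\n", (5 - 2) * disk
  printf "3-way mirror : %d TB usable, survives any 2 disk failures\n", disk
}'
```

The mirror gives up a lot of capacity in exchange for much faster resilvering and easier expansion, which is the real trade-off to test.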
Power
Since I was initially concerned about power usage, here are a few numbers that might be of interest to others. My measuring device is probably not super accurate, but it should give a good idea.
Using the temporary PSU (FSP350-60HHN):
IPMI only | 8-9W
Idling system w/ 1 case fan, 1 CPU fan (Arctic Alpine 12 CO) | 40W
Idling system w/ 1 case fan, 1 CPU fan, 2 boot SSDs | 40W
Idle in TrueNAS shell w/ all of the above and 1GbE port active | 25W
Idle in TrueNAS shell w/ all of the above and 10GbE port active | 27W
Memtest | 90-140W depending on the particular test
Using the be quiet! PSU:
IPMI only | 8-9W
TrueNAS shell, 4 case fans, 1 CPU fan (Thermalright), 1GbE IPMI + 10GbE | 40W
Above + HBA + 5 idling 16TB Toshiba MG08ACA16TE | 75W
Above, in Memtest | 185W
Above, in disk-burnin.sh (badblocks) on all HDDs | 95-105W
These values are not as high as I had initially feared. While 75W idle isn't as low as I would have hoped (in absolute terms), almost half of that is just the disks, and another good portion goes to the 5 fans. So in retrospect I'm happy not to have gone for a lower-TDP CPU/mobo combo that would likely have limited me down the road, or at least been more costly.
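For anyone weighing a similar build, the 75W idle figure translates into yearly energy like this. The 0.30 per kWh rate is just an assumed example, not my actual tariff - plug in your own:

```shell
# Yearly energy and running cost at 75W continuous draw.
# The 0.30/kWh rate is an assumed example, not my actual tariff.
awk 'BEGIN {
  watts = 75
  rate  = 0.30                      # currency units per kWh (assumption)
  kwh   = watts * 24 * 365 / 1000   # kWh per year
  printf "%.0f kWh/year, ~%.0f per year at %.2f/kWh\n", kwh, kwh * rate, rate
}'
```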
As I expected from my experience with the case, temperatures are not a concern. Both the CPU at idle and the HDDs (independent of load) run only 5-10 degrees over ambient. I haven't tested it yet, but I'm sure the Thermalright can easily handle the CPU at full load.
Network
Finally, I have an X540-T1 in my client machine, and in some initial iperf testing I only get about 570 MB/s on a single stream. By increasing the number of parallel clients in iperf I can saturate the link. I still have to check what's going on there, but from reading around, fiddling with jumbo frames is apparently not the way to go these days.
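For context on that 570 MB/s number, converting it to line rate shows the single stream is already getting nearly half of what 10GbE can carry:

```shell
# How far 570 MB/s single-stream throughput is from a saturated 10GbE link.
awk 'BEGIN {
  mbytes = 570                 # measured single-stream throughput, MB/s
  gbits  = mbytes * 8 / 1000   # convert to Gbit/s
  printf "%.2f Gbit/s, %.0f%% of 10GbE line rate\n", gbits, gbits / 10 * 100
}'
```

The saturating multi-stream run is simply iperf3's parallel-client flag, something like `iperf3 -c <server-ip> -P 4` (the server address is a placeholder), with `-P` controlling the number of parallel streams.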