Suggestions for a low-power system to replace that old dual-Xeon Mac Pro

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
thanks for the update, please continue updating us on your build.

Would like to see the final measurement (power) and some performance pictures
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
thanks for the update, please continue updating us on your build.

Would like to see the final measurement (power) and some performance pictures
I will.
That's likely 2-4x the idle power draw of a more modern system.
That was my initial guess too.

I am tending to go with 5x Crucial MX500 2TB in a raidz1 config.
The data pool will be backed up, and we could sustain a total loss, since ongoing projects are always on at least one workstation.

Do you agree such a pool would perform better than the striped triple-mirror vdevs made out of 6x 2.5" 4TB Barracudas?
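For a rough comparison of raw usable space, here's how the two layouts work out. A minimal sketch only: it ignores ZFS metadata, raidz padding, and free-space headroom, and it assumes "triple mirror" means two striped 3-way mirror vdevs.

```python
def raidz1_usable_tb(n_disks: int, disk_tb: float) -> float:
    """raidz1 spends one disk's worth of space on parity; n-1 disks hold data."""
    return (n_disks - 1) * disk_tb

def striped_mirrors_usable_tb(n_vdevs: int, disk_tb: float) -> float:
    """Each mirror vdev, however many ways, contributes one disk of capacity."""
    return n_vdevs * disk_tb

# 5x 2TB MX500 in raidz1
ssd_pool = raidz1_usable_tb(5, 2.0)          # 8.0 TB usable
# 6x 4TB Barracudas as two striped 3-way mirrors (assumed layout)
hdd_pool = striped_mirrors_usable_tb(2, 4.0)  # 8.0 TB usable
print(ssd_pool, hdd_pool)
```

So raw capacity is roughly a wash between the two; the difference is in IOPS and redundancy, not space.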
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Crucial MX500 have a known issue with FreeNAS.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Crucial MX500 have a known issue with FreeNAS.
Hmm, thanks for pointing this out.

Reading this thread, I must confess that I had forgotten about that 50% rule (for mirrors) and other general considerations like block size.

At 47% usage, our current 4TB storage pool is scratching that limit. But since we take no snapshots and are basically only adding data, overwriting* only a small number of working files at the moment, I don't think we would see any performance loss even going over 70%; but that's a wild guess only.

* I learned that ZFS never overwrites in place; because of its copy-on-write behavior it writes the changed data block to a new location.
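To put numbers on that headroom, a quick back-of-the-envelope check. A sketch only; the 50% and 80% thresholds are the usual forum rules of thumb, not hard ZFS limits:

```python
def headroom_tb(pool_tb: float, used_pct: float, limit_pct: float) -> float:
    """TB that can still be written before the pool crosses limit_pct full."""
    return pool_tb * (limit_pct - used_pct) / 100.0

pool_tb, used_pct = 4.0, 47.0
for limit in (50.0, 70.0, 80.0):
    print(f"to {limit:.0f}% full: {headroom_tb(pool_tb, used_pct, limit):.2f} TB left")
```

On a 4TB pool at 47%, only ~0.12TB separates you from the 50% mirror rule, but ~1.32TB from the general 80% mark.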

A typical folder looks like this; it's really mixed (we do 3D work), and single files can be larger (up to 2GB):
[screenshot: Annotation 2020-01-08 122248.png]

As I understand it, mirrors are better at handling such data?
AND/BUT:
The plan to use snapshots will fragment the pool as snapshots are released, which will force me to stick to that 50% rule for mirrors, right?

So maybe I'd better stick with the spinning drives and get more raw space for the money.
 

Greg_E

Explorer
Joined
Jun 25, 2013
Messages
76
Other than ZFS doing a full write before erase and using up all the extra blocks assigned for repair, I don't find anything specific to the MX500 with a quick Google search, so I'd like more specifics too. The closest reference I found was a user saying they'd had them running for around three years and that SMART reported no repair blocks left, which means the next time a test runs and finds bad blocks, the drive loses capacity on its way to complete failure. I have some MX500s in desktops, old 250GB units that still work, but their usage is much different from a NAS, and especially ZFS.
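If you want to watch for that spare-area-exhausted condition, you can scrape the relevant attribute out of `smartctl -A` output. A minimal sketch; the attribute names and IDs vary by vendor, and the sample output below is hypothetical, so check what your drives actually report:

```python
# Hypothetical excerpt of `smartctl -A` output; real attribute names and IDs
# differ per vendor and model.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0032   100   100   010    Old_age   0
170 Available_Reservd_Space 0x0033   084   084   010    Pre-fail  0
"""

def attr_value(output: str, name: str) -> int:
    """Return the normalized VALUE column for a named SMART attribute."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == name:
            return int(fields[3])
    raise KeyError(name)

spare = attr_value(SAMPLE, "Available_Reservd_Space")
if spare <= 10:  # at or below the THRESH column: spare area nearly gone
    print("WARNING: SSD spare area exhausted")
```

Wiring something like this into a cron job (or just eyeballing `smartctl -A` periodically) catches the "no repair blocks left" state before the drive starts losing capacity.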
 

volothamp

Explorer
Joined
Jul 28, 2019
Messages
72
Other than ZFS doing a full write before erase and using up all the extra blocks assigned for repair, I don't find anything specific to the MX500 with a quick Google search, so I'd like more specifics too. The closest reference I found was a user saying they'd had them running for around three years and that SMART reported no repair blocks left, which means the next time a test runs and finds bad blocks, the drive loses capacity on its way to complete failure. I have some MX500s in desktops, old 250GB units that still work, but their usage is much different from a NAS, and especially ZFS.

There's also this: the problem of a specific SMART error that has to be ignored, but to me it's just a small detail.

Greg, tell me how your build goes; I'll do the same in a few months.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I would also consider how the coming special VDEVs (TrueNAS 12.0) might influence your system design and pool layout.

Alternatively, you could consider a small pool of mirrored SSDs to allow fast reads and writes over 10GbE+ infrastructure, while maintaining a bigger spinning pool for long-term storage.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
UPDATE:

The build is complete and has been up for about 3 months or so.
I did a complete burn-in as described somewhere here in the forum.
The fan script is running to take care of the HDDs.
Everything works great.
The server draws 52W at idle.
Feels snappy.
I haven't upgraded to 10GbE yet, so the bottleneck should be the network infrastructure at the moment.
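For the curious, 52W around the clock works out as follows. A quick sketch; the €0.30/kWh rate is an assumed example, plug in your own tariff:

```python
def annual_kwh(watts: float) -> float:
    """Energy drawn over a year of 24/7 operation, in kWh."""
    return watts * 24 * 365 / 1000.0

idle_w = 52.0
kwh = annual_kwh(idle_w)   # ~455.5 kWh per year
cost = kwh * 0.30          # assumed tariff of 0.30 per kWh
print(f"{kwh:.0f} kWh/year, ~{cost:.0f} per year at 0.30/kWh")
```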

Here again the parts:

mb - SUPERMICRO X10SDV
memory - Samsung 32GB M393A4K40BB0-CPB
case - Micro-100B mini server case
psu - Seasonic SSP-300SUG
hdd bay - SilverStone SST-FS212B
pool hdds - 6x SEAGATE Guardian BarraCuda 4 TB
FreeNAS on 2x 64GB Supermicro SATA DOMs
jail ssd - WD Green PC SSD 3D, 240GB

a 12cm fan blowing down on the board
a couple of Noctuas
a thin Noctua NF-A12x15 as air intake for the hdd bay (some modding involved)


Thanks again for all your suggestions and kind help.

PS: I returned the Fractal Node; I just didn't like the look of it.
 

Attachments

  • top_view.jpeg
  • bay.jpeg
  • front.jpeg
  • on_table.jpeg

kiriak

Contributor
Joined
Mar 2, 2020
Messages
122
When you say 52W idling, do you mean with the HDDs spinning, or what?
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
How's the noise @hotdog?
The server is very quiet. We do not have a separate room for it, so this was a must. The fan that came with the SilverStone hdd bay was very loud. I replaced it with a Noctua but found that the cooling of the drives wouldn't be good enough for the hotter summer days. That's why I decided to mount that intake fan directly in front of the bay. Together with the fan script, the temps are now very good at a very low noise level.
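The fan scripts floating around this forum are mostly shell or Perl; the core idea is just a temperature-to-duty-cycle mapping. A minimal illustration with made-up setpoints, not the actual script:

```python
def fan_duty(temp_c: float, low_c: float = 30.0, high_c: float = 45.0,
             min_duty: int = 20, max_duty: int = 100) -> int:
    """Linearly map HDD temperature to a PWM duty-cycle percentage."""
    if temp_c <= low_c:
        return min_duty
    if temp_c >= high_c:
        return max_duty
    frac = (temp_c - low_c) / (high_c - low_c)
    return round(min_duty + frac * (max_duty - min_duty))

# Idle drives coast at minimum duty; warm drives ramp toward full speed.
for t in (28, 35, 40, 46):
    print(t, fan_duty(t))
```

The real scripts read drive temps via `smartctl` and write the duty cycle to the BMC with `ipmitool`, but the curve above is the part you tune for noise versus cooling.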
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
Update:
My jail SSD (WD Green PC SSD 3D, 240GB) failed a couple of days ago.
I am not sure why, but I had my system dataset on it (kind of stupid).

But I guess I was lucky: I was able to move the system dataset to the main data pool on the spinning HDDs. The logging history is gone, but the system seems to work normally again.
The fan script with its log file was also on that SSD; I moved that one as well.

Now the questions:
System dataset: is it better to put it on the mirrored 64GB SATA DOMs?

I am not running other jails or anything at the moment, but I may want to use the NAS as a VPN server, because the FRITZ!Box VPN is too slow. So replacing the failed SSD is an option, too.
Would you then go for a mirrored setup and put the system dataset there again?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
SATADOMs are expensive but they work very well for system datasets. I use a mirrored boot pool even though that may not always save your bacon due to the limitations imposed by the BIOS. If disk 1 in the boot order always fails somewhere partially into boot, then the NAS will be done until you pull it. Recovery from failure with a mirrored boot pool is quicker though, which is why I use it.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
SATADOMs are expensive but they work very well for system datasets. I use a mirrored boot pool even though that may not always save your bacon due to the limitations imposed by the BIOS. If disk 1 in the boot order always fails somewhere partially into boot, then the NAS will be done until you pull it. Recovery from failure with a mirrored boot pool is quicker though, which is why I use it.
Thanks for the answer, Constantin.
So just to be clear: you suggest moving the dataset to the SATA DOMs where the FreeNAS system is located?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I'm not sure I understand your question or I am misinterpreting, and neither is a good thing.

I would use the SATADOMs as intended - to hold the various system images (i.e. "TrueNAS 12.0 U1") and the system swap files. Nothing more. SATADOMs are not particularly fast SSDs, nor do they tend to be big. I'd dedicate them to the system files needed to boot and run a NAS.
 

hotdog

Dabbler
Joined
Apr 14, 2014
Messages
44
I'm not sure I understand your question or I am misinterpreting, and neither is a good thing.

I would use the SATADOMs as intended - to hold the various system images (i.e. "TrueNAS 12.0 U1") and the system swap files. Nothing more. SATADOMs are not particularly fast SSDs, nor do they tend to be big. I'd dedicate them to the system files needed to boot and run a NAS.
OK, maybe to be clearer:
I am not really sure how to configure this setting (System Dataset Pool).
Right now it points to the big pool (and before that to the single SSD that failed), not the SATA DOMs.
So I guess this is OK then?
I thought the 64GB SATA DOMs should be big enough to also hold the system dataset.
 