Low energy consumption self-build FreeNAS

Clexp

Cadet
Joined
Aug 7, 2020
Messages
7
Hi,
I would like to build (my first) FreeNAS box, with a low energy consumption goal.

I gather SSDs consume much less power than hard disks, something like 8 W versus 0.5 W or less per drive. With a simple 4- or 6-disk system I might be at 32+ W on disks alone, while an SSD setup would be around 2+ W. If disks are spun down, does that reduce their wattage? I can't find anything useful on this; can anyone comment? My training is mechanical engineering, and I'd expect the energy loss in a low-friction bearing to be very low, with most of the heat coming from the IC chips, but I can't find any real-world figures on disk energy consumption spinning versus static.
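To put rough numbers on the spindown question (a back-of-envelope sketch; the wattages below are typical datasheet-style figures I am assuming, not measurements of any specific drive):

```python
# Annual energy use of a small drive pool at an assumed constant draw.
# Assumed figures: ~6 W for a spinning 3.5" HDD at idle, ~0.8 W in standby.

HOURS_PER_YEAR = 24 * 365

def kwh_per_year(drives, watts):
    """kWh per year for `drives` drives each drawing a constant `watts`."""
    return drives * watts * HOURS_PER_YEAR / 1000

spinning = kwh_per_year(6, 6.0)   # six drives, always spinning
standby = kwh_per_year(6, 0.8)    # six drives, always spun down
print(f"always spinning: {spinning:.0f} kWh/yr")
print(f"always standby:  {standby:.0f} kWh/yr")
```

So on these assumed figures, spindown would cut the drives' share from roughly 315 kWh/yr to roughly 42 kWh/yr; whether the drives ever stay down long enough to realise that is the open question.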

I read that SSDs lie about themselves and can be difficult to use. Most FreeNAS builders seem to populate their zpools with hard disks rather than SSDs (excepting boot and maybe the ZFS intent log), so an all-SSD setup is a bit atypical for a first-time builder? Further, FreeNAS does lots of clever data scrubbing and the like, so spinning down may be pointless, as any disk will be spun up for these tasks even if SMART doesn't spin it up every 30 s. Any comments? Is spindown pointless with FreeNAS? Does that mean FreeNAS forces disks to sit at 'higher' energy consumption most of the time, and what does that mean in practice?

OK, the use case: just a simple family home NAS, mainly for file serving. It will hold four people's home directories (about 20 GB each, accumulated over the last 10 years), plus three smallish shared directories of music (25 GB), photos (250 GB) and video (1.2 TB). I am not expecting to transcode much, as the video folder is relatively small and contains material my children have grown out of; I would be transcoding only occasionally, and only ever to one target, a 4K smart TV. Our video consumption at home is mainly streamed from Prime or iPlayer etc. Accessibility: files might be accessed for 2 hours on 5 nights of the week, so in the current arrangement the disks spin down at midnight and stay down until 5 to 7 pm, when we do life admin after work in the evenings. Photos are accessed once a week at most, music rarely (though this might increase to weekly or daily with better media serving), and video rarely, though again it might increase a little with better media serving.

For energy efficiency I read that newer-generation processors (i5/i7/E series?) are very efficient and might sit at 4 W idle, whereas a low-TDP older processor may actually have a higher idle wattage, so really I want to be buying a new processor. For the rest of the build: I will use my existing tower, and get an entry-level (if that term is applicable) ASRock Rack or Supermicro board, as efficient a PSU as I can find, ECC RAM as per the received wisdom, plus a modern Intel NIC. Nothing surprising there; that is about as low power as it is going to get, no?

Part 2:
Prior to coming to FreeNAS, I long thought that higher-reliability setups include a UPS, so your data is safe during power outages. These happen fairly regularly in my house, not least because I drill through a wall cable or something now and then, and the local supply also trips from time to time. Do people still mitigate disk damage with a UPS, or is that pointless now that ZFS is copy-on-write? If builds do include a UPS, is there a tried path here? Does it signal FreeNAS to park the disks?

If you don't have time to type anything out, I would be grateful if you just pasted your favourite links on the subject.
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Hi,
Go with a UPS. You can connect the UPS directly to FreeNAS: for APC and other makes, FreeNAS can monitor the UPS over USB and react the way you configure it.
Regarding a full-SSD setup, there are no issues; just choose your SSDs wisely, and probably go for RAIDZ2 or Z3, as you don't need the IOPS anyway.
Look at the TBW rating of the SSDs and choose ones with a high figure (1 PB or more), as copy-on-write will add some writes to the disks.
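As a sanity check on what TBW endurance means at home-NAS write rates (the figures here are assumptions for illustration, not from any particular drive):

```python
# How many years a given TBW endurance rating lasts at a steady write rate.
# Assumption: ~20 GB/day of writes, including copy-on-write and scrub overhead.

def years_of_endurance(tbw_tb, gb_per_day):
    """Years until `tbw_tb` terabytes-written are exhausted at `gb_per_day`."""
    return tbw_tb * 1000 / gb_per_day / 365

print(f"600 TBW  @ 20 GB/day: {years_of_endurance(600, 20):.0f} years")
print(f"1000 TBW @ 20 GB/day: {years_of_endurance(1000, 20):.0f} years")
```

On those assumptions, even a mid-range endurance rating outlives the hardware by a wide margin, which is why the advice is simply to avoid the very lowest-TBW consumer drives.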
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
Given your use case, you might consider an X11SCL-F with an i3-9100F. The i3-9100F has a modest 65 W TDP and should provide plenty of CPU power for your use case. Here is a link to some power consumption tests: https://www.servethehome.com/intel-core-i3-9100f-review-for-servers/3/

You might also consider a Supermicro embedded solution with a C3000-series CPU, but this increases the cost and would more than likely struggle with your transcoding goals.

Lastly, you could explore the Xeon D series. This will give you more power than the C3000 series and has everything you should need onboard. An example is the X11SDV-4C-TLN2F: along with plenty of SATA ports, you get 10 Gb/s Ethernet.

Overall I think the X11SCL-F provides the best value if you're looking to stick with new parts. You could always look to eBay to save some money, but then you'll likely lose power efficiency.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Spin down is a pretty quick way to destroy drives. Take a look at my build. I tried to optimize for low power, and my build idles at 40 W. When doing heavy transcoding in Plex, my build peaks at 65 W.
 

Clexp

Cadet
Joined
Aug 7, 2020
Messages
7
Thank you for such a rapid reply. The USB UPS option seems entirely appropriate for domestic-quality electricity reliability: if one goes to the lengths of NAS-quality drives, ZFS, ECC memory, Supermicro and so on, the weak link in the chain will be the domestic power supply.

I had not thought about mirrors versus RAIDZ2/3, but it makes complete sense; I don't need the IOPS.

A few questions; please drop a link rather than typing out a full reply if there is an article I should have read.
So, to take advantage of ZFS reliability, you need more than one disk per vdev and more than one vdev per pool, i.e. a minimum of 4 disks/SSDs, set up as a pair of two-disk vdevs? (Correct me?)

I have read about 'hot spares'. I gather this is an unused disk attached to the pool that ZFS drops in when any one active disk fails; ZFS temporarily resilvers to it while waiting for a replacement in the vdev. So the minimum number of disks/SSDs is 4+1? (Correct me? I have seen an instance with one disk, but only in a YouTube demo video.) Is this automatic, or is it run by the sysadmin? How often is it used? Is it necessary in a domestic (or indeed production) system?

I have also read about having a disk for the intent log. I understand this as a usually-empty staging disk/SSD for incoming writes: ZFS squirts the data to the intent log and reports it as written, then writes it to the pool in the background to capture the ZFS data-security benefits, ultimately dropping it from the intent log. So the minimum number of disks/SSDs is 4+1+1? Or is that unnecessary if the vdevs are made of SSDs, because the writes are so fast?

In addition you need a boot drive. I have seen this done with a USB stick. I do not expect to boot very often, so perhaps it does not matter, but given all the FreeNAS/TrueNAS choices, which seem so much more serious than the usual RPi plus recycled/shucked drives I see elsewhere, a real boot disk seems entirely appropriate. Again, booting is rare, so it does not need to be an SSD, but regardless this is a separate drive, no? And it is not part of the ZFS data pool? So the minimum number of drives is 4 (vdevs) + 1 (intent log) + 1 (hot spare) + 1 (boot disk) = 7 disks?
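The disk-count arithmetic above can be sketched roughly as follows (a simplified model that ignores ZFS metadata overhead, so real usable space is somewhat less; the 4 TB drive size is just an example):

```python
# Usable capacity and fault tolerance of common small ZFS layouts,
# ignoring filesystem overhead. Drive size is an assumed example.

def raidz_usable(n_disks, disk_tb, parity):
    """Usable TB of one RAIDZ vdev with `parity` parity disks."""
    return (n_disks - parity) * disk_tb

def mirror_usable(n_disks, disk_tb):
    """Usable TB of one n-way mirror vdev (all disks hold the same data)."""
    return disk_tb

print(f"2-disk mirror of 4 TB: {mirror_usable(2, 4.0)} TB usable, survives 1 failure")
print(f"4-disk RAIDZ2 of 4 TB: {raidz_usable(4, 4.0, 2)} TB usable, survives 2 failures")
print(f"6-disk RAIDZ2 of 4 TB: {raidz_usable(6, 4.0, 2)} TB usable, survives 2 failures")
```

The redundancy lives entirely inside each vdev; a second vdev adds capacity and speed, not safety, which is why a single mirror or a single RAIDZ2 vdev is a perfectly valid pool.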

Sorry about the long post
 

Clexp

Cadet
Joined
Aug 7, 2020
Messages
7
Sorry, I could have answered my own questions; I have read around a bit more since the post above:
Do I need a minimum of 4 disks in a pool?
No, you can have one, but why would you? The redundancy comes from multiple disks per vdev; multiple vdevs per pool is less important and more about speed. So I will get 4-6.

What about the other disks?
You do need a boot disk, unless you want to go the USB-stick route, as many do. I will probably get an M.2 drive for that.
You don't need a separate disk for the ZIL unless you are doing a lot of synchronous writes of high-value data. Incoming data is buffered in RAM, and a separate log device just gives those writes power-loss protection before they reach the pool. I am using this for family photos and general-purpose docs like children's homework and my tax return: low data rates, low to moderate sensitivity. I probably don't need one; it's more for heavy database work, which is not me.
You don't need an L2ARC unless you are doing lots of reads requiring high read rates. As above, not me.

I have a REALLY low need for processing. We will transcode almost never: with almost no stored videos we want to watch, minimal inclination to generate more, and the market providing lots of streamed TV including films, I just don't need to transcode anything. This means I have minimal need for processing power.

This means I really can look below an i3 for a processor. Why would I, you ask? I'll come to that.

The motherboard suggested above (X11SCL-F; thanks for the suggestion, I would be a little lost otherwise) has an Intel C242 chipset, which supports 8th/9th-generation (formerly Coffee Lake) processors, including Celerons. The LGA1151 socket (as on the X11SCL-F) comes in two types, one supporting Coffee Lake and one supporting pre-Coffee Lake processors. Given the chipset, I think this board is the Coffee Lake type. Reading the X11SCL-F specs, I think I should be able to put an 8th/9th-gen Coffee Lake Celeron in it; the website suggests as much.
Here.

The Intel Celeron page referenced is formerly Coffee Lake and is 8th/9th generation. It is LGA1151. It seems to match the X11SCL by specs.
Here.

Question: some sites describe the Celeron G4900 series as "Only Compatible with Intel 300 Series Motherboards". Does this mean only boards made by Intel, or does the X11SCL count as one of these? Some references describe "300 series" as "LGA1151". Will it work?

Why would I want to? Good question. First, a detour through SSDs versus HDDs.

So the key point about energy is the £££ spend; see the attached sheet. I buy all my energy from a green supplier, so I don't have to worry about the carbon footprint. The spreadsheet gives the 10-year energy cost of SSDs versus HDDs. While the saving is £270 over 10 years, to match the HDD pool size upfront I would be spending 6 × £90 on SSDs. Bottom line: win for HDD.
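For anyone without the sheet open, the shape of that calculation is roughly this (the wattages and tariff below are my assumptions for illustration, so the totals won't match the sheet's £270 exactly; the 6 × £90 SSD premium is from the sheet):

```python
# 10-year running-cost comparison: 6 HDDs vs 6 SSDs.
# Assumed figures: HDD ~5.5 W average, SSD ~0.3 W average, £0.15/kWh tariff.

YEARS = 10
HOURS = 24 * 365 * YEARS
PRICE_GBP_PER_KWH = 0.15  # assumed tariff

def ten_year_cost(drives, watts):
    """£ of electricity for `drives` drives at `watts` each over 10 years."""
    return drives * watts * HOURS / 1000 * PRICE_GBP_PER_KWH

hdd = ten_year_cost(6, 5.5)
ssd = ten_year_cost(6, 0.3)
extra_upfront = 6 * 90  # SSD price premium, £

print(f"10-year energy saving with SSDs: £{hdd - ssd:.0f}")
print(f"Extra upfront SSD cost:          £{extra_upfront}")
```

Whenever the upfront premium exceeds the 10-year energy saving, the HDDs win on total cost, which is the conclusion the sheet reaches.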

How does this affect the Celeron decision? Well, the cost differential between the Celeron and the i3 is not as big, so the energy saving weighs more heavily, and there is a marked win for the Celeron. See the sheet.

The build is now going to be the X11SCL plus a Celeron (if it works; I'd appreciate a comment), a UPS, 4-6 IronWolf drives, and an M.2 SSD.

Next episode: PSU.
 

Attachments

  • power consumption home nas.ods
    11.8 KB · Views: 258

Clexp

Cadet
Joined
Aug 7, 2020
Messages
7
OK, don't reply; I've answered my own question. No, the Celeron will not fit the X11SCL-F. This page shows the motherboard will only take Xeons (click on Compatible Products), and this page implies the suggested i3-9100 requires a 300-series chipset (click on Compatible Products), whereas this page shows the X11SCL-F has a 200-series chipset. What am I missing?
 

LeDiable

Dabbler
Joined
May 6, 2020
Messages
36
I'm using the X11SCL-F with an i3-9100F. Been running for a couple months without any problems. Bought my RAM directly from Supermicro to ensure compatibility.
 

Clexp

Cadet
Joined
Aug 7, 2020
Messages
7
Hmm. So did you follow warllo's recommendation, or did you do some spec-sheet reading that told you this would work?

If you knew what you were looking for, what was it that you knew?

If it was warllo's experience, then, as in the other 'low energy new build' thread on this sub-forum: warllo, how did you know it would work?

I ask because I really want to understand how these decisions are made. If the processor only needs to be an 8th/9th-gen LGA1151 part, then the Celeron G4900 series should work. But the G4900 series says 300-series chipset only, and the i3-9100, which according to the Intel specs is also a 300-series-chipset part, supposedly should not go in a C242 board either...

...am I just coming up against an indecipherable mix of undeclared truths and veiled marketing implications?

This wiki page shows (two-thirds of the way down) that the Celeron G4930s are in the same socket and generation as the proven-to-work i3-9100F. Now I think I could use the Celeron. Does anyone else's head hurt reading all these info sources?

I note the i3-9100F may actually be lower on power consumption than the Celeron: the i3-9100F has no GPU, whereas the Celeron G4930 has one, which will sit there idling and consuming power. I could really do with a chart of idle power (like this, but with idle power). There is an information gap out there: what is the standalone idle power of all these parts? This gets asked in various places, but no one has the answer. For enterprise builds idle power is not important, as the server chips are supposed to run near full load most of the time. But home servers are going to be idling 99% of the time, for 5-10 years, so I think home-server builds really need idle power figures. Seriously, in a week, how much does a home server get used? Even if you transcode whenever you are at home AND awake, that is well under half the time. A third of the day you are asleep, so unless the server is backing up (by no means the highest-power task) it is idle. When you are at work, your kids are at school, so it is idle. In the remaining time, normal people don't transcode all their free hours, and they don't even stream music 50% of them. The home server is idle for the vast majority of its time; therefore I think the crucial figure is idle power. Anyone have links to share on idle power?
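The duty-cycle argument can be made concrete with a rough sketch (the hours are from the use case earlier in the thread; the 40 W idle and 65 W active figures are borrowed from a build quoted above and are assumptions, not measurements of any particular system):

```python
# What share of a home server's weekly energy goes to idling?
# Assumptions: 2 h of active use on 5 nights/week, idle the rest;
# 40 W at idle, 65 W under load.

HOURS_PER_WEEK = 24 * 7

def idle_energy_share(active_hours_per_week, idle_w, active_w):
    """Fraction of weekly energy consumed while the server is idle."""
    idle_hours = HOURS_PER_WEEK - active_hours_per_week
    idle_wh = idle_hours * idle_w
    active_wh = active_hours_per_week * active_w
    return idle_wh / (idle_wh + active_wh)

share = idle_energy_share(2 * 5, 40, 65)
print(f"idle for {HOURS_PER_WEEK - 10} of {HOURS_PER_WEEK} h/week "
      f"-> {share:.0%} of weekly energy")
```

On these assumptions roughly nine-tenths of the electricity bill is idle consumption, which is exactly why the idle figure, not TDP, is the number that matters for a home build.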
 

elorimer

Contributor
Joined
Aug 26, 2019
Messages
194
This is just one data point for you. I have a J5005-ITX board with 16 GB of RAM, six HDDs and an SSD, and it idles at 34 W. One pool has two 7200 rpm 8 TB drives in a mirror and two 5400 rpm 4 TB drives in a mirror; another pool is a single 5400 rpm 4 TB drive for backup purposes, and another a single 7200 rpm 2 TB drive for backup purposes. The SSD is the boot pool. When I had a single RAIDZ1 pool of four 5400 rpm 4 TB drives, it idled at 28 W. I don't think SSDs would be cost-effective (they might save $40 a year in electricity), and this is plenty for me as a home-server solution. I freely admit, though, that this is not ECC, and far from office or enterprise level.

Also, redundancy is at the vdev level, and having more vdevs is not necessarily as good a thing as one might think: if you lose a vdev, you lose the whole pool. People think about 4 drives for RAIDZ2 because that is the minimum, but I think a two-drive mirror is as good for a home server, with one important proviso: you have to have a backup of your data in place, and an offsite backup of the backup. The backups can be of datasets rather than whole pools, so they can be cobbled together from old, smaller drives. But I would start with the backup plan and work backward. I ended up with a pool of two vdevs because I had two different drive types in the mix, not because one vdev couldn't serve data fast enough.

A very inexpert view from someone in the home server scenario.
 

LeDiable

Dabbler
Joined
May 6, 2020
Messages
36
It wasn't due to @warllo's suggestion. I think I just put my faith in Supermicro's specs, which said the board supports 9th-gen i3s, and there may have been another user running the same combination.

Since you're looking for low power, I should add that my IPMI shows the system using about 92-96 W, no matter what. Well, except when I did the CPU and HDD burn-in with the fans running at full speed; then it was more like 130 W. I've got 6× 4 TB IronWolf drives, a pair of SATA DOMs as boot drives, 2× 32 GB RAM, an HBA, a backplane, and dual 920 W PSUs. The chassis has five 80 mm fans too. I'm sure I could bring the power consumption down by moving to a smaller chassis than the CSE-836, but I'm really not worried at this level of usage. Every build will be a little different; this is just my experience.
 