Selecting First FreeNAS Server Build

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Hello all,

Forgive yet another thread of this nature; we all start somewhere...

Anyhow, I would like to build a home server, and after reviewing options I think I've settled on FreeNAS, as I would like a GUI and console without anything super complex: I'm a novice at this, and the last thing I want is to have no clue how to use the software when a fault happens. That's what I'm trying to avoid. I have mostly used Linux and Windows and have run basic file servers over the years, but they just moved data around, nothing important, so I wasn't too concerned with data integrity and redundancy. Now I'm much more interested in integrity and redundancy, so instead of buying a NAS I think it's time to simply build one, with more control over the selection of equipment.

Electrical Consumption:

I'm trying to weigh the options between building a modern low-power PC and going with fairly old, used server equipment. I do care about electrical consumption and heat, as I don't want a monster electric-bill generator and heat box here in Florida; it's hot enough. While I don't have a hard number in mind, I think less than 100 watts is appropriate, and I would of course love to target something in the 20s or 30s. That said, after reviewing things, I may end up in the 50s. I'm mostly interested in idle consumption rather than load consumption, as the machine will spend most of its time idle or under minimal load.

Budget:

Currently I am not setting a hard budget, but I'm trying to keep it on the lower end. This is a home server, not part of a business. It simply has to provide redundancy for the photos, videos, and imaging data that we don't want to lose to a single fault. I would rather put money into the hard drives than into a monster system, as I won't be running VMs and it won't face the internet; it's just local home-network file sharing with integrity checking and redundancy. If you need a number, I'd like to keep the PC around $250 or less, not including the data hard drives; that's just the base machine. I will likely be using WD data-center HDDs (I already use these), so those will be a separate line item with no fixed budget, as I want redundancy there more than anything; good drives are a must, and several of them.

Purpose:

I generate a fair amount of data that I would prefer not to lose. While I realize this is not a bulletproof backup system, I just want better redundancy. In the past I've simply kept two physical copies of anything important, separate and unplugged, but that is tedious. I also add data regularly, so maybe it's time for an always-available option like a file server on my network. I generally want to explore both integrity of data (to avoid errors corrupting it) and redundancy of data (to avoid simple loss from a single fault or hardware failure).

This data is in the form of a fairly large library of media content (thousands of AVIs and thousands of FLACs/MP3s) along with our yearly pictures (family and the kids); that's the data that will be accessed by everyone in the house daily. The other data is the photography data I generate doing astrophotography (ultra-narrowband solar photography), which produces about 100 GB per session. I later cull the data to keep only the good material I want to process, so the footprint goes down every few weeks. But I'd like to keep the data, as I sometimes review it later and it's often used to confirm events (I share and submit information to SpaceWeather and other enthusiasts).

After culling, everything from the past few years fits comfortably into 8TB of capacity, so it's not a tremendous amount of data. But it's still a bit more than a few cheap externals can manage without integrity and redundancy checks. So this file server needs to host data that will stream over my 5 GHz Wi-Fi network to other systems in the house, and it will store the pictures, video, and imaging data that are accessed less frequently, acting as a redundant archive for those.

I'm thinking I will start with 8TB with redundancy and the ability to expand, but I don't need major expansion; more like the ability to build a second pool, transfer the data there, and logically upgrade to larger capacities as I approach the need and as larger drives become available and affordable.
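From my reading so far, that kind of pool-to-pool migration looks like it would just be a snapshot plus ZFS send/receive; a rough sketch of what I have in mind (pool names are made-up examples, and I haven't built anything yet):

Code:
# Take a recursive snapshot of the old pool, then replicate it to the new one.
zfs snapshot -r tank@migrate

# -R sends the snapshot with all child datasets and their properties;
# -F on the receiving side forces the target to match the incoming stream.
zfs send -R tank@migrate | zfs recv -F bigtank

# Once the copy is verified, the old pool can be exported and its disks retired.
zpool export tank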

Hardware Options:

I have reviewed the links for the minimum/recommended hardware for FreeNAS. I don't know all of it and I have questions. I also have questions about which direction to take with respect to hardware given the purpose above (integrity & redundancy).

So before reviewing some hardware choices, here are some questions that I've seen posed and answered in other ways, but I think it's fair to clear these up first:

Would it be fine to use basic low-power PC gear, such as a modern AM4 motherboard and a modern low-power APU (Athlon 3000G) with non-ECC RAM, for the bones of the system, and focus on having more hard drives with a higher level of redundancy? And if so, what ZFS level (mirrors, or something with even more redundancy than mirrors)? Or would it be better to get older server hardware (an Intel Xeon platform) with registered ECC memory and accept fewer hard drives of redundancy (the idea being adding integrity on top of redundancy)? The only reason I'm aware of to go that route would be affordable access to a platform with ECC memory, versus new, modern ECC-capable equipment that is very, very expensive. While I do not have a set budget for this, I would rather most of the money go into the hard drives themselves (I will not be using cheap drives; I will use data-center-class drives).

Hardware I'm looking at based on the above questions:

Modern AMD platform: low power, but non-ECC (so no memory-level error correction), just build more redundancy maybe:

ASUS Prime B450M-A/CSM Motherboard (6x SATA, 4x RAM slots)
AMD Athlon 3000G (APU) AM4
Silicon Power 16GB (2x8GB) DDR4-3200 (PC4-25600) RAM

I think I would still need to add an Intel NIC to this, but ultimately I think that's all that's needed beyond the hard drives and power supply; the guts would be something like the above. I've read that some higher-end AMD CPUs route ECC through, but it doesn't seem to be officially validated; it's simply there. I realize there's no ECC with these low-end APU options, so the above system has no memory-level integrity protection at all; the focus would be redundancy only. It has plenty of SATA ports to get going. I was looking at the above simply because it's low power (35 W CPU), has everything needed, and is lower priced for modern equipment.

Alternatively, I'm also looking at old Intel Xeon platforms, i.e. used servers. The goal would be to get integrity into the system, in the form of ECC, affordably.

Intel Xeon platform: higher power, but adds registered ECC:

Supermicro X8SIL-F Motherboard (6x SATA, 4x RAM slots, dual LAN NICs)
Intel Xeon X3440 CPU
16GB DDR3 ECC Registered RAM (unknown make on this link)


The above hardware comes as a single package. My main concerns are whether the RAM is any good and whether it is truly registered ECC and appropriate. I'm also unsure about the dual LAN: do I need to add an Intel NIC, or is this board's onboard LAN appropriate? And of course, overall, I'm simply concerned about building a redundancy-focused system around really old hardware that has been used heavily. I realize this is often done, but I've not done it, so naturally there's some concern.

Network:

My current network is a 5 GHz wireless network between my machines. I'm thinking I will plug this particular machine into my router via LAN and serve it out wirelessly to my local machines for most of its job. For the data I generate in bulk, I will likely look for a wired way to transfer it to the NAS, so I may switch my current workstation to wired LAN rather than try to move 100 GB at a time over 5 GHz wireless (that sounds like a bad idea, and slow). Any suggestions here are appreciated. I would love a 10 Gb network, but I'm not there yet, so I need to make the most of the 1 Gb network.

Thanks for reading so far!

I look forward to any criticism, input and suggestions or direction on this!

Very best,
Marty
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Wow, that is quite a story; responding to it could easily take the length of a book :smile: . But I think you have some wants and needs that are not fully in line with each other. I'll leave it to others to go into the technical aspects of your plans; I want to tell you why I started with FreeNAS some six years ago.

Until that time I was running a Windows home server, but I was getting tired of the almost endless updates, problems with the file system, etc. At a certain point I was at a technical congress (I am a software engineer) and an English professor was acting as a ZFS evangelist. And you know what: she convinced me that my data deserved a decent and reliable file system. So, taking that message home, I started to look for a server OS that could provide me with ZFS support. In those days I spent more time reading about NAS software than I care to admit. Long story short, I encountered the FreeNAS forum.

You stated that you care about your data. Well, so do I. So I took the hardware recommendations on the forum to heart. To me that meant ECC memory (and more than the bare minimum), a good motherboard (without unneeded bells and whistles) that supported ECC, a CPU that supported ECC, some nice NAS-worthy drives, etc. Nothing over the top, but also not really cheap. Could I have gone with old PC hardware? Yes, I could, but to me my data was worth more than that.
 
Last edited:

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
First, your $250 budget is unrealistic. Even a hand-me-down server with an HBA is going to run around $400-500.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Wow, that is quite a story; responding to it could easily take the length of a book :smile: . But I think you have some wants and needs that are not fully in line with each other. I'll leave it to others to go into the technical aspects of your plans; I want to tell you why I started with FreeNAS some six years ago.

Until that time I was running a Windows home server, but I was getting tired of the almost endless updates, problems with the file system, etc. At a certain point I was at a technical congress (I am a software engineer) and an English professor was acting as a ZFS evangelist. And you know what: she convinced me that my data deserved a decent and reliable file system. So, taking that message home, I started to look for a server OS that could provide me with ZFS support. In those days I spent more time reading about NAS software than I care to admit. Long story short, I encountered the FreeNAS forum.

You stated that you care about your data. Well, so do I. So I took the hardware recommendations on the forum to heart. To me that meant ECC memory (and more than the bare minimum), a good motherboard (without unneeded bells and whistles) that supported ECC, a CPU that supported ECC, some nice NAS-worthy drives, etc. Nothing over the top, but also not really cheap. Could I have gone with old PC hardware? Yes, I could, but to me my data was worth more than that.

Thanks for taking a moment. I appreciate it.

I agree; that's why I'm asking. I'm not sure how old is too old versus getting new server-level equipment, or how the needs of someone working in this environment differ from someone at home just looking for redundancy. So the question comes back to getting dedicated server hardware, or whether simply having more redundancy in a pool (allowing 2+ drives to fail) is the only really important thing and the rest is a lot of what-if.

First, your $250 budget is unrealistic. Even a hand-me-down server with an HBA is going to run around $400-500.

Thanks for the reply. I realize that this is a typical response, as I've read through other threads. I don't see why an HBA is needed up front when many boards already have 6x SATA ports, as on the equipment I linked above. The hand-me-down server linked above (the second one, the Supermicro) is well within this budget. Do you have any advice on that Supermicro setup, with a Xeon and ECC RAM and 6 SATA ports, as a starting point? Or do you think it's not safe, or too old?

Thanks for taking the time.

Very best,
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Thanks for the reply. I realize that this is a typical response, as I've read through other threads. I don't see why an HBA is needed up front when many boards already have 6x SATA ports

If you're willing to live within the limits of the motherboard's 6x SATA ports, then by all means, go ahead. My home system also has 6x SATA ports, of which I'm using 5x.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
(allowing 2+ drives to fail)
When I started with FreeNAS, I started with a pool of four drives in mirrored pairs. Yes, I had two drives of redundancy, but if the 2 drives of the same pair failed, it was goodbye to the pool. So when I had the budget to buy 2 more drives, I decided to go with a RAIDZ2 configuration of 6 drives. That way, any 2 drives could fail before I was in real trouble. Besides that, I like the economy of a 6-to-8-drive RAIDZ2 configuration; to me it offers a nice balance between redundancy and available storage space. But don't forget: redundancy is about keeping your data available. The security of your data lies in a decent backup plan.
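For what it's worth, the difference between the two layouts is just how you hand the disks to zpool create; a rough sketch from the command line (pool and device names are only examples):

Code:
# Four drives as two mirrored pairs (striped mirrors): one drive per pair may
# fail, but losing both drives of the same pair loses the pool.
zpool create tank mirror da0 da1 mirror da2 da3

# Six drives as RAIDZ2: any two drives may fail, no matter which two.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Either way, check the layout and health with:
zpool status tank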
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
If you're willing to live within the limits of the motherboard's 6x SATA ports, then by all means, go ahead. My home system also has 6x SATA ports, of which I'm using 5x.

Thanks; I went over this in the original post, so do you think there's something I'm missing? I plan on having either 2 mirrored pools or a more advanced redundancy setup using 4-5 drives for now. I don't plan on expanding rapidly. I do see a hardware HBA in the future, of course, to expand the number of possible drives. However, instead of amassing lots of drives over time, I would rather go from one large redundant pool to a second one with larger-capacity drives as the means of expansion, rather than keep adding small-capacity disks and growing the array into a large electricity-consuming thing. What do you think of that approach? And do you have any comments on the hardware itself linked above?

When I started with FreeNAS, I started with a pool of four drives in mirrored pairs. Yes, I had two drives of redundancy, but if the 2 drives of the same pair failed, it was goodbye to the pool. So when I had the budget to buy 2 more drives, I decided to go with a RAIDZ2 configuration of 6 drives. That way, any 2 drives could fail before I was in real trouble. Besides that, I like the economy of a 6-to-8-drive RAIDZ2 configuration; to me it offers a nice balance between redundancy and available storage space. But don't forget: redundancy is about keeping your data available. The security of your data lies in a decent backup plan.

Thanks, that's the direction I'm thinking. Initially I was considering mirrors, but the more I look into ZFS options, the more I like the idea of RAIDZ2 to increase redundancy while maintaining some storage space. I don't need a ton of capacity; I'm more interested in redundancy for now, and would rather use fewer high-capacity drives than a ton of low-capacity ones. 8TB drives are affordable now; in a few years, when 12TB or more is affordable, I will likely migrate to a new set of drives. But I don't think I ultimately want 10+ drives; that's a lot of management, electricity, and troubleshooting to fool with. 5-6 drives sounds manageable over time. Do you have any comments on the hardware listed above? Specifically the Supermicro kit I linked?

Very best,
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
I don't see why an HBA is needed up front when many boards already have 6x SATA ports
An HBA can come in handy. It offers the sometimes badly needed extra SATA ports (it also offers some extra heat to deal with, but OK).

In the early days of FreeNAS, a USB stick was the promoted boot device. These days more and more people go with a small SSD, so that is the first SATA port that is not available for your pool. Furthermore, it can be a real pain in the a.... if you decide at a certain point to go with a bigger pool. It's best to start out with the right number of disks.

There can also be an economic reason to start with some more disks.
Let's say, for example, 4 10TB WD Reds cost 1236 euros, while 6 6TB drives set you back 1092 euros (prices on the Dutch market).
With the 10TB drives you have 40 TB raw capacity, and with the 6TB drives you have 36 TB raw capacity.
So while the 10TB drives cost more, they also offer 4 TB more raw capacity.

But now we visit a RAIDZ calculator (ZFS/RAIDZ Capacity calculator).
With 2 drives of redundancy in a RAIDZ2 pool of 4 10TB drives, you get 14.97 TB usable storage capacity (37.44% of the raw capacity).
With 2 drives of redundancy in a RAIDZ2 pool of 6 6TB drives, you get 18.46 TB usable storage capacity (51.29% of the raw capacity).
See why I like the economics of a 6-drive pool? More capacity for less money and the same redundancy.

You can see that you should not decide too soon on the number of drives in your pool; doing some calculations can be worth your time. If you wonder why you have so little practical usable space: you have to deal with so-called "slop allocation", and you need to keep about 20% free space, or your pool becomes very slow or even unresponsive as it fills beyond that limit.
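If you want a rough feel for those numbers without the calculator, the back-of-the-envelope version looks like this (the calculator is more precise, since it also models padding and slop; treat this as an estimate only):

Code:
# Rough RAIDZ2 usable-space estimate:
# (drives - 2 parity) * size_in_TB * 0.909 (TB to TiB) * 0.8 (keep 20% free)
for layout in "4 10" "6 6"; do
  set -- $layout
  awk -v n="$1" -v tb="$2" 'BEGIN {
    printf "%d x %dTB RAIDZ2: %d TB raw, ~%.1f TiB usable\n",
           n, tb, n * tb, (n - 2) * tb * 0.909 * 0.8
  }'
done

That prints roughly 14.5 and 17.5 TiB for the two layouts above, in the same ballpark as the calculator.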

An HBA does not need to be expensive. A year ago I was able to buy some brand-new Dell PERC H310 HBAs for as little as 35 euros a piece. They contain an LSI chip and can be flashed to IT mode (needed for FreeNAS/ZFS) without a problem.
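If you pick one of those up, you can sanity-check the flash from the FreeNAS shell; something like this should do it (assuming the LSI sas2flash utility is present, which I believe it is on FreeNAS):

Code:
# List LSI controllers with their firmware; an IT-mode flash shows up here.
sas2flash -listall

# FreeBSD's camcontrol lists every disk the HBA (and motherboard) can see.
camcontrol devlist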
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
But now we visit a RAIDZ calculator (ZFS/RAIDZ Capacity calculator).
With 2 drives of redundancy in a RAIDZ2 pool of 4 10TB drives, you get 14.97 TB usable storage capacity (37.44% of the raw capacity).
With 2 drives of redundancy in a RAIDZ2 pool of 6 6TB drives, you get 18.46 TB usable storage capacity (51.29% of the raw capacity).
See why I like the economics of a 6-drive pool? More capacity for less money and the same redundancy.

You can see that you should not decide too soon on the number of drives in your pool; doing some calculations can be worth your time. If you wonder why you have so little practical usable space: you have to deal with so-called "slop allocation", and you need to keep about 20% free space, or your pool becomes very slow or even unresponsive as it fills beyond that limit.

An HBA does not need to be expensive. A year ago I was able to buy some brand-new Dell PERC H310 HBAs for as little as 35 euros a piece. They contain an LSI chip and can be flashed to IT mode (needed for FreeNAS/ZFS) without a problem.

Thanks, that was very helpful with the idea of redundancy. Coming from doing RAID in the '90s, I was prepared at first to do straight mirrors: two sets across 4 drives, one to be accessed frequently (to stream to the other machines) and one accessed infrequently, as archival storage for past data I'm not actively working on. So I was mentally prepared for 4 drives at 50% of total capacity, with each data set able to survive only a single drive failure. Granted, the likelihood of multi-drive failure is very low, and it would more likely be the power supply or controller components giving up the ghost in a spectacular fault than the drives themselves. Thinking forward almost 30 years, I figured something better must be around, and that's what led me to ZFS and FreeNAS: a bit more simplicity for someone not working in this industry, without going into RAID5 with a hardware controller. RAIDZ2 looks like a good way to get a large redundant pool without needing a ton of drives.

While I definitely appreciate the idea of getting more capacity with the same redundancy from more lower-capacity disks, that also means a bigger electrical footprint. I'll have to do some calculations and see what wins out in the end. Thanks for the link to the capacity calculator; it will be handy for getting an idea of things.

I agree that the limited SATA ports of a motherboard are a fixed limitation without an HBA; that is something I'll have to get into as well. The question then comes down to when that would be. Right now I don't even need 16TB of capacity; I'm happy with 8TB at the moment.

An advantage of the newer motherboards is PCIe SSDs that don't soak up a SATA port, leaving up to 6 SATA ports free on the motherboard. The older equipment I linked lacks that and will lose a SATA port to the boot SSD; I don't want to run from a USB flash drive long term, so I would have 5 SATA ports left for the pool. That could be 4 drives and a hot spare, perhaps. It's that, or put in an HBA early, like the LSI 9211-8i (8 SATA ports from 2 SAS connectors), and have 5+8 SATA ports, or just keep it all on the HBA. It's inexpensive enough to go ahead and add one, assuming it works without a bunch of workarounds. I need to learn more about what to get for this purpose and whether it will even work with the hardware linked above.
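On the hot-spare idea, from what I've read it looks like a one-liner to attach one to an existing pool; a sketch with made-up pool and device names (I still need to confirm how automatic the failover is):

Code:
# Add da4 as a hot spare; FreeNAS's zfsd should swap it in if a member disk faults.
zpool add tank spare da4

# Confirm it appears under the "spares" section.
zpool status tank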

Very best,
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
That'll be a solid build, and the CPU supports EPT, so bhyve VMs are also on the table.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
You can't go wrong with those components; they are solid. And the CPU, while not a real powerhouse (3145 PassMark), should be powerful enough to serve at least one HD media stream. I seem to recall that streaming was one of your wishes. According to the Plex site, a 2000 PassMark score is needed for one 1080p stream. If you don't do too many extras, like a bunch of VMs, your CPU will hardly be taxed. By the way: I am a big fan of Seasonic PSUs.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@MalVeauX Personally I'd look for a Supermicro X9-series motherboard, or newer, if finances allow. If you want another point of reference for server builds, https://www.serverbuilds.net/ and its associated forum posts & vids list target second-hand prices for the US market. https://forums.serverbuilds.net/t/g...er-efficient-and-flexible-starting-at-125/667 This https://www.serverbuilds.net/the-original-nas-killer-v10 was based on a Supermicro X8SIL-F. But perhaps you've already been looking there, as you listed a Silicon Power SSD. A small second-hand Intel S3500 SSD would make a good boot drive.

@Evertb1 has put a strong case for RAIDZ2. But if you want another point of view, with an advocate for mirrors, see here: https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
@KrisBee That was exactly the blog post that made me decide on mirrors when I started with FreeNAS. I have seen a lot of articles and forum discussions on this subject since then, and while there are no absolutes in this life, I switched to RAIDZ. I don't mind spending money on my FreeNAS server, but my budget is always limited, so I always balance security/availability against the cost of a certain amount of usable storage. Nevertheless, it is a very helpful and well-written post, worth reading before making up your mind.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
That'll be a solid build, and the CPU supports EPT, so bhyve VMs are also on the table.

Thanks, I have not considered VMs, but maybe I should eventually.

You can't go wrong with those components; they are solid. And the CPU, while not a real powerhouse (3145 PassMark), should be powerful enough to serve at least one HD media stream. I seem to recall that streaming was one of your wishes. According to the Plex site, a 2000 PassMark score is needed for one 1080p stream. If you don't do too many extras, like a bunch of VMs, your CPU will hardly be taxed. By the way: I am a big fan of Seasonic PSUs.

Thanks, but I will not be running Plex; my streaming is just local content (720p mostly), no transcoding at all, just H264-encoded AVIs (a lot of cartoons that my kids rewatch everywhere in the house, sigh) and our movie marathons (I archive our movie discs to digital format and store the discs). That, and FLAC/MP3 streaming, which is minimal and shouldn't stress any system. There will be maybe one primary streaming client at any given time, as we watch together. The rest of the unit will function as an archive for my imaging data, which will take up the majority of the space. The streaming will be done over SMB (Samba) to a Windows machine running Kodi (XBMC), with the client doing the H264 decoding. So I'm not certain, but I think minimal server CPU should be needed for this sort of task.

I think I need to look more into how FreeNAS handles file sharing and sustained small transfers (i.e., streaming content to a client with no encoding, decoding, or transcoding on the server).

I wonder if there is a way to benchmark this and monitor the CPU use on my end with FreeNAS? I imagine there's a resource monitor in the FreeNAS GUI?
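From what I've gathered so far, there is a Reporting section in the GUI with CPU and network graphs, and from the shell something like this would give a rough picture (assuming iperf3 is available on both ends, which I believe it is on FreeNAS):

Code:
# On the FreeNAS box: run an iperf3 server to measure raw network throughput.
iperf3 -s

# On the client (iperf3 has Windows builds too), aim at the NAS's IP (example IP):
iperf3 -c 192.168.1.50 -t 30

# Meanwhile, watch per-CPU load on the server from a second shell session.
top -P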

@MalVeauX Personally I'd look for a Supermicro X9-series motherboard, or newer, if finances allow. If you want another point of reference for server builds, https://www.serverbuilds.net/ and its associated forum posts & vids list target second-hand prices for the US market. https://forums.serverbuilds.net/t/g...er-efficient-and-flexible-starting-at-125/667 This https://www.serverbuilds.net/the-original-nas-killer-v10 was based on a Supermicro X8SIL-F. But perhaps you've already been looking there, as you listed a Silicon Power SSD. A small second-hand Intel S3500 SSD would make a good boot drive.

@Evertb1 has put a strong case for RAIDZ2. But if you want another point of view, with an advocate for mirrors, see here: https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

Thank you very much for the links; I will certainly look into the X9 series and see what I can find there. I had originally started reading the NAS Killer blogs and also that JRS blog about ECC vs non-ECC, but I think I should go with ECC ultimately, as this older server hardware is not that expensive compared to new server hardware (costing thousands) for such simple file sharing with integrity checks and redundancy. I have no plans to run lots of VMs, host a VPN, or serve data to hundreds of clients over a network. This is really just for my home, with no outside internet access: a local intranet via wired LAN and 5 GHz Wi-Fi from my router, serving 3 client machines, only 1 of which will likely be streaming local content. It will be mostly archival for the other machines. I have an HTPC in my living room running Kodi that will simply stream local content in the form of AVI and FLAC, with decoding on the client side. I have a Steam box that we all use for playing games, which will not need anything from the server other than maybe some archival use. And my workstation will be the primary user of the server, storing my imaging data redundantly.

Good points both ways on mirrors vs RAIDZ2+. The comments about rebuilds are rather important to me. I do not want to suffer through a resilver for hours and hours overnight or longer, only to have a power blink or something else happen. It's hurricane season here in Florida, and having that happen during hurricane season would be horrible: power could be down for days, and I'd have to leave the server offline and hope it doesn't nuke itself. I think for now I will stay focused on simple mirroring with good-quality drives.

This prompts a question: with a mirrored disk, if I were to take it out of the FreeNAS box and hook it up to another system, would that system be able to read it? Say a Windows machine, or a Linux machine, or something else? I'm curious how ZFS works in terms of being readable by other operating systems, in case I wanted to take a disk and pull data from it without using the FreeNAS box. The scenario being that the FreeNAS box dies (the motherboard, the SSD, whatever) and I have to take the drives out: can those mirrored drives be read? Or do I have to have another FreeNAS system running to read the ZFS structure, or some other OS that can read ZFS, to access the data after a total hardware failure (where the disks themselves are fine, just not served)?

Thank you all again for your time.

Very best,
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Say a Windows machine, or a Linux machine, or something else?

Windows, no. Linux, yes; just be sure that zfsutils-linux is installed. There's a caveat here about feature flags, but that'll largely be resolved assuming TrueNAS Core and Ubuntu 20.04.

There is a ZFS port for Windows, but I'm not sure how stable it is. You're better off booting into Linux.
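In practice the recovery would look something like this on an Ubuntu 20.04 machine with the pool disks attached ("tank" stands in for whatever you named the pool):

Code:
# Install the ZFS userland tools and kernel module.
sudo apt install zfsutils-linux

# Scan the attached disks for importable pools, then import read-only to be safe.
sudo zpool import
sudo zpool import -o readonly=on tank
ls /tank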
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
over SMB (Samba) to a Windows machine running Kodi (XBMC)
I am not sure if giving a Windows machine native access to a file share qualifies as streaming. My Windows HTPC with J-River also has access to all my media files through a Samba share, but I never looked at it as streaming. J-River does provide streaming, though. Anyway, as long as another machine is doing the heavy lifting, you are good.
with a mirrored disk, if I were to take it out of the FreeNAS box and hook it up to another system, would that system be able to read it?
I have never done it, but I suspect that, at least in theory, it should be possible on any OS supporting ZFS. That being said, ZFS has so-called feature flags. I am not an expert on this subject, to say the least, but I suspect that if the ZFS version you are running has feature flags that are not supported on the receiving system, you are out of luck.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
For X9 there's a suggestion over yonder that runs you about $380: https://www.ixsystems.com/community...anges-to-upgrade-as-high-as-512gb-of-ram.110/

Note that you can absolutely drop those costs further by going with 32GiB of RAM and choosing a 4-core CPU, which would be entirely adequate for your use case.

For example https://www.ebay.com/itm/Intel-Xeon...626484?hash=item23d7d09534:g:ss4AAOSwd59eyX65 and https://www.ebay.com/itm/SAMSUNG-32...304181&hash=item3da89715cb:g:bL0AAOSwCQJfJCY5

Further edit: Those X9SRL-F boards are getting expensive. Some more digging to see what's affordable right now :). Could be an X9SRi-F, e.g. https://www.ebay.com/itm/Supermicro...tel-C602-ECC-DDR3/264821242181?epid=127348132 for $150. Add CPU, memory, and a cooler and you are right around that $250 mark.

Further further edit: Maybe the 32GiB max idea?

$75 for board, e.g. https://www.ebay.com/p/127396627?iid=352558745024
$55 for Xeon quad, e.g. https://www.ebay.com/p/10011083316?iid=382486487188
$80 for 16 GiB of UDIMM RAM, e.g. https://www.ebay.com/itm/Crucial-8G...VER-Memory-RAM-8G/383582216503?epid=691116539
$40 for some form of CPU cooler; look around. Maybe this: https://www.ebay.com/itm/401123086618. I assume you have thermal paste; otherwise that's an additional expense.

$250 total; add a boot drive (could be a $25 SSD like an Intel 320 40GB) and you are set. An extra $80 gets you more memory, which gives you a bit of headroom. Nice, but not necessary.
 
Last edited:

Evertb1

Guru
Joined
May 31, 2016
Messages
700
There is a ZFS port for Windows but I'm not sure how stable it is.
You don't want to go there, if the shouting and cursing of a colleague of mine is any indication :tongue:
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
Or do I have to have another FreeNAS system running to read the ZFS structure?
You know, in case of a real disaster, temporarily installing FreeNAS on another system is not very time-consuming, especially if you have taken care to save your configuration file in a safe place. It should not take more than 25 minutes.
 
Last edited: