Help deciding on an OS for a NAS solution

Which NAS OS should I use?

  • FreeNAS

  • OpenMediaVault

  • Unraid

  • Other


Results are only viewable after voting.
Joined
Oct 18, 2018
Messages
969
I'm going to venture to suggest that perhaps we've gotten a bit off track with some exciting banter here.

General reply: I can't afford to go and get another CPU, mobo, or RAM right now; these will be changed out over the next few years. Yes, I am aware of the risks of using non-ECC memory, but as long as you're vigilant and make backups it's not as huge a deal as it can be, I'm aware. Plus, ECC RAM isn't completely error-proof either. I am not storing mission-critical data, just media, a lot of which I still have on disc.
It sounds like you understand the limitations of your hardware; I think that is the most important thing.

Everyone comes to FreeNAS with different goals, expectations, tolerance for risk, budget, availability of hardware, expertise, and interest. I think it is important to help people get a sense of what risks they are going to face with certain hardware and what kind of performance they can expect. It sounds to me like you, @CrackJack, have a pretty good understanding of that.
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
Your idea is interesting.

Keep in mind that even though you are not using the GPU to play games, it will still consume energy. Every piece of hardware you put into your machine will consume energy, no matter whether it is switched on or off. It depends on the setup, of course, but you should plan for more than 60 W!
Especially with an overclocked CPU. FreeNAS, or any server, runs 24/7; can you handle the cost?

Your build will be much more expensive than going with a really low-cost FreeNAS box and a budget gaming PC. The low-cost FreeNAS build is the second one in my signature and costs about 150-300 USD.

If you are aware of all this, consider an ESXi setup.
Set up a hypervisor, virtualize whatever NAS OS is most suitable for you, and virtualize Windows 10 with a passed-through GPU.
You will want server-grade:
NIC
CPU
Motherboard
Memory
(Discs)
(PSU)
HBA

In terms of OS, FreeNAS runs best with ECC memory. FreeNAS is an enterprise OS, which means it should be used with server-grade hardware; in return, you benefit from enterprise services and speeds.


Since you already have hardware, I would recommend that you not use FreeNAS; you will have problems or not get the expected speed out of it. Go for a different NAS OS with lower hardware requirements.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Crack,

With non-server grade hardware, I would not go with FreeNAS. Unraid is based on Linux and much closer to a typical Linux installation. As such, it will do much better with consumer grade hardware like yours.

Also, did you plan for a UPS? For a backup? Off-site redundancy? If you are about to put a lot of stuff in that computer, you'd better be sure not to lose it to some obvious and highly probable incident...
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
I'm going to venture to suggest that perhaps we've gotten a bit off track with some exciting banter here.


It sounds like you understand the limitations of your hardware; I think that is the most important thing.

Everyone comes to FreeNAS with different goals, expectations, tolerance for risk, budget, availability of hardware, expertise, and interest. I think it is important to help people get a sense of what risks they are going to face with certain hardware and what kind of performance they can expect. It sounds to me like you, @CrackJack, have a pretty good understanding of that.

Yeah, I've pretty much been researching this for months; like I said, I've even messed around with the OSes inside a VM to get a feel for them. But I made this post to get perspective: I want to reduce the risk of making mistakes if I can, so asking other people who have more experience than me is great. The worst thing is when you know just enough for it to bite you in the arse later for not actually knowing at all.
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
Your idea is interesting.

Keep in mind that even though you are not using the GPU to play games, it will still consume energy. Every piece of hardware you put into your machine will consume energy, no matter whether it is switched on or off. It depends on the setup, of course, but you should plan for more than 60 W!
Especially with an overclocked CPU. FreeNAS, or any server, runs 24/7; can you handle the cost?

Your build will be much more expensive than going with a really low-cost FreeNAS box and a budget gaming PC. The low-cost FreeNAS build is the second one in my signature and costs about 150-300 USD.

If you are aware of all this, consider an ESXi setup.
Set up a hypervisor, virtualize whatever NAS OS is most suitable for you, and virtualize Windows 10 with a passed-through GPU.
You will want server-grade:
NIC
CPU
Motherboard
Memory
(Discs)
(PSU)
HBA

In terms of OS, FreeNAS runs best with ECC memory. FreeNAS is an enterprise OS, which means it should be used with server-grade hardware; in return, you benefit from enterprise services and speeds.


Since you already have hardware, I would recommend that you not use FreeNAS; you will have problems or not get the expected speed out of it. Go for a different NAS OS with lower hardware requirements.

The machine only pulls 170-180 watts under a stress test and idles at 35 watts; add 8 drives to that and yeah, it's heading towards 100, I think.
This is something I will have to look into once the system is built. The OC'd CPU is purely for Minecraft and Arma, and I am considering only having 2 cores overclocked and the other 2 downclocked slightly.

Yeah, I already have everything but the drives and the HBA.

I am leaning more towards OMV.
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
Hey Crack,

With non-server grade hardware, I would not go with FreeNAS. Unraid is based on Linux and much closer to a typical Linux installation. As such, it will do much better with consumer grade hardware like yours.

Also, did you plan for a UPS? For a backup? Off-site redundancy? If you are about to put a lot of stuff in that computer, you'd better be sure not to lose it to some obvious and highly probable incident...
Not really keen on Unraid; it seems like a walled-garden solution and I'm not into the idea of that. OMV would be the way I would really go. I live in the UK; all plug sockets in my house are surge-protected and all plugs have fuses. I am also using a surge protector, but will be getting a UPS as soon as I can afford to.

Currently no backup, but that will be sorted when I can afford to. Off-site depends on whether I can convince my brother to give me some of his server space; he has a 48-drive array running Windows Server.

My understanding is that ZFS is a tough SOB to crack, even with sudden power loss or sudden RAM ejection, for example.
 
Joined
Oct 18, 2018
Messages
969
Yeah, I've pretty much been researching this for months; like I said, I've even messed around with the OSes inside a VM to get a feel for them. But I made this post to get perspective: I want to reduce the risk of making mistakes if I can, so asking other people who have more experience than me is great. The worst thing is when you know just enough for it to bite you in the arse later for not actually knowing at all.
haha well put. Been there, done that! :)
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
The machine only pulls 170-180 watts under a stress test and idles at 35 watts; add 8 drives to that and yeah, it's heading towards 100, I think.
This is something I will have to look into once the system is built. The OC'd CPU is purely for Minecraft and Arma, and I am considering only having 2 cores overclocked and the other 2 downclocked slightly.

Yeah, I already have everything but the drives and the HBA.

I am leaning more towards OMV.

Your RX 480 seems to eat up to 110 W,
the CPU probably the same,
HBA 20 W,
RAM 10 W,
discs 40 W,
mobo 20 W.

That means peak would be around 310 W; to keep peak draw well under two-thirds of the PSU's rating, the PSU should be more than 700 W.
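If you want to sanity-check that arithmetic yourself, here is a quick Python sketch (the wattages are my rough estimates above, not measurements):

```python
# Rough power budget - figures are the estimates above, not measurements
components = {
    "RX 480": 110,  # peak W
    "CPU": 110,
    "HBA": 20,
    "RAM": 10,
    "discs": 40,
    "mobo": 20,
}

peak = sum(components.values())  # 310 W
# Rule of thumb: keep peak draw well under ~2/3 of the PSU rating,
# then round up further for headroom and ageing capacitors.
psu_min = peak / (2 / 3)  # ~465 W bare minimum

print(f"Estimated peak draw: {peak} W")
print(f"Minimum PSU by the 2/3 rule: {psu_min:.0f} W (I would go >700 W)")
```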

Even though you are not hitting 310 W all the time, cooling approx. 150 W is not easy if you want to have the "monster" in your living room somewhat "silent".

HDDs should not hit 40 degrees Celsius!

I think you should really reconsider your project. It is very ambitious in terms of setup, hardware price, and the cost of keeping it alive.
In the end you will have something that runs slower than bare metal, consumes more energy, and is ridiculously noisy. At some point you will start switching it off overnight to save power, or getting angry while watching a movie at full volume.

Just go with two devices.
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
Your RX 480 seems to eat up to 110 W,
the CPU probably the same,
HBA 20 W,
RAM 10 W,
discs 40 W,
mobo 20 W.

That means peak would be around 310 W; to keep peak draw well under two-thirds of the PSU's rating, the PSU should be more than 700 W.

Even though you are not hitting 310 W all the time, cooling approx. 150 W is not easy if you want to have the "monster" in your living room somewhat "silent".

HDDs should not hit 40 degrees Celsius!

I think you should really reconsider your project. It is very ambitious in terms of setup, hardware price, and the cost of keeping it alive.
In the end you will have something that runs slower than bare metal, consumes more energy, and is ridiculously noisy. At some point you will start switching it off overnight to save power, or getting angry while watching a movie at full volume.

Just go with two devices.
Yeah, the RX 480 can eat up to 180 W if allowed, but it shouldn't; it idles at 3-7 W. It's just going to be used to allow the system to boot. I may get a small dinky thing eventually, and it might be used every now and then for GPU-accelerated tasks.

I have read that HBAs in IT mode use half as much power and only pull power when necessary.

I would not expect the system to run at 300 W; more like 200 W under hard loads, maybe hitting 250 W under very heavy CPU load. When I was using it as a gaming system it only pulled 390 W from the wall, max. Like I said, the system idling will use just under 100 W.

I have a 650 W Gold PSU from Aerocool; not the most reputable brand, but hey, it works. It handled an RX Vega 56 at 300 W plus an OC'd i5 at 150 W, so I trust it to handle this.

It's going in a closet with my other PC. I wear headphones 24/7, so noise ain't gonna bother me if there is any, and with the current system the fans are quiet, so that's a plus.

I am okay with having HDDs between 40-50°C (getting towards and above 50 does annoy me), and to be quite frank you can't tell me any different. From my observations over the years it's vibrations and electrical faults that kill them, not heat. I know Seagate came out a while ago and said their motors are rated to run at 100°C; I don't believe this, but meh. Modern HDDs are tough things; the biggest killer is shock, then hardware malfunction, usually due to a bad manufacturing batch. Oh, and if you don't believe me, they ship hard drives in non-environment-controlled containers, meaning the temps can go from -90°C to +90°C. Sure, the boxes/crates have insulation, but yeah. I tell this to a lot of people: heat can and will kill your hard drive if you let it, but it's most likely to die another way first.

TL;DR - up to 50°C is good for hard drives.
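If you'd rather watch the temps than argue about them, here's a minimal sketch using smartmontools (assumes `smartctl` is installed, you have privileges to query the drives, and your device names will differ):

```python
# Minimal drive-temperature check via smartmontools.
# Assumes `smartctl` is installed and we can read the drives (root/sudo).
import subprocess

ALARM_C = 50  # my personal "up to 50C is fine" threshold from above

def drive_temp_c(dev: str):
    """Return Temperature_Celsius (SMART attribute 194) for a drive, or None."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # RAW_VALUE column
    return None

for dev in ("/dev/sda", "/dev/sdb"):  # adjust for your drives
    t = drive_temp_c(dev)
    if t is not None:
        print(f"{dev}: {t} C {'<- too hot!' if t > ALARM_C else '(ok)'}")
```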
 

JoshDW19

Community Hall of Fame
Joined
May 16, 2016
Messages
1,077
Hello! I've noticed there were quite a few posts in this thread venturing into off-topic/argumentative territory. If you do not have anything helpful to say to the OP, please do not respond! Thank you.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I can't necessarily fault some of the users here for stating that VT-d/true PCIe passthrough won't work, because the i7-2600K is listed on Intel ARK specifically as lacking support for it:

[Attached screenshot: Intel ARK listing for the i7-2600K, showing Intel Virtualization Technology for Directed I/O (VT-d) as not supported]


Source: https://ark.intel.com/content/www/u...-2600k-processor-8m-cache-up-to-3-80-ghz.html

Are you certain you weren't using RemoteFX or some other virtual GPU presentation method when you did 3D in a VM under Hyper-V? I'm not being argumentative, just curious because both the CPU and chipset are telling me "VT-d won't work."

8 x 2TB 7200 RPM 256MB cache Seagate drives = these will be run in a RAIDZ3/equivalent

Danger; "Seagate" plus "256MB cache" makes me think of shingled drives. Do you have an exact model number?
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
I can't necessarily fault some of the users here for stating that VT-d/true PCIe passthrough won't work, because the i7-2600K is listed on Intel ARK specifically as lacking support for it:

[Attached screenshot: Intel ARK listing for the i7-2600K, showing VT-d as not supported]

Source: https://ark.intel.com/content/www/u...-2600k-processor-8m-cache-up-to-3-80-ghz.html

Are you certain you weren't using RemoteFX or some other virtual GPU presentation method when you did 3D in a VM under Hyper-V? I'm not being argumentative, just curious because both the CPU and chipset are telling me "VT-d won't work."



Danger; "Seagate" plus "256MB cache" makes me think of shingled drives. Do you have an exact model number?
You might actually be correct; I will need to check again. If so, that would be truly annoying, but not a big deal.
Drive models are ST2000DM008
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You might actually be correct; I will need to check again. If so, that would be truly annoying, but not a big deal.

You could install ESXi and see if you even have the option to enable PCI passthrough - if "Enable Passthrough" is greyed out, there's your definitive answer.

Drive models are ST2000DM008

Documentation suggests a shingled drive (2 heads, 1 platter) so best to avoid it. Seagate, please label your drives properly so we can avoid this stuff.

Is there a reason behind the 8x2TB layout other than cost? If you're craving IOPS for your gaming VM, I'd just go straight to solid-state.

Upon further review of your initial post though:

1 PCIe x1 to x16 riser for the HBA or GPU

This is a bad idea; both of those devices will be badly bottlenecked by the PCIe x1 link. Your board doesn't look like it has PCIe x4 slots that you could Dremel out the end of either.

I know it's not what you might want to hear, but regardless of your software choice this probably isn't the best hardware to build on for a "server" platform. A discount Supermicro X9/X10 era board is probably your best shot, or look to repurpose something like an HP Z420 or similar workstation if you plan to sink a high-powered GPU in and make the all-in-one box. An off-lease server might also be an option, and then just keep your i7/P67/RX480 as a distinct unit dedicated to gaming.
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
I can't necessarily fault some of the users here for stating that VT-d/true PCIe passthrough won't work, because the i7-2600K is listed on Intel ARK specifically as lacking support for it:

[Attached screenshot: Intel ARK listing for the i7-2600K, showing VT-d as not supported]

Source: https://ark.intel.com/content/www/u...-2600k-processor-8m-cache-up-to-3-80-ghz.html

Are you certain you weren't using RemoteFX or some other virtual GPU presentation method when you did 3D in a VM under Hyper-V? I'm not being argumentative, just curious because both the CPU and chipset are telling me "VT-d won't work."



Danger; "Seagate" plus "256MB cache" makes me think of shingled drives. Do you have an exact model number?
Just checked: VT-d is enabled in the BIOS, and the only CPUs able to work with that BIOS are Sandy Bridge, I think; not 100% certain. Like I said before, Intel likes disabling stuff, and ASRock loves ignoring them and enabling it anyway, same as other board partners. I don't think you would find an Intel, HP, or Dell board doing this, since they usually have to be within spec, but board partners play with the spec to make a better deal out of the product.

I am 97% certain the drives are not SMR.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Just checked: VT-d is enabled in the BIOS, and the only CPUs able to work with that BIOS are Sandy Bridge

VT-d in the BIOS doesn't mean VT-d working in the OS though; non-K Sandy Bridge CPUs support VT-d so the BIOS would need to support them.
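If you want the definitive answer from the OS side rather than the BIOS menu, boot any Linux live USB and check whether the firmware actually hands VT-d to the kernel; a minimal sketch (standard sysfs paths, Intel platform assumed):

```python
# Check whether VT-d survived the trip from BIOS to OS (Linux, Intel platform).
# The firmware must publish an ACPI DMAR table AND the kernel must have
# actually created IOMMU groups.
import os

dmar_present = os.path.exists("/sys/firmware/acpi/tables/DMAR")
groups_dir = "/sys/kernel/iommu_groups"
group_count = len(os.listdir(groups_dir)) if os.path.isdir(groups_dir) else 0

print(f"ACPI DMAR table present: {dmar_present}")  # BIOS advertises VT-d
print(f"IOMMU groups created:    {group_count}")   # kernel is actually using it

if dmar_present and group_count == 0:
    print("BIOS advertises VT-d but the kernel hasn't enabled it; "
          "try booting with intel_iommu=on - or the CPU silently lacks VT-d.")
```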

I am 97% certain the drives are not SMR.

The ST2000DM006 appears to be non-SMR, but the ST2000DM008 specs are checking a lot of the boxes for being SMR.
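Since Seagate won't label it, about the best anyone can do is keep a lookup of community-reported findings. Purely illustrative, with just the two models from this thread (not an authoritative list):

```python
# Community-reported recording tech for the models discussed here.
# Illustrative only - always verify against the datasheet / head count.
KNOWN_DRIVES = {
    "ST2000DM006": "CMR",  # older 2TB Barracuda, conventional recording
    "ST2000DM008": "SMR",  # 2 heads / 1 platter strongly suggests shingled
}

def recording_tech(model: str) -> str:
    return KNOWN_DRIVES.get(model, "unknown - check the datasheet")

print(recording_tech("ST2000DM008"))  # -> SMR
```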
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
You could install ESXi and see if you even have the option to enable PCI passthrough - if "Enable Passthrough" is greyed out, there's your definitive answer.



Documentation suggests a shingled drive (2 heads, 1 platter) so best to avoid it. Seagate, please label your drives properly so we can avoid this stuff.

Is there a reason behind the 8x2TB layout other than cost? If you're craving IOPS for your gaming VM, I'd just go straight to solid-state.

Upon further review of your initial post though:



This is a bad idea; both of those devices will be badly bottlenecked by the PCIe x1 link. Your board doesn't look like it has PCIe x4 slots that you could Dremel out the end of either.

I know it's not what you might want to hear, but regardless of your software choice this probably isn't the best hardware to build on for a "server" platform. A discount Supermicro X9/X10 era board is probably your best shot, or look to repurpose something like an HP Z420 or similar workstation if you plan to sink a high-powered GPU in and make the all-in-one box. An off-lease server might also be an option, and then just keep your i7/P67/RX480 as a distinct unit dedicated to gaming.
Well, that 3% of doubt just stung me good. It's not a huge deal; these drives perform amazingly well for SMR, and I have 2 already.

8 x 2TB = £430 for 10TB of effective storage, and the pool can suffer 3 drive failures with a replacement cost of £53 per drive; 2 x 10TB = £600. I feel like this is self-explanatory.
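Worked through quickly (RAIDZ3 holds 3 drives' worth of parity, so 5 of the 8 carry data, before filesystem overhead; prices are the ones I quoted):

```python
# Cost comparison as I see it (prices are the ones quoted above)
drives, size_tb, total_gbp = 8, 2, 430
parity = 3                                   # RAIDZ3 survives 3 failures
effective_tb = (drives - parity) * size_tb   # 5 data drives -> 10 TB raw
print(f"8x2TB RAIDZ3: {effective_tb} TB usable, "
      f"£{total_gbp / effective_tb:.0f}/TB, "
      f"replacement ~£{total_gbp / drives:.0f}/drive")

# vs 2 x 10TB (mirrored, so 10 TB usable) at £600
print(f"2x10TB mirror: 10 TB usable, £{600 / 10:.0f}/TB, "
      f"replacement £{600 / 2:.0f}/drive")
```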

I am aware of the x1 interface; it's PCIe 2.0, so 500 MB/s max. The HBA card should be okay as long as I don't plug more than 4 drives into it, but it's dawning on me now that the GPU should go on the x1-to-x16 riser instead; like I said, the board won't POST without a GPU.
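And the riser maths, back-of-the-envelope (the ~150 MB/s per-drive sequential figure is my assumption for these 2TB 7200 RPM drives):

```python
# PCIe 2.0 x1 gives ~500 MB/s usable; how far does that stretch?
LINK_MB_S = 500
HDD_SEQ_MB_S = 150  # assumed peak sequential per 2TB 7200rpm drive

for n_drives in (4, 8):
    demand = n_drives * HDD_SEQ_MB_S
    print(f"{n_drives} drives: ~{demand} MB/s peak demand vs {LINK_MB_S} MB/s "
          f"link ({demand / LINK_MB_S:.1f}x oversubscribed at full tilt)")
```

In normal use I won't hit all drives sequentially at once, but rebuilds and scrubs would.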
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
Yeah, my priority here is file server, then Minecraft/Arma server, then everything else if possible.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Well, that 3% of doubt just stung me good. It's not a huge deal; these drives perform amazingly well for SMR, and I have 2 already.

8 x 2TB = £430 for 10TB of effective storage, and the pool can suffer 3 drive failures with a replacement cost of £53 per drive; 2 x 10TB = £600. I feel like this is self-explanatory.

I am aware of the x1 interface; it's PCIe 2.0, so 500 MB/s max. The HBA card should be okay as long as I don't plug more than 4 drives into it, but it's dawning on me now that the GPU should go on the x1-to-x16 riser instead; like I said, the board won't POST without a GPU.

The performance drawbacks of SMR won't hit you until you have to overwrite previous data ... if you haven't put more than 10TB of total writes to the system, it will feel nice and snappy. Once you have to overwrite previous LBAs and reshingle, that's where you'll feel the pain. SMR is hard to argue with from a $/TB (or £/TB in your case) standpoint, but I'm not willing to make that compromise.

Regarding the x1 to x16 - your board may not be set up to accept anything other than a GPU in the x16 slot, so moving your HBA there may cause it to not POST. Up to you to try it though. Risers in general will be frowned upon because they have the potential to affect signal integrity; yes, ZFS will clean it up, but it's better to not experience it at all.

Given your desired goals I would suggest separating the server(s) from the game box - honestly I don't think 8 threads is enough to run all of that simultaneously, since you'll want to devote 4 threads to your game machine, which only leaves 4 more for your OS, Minecraft, and ARMA to all squabble over. 8 threads for just the servers would be better, you could do 2 for OS, 2 for Minecraft, 4 for ARMA, and devote the entire i7 to gaming on a separate machine and not have to worry about rigging up the RX480 into an x1 slot.
 

CrackJack

Dabbler
Joined
Sep 16, 2019
Messages
20
The performance drawbacks of SMR won't hit you until you have to overwrite previous data ... if you haven't put more than 10TB of total writes to the system, it will feel nice and snappy. Once you have to overwrite previous LBAs and reshingle, that's where you'll feel the pain. SMR is hard to argue with from a $/TB (or £/TB in your case) standpoint, but I'm not willing to make that compromise.

Regarding the x1 to x16 - your board may not be set up to accept anything other than a GPU in the x16 slot, so moving your HBA there may cause it to not POST. Up to you to try it though. Risers in general will be frowned upon because they have the potential to affect signal integrity; yes, ZFS will clean it up, but it's better to not experience it at all.

Given your desired goals I would suggest separating the server(s) from the game box - honestly I don't think 8 threads is enough to run all of that simultaneously, since you'll want to devote 4 threads to your game machine, which only leaves 4 more for your OS, Minecraft, and ARMA to all squabble over. 8 threads for just the servers would be better, you could do 2 for OS, 2 for Minecraft, 4 for ARMA, and devote the entire i7 to gaming on a separate machine and not have to worry about rigging up the RX480 into an x1 slot.

I am aware of SMR's drawbacks; they were horrid 10 years ago, but they have gotten better, so this doesn't bother me, as I am technically archiving stuff. Also, since there will be quite a few drives and they're all 2TB, rebuilding the pool shouldn't actually take that long. Thank you for catching that and pointing it out. The alternative is Toshiba, but they have a 1-year warranty instead of 2, and they're the same price.

You may be right, which would suck. Yeah, I would prefer not to use one, but it's the hardware I have and can afford; I will be tossing it as soon as I have the money to.

Not feasible currently, sadly. The Arma and MC servers won't be on 24/7 or on together; it will be one or the other, so this shouldn't be an issue. Most people have their NAS running on 2-core/4-thread i3s, as far as I have seen.

I am also building a new Ryzen machine for myself for my gaming needs.
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
The ST2000DM006 appears to be non-SMR, but the ST2000DM008 specs are checking a lot of the boxes for being SMR.

I have a drive Western Digital says is not an SMR drive, but it is.
 