Re-purpose old i7 rig into replacement NAS

Status
Not open for further replies.

wooley-x64

Dabbler
Joined
Dec 6, 2018
Messages
22
Hello,
I am looking to build a real DIY NAS and replace my aging JBOD Shuttle KD20 OmniNAS.
https://www.amazon.com/Shuttle-KD20-2-Bay-Network-Storage/dp/B00BXAVDLQ

It is currently housing 2x4TB WD Reds with no redundancy, so 8TB of pure storage goodness. I can't say the OmniNAS is garbage. I think I paid $70 for it new on sale 4-5 years ago, and I have had a couple of different drives in this box. It will win no performance awards, only my undying love for the fact that it has never failed me. Sure, the stupid door flap (clicky hinge) died years ago, but the box has been running non-stop for years now and I often forget it's there. (It sits on my desk in plain sight with a tiny USB fan connected to it, for my own personal comfort on those summer days.)

Anyway, I just built a new gaming PC and figured: why not build a proper NAS out of my old hardware?

Here is the List:
What I Have:
Case: Cooler Master Storm Scout v1
Fans: 3x Scythe Gentle Typhoon AP-15 (two front, one rear), plus whatever fans come with the Noctua D-15
Cooler: Noctua D-15
Motherboard: Gigabyte UD4H (Z87 Haswell board)
CPU: Intel Core i7 4770k
Memory: Kingston HyperX Fury 1866 DDR3 CL10 2x8GB
PSU: Corsair, some lowly 600-watt variant I have been using for testing.
HDD: I am presently testing a FreeNAS 11.1 build on this hardware using some old HDDs I have acquired over the years. I have 5x500GB in a RAIDZ2:
SAMSUNG Spinpoint F DT HD502IJ 500GB 7200 RPM 16MB
Performance on these using the Solnet-Array-Test is around 85MB/s
Real-world performance with file transfers has hit 112MB/s. (Not bad for disks that are going on a decade in age, three of the five of which are crying about SMART errors.)

What I'll Buy:
HDD: 8x4TB HGST Deskstar
Controller: HP H220 6Gbps SAS PCI-E 3.0 HBA (LSI 9207-8i) in P20 IT mode, for ZFS/FreeNAS/unRAID (quick firmware sanity check sketched after this list)
Cables for drives: SFF-8087 x2
Memory: another Kingston HyperX Fury 1866 DDR3 CL10 2x8GB kit, to bring me up to 32GB, which is the max for the system.
HDD hot swap bay: Rosewill 3 x 5.25-Inch to 4 x 3.5-Inch Hot-swap
I am purchasing this bay since my case has those horrid 5.25" bays with the lock-in sliding plastic retainers. I figured it would be easier to secure one big 3-bay enclosure in place than to futz around with a 3.5-to-5.25 metal bracket for each of four drives. It also has the added benefit of squeezing another drive into that small space, and it allows for a quick swap in the event of a drive failure. (I know, now that I have said that... whatever drives I put in this hot-swap bay will last until the end of time, and the drives internal to the case will die annually -_-)
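
Since the whole point of the H220 is the P20 IT-mode firmware, the first thing I plan to do once the card is in is double-check what it is actually running. A rough sketch from the shell using sas2flash, the LSI flashing utility (flags from memory, and the exact output wording may differ):

# list every LSI controller the utility can see, with firmware version and product ID
sas2flash -listall
# detailed view of controller 0; I want to see P20 firmware and an IT (not IR) product ID
sas2flash -c 0 -list

If the seller's flash turns out to be IR firmware or an older phase, I'd re-flash it myself before trusting it with the pool.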

Current use: I store all of my data on the OmniNAS. The system is a little over half full and I want to plan for a good deal of expansion. I have a few seedbox Raspberry Pi systems with 1-3TB USB disks hanging off of them, which rsync down to the NAS daily. Media stored on the OmniNAS is read from multiple systems via Plex and Kodi (never more than 2 clients at one time).
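
For reference, the nightly pull is nothing fancy; each Pi just has a cron entry along these lines (the hostname, paths, and 3am schedule here are placeholders, not my exact setup):

# push the seedbox's finished downloads to the NAS over SSH every night at 3am
0 3 * * * rsync -a --partial /mnt/usbdisk/finished/ wooley@nas.local:/mnt/tank/seedbox/pi1/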

Proposed use: I would like to fold the Pis' job into this box and have the seedbox run either in a VM or on FreeNAS itself. I am also looking to move away from my dedicated Kodi laptop and migrate to Plex only. In my current testing environment I have Plex configured on FreeNAS, though I cannot remember exactly how I accomplished it. The box is hooked into the OmniNAS and can stream all of my media via Plex on the PS4 (I think I mounted the CIFS share). It is horrendously slow in this configuration; I assume Plex would respond much better if the data were actually stored on the FreeNAS device.
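
(For what it's worth, I think the way I glued the test box to the OmniNAS was just a mount_smbfs from the FreeNAS shell, something roughly like the line below; the address and share name are from memory, so treat it as a sketch.)

# mount the OmniNAS CIFS share somewhere Plex can see it
mount_smbfs -I 192.168.1.50 //guest@omninas/media /mnt/omni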

I want to configure system backups for all of my workstations. I truly despise Windows shares, or rather the wonkiness inherent in Windows: for example, Windows does not like you connecting to shares at the same address using different accounts.

Ideally I'd like to configure certificates so that credentials can be bypassed and access can be more fluid.

I do not know if the 4770K has the power, but I would also like to stand up a few test VMs off the box for work tasks: things like standing up an RSA appliance, or a sandbox covering Windows 7 through Windows 10. At a minimum, if the i7 is not up to the task, I have a 2700X in my current system. If possible I'd like to have the VMs stored on the NAS and run from my main rig. If that seems like too much, I can just make sure to have backup tasks that offload the data from my main rig (4TB drive) so that it is at least backed up regularly.

Honestly, my backup plans have been pretty spotty in the past. I want to consolidate and centralize my data so I have a single place to store it and then a single place to back it up yet again (either offsite or just another array of secondary disks for the critical data).

32TB is pretty much the max I think I will need for the next 5 years. (Famous last words.)

32GB is the max the system can support, so I figure: max out both and be set for a while. There is no more room to upgrade after that; if I need a new box down the line, it will probably be when my next gaming rig gets built and the current hardware is re-purposed again.

I am planning to run 8x4TB in RAIDZ2. (I don't know if this is a good idea or a bad idea; it's an idea is all I know.)
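
For my own sanity, the layout I have in mind boils down to a single 8-wide RAIDZ2 vdev. At the shell it would be something like the line below, though I would really build it through the FreeNAS GUI (the pool name and device names are placeholders):

# one 8-wide RAIDZ2 vdev: any two of the eight drives can fail without losing the pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

Back-of-the-envelope capacity: 6 data disks x 4TB = 24TB raw, which is roughly 21TiB after the decimal-to-binary conversion and RAIDZ overhead, and closer to 17TiB if I respect the usual 80% fill guideline. So the "32TB" above is really the raw number.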

Other notes: I have two 250GB Samsung 840 Pros sitting around that I ended up not re-using in my new main rig build. Since I am using an HBA card, I figure I could also put those on the actual mobo ports, set up a striped array for performance, and use that for the VMs. I assume some task could be configured to write that data into the main storage pool at regular intervals, for backup purposes in the event one of them dies.
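
The "some task" I am imagining is FreeNAS's periodic snapshot plus replication tasks, pointed from the SSD pool at a dataset on the big spinning pool. At the shell it would boil down to something like this (pool and dataset names are made up for the example):

# snapshot the VM dataset on the striped SSD pool...
zfs snapshot -r ssd/vms@nightly-1
# ...and copy that snapshot into the RAIDZ2 pool as a backup
zfs send -R ssd/vms@nightly-1 | zfs recv -F tank/backups/vms

After the first full copy, later runs would be incremental sends (zfs send -i) so only the changed blocks move, which is what the built-in replication task handles for you anyway.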

My budget is pretty much dedicated to the storage: $1200. I know at one point I was toying with the idea of cutting that in half and purchasing WL refurb drives, but I don't think I want to take on that risk at this time. With my luck, it would end up costing more over the next 5 years in replacement drives.

What I want to do:
Point blank: I want to stream TV and movies via Plex (one client); I want a central repository and a new seedbox that can utilize OpenVPN; and I want to be able to connect to shares using certificates instead of credentials. I am also interested in encrypting all of this data, with the idea of encrypting it at rest in case I ever want to send it up into the cloud somewhere.

Let me know what you think,
-Wooley
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The i7 and that motherboard do not support ECC memory. Yes, it will likely run, but you'd be skipping an important safeguard against data corruption. It's your data, though.

1GbE networking will already be saturated by the 8x 4TB spinning-rust disks. Unless you have a 10GbE or InfiniBand fabric, the SSDs would be wasted.

As for the VMs... I tried running a VM on mine and got hobbled by bhyve's inability to pass through the underlying system's RTC. It drifted off by several minutes in just a week or so. I tried configuring NTP on the VM, and the time wobbled around in a 500ms band and step-corrected every few minutes, which kept it from ever raising its polling interval past 64 seconds. I decided it wasn't worth it, since I also have an ESXi host that doesn't have these problems.
 

wooley-x64

Dabbler
Joined
Dec 6, 2018
Messages
22
I am aware of the ECC issue. The problem is that my only ECC-capable hardware is an old Dell PE 6950, which does have 32GB of ECC RAM (DDR2). However, it is one HUGE box and has dual 1500-watt PSUs and is very loud. It basically sits as a dust collector in a small cabinet in the garage. In its heyday it ran full-blown ESX 4.x; it also hosted many game servers and supported a gaming community (MW2 dedicated servers, RIP AlterIWnet). My only network access in the garage is a powerline adapter. I currently rent (townhouse) and have no way to shore up a better connection out there, and I do not want to attract unnecessary attention to my unit with a server running 24/7. Also, for a 4U server it only supports 5 drive bays, and it currently has the cancer of a Dell PERC 5/i card (from what I read, no IT/HBA mode), so that would have to go. I guess I would also have to come up with a place for 3 more drives.

IIRC it has 4x quad-core AMD Opterons. I remember calculating, back in 2013, a cost of about $70/month to run that system. I am assuming the re-purposed i7 will be at most half that to run.

If I had a larger budget, or if I had newer hardware that ran ECC... in a heartbeat I would run ECC. I am an IT security engineer by trade. I understand the risks and I will take actions to mitigate those risks. I will replicate all important data elsewhere. If it dies, it dies (on the non-ECC FreeNAS).

A comment on the VMs:

These would not be running 24/7. It would be more of an "I need to test a patch" situation: spin up the VM, apply the patch, reboot, kick the tires, shut it down (1-2 times per month).
Or, in the case of the sandbox: load the Windows 7 FLARE VM, copy the file in, run it, perform the analysis, generate a report, shut it down (1-3 times per quarter).

For the seedbox, I have not decided whether it would be a VM (some Linux distro) running 24/7 on the FreeNAS box or something native to FreeNAS via a plugin; I think I read that is supported.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I am aware of the ECC issue. The problem is that my only ECC-capable hardware is an old Dell PE 6950, which does have 32GB of ECC RAM (DDR2). However, it is one HUGE box and has dual 1500-watt PSUs and is very loud.

My first build was a Dell SC1430... 16GB ECC DDR2 and dual 120-watt X5355s... Been there, done that... o_O

If I had a larger budget, or if I had newer hardware that ran ECC... in a heartbeat I would run ECC. I am an IT security engineer by trade. I understand the risks and I will take actions to mitigate those risks. I will replicate all important data elsewhere. If it dies, it dies (on the non-ECC FreeNAS).

I picked up a used Supermicro X9SCL for less than $60. Getting the ECC UDIMMs actually cost more, and I paired them with a $20 i3 CPU I already had. The total budget was less than $200 to get proper ECC and a server-grade motherboard. I can get away with this CPU because I don't transcode, but for $40 more I could get a used Xeon. I do regret not holding out for an X9SCL-F or X9SCM-F, which would have gotten me IPMI. But again, it's your data, and you understand the risks.
 