Joining the FreeNAS family

Status
Not open for further replies.

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Hello everyone, I've been lurking around the forums for about a year, hashing out which hardware I want to go with.

I'd like to use FreeNAS primarily as a storage device, but seeing as Plex is so well regarded, I'd probably use it for that as well. I currently use a Windows 7 PC's built-in DLNA feature to stream to TVs in the house.

I've gone back and forth a few times between Xeon and Atom processors, but decided to stay with Atom. I would very much like to build a box that uses as little power as possible, as it will spend the majority of its life idle.

I'll list my current list of components below; I would greatly appreciate any input and advice from the veterans.

Board: ASRock C2750D4I Mini ITX Server Motherboard
The price went up big time since I first priced out the NAS: it was $349, now it's $418. I see there's also a Gigabyte option, but that's pricier still, and then there's the Supermicro option, but that uses 204-pin (SO-DIMM) RAM, and finding that in ECC is horrible.
RAM: Crucial 16GB Kit (8GBx2) DDR3/DDR3L-1600MT/s (PC3-12800) DR x8 ECC UDIMM Server Memory
(I can go to 32GB if you guys think it would be better. You read that you need 1GB of RAM per 1TB of storage, but then others say they're only using 16GB with more than 16TB and it's working great.)
Case: SilverStone DS380B. EDIT: The case turned out to lack the ventilation needed for the HGST drives. I changed it for a Lian Li PC-Q26B case.
PSU: SILVERSTONE SX500-LG 500W SFX-L 80 PLUS GOLD Certified Full Modular
The biggest Gold-certified SFX-format PSU I could find at Newegg. EDIT: Since the case changed, I bought an EVGA 550W 80 Plus Gold power supply instead. As it turns out, the annoying noise I mention further down was caused by the SilverStone power supply.

Drives:
Option 1) 6x HGST Deskstar NAS 4TB, $159 each
Option 2) 6x HGST Deskstar NAS 6TB, $259 each
Option 3) 6x Seagate NAS HDD, roughly $225 each (I have a coupon valid till 4/1, so this option might not be valid for long)



UPS: CyberPower PR1500LCD [pure sine wave, if anyone cares this is a great unit, PM if you have questions]

Network:
Router: Verizon Actiontec MI424WR
Switch: Cisco SLM2008T
WirelessAccessPoint / Switch 2: NetGear WNDR3700 (my old router, used as an additional wireless access point and 4-port switch. Our house is one floor but fairly long, and the Verizon router is pretty far away, so the signal wasn't strong at the other end of the house.) I have Cat6 cable running through the attic to pretty much every room in the house, so the TVs and all the gizmos (except tablets) are hard-wired.

Well, that's it for the hardware. As for configuration and usage, if I understood correctly, I would have to create two ZFS pools, one used for network storage, the other for the Plex server. Is that correct? So I'd have 2 copies of, say, my 2015 Christmas video.

Also, regarding configuration, as mentioned earlier, I would like to set up this box to use as little power as possible, so I would like the drives to spin down 15-20 minutes after use. I don't mind if there's a delay when I need to access it. But here comes the question: in reality, do the drives stay spun down? I'm asking because I have an old Synology 4-drive box, and even with nobody at the computers, it seems like the home network (TVs, computers, whatever) hunts around for devices to update its 'network devices' table and constantly spins up the Synology drives even though nobody's pulling any real data from it. On the other hand, I also have a little drive attached to the NetGear router via the USB port, and that stays in 'sleep' mode until someone actually uses it.

Thank you again for reading, and thanks in advance for any feedback!
-Phil



------_____________------ Follow up after ordering parts ------_____________------

Edit: I just wanted to follow up on this thread with my build after purchasing the components.

So I bought the 6x HGST Deskstar 6TB drives, the rest is as listed above.
I ended up buying a Noctua NF-A8 PWM (4-pin) fan to put over the large CPU heatsink.

My configuration is 6x 6TB HGST in RAIDZ2, plus another 6TB Seagate 7200RPM drive I had lying around, added as a single drive open to network access.

The badblocks test took 76 hours to complete on each drive, if I remember right (remember to set the block size to 4096 for you guys with large drives).
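In case it helps anyone, here's a rough Python sketch of how that can be scripted. This is not the exact thing I ran: the device names are placeholders for your own drives, and the -w write test wipes the disks, so only point it at empty drives.

import subprocess

# Assumed FreeBSD-style device names -- check yours with `camcontrol devlist` first.
drives = ["ada1", "ada2", "ada3", "ada4", "ada5", "ada6"]

# Kick off one destructive badblocks pass per drive so they run in parallel.
# -w = write-mode test (wipes the drive), -s = show progress, -b 4096 = 4K block size.
procs = [subprocess.Popen(["badblocks", "-ws", "-b", "4096", f"/dev/{d}"]) for d in drives]
for p in procs:
    p.wait()  # each pass can take days on 6TB drives

In practice you'd probably run each pass in its own tmux/SSH session so the progress output doesn't get interleaved, but the flags are the important part.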

The good: performance is great, 100+ MB/sec copying data from my Windows 7 machines (using a CIFS share). This performance holds true for the single drive as well as for the RAIDZ2 array.

The bad: the HGST drives run HOT! I seem to have received drives from 2 batches. Two drives have serials starting with NCG; these run cooler (41-42C idle, 43-44C under badblocks load) but seem slower (badblocks took 2 hours or so longer to complete). The others' serials start with K1G, and those run hot (46-47C idle, 49C under badblocks load). Meanwhile the Seagate 7200RPM is right in the drive stack, sitting happy at 36C.

The case I use is the SilverStone DS380B. I had to resort to installing a cardboard divider to force the 2x 120mm fans to blow through the HDD cage. This significantly cut into the airflow over the CPU (temps went up roughly 6C), so I had to buy a Noctua fan to pull some air over the CPU; now all is good.
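For anyone who wants to keep an eye on drive temps while shuffling fans around like this, here's a minimal sketch of the kind of thing I poll with. It assumes smartmontools is installed, and the device names are placeholders for your own drives.

import subprocess

# Placeholder device names -- substitute your own (e.g. from `camcontrol devlist`).
drives = ["ada1", "ada2", "ada3", "ada4", "ada5", "ada6"]

for drive in drives:
    out = subprocess.run(["smartctl", "-A", f"/dev/{drive}"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        # SMART attribute 194 (Temperature_Celsius); the raw value is the 10th column.
        if "Temperature_Celsius" in line:
            print(drive, line.split()[9], "C")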

I also have an odd 'trrr-ttr-trrr' sound coming from the box, but it's not coming from the HDD cage, so I have NO clue what's causing that noise. I only seem to hear it with the case closed; as soon as I open the case and put my ear in there to pinpoint the noise, the fans seem to drown it out. But it seems to be coming from the motherboard somewhere.

In conclusion, the performance and everything else is great, and I can't wait to set it up the way I want to. I am trying to find a solution to my CIFS shares question (one share allowing guests, the other requiring a user account, while being able to have both open at the same time). Hopefully I'll come up with something soon, then I'll try to install Plex and figure out how that thing works.
 
Last edited:

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Welcome to the forums! Seems like you're off to a great start.

Well, that's it for the hardware. As for configuration and usage, if I understood correctly, I would have to create two ZFS pools, one used for network storage, the other for the Plex server. Is that correct?
Jails/plugins can share a pool; you can simply share the storage into the jail (http://doc.freenas.org/9.10/freenas_jails.html#add-storage), unless of course you have a reason to have two separate pools.
Also, regarding configuration, as mentioned earlier, I would like to set up this box to use as little power as possible, so I would like the drives to spin down 15-20 minutes after use.
Don't bother. It's going to cost you a few pennies extra per year to keep them spinning. Your pool will never be inactive enough to actually spin down, especially if you decide to use jails or plug-ins, since log files are written, cron jobs executed, etc.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Thanks for the feedback, and the heads-up on the shared storage. Sharing a folder for Plex will be much more convenient; however, is there a risk that Plex might alter the files?
I don't have 'that' much data to back up, maybe 3-4TB at the moment; I'm just going overkill on this because I want to set it up once and have it going for a very long time. I'd rather spend a bit more money now and not have to re-create my arrays in a few years by adding drives or getting a new set of bigger drives.

As far as power consumption goes, I wasn't really doing the math in terms of finances. I have a large PV solar array on the roof, and as you said, the cost of keeping those drives spinning will be pennies, but I have a thing for efficiency, so I'm just trying to satisfy that irrational fixation, especially when I know that I'll probably be using those drives a couple of hours a week at most.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
You can mount plugin and jail storage as read-only. This is what I do for all my media jails, to ensure that they cannot make any changes. Any metadata you modify, such as titles, artwork, etc., will be stored in the jail.
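For reference, under the hood that jail storage is essentially a nullfs mount into the jail's filesystem. A minimal sketch of the equivalent manual mount is below; the dataset and jail paths are made-up examples, and in practice you'd just add the storage (with the read-only box ticked) from the jail's Storage page in the GUI.

import subprocess

# Hypothetical paths -- normally FreeNAS sets this up for you via the jail's Storage page.
subprocess.run(["mount", "-t", "nullfs", "-o", "ro",
                "/mnt/tank/media",                 # dataset being shared into the jail
                "/mnt/tank/jails/plex_1/media"],   # mount point inside the jail's root
               check=True)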
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Don't bother. It's going to cost you a few pennies extra per year to keep them spinning. Your pool will never be inactive enough to actually spin down, especially if you decide to use jails or plug-ins, since log files are written, cron jobs executed, etc.

Would it be possible for me to add an SSD and configure the system to use that as log space or general task I/O space, and keep the spinny drive array strictly for network/storage tasks?

Trying to see if there's a way to let the drives spin down since they will rarely be used, hoping it'll extend their life.

thanks
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Would it be possible for me to add an SSD and configure the system to use that as log space or general task I/O space, and keep the spinny drive array strictly for network/storage tasks?
Yes, just use the GUI option to move the system dataset to the SSD pool. And you can use the same SSD to mount your jails.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Would it be possible for me to add an SSD and configure the system to use that as log space or general task I/O space, and keep the spinny drive array strictly for network/storage tasks?

Trying to see if there's a way to let the drives spin down since they will rarely be used, hoping it'll extend their life.

thanks
Spinning disks up and down doesn't extend their life...
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Spinning disks up and down doesn't extend their life...

You're right, and if I end up using the drives more often, I'll reconfigure it to keep them spinning. However, if my usage of the FreeNAS box is similar to my Synology box, I'd be using it once a week for approximately 1 hour of backups and that's it. So with that in mind, it seems a bit pointless to keep the drives spinning.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Yes, just use the GUI option to move the system dataset to the SSD pool. And you can use the same SSD to mount your jails.
Thanks for the feedback. I'm going through the documentation as m0nkey_ suggested, but all your replies help!
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Me again, spamming my own thread.

I have a question for any of you using the C2750 Atom-based boards. I'm starting to read about performance issues related to CIFS on these lower-clocked processors. I would really want this to have 80+ MB/sec transfer rates when copying files from my computers. We have Windows computers here, and a couple of iPads that I would also like to be able to connect to the shared folder (via applications like FileBrowser).

thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Have you read the FreeNAS Mini review I wrote? If not, google it. I did 350MB/sec with CIFS on my C2750 system. :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm starting to read about performance issues related to CIFS on these lower-clocked processors.
Where? That's definitely not the general experience.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Have you read the FreeNAS Mini review I wrote? If not, google it. I did 350MB/sec with CIFS on my C2750 system. :)
Is this the one: (https://cyberj0ck.wordpress.com/2014/05/05/my-review-of-the-freenas-mini-part-1/)?
I see in your review you got 200+ MB/sec, so honestly that's good enough for me. I wonder why others see worse performance; even a 1Gb LAN should do 100MB/sec.
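As a quick sanity check on that last number (assuming somewhere around 5-10% of the link goes to Ethernet/IP/TCP/SMB overhead):

# Rough ceiling for file copies over gigabit Ethernet.
link_bps = 1_000_000_000              # 1 Gb/s
raw_MBps = link_bps / 8 / 1_000_000   # 125 MB/s on the wire
overhead = 0.07                       # assumed protocol overhead
print(f"~{raw_MBps * (1 - overhead):.0f} MB/s usable")   # roughly 116 MB/s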

Where? That's definitely not the general experience.
That started with me reading the documentation, which mentions the Atom processors in the CIFS share section (10.4, I believe). Then I searched the forum for 'performance c2750' and there were a few threads where users were getting poor performance. I don't remember the details, but it was 70MB/sec or less.
 
Last edited:

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, a lot of others do things they shouldn't do. Stuff I've personally seen be responsible:

1. Realtek NIC on the desktop.
2. Realtek NIC on the server.
3. Turning on the "Green" features on a desktop NIC.
4. Using "Green" switches.
5. Using programs like Teracopy that claim to speed up performance (but they were proven to be the cause of the poor performance)
6. Horribly underpowered desktop
7. Horribly underpowered server (sometimes even below the minimum specs)
8. Failing hardware (especially hard drives)
9. Very high packet loss (>5%) due to failing network gear.
10. Using very crappy cabling (often custom-made or very old Cat5 cabling).
11. Tuning their ZFS without having a clue.
12. Tuning their networking without having a clue.

The list goes on and on. I think I made the point though :P

IT geeks have a hard time accepting that "the default should work well for most people" (yes, I fell into that category when I first built my FreeNAS machine too). They are often convinced that if they tweak the system they can get "more powah" out of it. That may be true, but if you don't have lots of BSD experience, you're just guessing. And your odds of guessing correctly for your exact situation when you've got 2 whole weeks of experience with BSD are pretty close to zero.

For people at home just wanting to share out some files, your best performance comes by using the defaults, using the hardware we recommend, and actually taking the mistakes that others make to heart.

The FreeNAS Mini has the relatively wimpy Intel Atom C2750 CPU, and despite having only 4 disks and no SLOG or L2ARC, using CIFS you can saturate a 1Gb LAN easily, and possibly both of the LAN ports. It doesn't really take rocket science to make a great FreeNAS box. It just takes you doing what is "right". That's all.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
Okay, thanks, that's great to hear. Going from memory, I believe I can sustain 110-125MB/sec at home on transfers between my laptop and a desktop being used as a file share/storage box. (Both the laptop and the desktops have Intel i7 processors.)

Can someone chime in on the RAM question (16 vs 32GB)? I'm going with either a 6x4TB or a 6x6TB configuration (probably in RAIDZ1), and perhaps another standalone 1TB drive that'll be used for logs and things I'm not attached to. (I'm still being bullheaded about having my 6 large drives spin down when not in use...)
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
16 GB of RAM should be fine to start with.

RAIDZ1 is a risky choice with today's large drives. It's WAY safer to go with RAIDZ2.

It's generally a bad idea to have your drives spin up and down all the time. What's more expensive, some extra electricity, or replacing failed drives?

I just bought a few of these White Label 6 TB drives. They should show up soon. I'm excited to see how reliable they are. There's also a 4 TB version.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
16 GB of RAM should be fine to start with.

RAIDZ1 is a risky choice with today's large drives. It's WAY safer to go with RAIDZ2.

It's generally a bad idea to have your drives spin up and down all the time. What's more expensive, some extra electricity, or replacing failed drives?

I just bought a few of these White Label 6 TB drives. They should show up soon. I'm excited to see how reliable they are. There's also a 4 TB version.

Are two drive failures common? I generally buy an extra drive so that if something happens I can immediately swap it in. For this FreeNAS build, I would probably wait a month or so and then buy an additional spare drive, and probably 6-8 months after that buy a second spare just to have in case of emergency. For example, for my current Synology CS407 I have 4x1TB drives in it, and I also have 2 more of the same drives sitting in their plastic bags waiting to be swapped in if one fails. At any rate, I suppose that 24TB is still a nice chunk of disk space... but IT'S LESS THAN 30 DANG IT! ha ha ha. I mean, I'm really trying to set this up so that I build it now and don't have to futz with it for a very VERY long time.
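(For what it's worth, here's the quick math behind that 24-vs-30 figure; a rough calc that ignores ZFS overhead:)

# 6 drives of 6 TB each; RAIDZ1 keeps 5 for data, RAIDZ2 keeps 4.
drives, size_tb = 6, 6
for parity in (1, 2):
    data_tb = (drives - parity) * size_tb
    data_tib = data_tb * 1e12 / 2**40   # what the OS will roughly report
    print(f"RAIDZ{parity}: {data_tb} TB of data disks, ~{data_tib:.1f} TiB")
# RAIDZ1: 30 TB (~27.3 TiB), RAIDZ2: 24 TB (~21.8 TiB), before filesystem overhead.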

Regarding the electricity cost and all that, that's not really part of my equation. What matters more is that the box will be sitting in my office, so no drives spinning is quieter than 6 drives spinning (or maybe 1 drive spinning if I go HDD vs. SSD). I know the case has fans for cooling, but depending on how loud they are I might get some Noctua fans that are nice and quiet. I'm also getting that power supply partially because it has a large 120mm fan (hence the SFX-L form factor), not the 80mm buzzers.
Also, I was basing the hard drive spin-down idea on my current usage of the Synology NAS, which stays offline for long periods of time. I use it maybe once a month now. I do plan on using the new FreeNAS build more frequently, but I still estimate using it maybe once a week. So considering that, I thought it would be more efficient (longer life) to have the drives stationary for the week and only spin up when I run backups.

On the other hand, you're right: if it turns out the FreeNAS box gets hit frequently during the day, or I move all my MP3s off the Windows 7 box that sits in the closet and start streaming from the NAS, or my TVs turn out to work nicely with a Plex server, then yes, I would likely opt for keeping the drives spinning.

Also, I've been reading through the documentation (not finished yet), and I was wondering: what's the best option for backing up the contents of a large RAIDZ array? My Synology has nowhere near the capacity required to back this up, but that's not really the issue. I want the ability to back it up onto external hard drives (currently I have an eSATA/USB dock where I swap drives), and I'd like to continue using that process. The reason being, I can have all this RAID stuff at home, but if my house catches fire, a RAIDZ400Million isn't going to recover, so I keep my Synology stuff backed up on those hard drives and store them in the bank's safe deposit box.
 
Last edited:

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
I just noticed this board is for sale at Amazon now: http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2750F.cfm. The price is a fair bit better than the very popular ASRock equivalent. The biggest difference to me is that it doesn't have the 6 Marvell SATA ports. The only 6 ports it has (2x SATA3 and 4x SATA2) are the Intel built-in ones, I'm assuming, right? All it says on the web page is:
  • C2000 SoC SATA3 (6Gbps), SATA2 (3Gbps)
I'm guessing that stands for the Intel C2000 Atom series? What do you guys think?

What I like about this Supermicro is that it takes 240-pin RAM, not 204-pin like the MBD-A1SAi-2750F-O. I also would have preferred a larger heatsink so that I could mount my own, larger fan to it.
I'm thinking of starting with 6 disks as I mentioned above, so in the future if I want to add more drives, which card would you recommend that would be compatible with this board?

Thanks
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Are two drive failures common?
Not particularly.

The problem is the relatively high likelihood of small errors on one of the surviving disks (due to their size). With RAIDZ1, these would be permanent. With RAIDZ2, they're easily fixed - plus you can survive two disk failures, but that's a smaller contribution to the probability of data loss.

I just noticed this board is for sale at Amazon now: http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2750F.cfm. The price is a fair bit better than the very popular ASRock equivalent. The biggest difference to me is that it doesn't have the 6 Marvell SATA ports. The only 6 ports it has (2x SATA3 and 4x SATA2) are the Intel built-in ones, I'm assuming, right? All it says on the web page is:
  • C2000 SoC SATA3 (6Gbps), SATA2 (3Gbps)
I'm guessing that stands for the Intel C2000 Atom series? What do you guys think?
It's a nice board. Don't think I've seen anyone use it, but the platform is stable and it should be a good choice.

so in the future if I want to add more drives, which card would you recommend that would be compatible with this board?
Your pick of whatever LSI SAS 2008 or 2308-based card you can find.
 

PhilZJ81

Explorer
Joined
Mar 29, 2016
Messages
99
OK, thanks for the reply, but those are so expensive... I think I'd better just stay with the ASRock that has the extra 6 Marvell SATA ports.

Does anyone on the forum use those ports? I know Marvell isn't as good as Intel, but I was wondering if anybody uses them and isn't having any problems. Perhaps I could just use them for standalone network drives, not in any sort of array.
 