Updating from Nexenta to a new, upgraded FreeNAS build

NASbox

Guru
Joined
May 8, 2012
Messages
650
@droeders , the OP was already planning to use Icydock MB153SP-B units to get hot-swap bays. Unless the plan changed even more since the original post.
If he plans on frequent swaps, that's not a good design.

Have a peek at the tray: https://www.icydock.com/images/201007/1279860040835574394.jpg The back is open, which means the very fragile SATA connector on the HD plugs directly into the backplane.

SATA connectors are very fragile and have a rated life of about 125-150 insertions/removals - just fine for normal use. How often is a hard drive going to be swapped? A handful of times over the life of the machine.

But for a use like a "backup cartridge" it's a very different story. Do the math based on how often they will be swapped.

A weekly swap kills the connector in 2-3 years; a daily swap, in less than 6 months!
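Taking the optimistic end of that 125-150 cycle rating, the math is easy to sketch (a rough estimate, counting each swap as one insertion/removal cycle):

```python
# Rough connector-lifetime estimate from the ~150 mating-cycle rating
# mentioned above (one swap = one insertion/removal cycle).
RATED_CYCLES = 150  # optimistic end of the rating

def years_until_worn(swaps_per_year: int, rated_cycles: int = RATED_CYCLES) -> float:
    """Years of service before the connector hits its rated cycle count."""
    return rated_cycles / swaps_per_year

print(f"Weekly swaps: ~{years_until_worn(52):.1f} years")   # ~2.9 years
print(f"Daily swaps:  ~{years_until_worn(365):.1f} years")  # ~0.4 years (< 6 months)
```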

What's worse, Murphy's law says that the moment they need to be reinserted for a restore is exactly when the connector will fail!

Somebody warned me, so I'm paying it forward....
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
Sorry, for some reason I thought you had planned more drives.

No problem, and you are right; originally I thought to have more drives, but realized that I have no need for more with my data consumption. IMO, there's no reason to keep extra drives spinning in the main pool for nothing if I can't utilize them fully in the future. For the backup I was thinking of maybe a mirror setup with the 2 drives, but now that I checked the ZFS calculator, RAIDZ1 could be better, so I have to add one more drive to the list.
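For anyone following along, the comparison such a calculator makes is simple raw-capacity arithmetic (a sketch in raw TB, ignoring ZFS metadata and padding overhead):

```python
def mirror_usable(size_tb: float) -> float:
    # An n-way mirror stores one drive's worth of data, however many copies
    return size_tb

def raidz1_usable(n_drives: int, size_tb: float) -> float:
    # RAIDZ1 gives up roughly one drive's worth of space to parity
    return (n_drives - 1) * size_tb

print(mirror_usable(6))       # 2x 6TB mirror  -> 6 TB usable
print(raidz1_usable(3, 6))    # 3x 6TB RAIDZ1 -> 12 TB usable
```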

Sure, not arguing at all with that calculation. I still think it best to have the extra headroom as a precaution because I don't like to stress the equipment.

I agree, and I am going for the 650 W, especially since I am adding one more disk to the backup pool.

I don't claim that it will make the supply last longer, because I have no evidence of that, but my intuitive thought is that if it is less stressed, it should last longer and, more importantly in my mind, be reliable.

That makes two of us - unless somebody else on the forum has some information to the contrary...

Here's another option using some drive caddies from Addonics:

https://forums.freenas.org/index.php?threads/how-to-backup-to-local-disks.51693/#post-428001

Based on the post, @NASbox sounds pretty happy with it.

Yup! All the convenience of an external USB drive, with all the benefits of a native disk. Just make sure you have a GOOD hot-swap caddy (most are NOT built for this type of use - they are for quick recovery after the occasional disk failure, NOT routine planned hot swaps!)

I'm still struggling a bit with scripting, but the data is not high turnover, so at this point I'm not doing it very often. As I get proficient with ZFS snapshot replication, I'll get a much more automated system - or you could just rsync (I used to, but ZFS send/receive is MUCH faster and more accurate). Anyway, you'd have the same problems with USB drives (plus a bunch more, like a potentially unreliable drive).
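The replication workflow mentioned here can be sketched in a few commands; the pool and dataset names (tank/data, backup) are placeholders, not anyone's actual setup:

```shell
#!/bin/sh
# Sketch of snapshot replication to a swappable backup pool.
# "tank/data" and "backup" are hypothetical names - substitute your own.

NOW=$(date +%Y%m%d)

# 1. Snapshot the dataset to back up
zfs snapshot tank/data@backup-$NOW

# 2. Full send the first time...
zfs send tank/data@backup-$NOW | zfs receive -F backup/data

# ...or incremental on later runs (PREV = last snapshot both pools share)
# zfs send -i tank/data@$PREV tank/data@backup-$NOW | zfs receive backup/data

# 3. Export before pulling the cartridge so the pool is cleanly offlined
zpool export backup
```

This is a CLI fragment against a hypothetical pool layout, not a tested script; the incremental form is what makes repeated backups much faster than rsync.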


@droeders Thanks for the information, and @NASbox a very good point - I hadn't thought about that at all, thank you.

I am still using the Icydocks for the main pool and the backup pool, but your solution for the offsite backup looks and feels more solid and secure. That said, my data turnover is quite low, so I would do an offsite backup and hot swap maybe every fortnight or once a month. So I think I can manage with the Icydock solution for now, but I will definitely consider implementing your solution in the future, as soon as I have some picture of what is available here.

Question: Is the gaping hole on the front of the case a dust magnet as there is no cover for it and is that small fan in the caddy capable of keeping the drive cool enough?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Question: Is the gaping hole on the front of the case a dust magnet as there is no cover for it and is that small fan in the caddy capable of keeping the drive cool enough?
I had a similar product from another company and there was a flap that closed the gaping hole in the front of the case, but it wasn't good enough to keep dust out.
This product appears to have a flap also:
http://www.addonics.com/products/diagrams/diamond/cradle_diagram_sata.gif
I imagine that it will still let some amount of dust into the case.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
I agree, and I am going for the 650 W, especially since I am adding one more disk to the backup pool.

That makes two of us if somebody else in the forum has some other information to the contrary...

@droeders Thanks for the information and @NASbox A very good point, haven't thought about that at all, thank you.

I am still using the Icydocks for the main pool and the backup pool, but your solution for the offsite backup looks and feels more solid and secure. That said, my data turnover is quite low, so I would do an offsite backup and hot swap maybe every fortnight or once a month. So I think I can manage with the Icydock solution for now, but I will definitely consider implementing your solution in the future, as soon as I have some picture of what is available here.

Question: Is the gaping hole on the front of the case a dust magnet as there is no cover for it and is that small fan in the caddy capable of keeping the drive cool enough?
Two things that I've learned over the years about power supplies: (1) surge capacity and (2) capacitor quality.

I borrowed a power meter from the library to check on the power usage of my systems, and it shows peak as well as average and instantaneous usage values. Startup current is over 2x the "running value". My FreeNAS, which has 10 drives (2 of which normally sit powered down), hits about 225W (at the wall plug) on startup and runs between 28W and 115W depending on what is going on. I suspect that number would be higher if I had jails or other CPU-intensive activity going on.
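For a rough feel of why the startup peak is so much higher, here is a back-of-envelope sketch; the per-drive spin-up and idle wattages and the base system draw are illustrative assumptions, not measurements of any particular model:

```python
SPINUP_W = 25   # rough spin-up draw per 3.5" drive (assumption)
IDLE_W = 5      # rough idle draw per drive (assumption)
BASE_W = 60     # board + CPU + fans at boot (assumption)

def boot_peak(drives: int, staggered: bool) -> float:
    """Worst-case wall draw at boot, with or without staggered spin-up."""
    if staggered:
        # Only one drive spins up at a time; the rest draw at most idle power
        return BASE_W + SPINUP_W + (drives - 1) * IDLE_W
    return BASE_W + drives * SPINUP_W

print(boot_peak(10, staggered=False))  # all 10 drives at once: 310 W
print(boot_peak(10, staggered=True))   # staggered spin-up: 130 W
```

Staggered spin-up is why a measured peak can come in well under a spec-sheet worst case.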

I have a background in electronics going way back, so I'm not afraid to pick up a screwdriver, a voltmeter, or a soldering iron. Over the last 15 years I've had 4 power supplies die, and they all died on system boot. I opened them up to find out why, and every failure was caused by cheap Chinese capacitors.

(Visual inspection shows the cans bulging in many cases - google "bad capacitors" or "badcaps"; there is at least one web site devoted to it. I've resurrected both of my current monitors (LG/Samsung) for about $10 in parts from DigiKey and an afternoon with a soldering iron. I've got 3 power supplies waiting for the same treatment, but I need access to a desoldering gun to remove the bad parts without damaging the boards.)

I haven't looked for about 1-2 years, but last time I looked, Seasonic was one of the few high-quality builds that has good quality parts, is super stable when the load changes (transient response), and stands up well to overloads. When looking for a power supply, I spent a few hours looking at test sites (they do full tear-down and examination as well as temperature and load tests) - jonnyguru.com and tomshardware.com, and there may be others that escape me. From my experience their commentary is sound and based on hard test data, not "marketing BS" like a lot of review sites. Even if you don't understand all the nuances, read the reviews to get a sense and look at the conclusions. Unfortunately, even some "name brands" have their stuff built by "Sum Cheap Crap Manufacturing" or get their capacitors from "The Lucky Capacitor Company" in China. Given that a bad PSU can literally "fry" your entire system, it's one place where I am "super picky".

As for the dust magnet: if your ventilation is designed properly, the case will be at positive pressure - 2 big intake fans (with filters) at the front and one exhaust at the back, power supply in its own isolated loop. On the Addonics bays, there is a spring-loaded flap over the opening that shuts when the drive bay is out.

The Addonics cartridges are very heavy aluminum, as is the base, so they do a great job of heat sinking. The hot-swap bays I bought are the fanless models (they do have ones with fans), but a 10TB WD Red only reached about 40-41C (going from memory here - ambient about 26-28C) after over a day of "badblocks". My drives NEVER work even close to that hard in real life - the worst trauma they are exposed to is full scrubs, which last about 4-5 hours at the moment.

If you are running a 24/7/365 database application with constant load (or drives that run hot - WD Reds seem to run VERY cool; I have two Hitachi drives that are about 7-8C hotter), then it's a different matter.

From what I can see, my guess is that the "IcyDock" is going to need fans to even come close on temps - and I assume this is where the main drives live.

Hope these comments help (Click thanks to let me know if you find this useful).
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Hello all,
Power: Enermax EMD625AWT

This power supply is from an old server, and I am planning to repurpose it for this new build, but I still have to calculate whether it is enough and suited for this build and future expansion, or just invest straight in a Seasonic…

Network: Intel D33682 PRO/1000 PT Dual Port Server Adapter (on old server ATM but MB has dual LAN ports so this is not necessarily needed)
Boot device: 16/32 GB SSD

Is there anything that could be improved in this setup so it will have a long lifespan, hardware malfunctions or breakages aside?

Hi @Jatrabari, for what it's worth, here's a review of the PS you are considering:

https://www.hardocp.com/article/2008/04/16/enermax_modu82_625w/3

From a quick scan of the conclusion it doesn't look bad... if your unit is older, it may have different caps... read the whole article to see if they say more. The TL;DR: the parts are good, assembly may be a bit on the rough side, but unless you get a lemon, it should be OK.

Given it's a rebuild, if it were me, and cash flow was an issue, I'd likely use it, and put a few $ aside each month till I could replace with a Seasonic.

As for lifespan, case ventilation, working environment, and parts quality are key. I went from 6 drives in a small case to 10 drives in a full tower case, and my temps are much better in the full tower case, <40C for the most part. WD Reds are a good choice, as they run very cool in my experience. I have 8x6TB and they are great. I also have 2 Hitachi drives that I use to keep my coffee warm ;-) - hence they stay spun down when not in use, and I am thinking about pulling them completely.

Unless you are going to have noisy fans, or the mounting has a lot of aluminum, the hot-swap bays may make the drives run hot, and the vents could let in a lot of dust unless they have a filter. (Any intake fan needs to be filtered and the filter cleaned regularly.) I personally avoided bays for that reason. They add cost, and my use case is similar to yours: I'm happy to power off for an hour to swap a drive manually. If it were in a business with several employees and I had to do maintenance during business hours, then I'd go with hot-swap, as the downtime would be costly.

It was more good luck than good engineering, but my airflow is out and around the aluminum on the hot swap bay, so it stays cool with no fan. I'm not sure I would be happy if I had 8 of them all together. In my case I have one for backup, and it's great - quiet and cool enough (~40c). A bit of air flow over a drive makes a big difference as does the mass of a large aluminum carrier.

Something to think about if you want to expand to 16 drives is vibration. I don't know much about it, except that WD Reds are rated for up to 8 drives in a chassis. Maybe someone with experience can chime in on this issue.

The rest of your stuff looks good. I'm not an expert on those parts, but I'm sure there are lots on this forum who are.
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
I had a similar product from another company and there was a flap that closed the gaping hole in the front of the case, but it wasn't good enough to keep dust out.

Good to know. What company and which model?

I borrowed a power meter from the library to check on the power usage of my systems, and it shows peak as well as average and instantaneous usage values. Startup current is over 2x the "running value". My FreeNAS which has 10 drives (2 of which normally sit powered down) hits about 225W (at the wall plug) on startup and between 28W and 115W depending on what is going on. I suspect that number would be higher if I had jails or other CPU intensive activity going on.

I have calculated my peak + idle power using mostly manufacturer's spec sheets to be 465 W for 10 drives.

(Visual inspection shows the cans bulging in many cases - google "bad capacitors" or "badcaps" - there is at least one web site devoted to it. I've resurrected both my current monitors LG/Samsung for about $10 in parts from DigiKey and an afternoon with a soldering iron.

Great that you can do this and won't have to buy new replacements. I would also like to have the tools and the skill to do this. I have seen some pictures of "fat caps".

I haven't looked for about 1-2 years, but last time I looked, Seasonic was one of the few high-quality builds that has good quality parts, is super stable when the load changes (transient response), and stands up well to overloads.

I am going with Seasonic for sure as to the hardware recommendations and also their great reputation and long warranty.

What PSU do you use at the moment?

The Addonics cartridges are very heavy aluminum as is the base, so they do a great job of heat sinking. The Hot Swap bays I bought are the fanless models (they do have ones with fans), but a 10TB WD Red only reached about 40-41C (going from memory here-Ambient about 26-28C) after over a day of "badblocks".

Didn't find Addonics in my neck of the woods, but this one is more likely to be available. I have their external enclosure in use and it has been great. Nice LED effect around the enclosure.

I don't know, but transiently hitting temps >= 40C is maybe not catastrophic?

From what I can see, my guess is that the "IcyDock" is going to need fans to even come close on temps - and I assume this is what the main drives live in.

The Icydock enclosure has an 80 mm fan for 3 drives, and I am swapping the stock fans for Noctua NF-A8 80 mm fans.

Hope these comments help

Yes, they did, thank you.

Hi @Jatrabari, for what it's worth, here's a review of the PS you are considering:

https://www.hardocp.com/article/2008/04/16/enermax_modu82_625w/3

From a quick scan of the conclusion it doesn't look bad... if your unit is older, it may have different caps... read the whole article to see if they say more. The TLDR; the parts are good, assembly may be a bit on the rough side, but unless you get a lemon, it should be ok.

I won't be using the old PSU with my new build. I posted my updated configuration a few posts ago but now have also updated the first post. I am getting a Seasonic Prime Ultra Titanium 650 W.

As for lifespan, case ventilation, working environment and parts quality are key. I went from 6 drives in a small case to 10 drives in a full tower case and my temps are much better in the full tower case <40C for the most part.

My old server is in a full tower case and I am going to use that for the new one.

Unless you are going to have noisy fans

I'm going all Noctua in this build so noise shouldn't be an issue.

Unless you are going to have noisy fans or the mounting has a lot of aluminum, the hot swaps may make the drives run hot, and the vents could let in a lot of dust unless they have a filter.

@Ericloewe What are your experiences regarding dust and temp with these Icydock cages?

Something to think about if you want it expand to 16 drives is vibration.

My max is now 10 and I can't see that I will go over that as my data requirements demand no more.

Rest of your stuff looks good, but I'm not an expert on those parts, but I'm sure there are lots on this forum who are.

Thanks a lot for the long replies.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
I have calculated my peak + idle power using mostly manufacturer's spec sheets to be 465 W for 10 drives.
Always good to design to the specs plus a margin of safety - I did as well. The reason my measured results are lower is because of (1) staggered start-up and (2) power-saving throttle-down when the CPU is not in use. At the current price of power, that's a good thing.

I am going with Seasonic for sure as to the hardware recommendations and also their great reputation and long warranty.
What PSU do you use at the moment?
Seasonic X-560 SS-560KM - running great since 2012.

Didn't find Addonics in my neck of the woods but this is more like to be available. I have their external enclosure in use and it has been great. Nice led effect around the enclosure.
I don't know but transiently hitting temps >= 40 C is maybe not catastrophic?
Icydock enclosure has a 80 mm fan for 3 drives and I am swapping the stock fans with Noctua's NF-A8 80 mm fan.
Yes, they did, thank you.
Experience is king - looking at it, I would have thought that it would run hotter. Are the drive connectors protected? If you are going to be doing frequent swaps, they should be. As for availability, I had to have it shipped at great inconvenience, but I believe it was worth it.

I won't be using the old PSU with my new build. I posted my updated configuration a few posts ago but now have also updated the first post. I am getting a Seasonic Prime Ultra Titanium 650 W.
My old server is in a full tower case and I am going to use that for the new one.
I'm going all Noctua in this build so noise shouldn't be an issue.
Good choices - Noctua fans are great. I have a couple, and will likely get more as I need to replace fans. Not cheap, but you get what you pay for, and their heat sinks are awesome - they pull a ton of heat with very minimal fan rotation. If you are OK with things running a bit hot (but still well within spec), you could ditch the fans in many applications. That power supply will be great. I'll bet if you put a watt meter on the input you won't be over 200W steady state. That will keep it cool, and it should last a very long time unless you get very unlucky and get a lemon (which can happen with any manufacturer).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Dust is not disastrous and temperatures are not a problem. Worst the drives have seen is ~36 degrees Celsius during the summer (with ambient temperatures in the high 20s).
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
Dust is not disastrous and temperatures are not a problem. Worst the drives have seen is ~36 degrees Celsius during the summer (with ambient temperatures in the high 20s).

Ok, thanks for the info. Are you using stock Icydock fans or something else?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Yeah, but I think their bearings may be deteriorating too much for my taste (I'll have to investigate that when I next clean the servers). I've looked at replacing them with similar San Ace models.
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
After almost 2.5 months of waiting, I finally got the final pieces for my build, pulled the trigger last week, and can now post my final configuration:

MB: Supermicro X10SRL-F
CPU: Intel Xeon E5-1650 V4 6-core 3.50GHz
Heatsink: Noctua NH-U9DX i4
Memory: 64 GB ECC (2x 32GB Crucial CT32G4RFD424A DDR4-2400, Micron part number MTA36ASF4G72PZ-2G3A1)
HBA: LSISAS9201-16i SAS card (will be flashed to IT firmware)
HDDs:
  • 4 TB WD Red x 6 as RAIDZ2 (primary pool) + tested cold spare
  • 6 TB WD Red x 2, first as a 2-way mirror + tested cold spare, later a 3-disk RAIDZ1 when expansion is needed (backup pool)
Case: Fractal Design R6 with 5 additional HDD trays
Power: Seasonic Prime Ultra Titanium 650W
Boot device: 2x Supermicro 32 GB SATA DOM mirrored

Still coming: removable HDD cage for off-site backup, Noctua fans and UPS

Got very lucky with the memory, as the price went up about 80 euros the day after purchase and supplies dried up almost everywhere except Amazon.

I will post a build log to this thread as I get along with the tinkering of my new FreeNAS.

At this point I want to thank everyone that has given me feedback and advice about my parts and configuration. I welcome more comments as I start actual assembly and post my progress here.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
later a 3-disk RAIDZ1 when expansion is needed
The standard guidance is not to use RAIDZ1 with disks larger than 1TB. You cannot move from a mirror pool to a RAIDZ pool without destroying the pool and starting over.
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
The standard guidance is not to use RAIDZ1 with disks larger than 1TB. You cannot move from a mirror pool to a RAIDZ pool without destroying the pool and starting over.

Yes I am aware of this guidance.

The 2-way mirror will suit my backup needs for a while, and I will have a cold spare for the backup pool (I also have one for the primary pool). When needed, I will convert it to a RAIDZ1 pool with a third drive to accommodate the primary pool's backup needs and just sync the primary pool back to the backup pool.
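The rebuild being described would look roughly like this; the pool and device names are hypothetical placeholders, and the destroy step is only safe because the primary pool still holds the data:

```shell
#!/bin/sh
# Sketch of rebuilding the backup pool from a 2-way mirror to 3-disk RAIDZ1.
# "backup", "tank", and the /dev/adaX names are hypothetical placeholders.

# Destroy the old mirror pool (everything on it is lost - the primary
# pool still holds the data, which is what makes this step safe)
zpool destroy backup

# Recreate as a 3-disk RAIDZ1: the two old drives plus the new one
zpool create backup raidz1 /dev/ada3 /dev/ada4 /dev/ada5

# Resync from the primary pool, e.g. via snapshot replication
zfs snapshot -r tank@resync
zfs send -R tank@resync | zfs receive -F backup/tank
```

This is a CLI fragment against a hypothetical layout, not a tested script; double-check device names before any `zpool destroy`.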

Of course, I will add the offsite backup in time as well.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yes I am aware of this guidance.
As long as you know...
The 2-way mirror will suit my backup needs for a while
- snip -
When needed I will convert it to a Raidz1 pool with a third drive to accommodate primary pool data backup needs and just sync the primary pool back to the backup pool.
You can't *convert* from one kind to another. I guess you get that, but it's not what you are saying.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

twinscroll

Dabbler
Joined
Mar 12, 2018
Messages
11
The backup pool is a weekly backup of all the data in the storage pool.
Normally the second NAS (currently offline) is synced with the primary NAS storage pool every hour so that if the primary was down for any reason I would lose access only to the most recent changes.
This normally gives me all of my data in three functionally separate pools, but it would be better to have one of them be offsite. I just don't have a good place to put it.

Chris, do you happen to have a write-up on your system configuration - like what you're doing there?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Chris, do you happen to have a write up on your system configuration - like what you're doing there.
I am not sure what you are looking for. I have a storage pool that houses all my documents, videos, and music and has around 50% free space. Then I have a backup pool that does nothing but back up the storage pool, and then there is my iSCSI pool that I use for housing block storage. I have a couple of 'drives' in there that are connected to other systems by iSCSI over a 10Gb network.
That is what I do at home and the hardware is undergoing some changes right now, so I don't really have it all nailed down but if you have specific questions I will do my best to answer.
I am a bit paranoid with protecting my data. That is probably why I ended up being a storage admin at work.
 

Jatrabari

Contributor
Joined
Sep 23, 2017
Messages
100
As long as you know...
You can't *convert* from one kind to another. I guess you get that, but it's not what you are saying.

Sent from my SAMSUNG-SGH-I537 using Tapatalk

Yes, convert is the wrong word, of course. As you said: destroy the mirror pool, start again, and create the RAIDZ1 pool.

Chris, do you happen to have a write-up on your system configuration - like what you're doing there?

@twinscroll There is one in his signature.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My backup pool (just tonight) saved me some pain because my wife mistakenly deleted an entire directory of files from the NAS and I was able to recover it in a matter of seconds from the online backup. It is a nice safety net.
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
My backup pool (just tonight) saved me some pain because my wife mistakenly deleted an entire directory of files from the NAS and I was able to recover it in a matter of seconds from the online backup. It is a nice safety net.

Not trying to hijack this thread, but how would one go about creating a "backup pool", perhaps on one disk, for taking offsite?

I have the equivalent of a RAID 10 running on 4x 2TB WD Reds and would like to take the data offsite.
 