FreeNAS Upgrade


Simon Sparks

Explorer
Joined
May 24, 2016
Messages
57
Hi Guys,

I am in the process of upgrading my FreeNAS system.

I currently have a Synology DS1813+ with the 2 x 5 Bay Expansion Modules and 18 x 3TB WD RED HDDs which I use for blu-rays and general data.
I currently have a DELL PowerEdge R510 with 14 x 120GB Samsung 840 EVOs and an INTEL X520-SR2 running FreeNAS 9.10.x which I use as an iSCSI target for some ESXi hosts in my home lab.

My plan is to migrate both of these systems to a brand-new solution.

To accomplish this I have bought the following components:

1 x DELL PowerEdge R810
4 x Intel Xeon E7-4870 ( 10 core hyper-threaded 2.4GHz, overkill but I intend to enable the highest level of compression )
32 x 16GB DDR3L PC-8500R 1.35v ( this is the maximum the server can handle with the processors that I have chosen )
2 x 16GB SanDisk Extreme Pro SD Cards ( to run FreeNAS on )
2 x INTEL X520-SR2
1 x LSI SAS HBA 9200-8e ( external dual port firmware P20 already in IT mode )
1 x LSI SAS HBA 9210-8i ( internal dual port firmware P20 flashed into IT mode )
1 x DELL PowerVault MD1220 ( 24 x 2.5" Drive Bays - for 12 x SATA SSDs for virtualisation and 12 x 1.2TB SAS for general data and virtualisation )
1 x DELL PowerVault MD1200 ( 12 x 3.5" Drive Bays - for 12 x WD RED 8TB HDDs - blu-ray storage )
2 x Lycom PCIe to NVMe Adapter Cards
2 x Samsung 950 PRO NVMe SSDs ( for L2ARC and ZIL )

After all the data has been migrated, I plan to sell the Synology DS1813+ and the 2 x 5 Bay Expansion Modules and re-purpose the DELL PowerEdge R510 as a backup target for my home lab, with the best 12 of the 18 x 3TB WD RED drives and a pair of SSDs in the 2 x internal 2.5" bays for the L2ARC and ZIL.

I would love to hear your thoughts...

My complete home lab can be seen on my blog http://www.vcoflow.co.uk/category/home-lab-series/
 
Joined
Feb 2, 2016
Messages
574
That's a more powerful NAS than I've seen in Fortune 500 corporate environments. We have 150 users and a dozen or so VMs on a FreeNAS server that's not even a tenth of what you are proposing for your home lab.

If you have the 12Gbps version of the MD1200, you may want to consider the 12Gbps LSI HBAs and disks instead of the 6Gbps versions you have picked out. Never too much overkill, right?

Cheers,
Matt
 

Simon Sparks

Explorer
Joined
May 24, 2016
Messages
57
The DELL PowerEdge R810 has the following PCIe slots:

5 x PCIe Gen2 x8 Slots
1 x PCIe Gen2 x4 Slot
1 x PCIe Gen2 x4 DELL Proprietary Storage Slot

Therefore I am limited to the 6Gbps SAS cards as the 12Gbps SAS cards all require PCIe Gen3.

Thanks for your thoughts though.
 

Simon Sparks

Explorer
Joined
May 24, 2016
Messages
57
How much of your "Kingston Digital HyperX Predator 240 GB PCIe Gen2 x4 (slog)" is actually being used?

Assuming 2 x 10GbE ports (and ignoring the 4 x 1GbE onboard ports), the calculations are as follows:

2 Ports X 10,000 Megabits/s = 20,000 Megabits/s

20,000 Megabits/s / ( 8 bits in a Byte ) = 2,500 Megabytes/s

The general rule of thumb is allow for 10 seconds of writes just in case there is a delay in writing to the spinning disks.

2,500 Megabytes/s X 10 seconds = 25,000 Megabytes in 10 seconds

Therefore the ZFS Intent Log or ZIL for a pair of 10GbE links should not need to be any bigger than:

25,000 Megabytes / ( 1024 Megabytes in a Gigabyte ) = 24.414 Gigabytes in 10 seconds
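
The same arithmetic as a small Python sketch (theoretical line rate only; real traffic will be lower):

```python
# SLOG sizing using the "10 seconds of writes" rule of thumb above.
# Theoretical line rate of the 2 x 10GbE ports only; the 1GbE ports
# are ignored, as in the calculation above.

ports = 2
line_rate_mbit_s = 10_000                       # per 10GbE port

total_mbit_s = ports * line_rate_mbit_s         # 20,000 Mb/s
total_mbyte_s = total_mbit_s / 8                # 2,500 MB/s

seconds_buffered = 10                           # rule-of-thumb write buffer
slog_mbyte = total_mbyte_s * seconds_buffered   # 25,000 MB
slog_gbyte = slog_mbyte / 1024                  # ~24.4 GB

print(f"SLOG needs roughly {slog_gbyte:.1f} GB")
```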
 
Joined
Feb 2, 2016
Messages
574
How much of your "Kingston Digital HyperX Predator 240 GB PCIe Gen2 x4 (slog)" is actually being used?

Not much at all. We use it in front of the SSDs which are VM storage. Our VMs don't write much; bulk data which the VMs process is stored on the conventional disks. While there is a theoretical improvement, I'm not sure there is much practical improvement.

Since my signature was created, we dropped the L2ARC, too. With or without the L2ARC, our ARC hit ratio is 95% and we think we're better off having more ARC and less L2ARC. Our L2ARC hit rate was in the 20% range.
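
Putting those figures together shows how little the L2ARC was contributing overall (a quick sketch using the numbers quoted above):

```python
# Combined cache hit rate from the figures above: the ARC serves 95%
# of reads, and of the 5% that miss, the L2ARC caught only ~20%.
arc_hit = 0.95
l2arc_hit = 0.20

combined = arc_hit + (1 - arc_hit) * l2arc_hit
print(f"Combined hit rate with L2ARC: {combined:.1%}")   # ~96%
# Only about 1% of all reads were served from the L2ARC, while its
# headers still consumed RAM that could otherwise have held more ARC.
```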

I keep thinking we'll move the Kingston from in front of the SSDs to in front of the conventional disks serving the bulk of our data. Though, performance is really good as-is so I'm not inclined to move it around unnecessarily. Toss enough conventional spindles at a performance problem and, except in some specific edge cases, it's fast enough. VMs are a case where SSDs shine. For most other stuff, the drives aren't the bottleneck. At least in the small and medium business environments I've seen.

Cheers,
Matt
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Samsung 950 PRO NVMe SSDs ( for L2ARC and ZIL )

is not a suitable drive for SLOG. It has no power-loss protection. Have a look at one of the Intel PCIe NVMe SSDs; the AIC versions won't require the Lycom adapter.

Also, the Lycom adapter doesn't have any heatsinks. If you're going for overkill, maybe you should consider the Angelbird adapter.
http://www.angelbird.com/en/prod/wings-px1-1117/

I believe you can use PCIe3 devices in a PCIe2 slot. Maximum disk bandwidth will be reduced by about 50%.

Maybe it's worth working out whether you really need/want the E7s and their PCIe Gen2, when it seems like the actual bottleneck would be PCIe bandwidth rather than core count.
 


Simon Sparks

Explorer
Joined
May 24, 2016
Messages
57
is not a suitable drive for SLOG. It has no power-loss protection.

I have a "VERY LARGE" APC Smart-UPS looking after all of my home lab servers and networking and all the kit it monitored and has dual power supplies, which is why I opted for the slightly cheaper option of the Samsung PROs instead of the Samsung Data Center versions.

I believe you can use PCIe3 devices in a PCIe2 slot. Maximum disk bandwidth will be reduced by about 50%.

Yes, PCIe Gen 3 cards work in PCIe Gen 2 slots, but as you said it is somewhat pointless: the per-lane bandwidth of a PCIe Gen 2 slot is 500MB/s versus 984.6MB/s for PCIe Gen 3, so it would just mean spending money on a faster SAS HBA that could not be used to its full potential.
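
For reference, a rough Python sketch of the slot bandwidths involved (theoretical one-way figures; real throughput will be somewhat lower):

```python
# Per-slot bandwidth comparison behind the 6Gbps-vs-12Gbps HBA decision.
GEN2_LANE_MB_S = 500.0    # PCIe Gen2: 5 GT/s with 8b/10b encoding
GEN3_LANE_MB_S = 984.6    # PCIe Gen3: 8 GT/s with 128b/130b encoding

def slot_bandwidth_mb_s(per_lane: float, lanes: int) -> float:
    """Theoretical one-way slot bandwidth in MB/s."""
    return per_lane * lanes

gen2_x8 = slot_bandwidth_mb_s(GEN2_LANE_MB_S, 8)   # ~4,000 MB/s
gen3_x8 = slot_bandwidth_mb_s(GEN3_LANE_MB_S, 8)   # ~7,877 MB/s

print(f"PCIe Gen2 x8: {gen2_x8:,.0f} MB/s")
print(f"PCIe Gen3 x8: {gen3_x8:,.0f} MB/s")
# A 12Gbps SAS HBA dropped into one of the R810's Gen2 x8 slots would
# be capped at ~4 GB/s, roughly half of what it could do in a Gen3 slot.
```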

If you're going for overkill, maybe you should consider the Angelbird adapter. http://www.angelbird.com/en/prod/wings-px1-1117/

I wish I had found these sooner; they look awesome compared to the Lycom. I wish they would make a card that takes 2 x PCIe x4 NVMe SSDs and goes in a PCIe x8 slot.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I have a "VERY LARGE" APC Smart-UPS looking after all of my home lab servers and networking and all the kit it monitored and has dual power supplies, which is why I opted for the slightly cheaper option of the Samsung PROs instead of the Samsung Data Center versions.
That's very nice and does exactly zero to protect you against data corruption caused by kernel panics, power loss from a problem downstream of the UPS, the UPS shutting down after detecting a short circuit, or any number of other scenarios.

At that point, you might as well set sync=disabled.
I wish I had found these sooner; they look awesome compared to the Lycom. I wish they would make a card that takes 2 x PCIe x4 NVMe SSDs and goes in a PCIe x8 slot.
That would need a PCI-e switch for guaranteed compatibility with all motherboards, which adds considerable cost. Some motherboards support hacky distributions of PCI-e lanes from single ports, but this is far from standardized.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I wish I had found these sooner; they look awesome compared to the Lycom. I wish they would make a card that takes 2 x PCIe x4 NVMe SSDs and goes in a PCIe x8 slot.

You mean like this?

http://amfeltec.com/products/pci-express-gen-3-carrier-board-for-2-m-2-ssd-modules/

And here's the 4x 4x = 16x version

http://amfeltec.com/products/pci-express-gen-3-carrier-board-for-4-m-2-ssd-modules/

This is why I have a 16x slot in my server ;)

And yes, these use PCIe switches, so they basically work everywhere.
 
Joined
Feb 2, 2016
Messages
574
Those are sweet, @Stux. Any hint as to pricing? I did a quick search and they seem only to be available through the manufacturer and with an 'email to order'.

Cheers,
Matt
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Those are sweet, @Stux. Any hint as to pricing? I did a quick search and they seem only to be available through the manufacturer and with an 'email to order'.

Cheers,
Matt

The previous generation had a lot of 'airplay' on barefeats.com (a mac 'overclocking' website).

http://barefeats.com/hard210.html

(Those benchmarks are with relatively old M.2 SSDs (SM951); I'd expect up to 14GB/s with the latest gen.)

I'm fairly certain in various mac forum threads and such they mentioned pricing for the older PCIe2 generation. The PCIe3 generation came out (was press-released) in July, so is still new, and I don't know pricing.

The previous gen boards are/were all the rage for upgrading Classic Mac Pros to stupid-fast ;)

128GB of DDR3, 24 logical cores at 3.4GHz, and 6GB/s storage makes for a fairly relevant machine for something from 2009.

(I have a similar machine)
 
Joined
Feb 2, 2016
Messages
574
That is insanely fast (and expensive - bare card seems to be $700?). I'd have no use for it in FreeNAS but, in a computer we use for video editing here at the office, it wouldn't be excessive. Can't wait to see the pricing on the new version. Thanks for the information.

Cheers,
Matt
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
That is insanely fast (and expensive - bare card seems to be $700?). I'd have no use for it in FreeNAS but, in a computer we use for video editing here at the office, it wouldn't be excessive. Can't wait to see the pricing on the new version. Thanks for the information.

Cheers,
Matt

Yes, crazy expensive, but you could now achieve 8TB in a single slot and maybe 14GB/s.
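
Rough numbers behind that, assuming the 4 x M.2 carrier is populated with hypothetical 2TB modules at around 3.5GB/s sequential each:

```python
# Where "8TB in a single slot and maybe 14GB/s" comes from, assuming
# four hypothetical 2TB M.2 modules at ~3.5 GB/s sequential each.
modules = 4
capacity_tb = 2.0
seq_gb_s = 3.5

total_tb = modules * capacity_tb        # 8 TB in one x16 slot
total_gb_s = modules * seq_gb_s         # ~14 GB/s striped across them

print(f"~{total_tb:.0f} TB, up to ~{total_gb_s:.0f} GB/s aggregate")
# A PCIe3 x16 slot tops out around 15.75 GB/s one-way, so the slot
# itself still has a little headroom at that rate.
```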
 

Simon Sparks

Explorer
Joined
May 24, 2016
Messages
57
Just an update to my storage infrastructure: I picked up a Cisco DS-C9148-48P-K9 48-Port (48-Active) Multilayer 8Gb Fibre Channel Switch with 48 x 8Gb FC SFP+ optics on eBay for £610 GBP with a 12-month warranty ( http://www.ebay.co.uk/itm/282391318004 ). Looking around, I found that QLogic QLE2564 quad-port 8Gb Fibre Channel cards were ridiculously cheap at £50 GBP each, so I bought one for each of my FreeNAS boxes, one for each of my ESXi hosts, and another for my KODI box, which accesses the FreeNAS box to stream raw blu-rays: 11 in total. I also picked up a load of brand new OM3 fibre optic cables for pennies on eBay, and now I am in the process of migrating my ESXi LUNs from a dual-port 10Gb iSCSI solution to a quad-port 8Gb Fibre Channel solution.

FreeNAS - Backup

1 x DELL PowerEdge R510 with 2 x Intel Xeon X5675 ( 6 core hyper-threaded 3.06GHz, overkill but I intend to enable the highest level of compression )
8 x 16GB DDR3 PC-10600R 1.5v ( this is the maximum the server can handle with the processors that I have chosen )
2 x 8GB SanDisk Cruzer Fit USB Sticks ( to run FreeNAS on )
2 x INTEL X520-SR2
1 x QLogic QLE2564 ( quad port 8Gb fibre channel card )
1 x DELL PERC H200 cross-flashed to LSI SAS HBA 9210-8i ( internal dual port, firmware P20 in IT mode )
12 x WD RED 3TB HDDs - backup storage in 2 x RAID-Z2 vDevs each containing 6 HDDs
2 x Intel DC S3710 400GB SATA III 2.5" SSD ( for L2ARC 752GB STRIPE and ZIL 24GB MIRROR )

FreeNAS - Primary

1 x DELL PowerEdge R810
4 x Intel Xeon E7-4870 ( 10 core hyper-threaded 2.4GHz, overkill but I intend to enable the highest level of compression )
32 x 16GB DDR3L PC-8500R 1.35v ( this is the maximum the server can handle with the processors that I have chosen )
2 x 16GB SanDisk Extreme Pro SD Cards ( to run FreeNAS on )
2 x INTEL X520-SR2
1 x QLogic QLE2564 ( quad port 8Gb fibre channel card )
1 x DELL PERC H200 cross-flashed to LSI SAS HBA 9210-8i ( internal dual port, firmware P20 in IT mode ) <-- Using this to run the 6 x 2.5" internal drive bays
6 x Samsung PM863 1.9TB 2.5" SSDs in 2 x RAID-Z1 vDevs each containing 3 SSDs - for Virtualisation Gold Tier
1 x LSI SAS HBA 9200-8e ( external dual port firmware P20 already in IT mode )
1 x DELL PowerVault MD1220 ( 24 x 2.5" Drive Bays - for 24 x 1.2TB SAS in 4 x RAID-Z1 vDevs each containing 6 HDDs - for general data and Virtualisation Silver Tier )
1 x DELL PowerVault MD1200 ( 12 x 3.5" Drive Bays - for 12 x WD GOLD 10TB HDDs in 2 x RAID-Z2 vDevs each containing 6 HDDs - blu-ray storage )

2 x Lycom PCIe to NVMe Adapter Cards <-- Planning on Removing in favor of a pair of Intel PCIe NVMe Cards
2 x Samsung 950 PRO 512GB NVMe SSDs ( for L2ARC 976GB STRIPE and ZIL 24GB MIRROR ) <-- Planning on Removing in favor of a pair of Intel PCIe NVMe Cards
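
For reference, a rough sketch of the parity-adjusted capacity of the vdev layouts listed above (an approximation only; metadata, padding, and the slop reservation will reduce the real usable space):

```python
# Approximate parity-adjusted capacity for each pool layout listed above.
# This ignores metadata, allocation padding and ZFS's slop reservation,
# so real usable space will come in somewhat lower.

def raidz_capacity_tb(vdevs: int, drives_per_vdev: int,
                      drive_tb: float, parity: int) -> float:
    """Data capacity of a pool built from identical RAID-Z vdevs."""
    return vdevs * (drives_per_vdev - parity) * drive_tb

pools = {
    "Backup  (2 x RAIDZ2 of 6 x 3TB)":   raidz_capacity_tb(2, 6, 3.0, 2),
    "Gold    (2 x RAIDZ1 of 3 x 1.9TB)": raidz_capacity_tb(2, 3, 1.9, 1),
    "Silver  (4 x RAIDZ1 of 6 x 1.2TB)": raidz_capacity_tb(4, 6, 1.2, 1),
    "Blu-ray (2 x RAIDZ2 of 6 x 10TB)":  raidz_capacity_tb(2, 6, 10.0, 2),
}

for name, tb in pools.items():
    print(f"{name}: ~{tb:.1f} TB before metadata/overhead")
```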
 