Back to FreeNAS! Help with topology

Joined
Dec 30, 2014
Messages
45
Hi friends, it's been a long time!

I've been away from FreeNAS for a while because I wasn't able to include it in my projects, but now I have two servers: one for production and one for backup!

I need some help setting up my production topology. This server will provide the data area for a staging environment.

So, on to the specs:

The current config:
Dell PowerEdge 2900 (really old school, but with 2 CPUs and 2 hot-swap power supplies)
16GB Hynix FB-DIMM
2 onboard gigabit NICs

Out:
PERC 5/i
Original SAS cable
4x 300GB Seagate SAS drives

In:
16GB Hynix FB-DIMM
Intel dual gigabit NIC
Dell LSI SAS 9207-8i 6GB/s SAS/SATA PCI-E Host Bus Adapter HBA 4TMJF
Mini SAS cable, SFF-8087 (36-pin) to SFF-8484 (32-pin)
6x 4TB WD Red SATA disks

RAIDZ2, no separate SLOG device
VLAN for iSCSI traffic

It will be consumed by a VMware environment!
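Back-of-the-envelope, here is roughly what I expect that layout to give me (my own rough numbers, in decimal TB, ignoring ZFS metadata and padding overhead):

```python
# Rough usable-capacity estimate for the planned 6 x 4TB RAIDZ2 pool.
drives = 6
drive_tb = 4        # WD Red 4TB
parity = 2          # RAIDZ2 reserves two drives' worth of capacity for parity

raw_tb = drives * drive_tb
usable_tb = (drives - parity) * drive_tb
print(f"raw: {raw_tb} TB, usable before ZFS overhead: ~{usable_tb} TB")
# -> raw: 24 TB, usable before ZFS overhead: ~16 TB
```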

What do you think, guys?

Thank you!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It will be consumed by a VMware environment!
I don't imagine that will perform very well. If you are sharing iSCSI to VMware, you will likely need sync writes, and that means you need a SLOG device with Power Loss Protection (PLP), probably a PCIe NVMe drive like the Intel DC P3700 series card. You would also want a fast L2ARC, but for that you could probably get away with something like a set of Samsung 960 Evos in a stripe.
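As a rough sense of scale (rule-of-thumb numbers of my own, nothing exact): the SLOG only ever holds a few seconds' worth of incoming sync writes, so capacity is almost irrelevant; what matters is write latency and PLP. Assuming your two gigabit NICs are the ingest ceiling and the default transaction group interval of roughly five seconds:

```python
# Rough upper bound on how much data the SLOG holds at any one time (assumptions above).
nic_count = 2
nic_mb_per_s = 125        # ~1 Gbit/s expressed in MB/s
txg_seconds = 5           # approximate default ZFS transaction group interval
txgs_in_flight = 2        # a couple of transaction groups is a common rule of thumb

slog_gb = nic_count * nic_mb_per_s * txg_seconds * txgs_in_flight / 1000
print(f"~{slog_gb:.1f} GB of SLOG space is ever in use")   # -> ~2.5 GB
```

So even a small device is plenty, as long as it is fast and has power loss protection.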
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Not nearly enough RAM.
Not nearly enough drives/vdevs.
No SLOG (you have a ZIL, that's in-pool)

And the 2900 is really pretty long in the tooth. Still capable, but getting old.
 
Joined
Dec 30, 2014
Messages
45
I don't imagine that will perform very well. If you are sharing iSCSI to VMware, you will likely need sync writes, and that means you need a SLOG device with Power Loss Protection (PLP), probably a PCIe NVMe drive like the Intel DC P3700 series card. You would also want a fast L2ARC, but for that you could probably get away with something like a set of Samsung 960 Evos in a stripe.

Thank you! I will run a maximum of 7 virtual servers in a staging environment. What do you suggest for a SLOG?

Not nearly enough RAM.
Not nearly enough drives/vdevs.
No SLOG (you have a ZIL, that's in-pool)

And the 2900 is really pretty long in the tooth. Still capable, but getting old.

I'm going to run some tests and analyze the response time. I will redesign the server. One question: Will this HBA solve my problem?
> Dell LSI SAS 9207-8i 6GB/s SAS/SATA PCI-E Host Bus Adapter HBA 4TMJF
 
Joined
Dec 30, 2014
Messages
45
I don't imagine that will perform very well. If you are sharing iSCSI to VMware, you will likely need sync writes, and that means you need a SLOG device with Power Loss Protection (PLP), probably a PCIe NVMe drive like the Intel DC P3700 series card. You would also want a fast L2ARC, but for that you could probably get away with something like a set of Samsung 960 Evos in a stripe.

The Intel DC P3700 is very expensive here; I'm in Brazil. Can you suggest another one, similar but less expensive? :-/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
There are ways to cut the cost, but it will reduce performance. You have an older system with limited RAM capacity, so that is going to be a bit of a limitation.
I will look up some options once I get to my office.
I would suggest using the drives in mirror sets to increase the vdev count. That will help increase the IOPS your system can get, but it will still only be around 300.
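Very roughly, assuming something like 100 random IOPS per spinning-drive vdev (a mirror vdev writes about as fast as a single disk):

```python
# Ballpark random-write IOPS for 6 drives arranged as striped mirrors.
drives = 6
drives_per_mirror = 2
iops_per_vdev = 100       # rough figure for one 5400/7200 RPM drive

vdevs = drives // drives_per_mirror
print(f"{vdevs} mirror vdevs -> roughly {vdevs * iops_per_vdev} random write IOPS")
# -> 3 mirror vdevs -> roughly 300 random write IOPS
```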

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. What is the reason for the 4TB drives?
How much storage do you require?

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 
Joined
Dec 30, 2014
Messages
45
I would suggest using the drives in mirror sets to increase the vdev count. That will help increase the IOPS your system can get, but it will still only be around 300.

Sent from my SAMSUNG-SGH-I537 using Tapatalk

I didn't understand the "drives in mirror sets" part, sorry.

There are ways to cut the cost, but it will reduce performance. You have an older system with limited RAM capacity, so that is going to be a bit of a limitation.
I will look up some options once I get to my office.
I would suggest using the drives in mirror sets to increase the vdev count. That will help increase the IOPS your system can get, but it will still only be around 300.

Sent from my SAMSUNG-SGH-I537 using Tapatalk

Well, I need 10TB net (usable). I chose to use only 6 of the server bays, in case I eventually need to connect 2 SSDs.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I didn't understand the "drives in mirror sets" part, sorry.



Well, I need 10TB net (usable). I chose to use only 6 of the server bays, in case I eventually need to connect 2 SSDs.
HDDs = drives. Does that chassis have 5.25 inch drive bays?

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 
Joined
Dec 30, 2014
Messages
45
HDDs = drives. Does that chassis have 5.25 inch drive bays?

Sent from my SAMSUNG-SGH-I537 using Tapatalk

What do you think about the Lenovo M1215 instead of the Dell LSI SAS 9207-8i 6GB/s SAS/SATA PCI-E Host Bus Adapter HBA 4TMJF?

I can get the M1215 now; the 9207 I would need to buy on eBay.

Thanks!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The M1215 is an LSI3008, which should work fine. However, just replacing the HBA isn't going to fix your problems.

You say you need 10TB of usable storage. Because of the way ZFS works, you never want to exceed 50% capacity on your pool. If you're only intending to use 6 drives in striped mirrors (RAID-Z is a non-starter for iSCSI use), you would need to run, at minimum, 8TB drives.
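The arithmetic, roughly (decimal TB, ZFS overhead ignored):

```python
# Usable space for 6 drives as striped mirrors, keeping the pool at or below 50% full.
def usable_tb(drive_tb, drives=6, drives_per_mirror=2, max_fill=0.5):
    pool_tb = (drives // drives_per_mirror) * drive_tb   # one drive's capacity per mirror vdev
    return pool_tb * max_fill

print(usable_tb(4))   # 4TB drives -> 6.0 TB, well short of the 10TB target
print(usable_tb(8))   # 8TB drives -> 12.0 TB, comfortably over 10TB
```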

I'm running an older Intel S3700 DC 2.5" SATA drive for my SLOG. It works, but I know it could be better. If you look, you can find these on eBay with low hours/drive writes from time to time. This might be a cheaper solution for you, but you must understand the limitations.
 
Joined
Dec 30, 2014
Messages
45
The M1215 is an LSI3008, which should work fine. However, just replacing the HBA isn't going to fix your problems.

You say you need 10TB of usable storage. Because of the way ZFS works, you never want to exceed 50% capacity on your pool. If you're only intending to use 6 drives in striped mirrors (RAID-Z is a non-starter for iSCSI use), you would need to run, at minimum, 8TB drives.

I'm running an older Intel S3700 DC 2.5" SATA drive for my SLOG. It works, but I know it could be better. If you look, you can find these on eBay with low hours/drive writes from time to time. This might be a cheaper solution for you, but you must understand the limitations.

I will go back and study my topology for now.
Thank you!!!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
OK, but mirror the HDDs? Using the controller for this?
I am not talking about using hardware RAID to mirror the drives. I am saying to use ZFS to set the storage drives in a striped mirror.
You could use something like this:
https://www.neweggbusiness.com/product/product.aspx?item=9b-17-994-174
to put SSD drives in the 5.25 inch drive bays and dedicate all the 3.5 inch bays to storage drives.
If you did that, you could use 8 drives of 6TB each for your storage pool (in striped mirrors) and still have the 10TB of usable space that you need. Then put your SSDs for SLOG and L2ARC and boot drives in the IcyDock bays. Total drive count would be higher, but it might perform better.
@tvsjr, what do you think?
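Roughly how that works out, using the same 50%-full guideline (my quick math in decimal TB):

```python
# 8 x 6TB drives as striped mirrors: pool capacity and vdev count.
drives, drive_tb = 8, 6
mirror_vdevs = drives // 2            # 4 mirror vdevs instead of 3 -> a bit more IOPS
pool_tb = mirror_vdevs * drive_tb     # 24 TB of pool capacity
usable_tb = pool_tb * 0.5             # stay at or below 50% full for iSCSI use
print(mirror_vdevs, pool_tb, usable_tb)   # -> 4 24 12.0
```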
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
That would work... although honestly I wouldn't put a ton of money into a PE2900 at this point. If you can make it work without much money as a test platform, that's one thing... but dumping a ton of money into a box that old makes little sense.
 