FreeNAS for iSCSI - VMware ESXi storage


tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
If you’re really concerned about IOPS, use enterprise SSDs to build the array and don’t bother with SLOG

E.g., a mirrored P4800X will probably have nearly a million IOPS (I didn't bother looking up the specs, but they're about as good as you can get).
That's exactly what I intend to do... when they slide the decimal point one digit to the left on all those enterprise SSD prices! I've got 4 PCIe slots ready, as soon as I can afford it.
 

curruscanis

Dabbler
Joined
Feb 15, 2018
Messages
17
Re SLOG: these days the P900 is better than the P3700, BUT you need to check whether it's compatible with ESXi yet.

The "I don't care about the cost" option is the P4800X.

Stux, thank you for the additional input. The P900 is certainly a better price point ($388.00 vs. $660.00 for the P3700); that would mean two could be purchased, one each for SLOG and L2ARC if needed.

I don't know whether ESXi compatibility matters here, as I would be building a separate FreeNAS box and presenting the iSCSI targets to the ESXi hosts.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
We need to take 10 giant steps back. Let's help build an engineered solution and not just max out the budget.
My specific goal for a new FreeNAS is that of a "SAN" for connecting multiple VMware ESXi hosts.
How many hosts, and what will the storage network look like? With iSCSI I would guess it's switched, but is that on dedicated and redundant switches? Have you thought about CoS or VLAN configurations? Also keep in mind that for high IO and ultra-low latency you may not want jumbo frames, especially if your DB transactions are smaller than 9KB.
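If you do try jumbo frames, verify the MTU end to end from each ESXi host before blaming the SAN. A minimal check from the ESXi shell, assuming a hypothetical FreeNAS portal IP:

    # Send an 8972-byte payload with don't-fragment set
    # (8972 bytes of payload + 28 bytes of headers = 9000-byte MTU)
    vmkping -d -s 8972 10.0.10.50

If that fails while a plain vmkping succeeds, some hop (vSwitch, physical switch, or the FreeNAS NIC) isn't set to 9000.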

VM counts to be in the 10-20 range varied applications from light Web server use / SQL / AD / File Print Services
If you're only hosting 10-20 VMs (per host, or total?), how much space do you really need? Perhaps you would benefit from multiple zpools for your datastores, with different vdev layouts: e.g., one pool of mirrored stripes with a battery-backed SSD SLOG for the DBs, and a parity pool (RAIDZ1/2, the rough analog of RAID 5/6) for the non-IO-intensive workloads, as sketched below. This may impact your drive selection significantly and save/cost $$$. You could also use resource pools to let your hypervisor manage IO resources during times of contention. Don't forget, resource contention is not a bad thing; it means you didn't over-spec your systems. Resource sharing and management is what VM consolidation is all about.
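A minimal sketch of that two-pool layout from the shell, with hypothetical device names (da0-da9 for spinning disks, nvd0/nvd1 for NVMe SLOG devices); in FreeNAS you would normally build the same thing through the web UI:

    # Fast pool for the DB datastores: striped mirrors plus a mirrored SLOG
    zpool create fastpool \
        mirror da0 da1 \
        mirror da2 da3 \
        log mirror nvd0 nvd1

    # Bulk pool for the non-IO-intensive VMs: RAIDZ2 (roughly RAID 6)
    zpool create bulkpool raidz2 da4 da5 da6 da7 da8 da9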

The big thing here, and overall, is that you NEED to profile your applications and design your storage system to fit. Personally, I would spec out IOPS based on the raw disk capability of a given setup and give yourself room for two years' growth plus 20%. Once you add in the ARC, L2ARC (if needed), and SLOG, you should be quite comfortable. With ZFS this can be a bit tricky, but benchmarking with Bonnie++ can help determine whether you hit your IOPS targets with real-world workloads.
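For example, a basic Bonnie++ run against a test dataset (the path is hypothetical); size the test file to roughly twice your RAM so the ARC can't absorb the whole working set:

    # -s: test file size (~2x RAM), -n 0: skip the small-file tests
    bonnie++ -d /mnt/fastpool/bench -s 128g -n 0 -u root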

We can't tell you anything about your needs: I have VMs that consume 50k IOPS during normal use (all day) and ones that use next to nothing. Even telling us it's a DB says very little, as we don't know anything about the DB, how it's used, or the number of users.

2 x 10GB SFP+ networking for redundant iSCSI path
Please consider using two single-port cards, or, for the small cost difference, two dual-port cards with each card connected to both switches. This prevents any one card, cable, or switch failure from causing your environment to experience a PDL (permanent device loss).

Also keep in mind that ESXi will NOT saturate multiple links even if you play with setting MPIO to use round robin per 100 IOs (the exact knob is sketched below). This CAN (but does not always) help with IOPS, but it won't do much for throughput.
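For reference, that per-IO round-robin setting looks like this from the ESXi shell; the device identifier below is hypothetical, and the commonly tested value is 1 IO per path switch rather than the default 1000:

    # Confirm the device is using the round-robin path selection policy
    esxcli storage nmp device list

    # Switch paths every IO instead of every 1000 IOs
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device=naa.600140530d65... --type=iops --iops=1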

There are a million other points that should be brought up when designing a proper SAN for a production environment, and that's why Dell, IBM, and EMC demand such a premium. I hate directing people to buy "prefab" solutions, but it may be worth consulting with iXsystems: you will have a validated solution and support when things go south.

[edit] I also wanted to mention that dual-port SAS disks are essential if you need the system to never go down. You would also need two disk controllers and a backplane that supports multipathing; without those, if a SAS controller goes wonky for some reason (it happens), you lose all of your disks and you're SOL. My understanding is (and I'll admit I'm a bit fuzzy here) that FreeNAS will automatically manage DAS multipathing via GEOM multipath as long as the disks are local (connected via SAS) and correctly detected by camcontrol. This is something SATA disks can NOT do without SATA-to-SAS interposer cards.
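For what it's worth, a quick way to sanity-check that from the FreeNAS shell (output will vary with your hardware):

    # List every device each HBA sees; a dual-ported SAS disk wired to
    # two controllers should show up on two separate buses
    camcontrol devlist

    # Show the state of any GEOM multipath devices
    gmultipath status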
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Just curious, how does FreeNAS compare, IOPS-wise, to Windows Server or other purpose-built SAN devices with comparable hardware specs? I know that FreeNAS offers lots of advantages over other storage solutions. How does it compete with regard to IOPS performance?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just curious, how does FreeNAS compare, IOPS-wise, to Windows Server or other purpose-built SAN devices with comparable hardware specs? I know that FreeNAS offers lots of advantages over other storage solutions. How does it compete with regard to IOPS performance?
It depends on the way you build it.

 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
That makes sense. With an equal amount of money invested in hardware (RAM, drives, processor, etc.), and built well in each case, which solution (FreeNAS, Windows Server, or a commercial SAN) do you think would win IOPS-wise?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That makes sense. With an equal amount of money invested in hardware (RAM, drives, processor, etc.), and built well in each case, which solution (FreeNAS, Windows Server, or a commercial SAN) do you think would win IOPS-wise?
Windows Server licenses are very expensive, and that cost could be dedicated to hardware to improve IOPS in FreeNAS. I think (without having made any direct comparison) that a properly configured FreeNAS system would beat a Windows server if you had budgets capped at the same dollar amount, but you would need to spend wisely. As for a commercial SAN solution, I am pricing out solutions at work and have quotes on my desk from iXsystems, HP, and Dell that range between $98k and $157k for around 500TB of usable storage. I didn't even bother contacting companies like NetApp (netapp.com), Panasas (panasas.com), or Sun/Oracle because those would be well outside our budget. It really depends on your goals. What do you want your storage to do for you?
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Chris,

Thanks, this is very helpful. I know that any piece of hardware/software has a limited number of core competencies, i.e. areas in which it really shines. The same hardware/software may be able to perform in a much longer list of applications, though it may not necessarily be the best choice in those cases. I'm trying to determine whether FreeNAS can do iSCSI well, or whether it can do it better than any other operating system can. The reasons I wonder whether it might not be the best solution are the recommendation to limit storage to 50% of the volume, the recommendation to use mirrored-pair vdevs (instead of RAIDZ1, 2, or 3), the fairly high RAM recommendations, and the need in some cases for ARC, L2ARC, and SLOG devices. To be fair, I'm not certain whether other operating systems have a similar set of recommendations that apply as well.

To be fair, I know that not all applications require extraordinarily high IOPS.

In an application we're working on, we need to replace an old HP ProLiant 320s server. It has a single processor, runs Windows Storage Server, and has about 8GB of RAM. It has 12 drive bays, with 6 in use at this time. The first two drive bays hold the OS in a RAID 1 configuration; the remaining 4 drives contain the iSCSI array in RAID 5. It currently serves as iSCSI storage for about 3 virtual machines (VMware). Performance, while not great, has not been an issue. We need to replace this with storage for about 6 VMs and 8TB of total usable storage.
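For what it's worth, the 50%-occupancy rule of thumb mentioned earlier makes the raw-capacity math easy to sketch (the 4TB drive size is just an assumption):

    8 TB usable iSCSI        -> 16 TB pool (keep block storage under ~50% full)
    16 TB of mirrored pairs  -> 32 TB raw -> e.g. 8 x 4 TB drives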
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The fact that ZFS is a copy-on-write filesystem and creates checksum data for every write introduces a good bit of overhead. It makes the system more reliable in the end, but you do have to build the system "stronger" to get the same performance you might get from a lesser system using some other file system. Because of ZFS, FreeNAS is not always the answer. I love it, but you may be able to create a solution with less time and money invested using a different platform.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Reliability is key to my purchasing of any hardware, with the noted exception of not going with a major manufacturer's system...
Did you get your questions answered?
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Yes, sure did. Thanks. I think that your summary makes sense. The fact that there are some benefits to the FreeNAS solution helps to justify the additional system requirements for the same performance. On the other hand, I wonder if a FreeNAS solution requires less processor capability, which might allow a single-processor server to do what would otherwise require a dual-processor configuration.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
On the other hand, I wonder if a FreeNAS solution requires less processor capability, which might allow a single-processor server to do what would otherwise require a dual-processor configuration.
It has been my observation that FreeNAS does require some level of compute capacity, because it computes checksum values for every chunk of data stored and, by default, compresses data written to disk. The CPU on many servers is underutilized because much of the actual work is handed off to a hardware RAID controller, but FreeNAS does that work using the CPU and RAM. The cost savings with FreeNAS come from the fact that you do not need to pay for the operating system and you don't need that fancy hardware RAID controller.

Then there is the greater security that comes from knowing that FreeNAS checks every read against the checksum, to ensure that what is read back is the same thing that was stored. I spent years using and managing regular Windows servers with hardware RAID, and they can (and do) get the job done, but there are many benefits to ZFS. I especially like that I can take a snapshot of the current state of the file system whenever I want (on a schedule) and then mount that snapshot any time I want to go back to the configuration at that time. It is like having a backup that only costs me the space on disk to hold the amount of change since the snapshot. I can also take that snapshot and do a ZFS send/receive with another server running ZFS, and have an exact "moment in time" copy of what was going on when the snapshot was made (a minimal sketch is at the end of this post). There are literally books on all the features of ZFS, and as wonderful as I think it is, it may not always be the solution.

If you only need 8TB of storage for iSCSI to run 6 VMs, you might come out better purchasing a QNAP system. We have some of the rack-mount QNAP systems where I work, and they are real servers, right down to the Xeon CPU. They just have a custom version of Linux running the web UI and managing all the interfaces.
There is also the option of getting something like a FreeNAS Certified system from iXsystems. I believe that they can configure it to spec. Have you looked at those? Their sales team will talk to you about your needs and come up with a solution to fit.
http://www.freenas.org/freenas-certified-servers/
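The snapshot and send/receive workflow mentioned above, as a minimal sketch; the pool, dataset, and host names are hypothetical:

    # Take a point-in-time snapshot of the VM dataset
    zfs snapshot tank/vms@2018-04-01

    # Browse it read-only via the hidden .zfs directory
    ls /mnt/tank/vms/.zfs/snapshot/2018-04-01

    # Replicate it to a second ZFS server over SSH (-u: don't mount on receive)
    zfs send tank/vms@2018-04-01 | ssh backuphost zfs receive -u backup/vms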
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Chris,

Thanks for the excellent feedback, and for the positive comments concerning QNAP. Do you feel that their devices are more capable than Synology's? Yes, I have spoken to iXsystems. I plan to visit with them further.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Do you feel that their devices are more capable than Synology's?
I don't like the way Synology does hybrid RAID. It may be perfectly fine, but I think it is just asking for problems.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
I don't like the way Synology does hybrid RAID. It may be perfectly fine, but I think it is just asking for problems.
I had a Synology box. It was... flaky at times. Definitely don't like the hybrid RAID. And it wasn't a cheap box, either. I think I dropped about $800 for a 5-bay unit. For not much more, I can buy an off-lease 24-bay or 36-bay Supermicro box and have a TON more capability, plus the knowledge that I can take those drives anywhere, mount them, load FreeNAS, and be able to read my data.

Oh, and btrfs? Barf.
 

NAS-Plus

Explorer
Joined
Apr 15, 2017
Messages
73
Thanks. What does QNAP use, RAID-wise?
 