Best practice: VMware 6 and iSCSI setup


DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Hello All,

I have read through several threads and articles, but many have mixed information because of different versions and such.
I'm looking to utilize FreeNAS as an iSCSI device for several VMs.
I'm setting up an HP DL180 with an HP 410 controller using RAID 5 (maybe 6) to utilize all the drives, then making the storage available to VMware. I will put FreeNAS on a separate drive.

With respect to connectivity, I will use the several Ethernet ports built into the server, and later look at getting a 10Gb Ethernet card or InfiniBand to improve performance between the array and the servers.

I used FreeNAS one time in a test environment many years ago, so I don't have recent version experience and am looking for guidance here, since I have a short timeline to install and set up.

Any suggestions, reference material, and such are appreciated!

thank you!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can use a separate Ethernet port hardwired to each server if you need performance in excess of gigabit. LACP isn't suggested for iSCSI.
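As a sketch of what that looks like (the interface names and addresses here are made up, just to illustrate the layout): give each FreeNAS port its own small subnet and point exactly one ESXi host at each one, rather than bonding them.

    FreeNAS igb0 10.0.10.1/24  <->  ESXi host 1 vmk1 10.0.10.11/24
    FreeNAS igb1 10.0.11.1/24  <->  ESXi host 2 vmk1 10.0.11.11/24

Each gigabit path stays dedicated to one host, which is more predictable than hoping LACP hashing spreads iSCSI sessions evenly.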

FreeNAS is designed to run the base system off a small storage device, such as a USB thumb drive or SATA DOM.

Use mirrored vdevs instead of RAIDZ1 or RAIDZ2 if you care about performance.
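If you were building that pool by hand, it would look something like this ("tank" and the da* device names are placeholders; in FreeNAS you'd normally do this through the Volume Manager):

    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

That gives you three 2-way mirrors striped together, so random I/O spreads across three vdevs instead of bottlenecking on a single RAIDZ vdev.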

Add LOTS of memory if you care about performance. 64GB or more, and if you can fit your entire working set into ARC and possibly L2ARC, that'll work out very well.

For proper handling of sync writes, enable sync=always and add a decent SLOG device.
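Adding a mirrored SLOG to an existing pool is a one-liner (placeholder device names again):

    zpool add tank log mirror da6 da7

The mirror matters because the SLOG is the one support device whose failure at the wrong moment can cost you in-flight sync writes.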

None of this is "mixed information" and if anything you've heard contradicts what I've said here, you can safely disregard the entirety of the other article, because whoever wrote it was probably very wrong.
 

DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Thanks for the quick reply jgreco!

I consider myself a rookie with FreeNAS, not with technology in general (servers, programming, sysadmin, system/network design, etc.), so this is all a matter of learning the syntax and best practices, as well as smart design from the beginning.
Because of this I may ask some funny questions, but hopefully it will all sink into the brain cells.

Also note that I did invest the time to go through cyberjock's excellent PowerPoint presentation, which is a great resource! ( Thanks cyberjock )

Currently I'm looking at 18TB of raw space over 6 drives and have 48GB of RAM with dual quad-core processors, so I think server-wise I should be in good shape.
When reading previously I found that the recommendation is 8GB plus 1GB of RAM for every 1TB of storage capacity, but we all know the more memory the better.
Does that sound good?
( For educational purposes: there is nothing that needs to be reconfigured if I simply upgrade the RAM, right? )

I should configure RAIDZ2.
For my purposes serving iSCSI, should I get 2 small SSDs (enterprise or SLC) to mirror the ZIL for safety, and will the ZIL increase the performance of my writes?
For L2ARC, should I do the same thing and get 2 small SSDs to mirror it, and is this to increase the performance of reads?
After reading the above you may be saying I'm either on target or way off, assuming I understand what I've been reading.

Now, the whole not being able to change anything within a vdev is very different from traditional RAID, so I would ask: are there methods/tools to move data from one vdev to another?
scenario:
Let's say I just want to put in some new drives and increase the number of drives in a datastore for performance, and maybe increase space as well; let's say it's assigned to zpool1, which is using vdev1.
I know I can increase the size of the pool by simply installing bigger drives for every drive in a vdev, but that only works if it was just a capacity issue; in my scenario, I want to increase spindles.
Can I build either another pool or another vdev and tell the system to move the data from one vdev to another, or from one pool to another?

Another question is: am I building the RAID array on the RAID controller, or simply letting all the drives be managed within FreeNAS?

What is the best method to create the pools? I of course don't want to split the disk space into a lot of pieces, and I'm guessing one giant pool isn't the best idea either, so I'm trying to understand the best method and the reasoning behind it.

Thanks in advance!
 

jjonsson

Dabbler
Joined
Jul 23, 2015
Messages
17
For L2ARC, should I do the same thing and get 2 small SSDs to mirror it, and is this to increase the performance of reads?
You don't need to mirror the cache; it's read-only, and everything in it is just a copy of data that's already on the pool, so losing it is harmless. The ZIL handles writes, so a mirror is recommended there.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Currently I'm looking at 18TB of raw space over 6 drives and have 48GB of RAM with dual quad-core processors, so I think server-wise I should be in good shape. When reading previously I found that the recommendation is 8GB plus 1GB of RAM for every 1TB of storage capacity, but we all know the more memory the better. Does that sound good?
( For educational purposes: there is nothing that needs to be reconfigured if I simply upgrade the RAM, right? )

No, you don't need to reconfigure anything when you add (or subtract) system RAM, unless you manually set parameters like arc_max, etc. In that case you probably want to adjust them as needed.
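For reference, that kind of cap lives under System -> Tunables in the FreeNAS GUI as a loader tunable; a hypothetical example capping ARC at 32GiB (the value is in bytes):

    vfs.zfs.arc_max="34359738368"

If you never set anything like this, ARC sizes itself automatically and will grow into new RAM on its own.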

I should configure RAIDZ2.

As jgreco mentioned, not if you want the best performance for iSCSI. Use multiple mirrored vdevs. Yes, your effective-storage-to-raw-storage ratio will go down.

For my purposes serving iSCSI, should I get 2 small SSDs (enterprise or SLC) to mirror the ZIL for safety, and will the ZIL increase the performance of my writes?

Depends. Assuming you configure a zvol (or zvols) on FreeNAS, by default it will be sync=standard. The ESXi boxes will not send sync writes over iSCSI, so an SLOG will be useless for that use case. However, if you want the most data security possible, you would set sync=always on the zvol. Then all writes would be forced to sync, and yes, the SLOG would speed things up dramatically. You would want an SSD with power-loss protection (enterprise) in this case. The writes will also be very latency-sensitive, so consider a PCIe SSD vs. SAS or SATA (with the latter two you are adding latency through an HBA in the data path).
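Roughly, with placeholder names and sizes (FreeNAS creates the zvol for you when you build the iSCSI extent, but this is what it amounts to):

    zfs create -s -V 4T tank/esxi-ds1      # sparse zvol to export as the iSCSI extent
    zfs get sync tank/esxi-ds1             # shows "standard" by default, as described above
    zfs set sync=always tank/esxi-ds1      # opt into full sync-write safety (wants an SLOG)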

For L2ARC, should I do the same thing and get 2 small SSDs to mirror it, and is this to increase the performance of reads?

As mentioned above, no need to mirror an L2ARC. You'd be better off striping it. Or better yet, spend that cash on more RAM. Get it to 64GB, then think about an L2ARC.
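Striping the cache is just a matter of adding both devices; ZFS spreads reads across them automatically (placeholder device names):

    zpool add tank cache da8 da9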

Now, the whole not being able to change anything within a vdev is very different from traditional RAID, so I would ask: are there methods/tools to move data from one vdev to another?
scenario: Let's say I just want to put in some new drives and increase the number of drives in a datastore for performance, and maybe increase space as well; let's say it's assigned to zpool1, which is using vdev1. I know I can increase the size of the pool by simply installing bigger drives for every drive in a vdev, but that only works if it was just a capacity issue; in my scenario, I want to increase spindles. Can I build either another pool or another vdev and tell the system to move the data from one vdev to another, or from one pool to another?

Depends on what you want to do, really. All of what you mention can be done, sort of. If the question is "how can I best add spindles (for datastore performance)?" then I'd say you're best off with the mirrored vdevs mentioned above. You simply add more disks as new mirrored vdevs in the original pool. This way the original datastore (which exists as a zvol on the pool) gets the benefit of the extra spindles. You don't have to reconfigure anything on the FreeNAS side (other than adding the new vdevs) or the ESXi side; see the sketch below.
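Growing the pool by one more mirrored pair looks like this (placeholder device names):

    zpool add tank mirror da10 da11

Existing data stays where it is, but new writes start spreading across the extra vdev immediately.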

If you went RAIDZ2, under this same logic you could add another RAIDZ2 vdev to the original pool.

While you could add another pool and either create another zvol for a new datastore or, if the new pool is faster, move the old zvol to the new pool, I wouldn't recommend that, because it's a lot more configuration when you could have just added disks to the original pool. (But again, it depends. If you were creating a new pool out of SSDs, for example, then sure, that would blow away a disk-based pool.)
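If you ever did need to move a zvol between pools, ZFS replication does it (hypothetical pool and zvol names):

    zfs snapshot tank/esxi-ds1@move
    zfs send tank/esxi-ds1@move | zfs recv ssdpool/esxi-ds1

Afterwards you'd re-point the iSCSI extent at the new zvol and rescan the datastore from ESXi.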

Another question is: am I building the RAID array on the RAID controller, or simply letting all the drives be managed within FreeNAS?

No RAID controller. If it doesn't operate in pass-through mode (like IT mode on an LSI), take it out of the system (or disable it). I'm pretty sure this is covered in cyberjock's tutorial. Give FreeNAS direct access to the disks.

What is the best method to create the pools? I of course don't want to split the disk space into a lot of pieces, and I'm guessing one giant pool isn't the best idea either, so I'm trying to understand the best method and the reasoning behind it.

One giant pool. But again, it depends on the use case. With one pool the advantages are: (1) all your resources go toward making the fastest pool possible, and (2) ease of managing the storage (you don't have to think about what goes where, or about running out of space on one pool and moving stuff to another).

The cases I can think of for separate pools are when the usage of each is radically different and/or you are constructing them from different hardware. For example, in my setup I have one pool today. It's got 10 disks in 5 mirrored 2-drive vdevs. It hosts everything: media, active user data, backups, and a zvol for ESXi iSCSI. At some point I plan to go from one pool to two when I add an all-SSD pool. I will use the SSDs exclusively for serving iSCSI to my ESXi servers, and I'll likely reconfigure the old pool into RAIDZ2 for better use of disk space (since I won't need the multiple vdevs for IOPS anymore).
 


DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Thank you everyone, I will go through the suggestions, read some more, and potentially revise my questions and ideas!
 

DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Hello All,

Back with a few more questions.

I will try to organize this into a few sections; as you can imagine, a lot of questions come up when digging into the details.

Notes:

In order to play around with FreeNAS, I had to go into my controller (HP 410) and create each drive as its own virtual drive so I could see it within FreeNAS.
Not sure if this will affect the performance, but it is one way to dive in and play around with FreeNAS before building a stable environment where you can't risk any mistakes.

I also revisited the PowerPoint a few times to make sure I read before I asked these questions; the verbiage is a little different with volume, zvol, and dataset vs. zpool, vdev, etc.


Performance:
------------------------------------------------------------
I'll start with this experiment as a quick performance test:
I created a pool with 5 x 3TB drives (7,200 RPM, SAS, 6Gb/s) in a RAIDZ and used a Windows share to copy a large file, getting about 77MB/s ( this was, of course, with each drive presented to FreeNAS as an individual virtual drive created in the controller ).
I then trashed the configuration and created the RAID 5 within the controller (no battery backup, so there was no caching) and presented 1 big drive to FreeNAS.
I configured a share and ran my same test, resulting in pretty much the same performance.

One more test was putting 2 vdevs of mirrored drives into 1 pool (same drives in all tests), then copying to that share, resulting in, again, the same performance.


On an older in-house server with older drives (7,200 RPM, SATA, 3Gb/s), an older controller, slower processors, and much less RAM, running VMware and mapping to a guest Windows Server on that same box, copying the same file gets me 93MB/s, which I think is not that good, but it's certainly faster than my initial test under FreeNAS.

Confirming the theory here: FreeNAS becomes the RAID controller, and since it's a dual-Xeon, 48GB-RAM "controller" it should outperform the HP 410, but in my small test I didn't see that.

Am I looking at this wrong? Should I have seen better performance?
Another performance question: should I have turned off the default lz4 compression?
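For reference, here's the quick local test I'm considering to take the network out of the picture; I understand that with lz4 on, /dev/zero output compresses to almost nothing and would wildly overstate throughput, so the numbers only mean something on a dataset with compression disabled (the dataset name is just an example):

    zfs set compression=off tank/bench     # test dataset only; leave lz4 on elsewhere
    dd if=/dev/zero of=/mnt/tank/bench/testfile bs=1M count=8192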


Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
In order to play around with FreeNAS, I had to go into my controller (HP 410) and create each drive as its own virtual drive so I could see it within FreeNAS.
Not sure if this will affect the performance, but it is one way to dive in and play around with FreeNAS before building a stable environment where you can't risk any mistakes.

Yes, that will affect performance (and reliability). Of course, for experimenting and figuring out how FreeNAS works, the fact that it may not be reliable isn't a big deal.

The other thing about your test is that you didn't make the testing parameters 100% identical with the exception of the file share being different. Running in a VM can make things faster (or slower) compared to bare metal. There are tons of factors that can affect all sorts of things, so it's hard to say 'yep, that's the problem'.

But, being forced to use every disk as its own virtual disk, you're severely hampering ZFS's performance just by doing that. The RAID controller tries to cache the writes, while ZFS's scheduler takes turns between reads and writes. Guess what happens with the RAID controller in the middle? ZFS writes some data (but it goes into the RAID controller's cache), and then when ZFS tries to read some data (the zpool *should* be idle, since ZFS thinks it owns the disks), the RAID controller starts issuing write requests to the disks to flush its dirty write cache.

In short, don't expect to get "good" performance numbers, as you've created some major performance-killing conflicts between ZFS and your RAID controller. The fix: don't use the RAID controller.
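Once you're on a proper HBA, it's easy to confirm FreeNAS really owns the disks (da0 here is just an example device):

    camcontrol devlist       # drives should show up by make/model, not as "HP LOGICAL VOLUME"
    smartctl -a /dev/da0     # SMART data should come back directly; it generally won't through a RAID volume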

Like I said above, for experimenting with FreeNAS your configuration is safe. For doing performance testing or in a production environment, you would be making a very big mistake.
 

DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Thanks cyberjock, I agree with what you are saying 100%. I'm now trying to figure out which controller with JBOD/pass-through capability will support the backplane of the DL180 and its 12 drives.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
You can use SAS expanders to accomplish your goal, along with the venerable 9211-8i HBA or one of its equivalents. I'm running 36 bays on a single 9211-8i in a Supermicro 4U chassis, which uses a 24-port expander for the front bays and a 12-port expander for the rear bays.

You won't run into any performance issues doing this unless you get up to an insane quantity of drives, or are building an all-SSD array using very high-end gear.
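One thing to verify once the card is in hand, assuming an LSI SAS2 HBA like the 9211 series: make sure it's running IT (initiator-target) firmware rather than IR (RAID) firmware.

    sas2flash -listall       # the firmware listing should show "IT" rather than "IR"

If it ships with IR firmware, crossflashing to IT is a well-documented procedure.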

Search around on here, there are quite a few threads on the topic of expanders.
 

DeWebDude

Explorer
Joined
Nov 2, 2015
Messages
52
Thanks tvsjr, I did finally figure out that a 9211-4i will work, and have ordered one.
Hopefully I can finally start testing FreeNAS the way it was meant to be run once it arrives.
 