FreeNAS 9.3 best performance for VMware and Dell servers

Joined
Sep 23, 2013
Messages
35
Hi, I find this a very specific and complicated scenario.
First, the storage servers: a couple of R710 servers with 48 GB of memory each, 6 hard drive slots (2 TB each), a PERC controller, and 6 NICs (1 Gb each).
First question: hardware RAID or RAIDZ? (Looking for performance and hot-swapping in case of failure; right now I have a RAIDZ1.)
Second: the NICs. So far I made a "management" lagg (failover) with bce0 and bce1, then an LACP lagg with bce2 and bce3, and another LACP lagg with igb0 and igb1 (I should mention the bce ports are the integrated 1 Gb Broadcoms and the igb ports are 1 Gb Intels).
I read somewhere that FreeNAS doesn't really do load balancing, so I figured LACP should help... sort of.
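
For reference, the FreeBSD-level equivalent of one of those LACP laggs would be roughly the following (normally this is done through the FreeNAS network GUI rather than the shell; the address shown is just a placeholder on the iSCSI subnet described below, and the MTU matches what I set):

Code:
# enable jumbo frames on the member NICs, then build the LACP lagg
# and give it an address on the iSCSI subnet
ifconfig igb0 mtu 9000 up
ifconfig igb1 mtu 9000 up
ifconfig lagg1 create
ifconfig lagg1 laggproto lacp laggport igb0 laggport igb1 192.168.130.10/24 up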

This R710 will serve the entire space via iSCSI to a VMware cluster of 3 servers.
Each server has 4 cards dedicated to iSCSI (10 Gb/s cards, mind you, and all the switches are managed and 10 Gb/s).
To make MPIO work, I put 2 cards in the 192.168.130.x range and 2 in the 192.168.131.x range.
In FreeNAS, lagg1 is on 130.x and lagg2 is on 131.x.
I already set MTU = 9000 on each lagg, on every switch, and everywhere in VMware.

From the Reports screen I barely hit 123 Mbit/s. Shouldn't I get closer to 1 Gb/s?
I'm planning on setting up NAS4Free on the other server for performance testing too.
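
Before blaming ZFS or iSCSI, it's worth measuring the raw network path with iperf (FreeNAS ships with an iperf binary; the client side here assumes a test VM or another box on the storage subnet that has iperf installed, and the address is a placeholder for the lagg1 IP):

Code:
# on the FreeNAS box: start an iperf server
iperf -s

# on a client in the 192.168.130.x range: 4 parallel streams for 30 seconds
iperf -c 192.168.130.10 -P 4 -t 30

The -P 4 matters with LACP, since a single TCP stream only ever hashes onto one physical link.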
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First question: hardware RAID or RAIDZ? (Looking for performance and hot-swapping in case of failure; right now I have a RAIDZ1.)

That question has been asked at *least* 250 times in this forum, and is probably mentioned in every single document we've written. I'm sorry, but you need to read our documentation and not ask a question that we've documented ALL over the place.

I ask that nobody else answer that question, so that you are forced to actually use the tools we've provided and don't end up as "yet another guy who can't read our documentation".
 
Joined
Sep 23, 2013
Messages
35
All I see are opinions... people arguing that RAID 5 is dead, seemingly pushing their own agendas rather than giving facts.
This is a 6x2 TB storage... no need for RAID 6 or Z2.
Also, IOPS matter... hardware RAID might give more of them.
The RAIDZ checksum thing seems more glorified than it's worth... hardware RAID has been around for decades with no problems.
Now, taking into consideration that this is a PERC and no JBOD mode is available:
6x RAID 0 for ZFS, or a straight RAID 5 for hardware RAID?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Now, taking into consideration that this is a PERC and no JBOD mode is available:
6x RAID 0 for ZFS, or a straight RAID 5 for hardware RAID?

No, you need to read our already-written documentation. *I* personally have written stuff up about PERCs... at least 20 times. I bet if you used my username and searched for PERC you'd probably see a hint of what our docs say...

So yeah, more research needs to be done on your part. I have even less confidence in your research than after your first post, since you mentioned a model of card that has been talked about so many times I couldn't even give you a number...
 
Joined
Sep 23, 2013
Messages
35
Then there's something wrong with the indexing.
[screenshot of search results attached]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I get seven pages of results searching for "PERC" - search the entire forum, not just a subsection, and you'll see many more posts. All of which will probably tell you the same thing - don't use this card unless it's been reflashed to IT mode.

Now with that said, to quote your avatar - "listen here, noob." You're doing all kinds of things that are horrible for performance.
  • Running VMs on parity vdevs over iSCSI
  • LACPing iSCSI traffic
  • Jumbo frames on 1Gbps
  • Broadcom NICs
  • Using a very bad controller for ZFS
@cyberjock isn't just being rude or snarky here because he likes to, but because you've made a whole raft of mistakes that would have been addressed by reading the documentation.

NAS4Free or any other storage solution won't fix any of those issues (barring maybe the "ZFS hates your RAID card" one).
 
Joined
Sep 23, 2013
Messages
35
OK... I will check each one of your points. Some things I just can't do anything about (the cards and the controller).
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Well, in order, here's what you can do:

1. Ditch the RAIDZ1 and switch to mirrored vdevs. Random I/O will improve. (See the sketch after this list.)

2. Lose the LACP trunks for your iSCSI setup and configure MPIO.

3. Do A/B testing to see if jumbo frames really help you as much as you think. Generally you don't need them until you're at 10Gbps.

4. The Broadcoms aren't as bad as, say, a Realtek, but just bear in mind that they may cause lower throughput.

5. If it's a PERC H200, it can actually behave as an HBA, but since you're mentioning RAID 5 I have a feeling it's not. Buy an H200 and leave the devices unconfigured.
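
For point 1, a minimal sketch of what that layout looks like from the CLI (in practice you'd build it through the FreeNAS volume manager; the pool and device names here are placeholders):

Code:
# three 2-way mirror vdevs striped into one pool; ZFS spreads I/O across all three
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

With 6x2 TB drives this costs capacity (roughly 6 TB usable instead of ~10 TB with the 6-disk RAIDZ1), but every mirror vdev you add contributes its own random IOPS.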
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You didn't select the whole forum when you did your search...


[screenshot of forum-wide search results attached]
 
Joined
Sep 23, 2013
Messages
35
1 - I will check what exactly mirrored vdevs are.
2 - I'm already using MPIO; the storage has 4 cards, so I made 2 laggs of 2 cards each and they MPIO into the VMware server. I can't do 4 subnets, so I have to find a way to bond them or something.
3 - Also, the storage is 1 Gbit/s; the rest of the network is 10 Gb (cables, switches, and VMware servers).
4 - I don't have a choice.
5 - It's a PERC 7 (H700), and no, I can't buy anything at all. I have to work with what I have.

My mission is just to serve all the space as an iSCSI datastore for VMware. Nothing else. I wonder if Windows might actually work for this...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Oh, and the MTU thing has already been discussed to death. Again, search the forums. Your questions are all run-of-the-mill questions, and a little searching is all that needs to be done. You are lucky as hell you aren't being flamed to death for what you are asking. You should have done MUCH more research before you started asking questions.

Ideally, you'd have read our forum stickies, docs, etc. and already known this before you even installed FreeNAS, but you're asking on a system that's already running FreeNAS... so it's a bit late for that.

I can sense lots of pain and misery as you are trying to do what is basically the worst workload you can possibly put on ZFS, and you clearly haven't read all those stickies and docs that would have made you consider if your hardware is even capable of handling the workload... so yeah.

And if buying hardware is out of the question, you are probably going to try this for a while and figure out that you cannot just grab whatever hardware you have and expect it to work, unless you get *really* lucky and it happens to be very compatible and friendly with FreeBSD. That's not the typical scenario unless you build the system for FreeNAS specifically. So you might want to save yourself some trouble and either give up on FreeNAS right now or look at appropriating some funds for this.

But your first post indicates you are in for a very bad uphill battle that will require blood, money and time to make work properly. I've seen dozens come in here trying to do VMs with "hardware I had and can't spend money" and it doesn't end well 99% of the time. I've seen people try for months, convinced they'll get it working and all will be well, and end up defeated.

Good luck.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
1 - I will check what exactly mirrored vdevs are.
2 - I'm already using MPIO; the storage has 4 cards, so I made 2 laggs of 2 cards each and they MPIO into the VMware server. I can't do 4 subnets, so I have to find a way to bond them or something.
3 - Also, the storage is 1 Gbit/s; the rest of the network is 10 Gb (cables, switches, and VMware servers).
4 - I don't have a choice.
5 - It's a PERC 7 (H700), and no, I can't buy anything at all. I have to work with what I have.

My mission is just to serve all the space as an iSCSI datastore for VMware. Nothing else.

1. Make three vdevs of two mirrored drives each; it should look like this in zpool status:

Code:
        NAME                                            STATE     READ WRITE CKSUM
        mushroom                                        ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa  ONLINE       0     0     0
            gptid/bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/cccccccc-cccc-cccc-cccc-cccccccccccc  ONLINE       0     0     0
            gptid/dddddddd-dddd-dddd-dddd-dddddddddddd  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee  ONLINE       0     0     0
            gptid/ffffffff-ffff-ffff-ffff-ffffffffffff  ONLINE       0     0     0


2. Ditch the LAGGs entirely, put two interfaces on each subnet, and get redundancy that way. (See the sketch after this list.)

3. In that case, disable jumbo frames in your storage VLANs and leave them enabled elsewhere.

4. Not an issue specifically, just stating.

5. This is an issue. The issue, really. You need to either find a way to replace this card with something else (the aforementioned PERC H200s are dirt cheap) or you will be risking data corruption using that card with ZFS. If you absolutely can't replace it, then you'll probably have to use a non-ZFS solution like Linux/ext4/SCST.
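
For point 2, a quick sanity check on the ESXi side once the LAGGs are gone: most iSCSI LUNs default to a Fixed or Most-Recently-Used path policy, which uses only one path at a time, so MPIO only spreads load once the device is set to round-robin. Roughly (the naa. device ID is a placeholder for your LUN's identifier):

Code:
# list the paths ESXi sees to your iSCSI devices (one per NIC/target-portal pair)
esxcli storage core path list

# switch the LUN to round-robin so I/O actually uses all paths
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR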
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
just because of his avatar I would lock the thread...

I can't even see his avatar (or yours).

I have a crapload of websites blocked though. ;)
 

NickB

Dabbler
Joined
Sep 7, 2014
Messages
25
Here are my thoughts and suggestions:

If you're going for a lab environment for yourself, and trying to learn, and you have good backups, then what you have will work, but you're not going to get anything faster than what you're getting now.

As mentioned by cyberjock (a little rough around the edges, but you gotta love the guy for all the work put into this project), you don't have the best hardware setup.

Personally, if this is supporting a customer, I'd probably not go with FreeNAS, unless you really needed the data replication for off-site purposes (big attraction for me working with NetApp).

If you're going for IOPS, I'd recommend RAID 10, but that does come with risks that need to be weighed. Then again, if you're willing to go with RAID 5, 10 isn't much of a stretch. But you only really see the IOPS gain on writes.

And if you need faster throughput, you need 10 Gb NICs on your storage.

Another option, which is probably the one I'd go for based on what you've mentioned so far, is to take the memory and hard drives and put them into your ESX environment, then buy the licensing for vSphere VSAN (Virtual SAN). Memory is what you need a lot of for VMs, and with VSAN you can take advantage of those 10 Gb NICs.

And as for software RAID vs. hardware, with today's processing power there isn't much of a difference. Hardware might have some benefits with multiple connections, but software gives you some benefits like direct access to SMART info.
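
As a concrete example of that last point: when the disks are presented directly to the OS (a real HBA rather than the H700's virtual disks), you can query SMART data per drive with smartctl, which ships with FreeNAS (the device name here is a placeholder):

Code:
# full SMART report for the first directly attached disk
smartctl -a /dev/da0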
 

NickB

Dabbler
Joined
Sep 7, 2014
Messages
25
And when I said "not go with FreeNAS," that was because of the current hardware selection, not because of FreeNAS itself.
 
Joined
Sep 23, 2013
Messages
35
I'm currently testing NAS4Free on the second R710.
It has the option to just serve the raw array (RAID 6).
I wish I could do the same with FreeNAS: no ZFS, just raw space.
And no, I can't add those drives to the VMware cluster; those are blades and the R710 has 3.5" HDDs.
I don't see how 10 Gb cards will help if I still haven't capped the 1 Gb.
 
Joined
Sep 23, 2013
Messages
35
Well, I tested both solutions (NAS4Free and FreeNAS).
So far FreeNAS seems more mature; however, there are things I prefer about NAS4Free.
Good things about NAS4Free: the interface is more polished and the performance graphs are live. Also, it lets me share the whole device without formatting at all.
Bad thing: the iSCSI solution is very buggy... of 5 servers only 3 saw the space, and of those 3 only one was able to connect to all 4 cards.

Good about FreeNAS: iSCSI works so much better (although sometimes I need to reboot the storage or the server to make the connections work properly, but that's OK).
Bad: it doesn't let me share the whole device as raw; it requires me to format it.
 
Joined
Sep 23, 2013
Messages
35
I'm trying to set up a RAID 50... is making two RAIDZ1 vdevs the equivalent of a "RAIDZ 50"?
Code:
zpool status
  pool: Pool02
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Pool02                                          ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/8f2395b3-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0
            gptid/8f454b15-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0
            gptid/8f644578-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0
          raidz1-1                                      ONLINE       0     0     0
            gptid/8f865614-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0
            gptid/8fa58f48-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0
            gptid/8fc88404-ce4c-11e4-8e45-d4ae526e496a  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        freenas-boot                                  ONLINE       0     0     0
          gptid/ece7580c-ce41-11e4-9c25-d4ae526e496a  ONLINE       0     0     0

errors: No known data errors
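
For what it's worth, striping two RAIDZ1 vdevs like this is the ZFS analogue of RAID 50: writes are striped across both vdevs, and each vdev can lose one disk. A pool with this layout would be created along these lines (device names here are placeholders; FreeNAS itself uses the gptids shown above):

Code:
zpool create Pool02 raidz1 da0 da1 da2 raidz1 da3 da4 da5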

 