Wanting some serious speed

Status
Not open for further replies.

sillyfrog

Cadet
Joined
Jan 11, 2015
Messages
9
I'm looking to build a seriously fast storage server for our VMware hosts to connect to.

This is for work, and we do a lot with IBM/Lenovo, hence using them for the server (we are in Australia, and so far I don't see a practical way to get an iX server with on-site warranty, so a custom build it is).

The current config I'm looking at is:
  • X3550 M5 server (this supports up to 8 PCIe cards)
  • 2x Xeon 8C E5-2630v3 2.4GHz/1866MHz/20MB
  • 64GB RAM (32GB per CPU), DDR4
  • 6x Intel 1.2TB SSD 750 Series (hence the need for a lot of PCIe slots)
  • Mellanox ConnectX-3 Pro ML2 2x40GbE/FDR VPI Adapter
I'm thinking we would put the SSD cards in a RAID-Z config, and have the 40GbE there for raw throughput to the VMware hosts.

This, in theory, based on my limited experience, should be seriously fast.

We currently use FreeNAS with 12x 500GB SSDs, but still regularly hit odd slowdowns - I suspect the issue there is the RAID card.

Is there anything this config would need to more fully benefit from the Intel 750 series? The specifications for these cards are insane compared to SATA/SAS SSDs behind RAID cards, and relatively good value for that type of performance. I'm assuming FreeNAS would be able to max out the system to get the throughput I'm hoping for - ideally saturating the 40GbE (I'd be using jumbo frames). The rough numbers I'm working from are below.
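Back-of-envelope only - the IOPS figures are the ones Intel quotes for these cards, the 4 KiB I/O size is just an assumption, and ZFS, NFS and TCP overhead are ignored entirely:

```python
# Back-of-envelope: can six Intel 750 1.2TB cards feed a 40GbE link?
# The IOPS figures are the ones Intel quotes; the 4 KiB I/O size is an assumption.

DRIVES = 6
READ_IOPS_PER_DRIVE = 440_000      # spec-sheet 4K random read IOPS
WRITE_IOPS_PER_DRIVE = 290_000     # spec-sheet 4K random write IOPS
IO_SIZE = 4 * 1024                 # bytes, assumed I/O size

link_bytes_per_sec = 40e9 / 8      # 40 Gbit/s line rate, roughly 5 GB/s

agg_read = DRIVES * READ_IOPS_PER_DRIVE * IO_SIZE
agg_write = DRIVES * WRITE_IOPS_PER_DRIVE * IO_SIZE

print(f"Aggregate 4K random read : {agg_read / 1e9:.1f} GB/s")
print(f"Aggregate 4K random write: {agg_write / 1e9:.1f} GB/s")
print(f"40GbE line rate          : {link_bytes_per_sec / 1e9:.1f} GB/s")
```

On paper the six cards are well ahead of the ~5 GB/s the link can move, so I'd expect the network or the protocol to be the bottleneck rather than the drives.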

I generally prefer NFS for VMware, as it's often handy being able to modify .vmx files etc. Would NFS be a lot of overhead for this type of system, or should I be looking at iSCSI or something else entirely?

Any thoughts or feedback would be appreciated before we buy!

Cheers.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Mirrored vdevs, much much much more RAM, and knock on the Grinch's door... you are using a RAID card with FreeNAS?
 

nemisisak

Explorer
Joined
Jun 19, 2015
Messages
69
+1 what Zambanini said. For a given number of disks, a pool of mirrors will significantly outperform a RAIDZ stripe (rough sketch of why below). ZFS loves RAM, so the more you have the better. Also, depending on how much RAM you allocate to each VM, I don't think 64GB will be nearly enough on top of this. Saturating 40GbE I think would be difficult even with that setup - 40 Gbit/s is roughly 5 GB/s. Also, I'm not an expert with high-speed network configurations, but I believe there was some kind of problem using jumbo frames. I'm sure someone else will chime in.
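Very rough rule of thumb: each vdev delivers roughly the random IOPS of a single member disk, so the pool scales with vdev count. The per-drive number here is just the write IOPS figure Intel quotes:

```python
# Rough comparison of 6 disks as 3x 2-way mirrors vs one 6-disk RAIDZ vdev.
# Rule of thumb only: each vdev delivers roughly the random IOPS of one member disk.
# The per-drive figure is the 290k random write IOPS Intel quotes for the 750.

PER_DRIVE_WRITE_IOPS = 290_000
DISKS = 6

mirror_vdevs = DISKS // 2   # 3x 2-way mirrors -> 3 vdevs
raidz_vdevs = 1             # one 6-disk RAIDZ -> 1 vdev

print(f"3x mirrors     : ~{mirror_vdevs * PER_DRIVE_WRITE_IOPS:,} random write IOPS")
print(f"1x 6-disk RAIDZ: ~{raidz_vdevs * PER_DRIVE_WRITE_IOPS:,} random write IOPS")
```

Mirrors also help reads, since either side of a mirror can service them.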

Also when you say RAID card I hope you mean HBA :)
 

MrVining

Cadet
Joined
Oct 17, 2012
Messages
9
I don't think the VMs will be sharing the RAM of the storage server.

Also, he has about 9GB of RAM per TB of storage (64GB over 6x 1.2TB is roughly 9GB/TB). I'm a little surprised to hear you saying way more RAM. That's more than enough to store the table; anything beyond that will be used to cache the data itself.

I agree with the recommendation for mirrored stripes. RAIDZ doesn't seem to perform very well in FreeNAS compared to other systems I've tested it in.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
With VM storage you want RAM, and craploads of it. I think he's using the Intel cards as the storage for the VMs, so he may need less RAM than we might otherwise expect. But if high I/O is what's desired, I'd definitely go with more RAM.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't think the VMs will be sharing the RAM of the storage server.

No, of course not. In the memory tiering it will be accelerating the disk layer, where lots more data is accessed... which is more important here!

http://www.datacenterjournal.com/wp-content/uploads/2012/09/lsi_fig1.jpg

Also, he has about 9GB of RAM per TB of storage (64GB over 6x 1.2TB is roughly 9GB/TB). I'm a little surprised to hear you saying way more RAM. That's more than enough to store the table; anything beyond that will be used to cache the data itself.

What "table"?

The right amount of ARC and L2ARC is hard to determine, but you definitely benefit from having the working set in {L2,}ARC for most workloads. More RAM is rarely a bad thing there. There's a point where it may not be doing you any GOOD, but it won't hurt.
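If you want to see whether the working set actually fits, FreeBSD exposes the ARC counters via sysctl. Something like this (a sketch only - the OIDs are the stock kstat.zfs.misc.arcstats ones; run it on the storage box itself):

```python
# Quick ARC check on FreeBSD/FreeNAS using the stock kstat sysctls.
# Sketch only - run it on the storage box itself.
import subprocess

def arcstat(name):
    out = subprocess.check_output(["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"])
    return int(out.decode().strip())

hits, misses = arcstat("hits"), arcstat("misses")
size, c_max = arcstat("size"), arcstat("c_max")

print(f"ARC size: {size / 2**30:.1f} GiB (max {c_max / 2**30:.1f} GiB)")
print(f"Hit rate: {100 * hits / (hits + misses):.1f}% ({hits:,} hits, {misses:,} misses)")
```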

I agree with the recommendation for mirrored stripes. RAIDZ doesn't seem to perform very well in FreeNAS compared to other systems I've tested it in.

Really? They suck no matter where I try them, given equivalent configs.
 

sillyfrog

Cadet
Joined
Jan 11, 2015
Messages
9
Thanks all.

We'll look at mirroring - it's a shame RAIDZ has such overhead. I guess it would be three mirrors in the pool, something like the sketch below?
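(Just a sketch of the layout I'm picturing - the pool name and the nvd device names are placeholders for whatever FreeNAS actually shows for the 750s, and in practice we'd build it through the GUI anyway.)

```python
# Sketch of the layout I'm picturing: three 2-way mirror vdevs.
# Pool and device names are placeholders (nvd* is what FreeBSD calls NVMe disks);
# in practice the pool would be built through the FreeNAS GUI.

POOL = "tank"                              # placeholder pool name
DEVS = [f"nvd{i}" for i in range(6)]       # assumed device names for the six 750s

cmd = ["zpool", "create", POOL]
for a, b in zip(DEVS[0::2], DEVS[1::2]):   # pair the drives off into mirrors
    cmd += ["mirror", a, b]

print(" ".join(cmd))
# -> zpool create tank mirror nvd0 nvd1 mirror nvd2 nvd3 mirror nvd4 nvd5
```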

We are not using a RAID card in this build (once bitten, twice shy) - these Intel SSDs are PCIe devices, so there is no SATA/SAS/RAID card to get in the way (see here: http://ark.intel.com/products/86739/Intel-SSD-750-Series-1_2TB-12-Height-PCIe-3_0-20nm-MLC ). They can do 440k IOPS read and 290k IOPS write - compared to any SATA/SAS device I have seen, these are very fast. The server we have selected has lots of PCIe slots, balanced between the two CPUs.

This will just be the storage server - we won't run any actual VMs here, and the VMware hosts will connect via 40GbE (is InfiniBand much better? I think these cards also support it, but I know zero about it). Happy to increase the RAM if it'll help - however, I think the VMware hosts will do a level of read caching as well (they'll have at least 256GB each).

Thanks!
 

sillyfrog

Cadet
Joined
Jan 11, 2015
Messages
9
Mirrored vdevs, much much much more RAM, and knock on the Grinch's door... you are using a RAID card with FreeNAS?

Our current system does have a RAID card, but the disks are all configured as JBOD. It's also an IBM server, and that was the only way to get that many devices connected in the host.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
An HBA can connect lots of drives too. The RAID card is likely to end up being a problem in some form, and may well kill performance.
 

sillyfrog

Cadet
Joined
Jan 11, 2015
Messages
9
An HBA can connect lots of drives too. The RAID card is likely to end up being a problem in some form, and may well kill performance.
Based on my experience, I think you are 100% correct. Unfortunately when doing that previous build, we were going with 100% IBM parts, and that was the best they had. I'm hoping this build sorts that out by completely cutting out the controllers.

(As I'm still coming up to speed, I'm assuming you mean a SAS/SATA HBA?)

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes. IBM's HBAs are made by LSI; the M1015 and M1115 are basically the OEM version of the LSI 9240-8i.
 

sillyfrog

Cadet
Joined
Jan 11, 2015
Messages
9
Yes. IBM's HBAs are made by LSI; the M1015 and M1115 are basically the OEM version of the LSI 9240-8i.
Well, it sucks getting the wrong information (at the time, from IBM), but I'm determined not to make the same mistake again! :)

Thanks again for your help, much appreciated!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, don't be too shocked. IBM Sales Engineers would normally never sell an HBA, and the idea that software can handle RAID controller functionality is foreign to them. Also, I believe the IBM HBAs may have an artificial cap on the number of supported targets. Bleh.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
You can probably just throw an LSI-branded HBA in those IBM servers. I have a lot of X servers at work.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, you can definitely do that unless you're trying to source an entire system from IBM for warranty, single-sourcing, or other corporate nontechnical reasons.
 