Before I spend my hard-earned cash, will this work?

Joined
Nov 25, 2012
Messages
9
I've read so many hardware recommendation threads lately that my head is spinning. My cash is extremely hard-earned, and a decision to spend over $1000 on a test box does not come lightly. So before going through with this, I would love confirmation that this solution will work.

Goals: I may use this hardware for 1. a FreeNAS ZFS box, 2. a VMware ESXi/vSphere whitebox. I'm looking for hardware that will fully support both, including full VT-d for PCI passthrough.

Budget: I was hoping to keep it within $1000 - $1200 total. This assumes that I already have some 7200 RPM SATA drives lying around and would use them in the meantime while I save up for better drives later.

Config: Basically, my main goal is to get FreeNAS with ZFS going. I would have 5 x 7200 RPM SATA drives: 2 of the drives in a ZFS mirror that I would use as a file share / backup for my public network on LAN-A, and the other 3 drives in a RAIDZ that will be used as an iSCSI target on my private network, LAN-B. So I would need the box to be dual-homed (two NICs), as FreeNAS will be shared between the two subnets.
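Roughly the pool layout I have in mind, just as a sketch (pool names, device nodes, and the zvol size are placeholders I made up, and in FreeNAS I would actually set this up through the GUI rather than the command line):

# 2-disk mirror for the LAN-A file share / backup
zpool create share mirror /dev/ada0 /dev/ada1

# 3-disk RAIDZ for the LAN-B iSCSI target
zpool create vmstore raidz /dev/ada2 /dev/ada3 /dev/ada4

# zvol carved out of the RAIDZ pool to export as the iSCSI device extent
zfs create -V 500G vmstore/iscsi0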

Questions: I've compiled the main questions that have come up while researching hardware:
Does the hardware support both FreeNAS and ESXi?
If using an iSCSI target, will my performance be sufficient with 7200 RPM SATA drives?
In the event I would like to upgrade to better drives, will my motherboard support a SAS/SATA controller such as the IBM ServeRAID M1015 or HighPoint RocketRAID?

I wanted to keep the box as small a form factor as possible. I originally wanted to buy desktop mATX hardware to keep my cost down, but I am not sure if this is the way to go. I find it confusing to research the PCIe slots on these boards. Do they support the SATA/SAS controller card? Are there enough available slots if expansion is required?

Here is the hardware I originally picked:

Motherboard - ASRock Z77 Pro4-M (supports full VT-d and has a Realtek 8111E NIC natively supported by ESXi). My main question is whether it supports one of the SAS/SATA disk controllers listed above.

Case - Lian Li PC-V354B - mATX form factor that holds 7 x 3.5" SATA drives + 2 x SSDs.

CPU - Intel Core i7-3770S - supports full VT-d and is a low-power alternative to the 3770

CPU cooler - Cooler Master GeminII S524

RAM - 2x 8GB DDR3. Whatever is the best sale at the time

PSU - Seasonic X-460 Modular 460W PSU

Basically, on my other rig I will be running VirtualBox on a Windows 7 host with an Ubuntu VM for surfing, torrents, and media playback. I have another Ubuntu Server VM that runs a MySQL database / Apache for testing purposes only. I've never had an iSCSI configuration, so I'm curious about maybe hosting the DB on the iSCSI target. I may also build another ESXi whitebox and use this FreeNAS build as the external storage. I just want to make sure that the performance will be tolerable with the 7200 RPM SATA drives. There will not be many concurrent writes to the drives. The heaviest I/O activity that I can think of would be somebody performing a backup to the ZFS mirror while the MySQL database on the RAIDZ is storing real-time data from my Snort IPS.

If the regular 7200 RPM SATA drives will be sufficient, then I do not have to worry about controllers or SAS drives, and the ASRock MB should work for my needs. I want to avoid spending this kind of cash only to be extremely disappointed by the ZFS performance (especially important for the iSCSI target). I would rather save more cash and go with server-class hardware like Supermicro, although I can't afford that at the moment.

All suggestions and recommendations are greatly appreciated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For iSCSI, see bug 1531 for Super Important Knowledge Transfer (mostly the end).

For $110, the board doesn't sound bad except for the network. I've never seen one in person, though. With 4 SATA-II and 4 SATA-III ports, that's pretty aggressive, and it's useful for avoiding having to spend cash on an external RAID card. I see no reason the board wouldn't handle an M1015, though.
 
Joined
Nov 25, 2012
Messages
9
From what I can tell, the M1015 is a PCIe x8 card. Still unsure of the compatibility.

The PCIe specs of this board say:
- 1 x PCI Express 3.0 x16 slot (PCIE1: x16 mode)
- 2 x PCI Express 2.0 x16 slots (PCIE3: x1 mode; PCIE4: x4 mode)
- 1 x PCI Express 2.0 x1 slot

I am afraid of going through with the build if my only option will be to use SATA drives, after reading the hardware recommendations as well as bug 1531:

If you have steady, non-contiguous writes, use disks with low seek times. Examples are 10K or 15K SAS drives which cost about $1/GB. An example configuration would be six 600 GB 15K SAS drives in a RAID 10 which would yield 1.8 TB of usable space or eight 600 GB 15K SAS drives in a RAID 10 which would yield 2.4 TB of usable space.

7200 RPM SATA disks are designed for single-user sequential I/O and are not a good choice for multi-user writes.


This is only a test lab, but if I choose to build an ESXi whitebox down the road and use this box as the external storage (iSCSI) and the performance is crap, it will be really disappointing. I don't expect lightning fast, but I also don't want to see multi-second lag when using VMs.
 
Joined
Nov 25, 2012
Messages
9
I read somewhere that on some of these desktop boards the BIOS only recognizes video cards in the x16 slot. Not sure if this is just BS or what. I was hoping to get confirmation from somebody who has tried a ZFS-friendly controller card on one of the ASRock LGA 1155 boards, but haven't found confirmation yet.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
I'm running an AM2+ mobo in my main FreeNAS box, and prior to that it was a socket 775 board; my PCIe x8 cards work fine in both...

Also, I am running a SAS card in my main desktop, which is a 1366 mobo, and it's PCIe x4 running in an x16 slot.
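If you want to see what width a card actually negotiated, pciconf on FreeBSD will tell you. Rough sketch (the exact output format may differ a bit on your version):

# dumps every PCI device plus its capability list; the PCI-Express line for
# your controller shows negotiated vs. maximum link width, e.g. "link x4(x16)"
pciconf -lc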
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If you have steady, non-contiguous writes, use disks with low seek times. Examples are 10K or 15K SAS drives which cost about $1/GB. An example configuration would be six 600 GB 15K SAS drives in a RAID 10 which would yield 1.8 TB of usable space or eight 600 GB 15K SAS drives in a RAID 10 which would yield 2.4 TB of usable space.

7200 RPM SATA disks are designed for single-user sequential I/O and are not a good choice for multi-user writes.


This is only a test lab, but if I choose to build an ESXi whitebox down the road and use this box as the external storage (iSCSI) and the performance is crap, it will be really disappointing. I don't expect lightning fast, but I also don't want to see multi-second lag when using VMs.

1531's my ticket, so let me provide some context. Under heavy write loads, we were seeing ZFS stalling. As in, actually locking out additional I/O requests while it flushed a transaction group, which ZFS had allowed to grow way too un-freakin-reasonably large. No more reads or writes.

I would expect that out of an I/O subsystem based on spinning rust if (and ONLY if) there were so many operations pending that required a seek that the entire period between issue and service was filled by the I/O subsystem fulfilling older requests. That's a reasonable scenario for being apparently catatonic. THAT is also the scenario that the "Hardware recommendations" would help out with - but it won't fix it, it just raises the number of transactions required to induce catatonia.

The ZFS problem was not a capacity issue; the I/O subsystem had sufficient resources. ZFS was planning and using them poorly. 1531 discusses how to bludgeon ZFS over the head into more reasonable planning and use.
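For reference, the bludgeoning is done with loader/sysctl tunables, roughly along these lines (names and numbers here are from memory and are only a sketch; the ticket has the actual discussion and sane values for a given pool):

# /boot/loader.conf on FreeBSD/FreeNAS of this vintage
# flush transaction groups more frequently
vfs.zfs.txg.timeout="5"
# cap how much dirty data a txg can accumulate so the flush stays short;
# 256MB here is a placeholder - size it to what your pool can actually write in a few seconds
vfs.zfs.write_limit_override="268435456"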

Performance is a function of many things. RAIDZ2 is generally slower than mirrored. Operating without a ZIL is generally slower than operating with one. Operating on less memory is usually slower. Lots of memory with an L2ARC is very fast if your working set mostly fits into L2ARC (and I'm seeing fewer cases these days where it shouldn't).

The speed of the SATA disks is really only one little factor, and the speed difference between a 5400 RPM SATA disk and a 15K SAS drive is not that large a difference. But let's face the facts. Fast rust is expensive: $309 for a Seagate Cheetah 15K.7 ST3450857SS 450GB 15000 RPM 16MB Cache SAS 6Gb/s 3.5" Internal Enterprise Hard Drive. If you need that, why not just get something much faster: $329 for an OCZ Agility 3 AGT3-25SAT3-512G 2.5" 512GB SATA III MLC Internal Solid State Drive (SSD). Don't even get me started about the power consumption.

So if you're really worried about snappy, go all-SSD. For faster-than-all-spinning-rust storage, use some combination of slow rust and fast SSD for ZIL/L2ARC.
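Structurally that looks something like this (sketch only - pool and device names are made up, and the SLOG only buys you anything if the workload actually issues sync writes, which iSCSI/NFS for VMs typically does):

# mirrored SLOG so a dying log device can't eat in-flight sync writes
zpool add tank log mirror /dev/gpt/slog0 /dev/gpt/slog1
# L2ARC needs no redundancy, it's just a read cache
zpool add tank cache /dev/gpt/l2arc0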
 

janptak

Cadet
Joined
Jan 7, 2012
Messages
6
I read somewhere that on some of these desktop boards the BIOS only recognizes video cards in the x16 slot. Not sure if this is just BS or what. I was hoping to get confirmation from somebody who has tried a ZFS-friendly controller card on one of the ASRock LGA 1155 boards, but haven't found confirmation yet.

Yes, that's true - I had this issue with a PERC 5 RAID card and an old Asus board (Intel 6600 quad-core CPU; I don't remember the MB model number) - the system wouldn't properly recognize the RAID card in the PCIe x16 slot. From what I remember, I had to put a strip of plastic over one of the pins on the PCIe slot to get it working at least at PCIe x1...

I think you shouldn't have a problem with this now, though.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I was going to post that whoever says an x16 PCIe slot only works with a video card is full of crap. But apparently it's true. I knew there were a lot of good reasons to leave Asus and go to Gigabyte years ago...
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
My AMD build that's running right now is an ASUS mobo with a Dell PERC 5... I had to do the tape/nail polish trick on the card, but trust me, you can run the cards in the x16 slots.
 
Joined
Nov 25, 2012
Messages
9
Thanks, Pirateghost. Can you post a link to the tip you gave? I'm not sure I am willing to do this, as I'd rather get compatible hardware, but the tip is a great reference. After reading some Phoronix articles I probably should go older than Ivy Bridge, but I think I will end up buying cutting-edge hardware and putting it to a different use until FreeBSD support catches up.

Thanks to everybody else for their comments as well. I've sent an e-mail to ASRock to ask them specifically about compatibility. Not sure I'll get a reply, but if I do, I'll post it here.
 

ramius

Dabbler
Joined
Oct 30, 2012
Messages
17
My old FreeNAS setup was built around an Asus P5KC motherboard with a dual-core E6300 CPU and 4 GB of DDR2. I had an RR2300 controller in the PCIe x1 slot and an HP Smart Array P400 in the PCIe x16 slot (slot 1). The only thing I had to do was use a 32-bit PCI GPU (an S3 ViRGE DX 2MB in my case) and set the BIOS to use the PCI GPU. I was able to use the full x8 for the RAID controller and still had a GPU for text mode.

It was much more trouble when I had Windows 2008 R2 as the operating system, with a PCIe x16 GPU sharing bandwidth with the two RAID controllers - and running Windows 2008 R2 on a 2 MB GPU is nearly impossible. If you take a closer look at modern server boards, for instance my new Supermicro X9SCM, you will notice that the onboard GPU is a Matrox G200 or similar with 16 MB, connected via PCIe x1 or even via a 32-bit PCI slot. With VMware ESXi, Proxmox or FreeNAS you don't need a GPU with more than 1 MB of memory; all you are going to use the GPU for is text mode.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Thanks, Pirateghost. Can you post a link to the tip you gave? I'm not sure I am willing to do this, as I'd rather get compatible hardware, but the tip is a great reference. After reading some Phoronix articles I probably should go older than Ivy Bridge, but I think I will end up buying cutting-edge hardware and putting it to a different use until FreeBSD support catches up.

Thanks to everybody else for their comments as well. I've sent an e-mail to ASRock to ask them specifically about compatibility. Not sure I'll get a reply, but if I do, I'll post it here.
http://forums.overclockers.com.au/showthread.php?t=879827
http://www.overclock.net/t/359025/perc-5-i-raid-card-tips-and-benchmarks
 