Hopefully this is OK to post here. It's not really a question other than "Any further suggestions?", but I wanted to make an attempt at giving back to the community, so here is my recent build. I am a solo IT consultant working mainly with small to medium sized businesses, and I have been steadily virtualizing client servers. I have been slowly adding drives to existing ESX servers and recently had my first opportunity to build out a NAS in support of ESX (and plain file storage). I needed to build two servers, one for each of two offices, with one being primary and the other essentially serving as offsite backup.
Server1 and Server2 are essentially identical with exceptions noted below:
- Motherboard - SUPERMICRO MBD-X9SCL+-F Micro ATX Server Motherboard LGA 1155 Intel C202 DDR3 1333
- I chose the Supermicro board because it is LGA 1155, supports ECC RAM, has SuperMicro's IPMI (which I like), and includes onboard video if needed.
- CPU - Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz LGA 1155
- This may be overkill for this server, but I am standardizing on this CPU/motherboard/32GB RAM combination. Is it overkill? The small amount of benchmarking I have done suggests the CPU never goes above 25% utilization.
- RAM - 4 x Kingston 8GB 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 Server Memory Model KVR1333D3E9S/8G
- If I could have gone with more, I might have; my ESX boxes tend to be 64GB, and I even have a couple of 192GB servers. Going over 32GB escalates the cost too much, though it would be cool to see a 512GB server doing dedup!
- Case - Fractal Design Arc Mini Black High Performance PC Computer Case
- Has 6 internal drive bays and decent cooling/cable management.
- PowerSupply - SeaSonic X750 Gold 750W
- I have had good experience with SeaSonic. With all the drives, I didn't want to go lower on wattage, and I think this one was on sale the first time I bought it. Again, I am trying to standardize my server builds going forward.
- USB Boot Drive - Verbatim Store 'n' Go Micro 8GB USB 2.0 Flash Drive
- Seemed to be good enough. I bought a couple of extras. I use these to boot ESX on those servers as well.
- SSD - 2 x SAMSUNG 840 Pro Series MZ-7PD128BW 2.5" 128GB SATA III MLC
- Mirrored ZIL (SLOG). I am running multiple VMs off of this over NFS, and the consensus seems to be that sync-write-heavy NFS is a good use case for an SSD-based ZIL. I may get crazy and try adding two more SSDs to the cage below for an L2ARC.
- RAID Controller - SYBA SY-PEX40008 PCI Express Low Profile Ready SATA II (3.0Gb/s)
- I searched around and found this card works fine running plain JBOD (no hardware RAID, which is what ZFS wants), but I may look into an IBM M1015 for future builds.
- SSD Mounting Cage - ICY DOCK MB994SP-4S Full Metal 4 x 2.5" Hard Drive in 1 x 5.25" bay SAS / SATA 6Gb Hot Swap Backplane Cage
- This is the coolest thing I found. I needed to expand the case somewhat for the SSDs, hated the idea of loose-mounting them in an internal 5.25" bay, and was even considering taping them in place! Then I found this enclosure and love it. Great for adding SSDs to a full case.
- Additional NIC - Intel EXPI9402PT PRO/1000 PT Dual Port Server Adapter 10/ 100/ 1000Mbps PCI-Express 2 x RJ45
- I may play with link aggregation in the future. Also standard in my ESX builds.
- Hard Drives - 6 x Western Digital Red WD30EFRX 3TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive
- First use of these WD Red 3TB drives. I have used 2TB Reds and RE drives before, but needed more space than 2TB drives could provide in this case, and the 3TB REs are currently about $300 each versus the $169 I paid for the Reds. These are going into a RAIDZ2 array, and I just ordered an extra drive that is going into the case as a standby. I'll probably go ahead and order a replacement to keep on the shelf as well. You know, belt AND suspenders mode.
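For the link aggregation I mentioned, a hypothetical FreeBSD-style configuration across the two Intel ports might look something like the lines below (interface names `em0`/`em1` and the address are assumptions; on FreeNAS this would normally be set up through the GUI, and the switch has to support LACP):

```shell
# Hypothetical /etc/rc.conf lines for LACP aggregation of the
# two PRO/1000 PT ports (em0/em1 assumed; adjust to your system).
ifconfig_em0="up"
ifconfig_em1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10/24"
```

Worth remembering that LACP balances per-flow, so a single NFS client connection still tops out at one link's worth of bandwidth.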
The build went well. Cable runs worked out and there were no power issues. IPMI just worked, and I've never hooked up a keyboard or monitor. The fans seem to be running at 100%; I think they are 3-pin versus 4-pin or something. Not a big deal, because this is going into a server room where I already have a noise issue. I tried a few things to stress the system. I usually run Steve Gibson's SpinRite on my drives before putting them into production; it forces the drive to read/write each sector and spare out any that fail before production data ends up on one. I couldn't do that this time because SpinRite has a problem with Advanced Format drives. Instead, I ran DBAN on them. I figure that 7 passes would cause the drive to at least touch all active sectors, and that would have to be good enough.
FreeNAS 8.3.0 installed with no problems. I set up a volume and an NFS share and started copying data from my laptop over a single Gigabit Ethernet connection. I have a Lenovo T520 with two 250GB SSDs and one 1TB HDD. The first test was copying a VM from my HDD to the NFS share, and I got about 300Mbps as measured on the Reporting tab in FreeNAS. I noticed that the HDD light was solid and wondered if I was maxing out my HDD, so I copied a VM and some movies to one of the SSDs and started two simultaneous transfers, one from each drive. This time I pushed up to about 500Mbps. Thinking I could do better, I set up three simultaneous transfers, one from each of the three drives in my laptop, and managed about 850Mbps. Here is a video I took from my laptop (captured with VLC) and posted to YouTube.
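For anyone curious, the pool layout I set up corresponds roughly to this on the command line (just a sketch; FreeNAS builds this through the GUI, and the device names here are examples, not my actual ones):

```shell
# Sketch of the pool layout: 6 x 3TB Reds in RAIDZ2, the in-case
# standby as a hot spare, and the two 840 Pros as a mirrored SLOG.
# Device names are placeholders; check your own with `camcontrol devlist`.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
zpool add tank spare ada6
zpool add tank log mirror da0 da1

# Verify the layout:
zpool status tank
```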
Is this at least an "in the ballpark" measurement of nearly maxing out the single 1000Mbps Ethernet connection?
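My own back-of-the-envelope math says yes (the ~94% payload-efficiency figure is a rough assumption for TCP over Ethernet with a standard 1500-byte MTU, not something I measured):

```python
# Back-of-the-envelope check of 850 Mbps against a single gigabit link.
# PAYLOAD_EFFICIENCY is an approximation for TCP/IP + Ethernet framing
# overhead at a 1500-byte MTU.

LINK_MBPS = 1000.0
PAYLOAD_EFFICIENCY = 0.94

ceiling_mbps = LINK_MBPS * PAYLOAD_EFFICIENCY   # ~940 Mbps usable
measured_mbps = 850.0

print(f"practical ceiling: {ceiling_mbps:.0f} Mbps ({ceiling_mbps / 8:.0f} MB/s)")
print(f"measured: {measured_mbps:.0f} Mbps ({measured_mbps / 8:.0f} MB/s), "
      f"{measured_mbps / ceiling_mbps:.0%} of the ceiling")
```

So 850 Mbps is roughly 90% of what a single gigabit link can realistically carry, which feels close enough to call it maxed out.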
Any comments on my build?
Rick