My Mini-ITX build completed

Status: Not open for further replies.
professorpolymath

Joined
Aug 18, 2012
Messages
14
There are quite a few threads discussing potential hardware choices for Mini-ITX builds. I thought some of you might like to see what combination of parts I settled on for my own home NAS project. Along with glamour shots. Component prices are Canadian $, before tax, from Aug 2012.

Case. Chenbro SR30169, 4-bay ($148.98) http://www.chenbro.com/corporatesite/products_detail.php?sku=195
Took some time to find a case for this project. I bought this based on reviews without having touched it, and am pleased to report it is very solid and looks good. The plastic front wouldn't suffer much abuse, but the rest of the case is heavy steel. The included power supply leaves something to be desired; I may replace it in the future with something more efficient. The plastic drive caddies are certainly not enterprise-grade, but they won't be shuffled very often. I like that the drive bays are arrayed horizontally.
chenbro.jpg chenbro front.jpg chenbro exposed.jpg

Mainboard. Intel DH77DF, Mini-ITX ($135.99) http://www.intel.com/content/www/us...p-motherboards/desktop-board-dh77df.html.html
This board has just about every kind of I/O currently in vogue. And an x16 PCI Express slot. And a nice BIOS. And, blessedly, no legacy ports (go to hell, parallel). My one wish is that it had two Ethernet ports.
intel above.jpg

CPU. Intel Celeron G540 ($46.99) http://ark.intel.com/products/53416/Intel-Celeron-Processor-G540-2M-Cache-2_50-GHz
This dual-core 2.5 GHz chip is low-power, runs cool, and supports VT-x. I love that it was under $50. The mainboard supports quad-core i7 and Xeon chips if I need more speed (and heat and power consumption).
Memory. Kingston 8G DDR3-1333 ($42.99)
Starting out with 8 GB (plus another 2 GB stick I scavenged). Cheap and simple to double this down the road to the board’s 16 GB maximum.
intel side.jpg

OS drive. OCZ Nocti 120GB mSATA ($89.99) http://www.ocztechnology.com/ocz-nocti-msata-ssd.html
This is a bit of a luxury; I’d intended to install the OS on a USB stick. But, this Nocti was on sale and my board has a slick little mSATA port. And look how adorable it is.
nocti.jpg

Storage. I'm reusing some 1 and 2 TB drives I have around for now. When drive prices come back down to earth, I will look at 3 or 4 TB disks.

Software. FreeNAS by itself is not very demanding, and the hardware in this little thing is quite capable, so I've decided to run VMware ESXi, with FreeNAS in a VM. Disks are passed through to FreeNAS using Raw Device Mapping (RDM), so storage behaviour and performance is very close to bare-metal FreeNAS. And, crucially, disks can be exported to another ZFS-aware system. Goodbye HFS+, ext3, NTFS!
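For the curious, the export/import round-trip is just two commands; a minimal sketch, with "tank" standing in for whatever the pool is actually named:

Code:
# On this box: cleanly detach the pool ("tank" is a placeholder name)
zpool export tank
# On any other ZFS-aware system, after moving the disks over:
zpool import          # scans the attached disks and lists importable pools
zpool import tank     # brings the pool and its datasets back online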
vmware.jpg
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Wow, so much nicer looking than my huge metal case. Nice tiny little package, something a girlfriend / wife would be more OK with having in the house.

If I were you I'd use that mSATA SSD as a cache or ZIL drive; since the OS is loaded into RAM at bootup, that fast little SSD is only used during boot. It's not even written to except for logs and config changes, I believe.
If you want, you could even slice it and give it a 2 GB ZIL and the rest as cache; a rough sketch is below. Just a thought though, awesome build!
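Roughly like this from the FreeNAS shell, assuming the SSD shows up as ada1 and the pool is called tank (both names are placeholders):

Code:
# Carve the SSD into a 2 GB slice for the log (ZIL) and the remainder for cache (L2ARC)
gpart create -s gpt ada1
gpart add -t freebsd-zfs -s 2G -l slog ada1
gpart add -t freebsd-zfs -l l2arc ada1
# Attach both slices to the pool
zpool add tank log gpt/slog
zpool add tank cache gpt/l2arc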

EDIT
ahh, you're running FreeNAS in a VM, so never mind the FreeNAS-specific tip then..
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
If I were you I'd use that mSATA SSD as a cache or ZIL drive; since the OS is loaded into RAM at bootup, that fast little SSD is only used during boot. It's not even written to except for logs and config changes, I believe.

True that FreeNAS doesn't need to be run from a fast disk. That's also true of VMware to an extent. But my intention here is to have other virtual machines (such as Windows Server) that will benefit from the fast primary storage. And the SSD makes both FreeNAS and ESXi much snappier and more pleasant to work on.
With respect to augmenting ZFS with SSD-based cache and log, I think there would be little practical benefit for my scenario: my file collection is >90% static, I'm not using RAIDZ, and there are only ever one or two clients connecting to shares.
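If I ever want to sanity-check that assumption, the ARC hit/miss counters are easy to read from the FreeNAS shell; a consistently high hit rate would mean an SSD cache has little to add:

Code:
# ZFS ARC statistics are exposed through sysctl on FreeBSD/FreeNAS
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses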

I hope to follow up with some benchmarks and power consumption data.
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
Hi professorpolymath,

That's a pretty interesting build you've done there - would you mind if it was used on FreeNAS.org as an example? We've just launched a page to display community-designed "FreeNAS Mini" devices at http://www.freenas.org/mini, and it seems like yours would fit right in. You can submit it using the email we've set up or just respond to me here.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Wow, very nice build! I would like to see something that can handle more than 4 drives, but that little mSATA drive is pretty sweet. Also a bummer it doesn't have two Ethernet ports; mine and TECK's boards have the two onboard Intel NICs.

Still, very awesome, thanks for posting!
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
Wow, very nice build! I would like to see something that can handle more than 4 drives, but that little mSATA drive is pretty sweet. Also a bummer it doesn't have two Ethernet ports; mine and TECK's boards have the two onboard Intel NICs.
I was keen to keep the case really compact. I'm counting on hard drives growing in capacity to accommodate my growing music collection.
I would love to have a second NIC to run pfSense, that other incredible FreeBSD-based appliance. I could slap a second NIC into the PCIe slot to make that happen.
 

ACGIT

Cadet
Joined
Sep 15, 2012
Messages
8
Is that power supply serviceable? Meaning, can you swap it for a commodity PSU?
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
Is that power supply serviceable? Meaning, can you swap it for a commodity PSU?

The power supply is a standard unit. It measures 15 cm × 14 cm × 8.5 cm (5.9" × 5.5" × 3.4").
The way it's mounted in the case is a little unusual though: with the typical "rear" of the power supply pointed up into a small cavity at the top of the case. A short pigtail connects the AC from the actual rear of the case to the PS power input. Heat from the case interior is exhausted into the little cavity and out the back of the case.
PS.jpg
PS label.jpg
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
Power consumption.
At idle, with four disks installed and spinning, the system consumes ~47 W as measured at the wall.
During typical disk and CPU activity, consumption goes up to ~55 W, depending on intensity.
Just after power on, consumption briefly spikes to about 110 W as the disks spin up.

I haven't discovered how to get VMware to let the disks spin down after an idle period. FreeNAS is set to do so on the RDM-attached disks, but the command seems to be ignored; one of the drawbacks to using ESXi at the moment. If I can figure this out, the idle power consumption should drop by ten watts or more.
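One experiment still on my list is issuing a standby by hand from inside the FreeNAS VM to see whether it survives the RDM layer (ada0 below is a placeholder device name):

Code:
# Ask the drive to spin down immediately
camcontrol standby ada0
# smartctl's -n standby flag makes it exit early rather than wake the drive,
# which conveniently reports whether the spin-down actually took effect
smartctl -n standby -i /dev/ada0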
Another minor drawback to the ESXi abstraction is the lack of sensor data on temperatures, voltages, and fan speeds. The current release doesn't retrieve these values from this motherboard's (Intel DH77DF) management controller.
 
Joined
Oct 6, 2012
Messages
5
Nice build! I was exploring something similar, with a VM router and a VM NAS all in one box.

What are the read and write speeds to the VM FreeNAS? Are they close to bare-metal FreeNAS?
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
What are the read and write speeds to the VM FreeNAS? Are they close to bare-metal FreeNAS?
I think read/write speeds are not limited by the virtualization, at least with the relatively slow consumer-grade drives I have used. Using the simple dd benchmarking technique suggested elsewhere on the forum, I tested a single drive and a stripe of four drives.
A single 2TB WD Green drive: read/write 114/107 MB/s
Stripe of four 750 & 500 GB Seagate drives: read/write 283/285 MB/s

I think these numbers represent the maximum physical speed of the underlying disks. Booting the system up with FreeNAS on a USB stick and importing the pools produces very similar benchmark results.

Code:
# Each dd pass moves 50k blocks x 2048k = 100 GiB; in each pair of results below, the first line is the write, the second the read.
freenas under vmware -- single 2tb WD green:
$ dd if=/dev/zero of=/mnt/twotera/tmp.dat bs=2048k count=50k && dd if=/mnt/twotera/tmp.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 996.761056 secs (107723091 bytes/sec)
107374182400 bytes transferred in 937.851037 secs (114489592 bytes/sec)
--
freenas under vmware -- stripe of 4 seagates (750 gb & 500 gb):
# dd if=/dev/zero of=/mnt/bort/tmp.dat bs=2048k count=50k && dd if=/mnt/bort/tmp.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 376.107703 secs (285487858 bytes/sec)
107374182400 bytes transferred in 379.043916 secs (283276364 bytes/sec)
 

SharkByte

Cadet
Joined
Sep 30, 2012
Messages
1
This might seem like a silly question, but what card (if any) is used as the RAID controller?
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
This might seem like a silly question, but what card (if any) is used as the RAID controller?
There's no RAID controller in this setup. The hard drives are connected to the motherboard's AHCI SATA ports. If you want to use RAID levels, that is done in software by ZFS + FreeNAS.
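For example, the rough equivalent of RAID 5 is a single ZFS command, which the FreeNAS volume manager issues for you behind the GUI (device names below are placeholders):

Code:
# A RAIDZ1 pool across four disks: one disk's worth of parity, similar to RAID 5
zpool create tank raidz ada0 ada1 ada2 ada3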
 

SonicPet07

Cadet
Joined
Jan 25, 2013
Messages
6
Software. FreeNAS by itself is not very demanding, and the hardware in this little thing is quite capable, so I've decided to run VMware ESXi, with FreeNAS in a VM. Disks are passed through to FreeNAS using Raw Device Mapping (RDM), so storage behaviour and performance is very close to bare-metal FreeNAS. And, crucially, disks can be exported to another ZFS-aware system. Goodbye HFS+, ext3, NTFS!

I'm doing a similar build but having trouble with Raw Device Mapping. Any chance of explaining how you set that up? Are there any specific requirements to do it properly? Any issues you've come across over time?
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
I'm doing a similar build but having trouble with Raw Device Mapping. Any chance of explaining how you set that up? Are there any specific requirements to do it properly? Any issues you've come across over time?

One quirk of Raw Device Mapping I've run into is that it doesn't like 3 TB hard drives on ESXi 5.0. Since adding a 3 TB drive I've had to revert to running FreeNAS natively and bypass VMware. 2 TB drives are no problem. This problem may be rectified in ESXi 5.1; however, I haven't been able to install that yet because it is missing a driver package for my Intel network adapter. More here:
http://forums.freenas.org/showthread.php?6900-FreeNAS-8-2-0-BETA-3-sees-3TB-ESXi-5-RDM-as-0MB

Here are some pretty clear instructions on using the RDM feature:
http://vm-help.com/esx40i/SATA_RDMs.php
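The short version, from the ESXi console, goes something like this (the disk identifier and datastore path are placeholders; see the guide for the exact steps):

Code:
# Find the identifier of the raw disk to pass through
ls /vmfs/devices/disks/
# Create a physical-mode RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/<disk-id> /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
# Then attach the new .vmdk to the FreeNAS VM as an existing disk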

Happily, I've had no problems importing ZFS drives created under RDM into other FreeNAS installations, virtual or bare metal. A great feature!
 

SonicPet07

Cadet
Joined
Jan 25, 2013
Messages
6
One quirk of Raw Device Mapping I've run into is that it doesn't like 3 TB hard drives on ESXi 5.0. Since adding a 3 TB drive I've had to revert to running FreeNAS natively and bypass VMware. 2 TB drives are no problem. This problem may be rectified in ESXi 5.1; however, I haven't been able to install that yet because it is missing a driver package for my Intel network adapter. More here:
http://forums.freenas.org/showthread.php?6900-FreeNAS-8-2-0-BETA-3-sees-3TB-ESXi-5-RDM-as-0MB

Here are some pretty clear instructions on using the RDM feature:
http://vm-help.com/esx40i/SATA_RDMs.php

Happily, I've had no problems importing ZFS drives created under RDM into other FreeNAS installations, virtual or bare metal. A great feature!

Thanks for the links. Got the system up and running pretty quickly. I don't have any 3 TB drives at the moment to test with, but as far as I can tell from the ESXi 5.1 documentation, they should be supported.

Question, though: having switched from virtualization to bare metal, have you noticed any performance differences? The only thing I've noticed is FreeNAS taking as much RAM as it can, even though there is currently nothing on the drives to cache. But then again, I've only been using the machine for about a day and am still configuring my settings.
 
professorpolymath

Joined
Aug 18, 2012
Messages
14
Question, though: having switched from virtualization to bare metal, have you noticed any performance differences? The only thing I've noticed is FreeNAS taking as much RAM as it can, even though there is currently nothing on the drives to cache. But then again, I've only been using the machine for about a day and am still configuring my settings.

I have noticed no performance difference between bare-metal FreeNAS and virtualized FreeNAS. I'm using low-power 5400 RPM disks, and the I/O speed of the disks themselves is the principal bottleneck for in-machine reads and writes. It would be interesting to test a stripe of SSDs in both scenarios; that might reveal a difference between bare metal and VM.
Ultimately, though, gigabit Ethernet is the bottleneck. A single disk can saturate an Ethernet link easily with a sequential read.
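To put numbers on that: gigabit Ethernet moves at most 1000 Mb/s ÷ 8 = 125 MB/s raw, and more like 110-118 MB/s once Ethernet framing and TCP/IP overhead are paid, so the ~114 MB/s sequential read from even a single drive is already at the ceiling.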
 