Proposed build


Martin Maisey

Dabbler
I’d really appreciate some advice on a server build I’m considering.

I'm currently running FreeNAS on a G1610T HP MicroServer Gen8 with 2 x 8GB ECC RAM. It's served me well, but I found it a bit of a pain expanding the pool recently because it only has four readily accessible drive bays (plus a fifth inside the box, occupied by the SLOG). I'm booting off a USB-attached SSD dangling off the back, which is a bit ugly, and there's no room to add new mirrored vdevs while keeping the existing ones.

The limited power also means it's not viable to run useful workloads on the FreeNAS box itself (bar some light Plex - I use RasPlex on the client, which Direct Plays most HD formats, so the most challenging transcoding I do is MPEG2 SD). So I currently run most of my workloads on a Proxmox server hosted on a repurposed old Sandy Bridge desktop with an i7-3770S processor.

I run a mixture of Linux and Windows 2008 R2 lab-style VMs on Proxmox (they could be more or less anything, depending on what I want to try out at the time), accessing the FreeNAS pool via NFS over GbE. I'd like to consolidate these onto a more expandable FreeNAS server, under iohyve or possibly the new FreeNAS 11 GUI-managed virtualisation system, so that they can access storage at much faster internal speeds.
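
For anyone unfamiliar with it, a minimal iohyve workflow looks roughly like the following - a sketch only, with the pool name (tank), NIC (em0), guest name and installer ISO all placeholders, and property names that may differ slightly between iohyve versions:

    # one-off setup: creates the ZFS datasets, loads kernel modules and sets up bridge/tap networking
    iohyve setup pool=tank kmod=1 net=em0

    # grab an installer ISO, create a guest backed by a 20G zvol, and give it RAM/CPUs
    iohyve fetch http://example.org/some-installer.iso
    iohyve create labvm 20G
    iohyve set labvm ram=4G cpu=2

    # boot from the ISO to install, then start the guest and attach to its serial console
    iohyve install labvm some-installer.iso
    iohyve start labvm
    iohyve console labvm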

I would also like to consolidate one additional workload if possible, and it's quite a demanding one. I run Adobe Lightroom in a Windows 10 Proxmox VM over RDP, which is surprisingly acceptable over GbE and means I can also access it (albeit more slowly) from my laptop or over VPN. At the moment that VM doesn't store its data in the FreeNAS pool, as dragging back RAWs interactively was just too painful over either NFS or iSCSI when I tried it. Instead, it uses a single local SATA3-attached Crucial CT960_M50 SSD in the Sandy Bridge Proxmox desktop. The raw specs of this drive are 500MB/s read and 80k random 4K read/write IOPS. I assume I'm losing some of that to virtualisation overhead, though I haven't benchmarked it, but performance is very acceptable for my purposes. My Lightroom catalog (a SQLite DB for metadata) is approx 10GB, with 262GB of RAWs. Normally I'm only working with a year's worth at a time - 84GB in 2016, probably significantly more this year as my wife is about to have a baby.

I’d expect the VMs to need no more than 24GB RAM total.

I'm looking at refurbished 2U 12-bay kit - http://www.bargainhardware.co.uk/cheap-chenbro-storage-quad-hex-core-2u-server-configure-to-order/ - with the following configuration:
  • 2 x E5620 quad-core / 8 thread
  • 8 x 8GB DDR3 ECC RAM
  • LSI SAS9201-16i (replacing the Adaptec 51245 SAS/SATA RAID card it comes with, which looks like it would be a very bad idea to keep). I'm hoping it should be possible to just replug the SFF-8087 connections to the backplane?
I know the 9201-16i costs more than a pair of 9211s, but I want to keep a PCIe slot free and it looks like there are only two. It may turn out I need a GPU for reasonable Lightroom performance (not clear - https://petapixel.com/2015/05/08/wh...ally-be-slower-with-the-new-gpu-acceleration/). Or Lightroom may possibly benefit from an L2ARC (no idea on that until I see the ARC stats...). If it does, I've got a HyperX Predator 240GB Gen2 PCIe x4 SSD I can use. Finally, it's possible that I may move to 10GbE in future if the cost of switches comes down, and that would need an adapter slot free.
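
For what it's worth, if the stats do end up pointing at an L2ARC, attaching the Predator is a one-liner - a sketch assuming the pool is called tank and the card shows up as ada4 (the actual device name depends on how FreeBSD enumerates it):

    # add the PCIe SSD as a cache (L2ARC) device; it can be dropped again later with "zpool remove"
    zpool add tank cache ada4
    # confirm the new layout
    zpool status tank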

I'll move across my current pool, which consists of two vdevs - one formed of 2 x 4TB WD Red drives, the other of 2 x 2TB Reds - plus an Intel 320 SLOG. I may not need the latter if I stop doing NFS, as the virtualisation will be done locally, but I may still need it if iohyve/bhyve generates a lot of synchronous write activity, which I imagine it might. I'll try Lightroom while watching arcstat.py/arc_summary.py, and see if the ARC plus the mechanical drive performance is acceptable, which I imagine it might well be.
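
A sketch of the sort of monitoring meant here, using the tools that ship with FreeNAS (script names vary slightly between versions):

    # rolling ARC hit/miss summary every 5 seconds while Lightroom is being used
    arcstat.py 5
    # fuller one-shot breakdown of ARC size, hit ratios and MRU/MFU balance
    arc_summary.py
    # rough view of synchronous write (ZIL) activity, to judge whether the SLOG is still earning its keep
    zilstat 5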

If it's not, I'll either try L2ARC, or potentially create a pool with a single-disk vdev for the existing Crucial SSD. Obviously that wouldn't be redundant, but I could snapshot-replicate it frequently to the main pool. I would think that should be sufficient for my resilience needs, given that I can always delay photo developing until I receive a new SSD if the current one dies, and redo the small amount of developing I've lost.
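
A minimal sketch of that replication idea, with illustrative names only (a single-disk pool called ssd holding a lightroom dataset, replicated into tank/backup; in practice a cron job or a FreeNAS replication task would drive the incremental sends):

    # snapshot the SSD dataset, then do a one-off full send to the redundant pool
    zfs snapshot ssd/lightroom@2017-06-01
    zfs send ssd/lightroom@2017-06-01 | zfs recv tank/backup/lightroom

    # on subsequent runs, send only the changes between the last two snapshots
    zfs snapshot ssd/lightroom@2017-06-02
    zfs send -i @2017-06-01 ssd/lightroom@2017-06-02 | zfs recv tank/backup/lightroom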

A few questions:
  • Do people think I'm mad considering building around the original E5620 Westmere Xeons? The way I see it, it may well be fine for a while, and if it's not, I've only spent £264 on the CPUs and the associated 64GB of RAM. I can probably get some of that back by eBaying them along with the motherboard and buying something more modern. I guess it will result in a substantially bigger electricity bill, but the purchase price does look like a bit of a bargain.
  • If I decide to go with iohyve, is it likely to be viable long term, or dropped suddenly like VirtualBox was (yes, I got burnt by that upgrading recently, and had to roll back)? Or should I wait for a FreeNAS 11 stable release?
  • Does it sound like it will be reasonable to run Lightroom over ZFS? Does anyone have experience of successfully doing this?
  • Should I be considering RAID-Z2 or Z3 instead of striped mirrors, particularly as I move to 4TB or even 8TB drives? (A quick sketch of the two layouts is at the end of this post.)
Any other random comments also welcome!
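
On that last question, a minimal sketch of the two layouts at pool-creation time, purely for comparison (six hypothetical 4TB disks da0-da5; none of this is the actual build):

    # stripe of mirrors: three 2-way mirror vdevs, ~12TB usable, best random I/O,
    # and easy to grow later by adding another mirror pair
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5

    # RAID-Z2 alternative: one 6-disk vdev, ~16TB usable, survives any two disk failures,
    # but the vdev can't be widened after creation
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5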
 
Matt

1. Yes. Well, maybe not mad, but underpowered for as many VMs as you plan on running. That chip - and everything which surrounds it - was released in 2010. A lot has happened in seven years.

2. I don't know which VM world will come to pass or how long it will last. That said, I'm very comfortable with where FreeNAS 11 is headed. Today, starting from scratch, I'd go with FreeNAS 11 instead of 9.10.

3a. Our entire image archive (4TB, give or take) lives on FreeNAS/ZFS. We have two or three concurrent Lightroom users as well as half a dozen designers hitting the server (in signature below) with Photoshop and InDesign plus regular users accessing their boring office documents. Performance is pretty sweet on fairly old hardware.

3b. I can't imagine running Lightroom in a VM. Or any graphics-intensive program for that matter. It just seems painful. But, if you're doing it now and are happy, you'll probably be just as happy running your Lightroom VM under FreeNAS.

4. We use a stripe of mirrors for performance. For a single user, Z2 might be a more economical choice and I'm not sure you'd notice much of a performance penalty. I'm not sold on the necessity of Z3 yet.

Cheers,
Matt
 

danb35

Hall of Famer
If I decide to go with iohyve, is it likely to be viable long term, or dropped suddenly like VirtualBox was (yes, I got burnt by that upgrading recently, and had to roll back)?
I would recommend waiting for a stable 11 release, but that will likely be within a week or two, so that isn't a big deal. I think you'll see much better official support for iohyve, though--it's a pretty core FreeBSD thing, while getting VirtualBox to run on FreeBSD at all was pretty hackish.
 

Martin Maisey

Dabbler
1. Yes. Well, maybe not mad, but underpowered for as many VMs as you plan on running. That chip - and everything which surrounds it - was released in 2010. A lot has happened in seven years.

Although you're clearly getting some good use out of similar vintage kit ;). I'm unlikely to have that many VMs very active at the same time, and there will be 8 cores/16 threads to play with. If I'm doing lab-type work I won't be developing in Lightroom, and I'll cut my cloth accordingly. Agreed, modern kit would be a lot faster and more energy-efficient. But ECC memory in particular seems really expensive bought new for modern Xeons, and there doesn't seem to be a huge amount of refurb kit from reputable places newer than this (as far as I can see, anyway - do feel free to point me at any good sources!). I was looking at Xeon-D for the energy efficiency, but it looked to be well north of £1k to get the motherboard/processor and 64GB new.

3a. Our entire image archive (4TB, give or take) lives on FreeNAS/ZFS. We have two or three concurrent Lightroom users as well as half a dozen designers hitting the server (in signature below) with Photoshop and InDesign plus regular users accessing their boring office documents. Performance is pretty sweet on fairly old hardware.

That's really good to know. Do your Lightroom users have 10GbE to the desktop, or are they making do with 1GbE?

Also, are you using NFS/CIFS/iSCSI?

3b. I can't imagine running Lightroom in a VM. Or any graphics-intensive program for that matter. It just seems painful. But, if you're doing it now and are happy, you'll probably be just as happy running your Lightroom VM under FreeNAS.

It really is surprisingly usable - the virtualisation penalty is a lot lower than it used to be. Like I say, storage speed seemed to make by far the biggest difference, for me at least - the change was startling going from networked to local storage, suggesting CPU/graphics wasn't the bottleneck. My current VM-on-local-SSD setup feels snappier than running on my Mac Mini desktop with a newer Ivy Bridge i7/Iris 5000 and a Fusion Drive.

I am, TBH, slightly unsure about running it under Westmere, as the single-core speed is a way off my Sandy Bridge i7, but I'm tempted to give it a go and see what it feels like. It's possible that LR is making good use of the graphics cores in my current Proxmox desktop machine, but they're only the integrated Iris 4000, so pretty weedy. I can always do PCI passthrough of a proper graphics card if that's an issue (hopefully - it looks like it will work with iohyve/bhyve anyway...). The only bits I find slow with LR are big imports and exports, and I suspect (I should really check) that those are multithreaded workloads, so having twice the cores - even if each is a bit slower - does appeal if that's the case.

I do like the manageability aspects of having everything in a VM - being able to snapshot it and move it around different hardware easily, and so on.

In the worst case, I can always carry on doing what I'm doing now for LR.
 

Martin Maisey

Dabbler
I would recommend waiting for a stable 11 release, but that will likely be within a week or two, so that isn't a big deal. I think you'll see much better official support for iohyve, though--it's a pretty core FreeBSD thing, while getting VirtualBox to run on FreeBSD at all was pretty hackish.

I hadn't realised it was that imminent. I'll definitely start with it and play about with some old, nearly knackered WD Green disks I have - though I probably won't migrate my real pool until the stable release has had a chance to settle down for a week or two. It would be nice to have a proper UI and to be using the 'official' virtualisation route for FreeNAS.

VirtualBox was indeed more than a bit hacky, so my own fault really. Still caught me by surprise on a minor update though (I know, I should have read the forums!).
 

danb35

Hall of Famer
(I know, I should have read the forums!).
Maybe, but iX needs to learn not to break significant things that lots of people use in what are supposed to be minor updates, without so much as a word in the changelog. I like the product, but between things like this and the FN10 fiasco, I'm losing a lot of faith in the company.
 

Stux

MVP
I can always do PCI passthrough of a proper graphics

Well, not yet you can't. It is being worked on though.

I have a number of Westmere systems still in production too. The chip is a power hog.
 

Martin Maisey

Dabbler
Well, not yet you can't. It is being worked on though.

I'd only had a brief look, but from https://wiki.freebsd.org/bhyve/pci_passthru and https://github.com/pr1ntf/iohyve/wiki/USB-3.0-PCI-Controller-Pass-through it seemed as if the underlying capabilities were there. I've never done this stuff before - is passing through a graphics card different from other PCI devices? Or did you mean that the FreeNAS UI doesn't support it yet?
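
From the linked bhyve wiki, the underlying mechanics look roughly like this - a sketch only, with a made-up PCI address (2/0/0) and VM name, and with the caveat from above that GPUs specifically are the part that doesn't work yet:

    # /boot/loader.conf - detach the device from the host at boot and reserve it for bhyve
    pptdevs="2/0/0"

    # hand the reserved device to a guest; -S wires guest memory, which passthru requires
    bhyve -S -A -H -c 2 -m 4G \
        -s 0,hostbridge -s 3,ahci-hd,/dev/zvol/tank/lrvm-disk0 \
        -s 6,passthru,2/0/0 -s 31,lpc -l com1,stdio lrvm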

I'll probably run up the 11 RC in a VM later for a play - I think nested virtualisation is enabled on my Proxmox box. One of the things I'm interested in is whether it's possible to mix and match VMs managed via the FreeNAS UI with VMs managed by iohyve - I'm guessing the latter will be more feature-rich at any given point in time. I'm also curious whether it's possible to migrate an iohyve VM to being UI-managed later - I'm not expecting that to be a point-and-click thing; it will probably involve some CLI-fu, I imagine.
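
As an aside, the nested-virtualisation assumption is quick to confirm on the Proxmox host before spending time on it (Intel host assumed; the module is kvm_amd on AMD, and the test guest's CPU type needs to be set to "host" so VT-x is exposed to it):

    # "Y" (or "1" on newer kernels) means KVM will expose VT-x to guests such as a test FreeNAS VM
    cat /sys/module/kvm_intel/parameters/nested
    # a non-zero count means the host CPU itself advertises VT-x
    grep -c vmx /proc/cpuinfo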
 
Matt

you're clearly getting some good use out of similar vintage kit

For file services, FreeNAS runs pretty darn well on anything supported and stable no matter how little CPU is available. It is really only when you start hosting other services (Plex for most people) that the level of hardware required starts curving upward.

Also, are you using NFS/CIFS/iSCSI?

Pretty much every file transaction involves both NFS and CIFS.

We have an odd setup that is an abomination of history and design. Clients connect to a Samba server running inside a XenServer VM. The VM's virtual drives live on SSDs in the FreeNAS server, served over NFS. That Samba server inside the VM then mounts the data from FreeNAS, also over NFS (from a mirrored stripe of conventional drives), and shares it out over CIFS.
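
A rough sketch of the shape of that layering, with made-up names (the real setup clearly has more to it):

    # inside the Samba VM: mount the FreeNAS dataset over NFS...
    mount -t nfs freenas.example.lan:/mnt/tank/archive /srv/archive
    # ...then re-export it to the desktop clients over CIFS with an smb.conf share stanza like:
    #   [archive]
    #       path = /srv/archive
    #       read only = no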

One of these days, I'll flatten the infrastructure but, for now, it works really well.

Cheers,
Matt
 