XCP-ng Shared Storage

CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
I've stood up an XCP-ng cluster using local storage on SATA SSDs. It occurred to me that I could move them to an N54L that I haven't found a use for yet.

Will the N54L CPU give me decent VM performance using NFS? I don't think I need to deal with iSCSI. This is just for a homelab, but I don't want to rip out the existing local storage if the performance isn't even going to be viable.

If I need to use something else, what should I be looking for?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
You're asking for a lot of work to answer what could be a simple question. You could make it much easier by providing the hardware details of your N54L according to the forum rules (particularly any optional upgrades and the disks you propose to use in it).
 

CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
Turns out it's academic as I realized that the N54L caddies don't support 2.5" drives. So now I need to figure out a different solution.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
N54L caddies don't support 2.5" drives.
There are adapters available that should make it possible (and those can be generic, meaning no proprietary approval is needed for them to work).

Either look for those or continue on your search for other options.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Will the N54L CPU give me decent VM performance using NFS? I don't think I need to deal with iSCSI. This is just for a homelab, but I don't want to rip out the existing local storage if the performance isn't even going to be viable.

It isn't going to perform well. If you have maybe half a dozen low-traffic VMs, maybe even a dozen, it might be able to survive, but in general block storage is very resource hungry. Please see


I personally wouldn't try it. It's not that it couldn't be made to work, but it'll be slow, it'll be under-resourced, etc.
 

CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
There are adapters available that should make it possible (and those can be generic, meaning no proprietary approval is needed for them to work).

Either look for those or continue on your search for other options.

Sorry, I should have clarified. I'm aware there are adapters to put a 2.5" drive in a 3.5" space, but I'm not aware of any offhand that work for generic hot-swap bays. That said, the whole reason I was looking at the N54L is to use what I already have and not make additional purchases.

Based on some testing I did, I think I'm going to stick to local storage.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I'm not aware of any offhand that work for generic hot swap bays.

IcyDock EZConvert MB882SP-1S-2B 2.5" to 3.5" Solid State Drive Adapter

Claims to be for SSDs but works fine for most 2.5" HDDs as long as they don't need a crapton of airflow.
 

CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
It isn't going to perform well. If you have maybe half a dozen low-traffic VMs, maybe even a dozen, it might be able to survive, but in general block storage is very resource hungry. Please see


I personally wouldn't try it. It's not that it couldn't be made to work, but it'll be slow, it'll be under-resourced, etc.

I knew iSCSI wouldn't be feasible based on your previous posts regarding it. I just wasn't sure about NFS.

Regarding your resource link, I thought using SSDs mostly invalidated the issue of fragmentation? I knew I needed mirror vdevs for the IOPS. I didn't realize pool performance dropped that much once you pass 50% capacity. Since most of mine is bulk WORM storage, I assume that's why I never really noticed it before.
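On the mirrors-for-IOPS point, the usual rule of thumb is that a pool's random-I/O ceiling scales with vdev count, not disk count. A quick sketch (the per-disk IOPS figures here are assumptions, not measurements):

```python
# Rule of thumb: random IOPS of a ZFS pool scale with the number of
# vdevs, since each vdev behaves roughly like one disk for random I/O.
# Per-disk figures below are assumed ballpark numbers.

DISK_IOPS = {"7200rpm_hdd": 150, "sata_ssd": 50_000}

def pool_random_iops(vdevs: int, per_disk_iops: int) -> int:
    """Approximate random-I/O ceiling: one disk's worth of IOPS per vdev."""
    return vdevs * per_disk_iops

# Six HDDs arranged as one RAIDZ2 vdev vs. three 2-way mirror vdevs:
raidz2  = pool_random_iops(1, DISK_IOPS["7200rpm_hdd"])  # one wide vdev
mirrors = pool_random_iops(3, DISK_IOPS["7200rpm_hdd"])  # three mirror vdevs

print(f"RAIDZ2:  ~{raidz2} IOPS")
print(f"Mirrors: ~{mirrors} IOPS")
```

Same six disks, roughly three times the random-I/O headroom from the mirror layout, which is why mirrors get recommended for VM storage.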

I ended up doing some testing with my existing NAS and ran into what I should have realized from the beginning: the network is going to be my biggest bottleneck. Even with the 2.5GbE NIC I have in one of the hosts, I'm still getting half the speed of local SATA. Even if I back the network storage with NVMe instead of SATA, the random performance won't match local SATA.
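The back-of-the-envelope numbers bear that out. A quick sketch (the efficiency factors are assumptions to account for protocol overhead; real results vary with workload):

```python
# Rough usable-throughput ceilings for the links discussed above.
# Efficiency factors are assumed, not measured.

def usable_mb_s(line_rate_gbps: float, efficiency: float = 0.9) -> float:
    """Convert a line rate in Gbit/s to an approximate usable MB/s."""
    return line_rate_gbps * 1000 / 8 * efficiency

sata3 = usable_mb_s(6.0, efficiency=0.8)  # SATA 6 Gb/s, 8b/10b encoding overhead
gbe25 = usable_mb_s(2.5)                  # 2.5GbE carrying NFS traffic
gbe1  = usable_mb_s(1.0)                  # plain gigabit, for comparison

print(f"SATA III: ~{sata3:.0f} MB/s")
print(f"2.5GbE:   ~{gbe25:.0f} MB/s")
print(f"1GbE:     ~{gbe1:.0f} MB/s")
```

With those assumptions, 2.5GbE tops out at a bit under half of a local SATA SSD on sequential transfers, before NFS adds any latency of its own.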

What did surprise me was how little CPU NFS used. I was originally thinking that would be my bottleneck.

TL;DR: Shared storage isn't worth it unless I want to replace everything I have. I'll stick with local storage for now and accept that migrations will take longer.
 

CJRoss

Contributor
Joined
Aug 7, 2017
Messages
139
IcyDock EZConvert MB882SP-1S-2B 2.5" to 3.5" Solid State Drive Adapter

Claims to be for SSDs but works fine for most 2.5" HDDs as long as they don't need a crapton of airflow.

Heh. Neat. I hadn't seen that one. I did run across a 2.5"-to-3.5" adapter that took two drives and did internal RAID, but even I'm not that masochistic. :D
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I knew iSCSI wouldn't be feasible based on your previous posts regarding it. I just wasn't sure about NFS.

NFS has a slightly different set of problems, but there's a lot of commonality. NFS has the advantage that you can get larger block sizes (better compression), but mid-file overwrites of large blocks still cause problems. In most other ways it's similar to iSCSI from the ZFS point of view, I think.
 