Vdev Layout

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
I need some advice from the pros :smile:

I want to reinstall an "old" server, moving from Core to Scale.
The server has 10x 14 TB HDDs and 2x 120 GB SSDs.

At the moment I use the following layout:
2x RAIDZ2 vdevs for the data and one mirrored vdev for the installation.

I use it via NFS for a Proxmox host.
Some VMs need fast write performance for an SQL server.

I am not so happy with the overall I/O performance.

Which layout should I use for the best balance between I/O performance and tolerance of hard disk failures?

The server has hot-swap bays, and spare disks are always available.


Thanks for any advice!
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Striped mirrors. Especially since you have 10 pretty large disks. Expanding those RAIDZ2 vdevs has to be a huge pain; it will probably take days or even weeks, not to mention the huge load and performance hit while you're resilvering.
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Hello Whattteva,

I have only had one HDD die so far, and the resilver took almost two weeks...

You mean 5 vdevs, each a mirror of two disks?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Yeah. Here's an in-depth article about it if you're interested.

You were only replacing 1 HDD. Now imagine if you had to expand your pool capacity, which means resilvering all the drives in a vdev one by one. Imagine how long that will take: months, lol. And imagine how much load that puts on the remaining drives for that length of time. With striped mirrors, the most you ever have to resilver is 2 drives before you see the capacity expand. Also, the load on the rest of your pool is minimal, basically no more than a non-degraded pool.
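For reference, that layout and its expansion path look roughly like this from the shell (a sketch only: "tank" and the sdX device names are placeholders for your real pool and disks, and on SCALE you'd normally do all of this through the web UI):

```sh
# 5 two-way mirrors striped into one pool
zpool create tank \
    mirror sda sdb \
    mirror sdc sdd \
    mirror sde sdf \
    mirror sdg sdh \
    mirror sdi sdj

# Growing capacity later only means replacing one mirror's two disks
zpool set autoexpand=on tank
zpool replace tank sda sdk   # resilver #1: only this one vdev is busy
zpool replace tank sdb sdl   # resilver #2: extra space appears when done
```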
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I use it via NFS for a Proxmox host.
Some VMs need fast write performance for an SQL server.

Can you post the rest of your system specifications (CPU, motherboard, RAM, HBA) as well as the details on the SSDs (make and model)?

Some general resources to read are below - while they may make reference to VMware ESXi and/or iSCSI and block storage, your workload of "Proxmox using NFS for VM disks" is impacted by all of the same limitations and caveats.



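The short version of why those caveats carry over: NFS clients typically request synchronous writes for VM disks, and ZFS honors that according to the dataset's sync property, so every write has to reach stable storage before it is acknowledged. You can check what your VM dataset is set to (hypothetical dataset name):

```sh
zfs get sync tank/vms
```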
The details on the SSDs are relevant, as the low-latency write performance you're requesting for the SQL VMs will necessitate an SLOG device; there are some additional resources and threads on that below:


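Once suitable SSDs are in place, attaching them as a mirrored SLOG is a single command (assumed pool and device names; the SCALE UI exposes the same operation):

```sh
# Sync writes from the NFS clients will then land on the log mirror
zpool add tank log mirror nvme0n1 nvme1n1
```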
 

Janko

Dabbler
Joined
Nov 27, 2018
Messages
31
Sure, man:

- Intel Xeon W-2145
- Supermicro X11SRL-F
- 128 GB ECC 2933 RAM
- Intel X540 10 Gbit NIC
- Samsung 840 Pro SSDs (only for the system)

And thanks for your advice, it really helps.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
You will definitely want to add a pair of write-intensive SSDs as SLOG devices - the 840 PRO is not sufficient for this use case, but will do just fine in the boot device role.

Intel Optane and DC P-series cards are popular and deliver some of the best performance, but are not easily hot-swappable. There are "relatively fast" SAS options but they are similarly costly.

Assuming you had to add two additional 2.5" or PCIe add-in-card form factor SSDs, can you handle that with your current system chassis?
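Once they're installed, a quick sanity check that the SLOG is actually absorbing the sync writes is to watch the log vdev in the per-device I/O statistics while the VMs are busy (pool name assumed):

```sh
zpool iostat -v tank 5
```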
 