Multiple vdevs, worse performance

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
I wanted better performance, so I took a single-vdev pool of 9x 8TB disks in RAIDZ2 and rebuilt it as a pool with two 5-disk 8TB RAIDZ2 vdevs. Throughput went from about 900 MB/sec to 500 MB/sec, so my performance was almost cut in half. Is this something that should have been expected?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Without additional information there is no way to help. Please check the forum rules on the level of detail that is needed for good answers.
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
Without additional information there is no way to help. Please check the forum rules on the level of detail that is needed for good answers.
I thought the question was pretty simple. One pool is a single vdev with 9x 8TB disks in RAIDZ2. I used the same disks, added one more, blew away that pool, and created a pool with two vdevs, each with 5x 8TB disks in RAIDZ2. My understanding is that more vdevs equal more performance; is that incorrect? My performance dropped about 40%, basically copying large 20+ GB files over SMB. My question was: is this to be expected?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I thought the question was pretty simple. One pool is a single vdev with 9x 8TB disks in RAIDZ2. I used the same disks, added one more, blew away that pool, and created a pool with two vdevs, each with 5x 8TB disks in RAIDZ2. My understanding is that more vdevs equal more performance; is that incorrect? My performance dropped about 40%, basically copying large 20+ GB files over SMB. My question was: is this to be expected?

No. I don’t think that’s the expected result.

Although you did go from 7 data disks to 6, so pure bandwidth is reduced, that pure bandwidth figure really only applies to sequential transfers on empty pools.
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
No. I don’t think that’s the expected result.

Although you did go from 7 data disks to 6, so pure bandwidth is reduced, that pure bandwidth figure really only applies to sequential transfers on empty pools.
I actually went from 9 disks to 10 disks. The 9-disk pool was a single RAIDZ2 vdev, and the 10-disk pool had two 5-disk vdevs, both RAIDZ2. I used the exact same disks, except I added one for the 10-disk pool.

Pool Test 1
Vdev1 = 9x 8TB RaidZ2

Pool Test 2
Vdev1 = 5x 8TB RaidZ2
Vdev2 = 5x 8TB RaidZ2
(Both Vdevs were created when the pool was created)
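
For reference, the equivalent command-line layouts would look roughly like this (device names da0-da9 are just placeholders, not my actual disks):

zpool create test1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8

zpool create test2 raidz2 da0 da1 da2 da3 da4 raidz2 da5 da6 da7 da8 da9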

I have already gone back to the 9-disk RAIDZ2, because why lose two disks for no performance gain?

I have about 30TB on the pool, so I have been copying data back and forth for about 3 days.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I thought the question was pretty simple.
The question you asked is indeed simple and was answered. My, perhaps wrong, assumption was that your actual goal was to get information on how to diagnose the issue and thereby, hopefully, get the desired performance.
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
The question you asked is indeed simple and was answered. My, perhaps wrong, assumption was that your actual goal was to get information on how to diagnose the issue and thereby, hopefully, get the desired performance.
If it was answered, that is not clear to me at all. Stux says it is not the expected performance, but then describes the vdevs incorrectly, so I have no idea whether he actually understood the question. That is why I laid out the vdev configuration in a way I thought would be clearer. If I try to read into his answer, then the 5-disk Z2 vdevs are actually much slower than I could have imagined, and my results would be expected, which is the opposite of what was said.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
My understanding is that more vdevs equal more performance; is that incorrect? My performance dropped about 40%, basically copying large 20+ GB files over SMB. My question was: is this to be expected?
Expected: No, as said.
The performance expectation was that IOPS would roughly double.
In terms of bandwidth, you went from 9-2=7 disks' worth of actual data (excluding parity) to 2*(5-2)=6 disks' worth, and you have doubled the parity calculation workload. But that does not align with a 40% drop.
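
As a rough sanity check on the streaming ceiling (assuming something like 180 MB/s per disk, which is an assumption, not a measurement):
9-wide RAIDZ2: 7 data disks x 180 MB/s ≈ 1260 MB/s
2x 5-wide RAIDZ2: 6 data disks x 180 MB/s ≈ 1080 MB/s
That is about a 14% reduction in theory, nowhere near 40%.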

Incidentally, your reply indicated that your actual metric is large files over (single-threaded) SMB. That's relevant information.
Now the question is (and this was asked already): What is your complete setup? CPU, controller, RAM, NIC, whatever.
Inquiring minds want to know whether there could be a bottleneck somewhere.
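
It would also help to take SMB and the network out of the equation with a local sequential read on the server itself; something like this, where the path is a placeholder for one of your existing large files (pick one that isn't already cached in ARC):

dd if=/mnt/tank/somebigfile of=/dev/null bs=1M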
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
Expected: No, as said.
The performance expectation was that IOPS would roughly double.
In terms of bandwidth, you went from 9-2=7 disks' worth of actual data (excluding parity) to 2*(5-2)=6 disks' worth, and you have doubled the parity calculation workload. But that does not align with a 40% drop.

Incidentally, your reply indicated that your actual metric is large files over (single-threaded) SMB. That's relevant information.
Now the question is (and this was asked already): What is your complete setup? CPU, controller, RAM, NIC, whatever.
Inquiring minds want to know whether there could be a bottleneck somewhere.
Oh. Thanks, I did not understand that he was referring to total data disks. Networking is a 10G Mellanox ConnectX-3. 32 GB DDR5 on a 7900X, under Hyper-V with 4 cores allocated. My CPU doesn't even breathe hard. I looked at the disk activity and they were all pretty consistent with each other, so I don't think any of them is causing a bottleneck. I did set up the 9-disk Z2 pool with a 512K record size, while on the dual 5-disk pool I left it at 128K, but I can't imagine that is the difference.
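
For what it's worth, the record size can be checked and changed with something like this (the dataset name tank/media is just an example, and a changed recordsize only applies to newly written files):

zfs get recordsize tank/media
zfs set recordsize=512K tank/media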
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
Oh, I have an LSI 9207e with 8 drives, 5 drives connected to motherboard SATA, and another 5 connected to an M.2 6x SATA adapter.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
That's 18 drives in total for a 10 drive pool. What's what?
And the"M.2 6xSATA adapter" is HIGHLY suspicious. If you've introduced that while changing the pool layout, look no further.
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
That's 18 drives in total for a 10 drive pool. What's what?
And the"M.2 6xSATA adapter" is HIGHLY suspicious. If you've introduced that while changing the pool layout, look no further.
Yes, there are 18 drives in that system, but 8 drives are in a different pool and pretty much sitting idle. The M.2 6x SATA adapter is fantastic. I have a couple of them running in different computers, and they are great for converting M.2 slots to SATA. M.2 slots are just PCIe x4 slots, so it is no different than any other PCIe-to-SATA card. They have about 4 times the bandwidth required to push 5 mechanical hard drives. I am not getting any errors on my drives. I also almost saturate the 10G network reading from the pool when it is configured as a 9-disk Z2, so it has to be the pool configuration. My guess is that for large files the 9-disk Z2 is just better at reading the data, especially with the large record size. I am actually backing up the data in the second pool right now so that I can add the drive I got for the dual 5x Z2 pool to the second pool, which is currently a 7-disk Z2 about to become an 8-disk Z2. If this new 8-disk pool is slow, then I will have to look at the new drive, but as far as I can tell it is faster than the other drives, which are mostly shucked WD drives.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
They have about 4 times the bandwidth required to push 5 mechanical hard drives.
The controllers on most of these 6-port M.2 to SATA are ASM1166, which are PCIe 3.0 x2 devices. So, you have 2GB/sec total, which is about 2x the speed you need for spinning rust, but about 60% of what you'd need if you were using SSDs. This assumes that the M.2 slot isn't limited by some other factor, like being connected via PCH or a PEX instead of straight to the CPU. Some M.2 slots routed through a PEX will drop down as low as x1 depending on other slot usage.
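
Rough numbers behind that (per-drive figures are typical specs, not measurements):
5 spinners x ~200 MB/s ≈ 1 GB/s, against ~2 GB/s for a PCIe 3.0 x2 link
6 SATA SSDs x ~550 MB/s ≈ 3.3 GB/s, of which a x2 link covers only about 60%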

And, yes, those cards are no different from any other "PCIe to SATA card", but an HBA is not just a "PCIe to SATA card". Despite allowing mostly direct access to the drives, an HBA actually takes a command from the TrueNAS HBA driver, figures out what it means, and sends it on to the drive, and then sends the result back. OTOH, the SATA driver in TrueNAS has to do everything when communicating to the drives behind an SATA controller. So, it comes down to the driver, and for the Intel AHCI hardware built into motherboards, the TrueNAS driver is robust and well-tested. For random SATA chips from companies who don't specialize in server hardware, the driver isn't as good.
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
The controllers on most of these 6-port M.2 to SATA are ASM1166, which are PCIe 3.0 x2 devices. So, you have 2GB/sec total, which is about 2x the speed you need for spinning rust, but about 60% of what you'd need if you were using SSDs. This assumes that the M.2 slot isn't limited by some other factor, like being connected via PCH or a PEX instead of straight to the CPU. Some M.2 slots routed through a PEX will drop down as low as x1 depending on other slot usage.

And, yes, those cards are no different from any other "PCIe to SATA card", but an HBA is not just a "PCIe to SATA card". Despite allowing mostly direct access to the drives, an HBA actually takes a command from the TrueNAS HBA driver, figures out what it means, and sends it on to the drive, and then sends the result back. OTOH, the SATA driver in TrueNAS has to do everything when communicating to the drives behind an SATA controller. So, it comes down to the driver, and for the Intel AHCI hardware built into motherboards, the TrueNAS driver is robust and well-tested. For random SATA chips from companies who don't specialize in server hardware, the driver isn't as good.
Oh, thanks for this. Yes, that is exactly what it is using, the ASM1166. You are right, it says PCIe 3.0 x2 on the product page. It is a 10Gtek-branded adapter. I probably wouldn't want to use more than 3-4 SSDs, but I never use it for that. My motherboard is an Asus Prime X670-P, so the M.2 slot should be running at x4. I am actually not sure if all the drives from my first pool are plugged into the M.2 adapter; I kind of doubt it, so not even all 5 ports would be active at the same time. My second pool is mostly full, so I only access it if I need to retrieve something from it, which isn't all that often.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You should know what is plugged where… because, if I understand "Hyper-V with 4 cores allocated" correctly, you have a virtualised setup with a non-recommended hypervisor, and if you're passing individual virtual drives rather than whole controllers, you're just setting yourself up for irrecoverable data loss.

Virtualisation issues notwithstanding, there are potential bottlenecks with the x2 ASM1166 in itself, and because it shares the upstream link to the CPU with the NIC and the PCH SATA controller. Ideally, you should have a (server) motherboard with at least two x8 electrical PCIe slots from the CPU, and internal and external drives on SAS HBAs.
Maybe a -8e8i HBA could help here?
 

tony95

Contributor
Joined
Jan 2, 2021
Messages
117
You should know what is plugged where… because, if I understand "Hyper-V with 4 cores allocated" correctly, you have a virtualised setup with a non-recommended hypervisor, and if you're passing individual virtual drives rather than whole controllers, you're just setting yourself up for irrecoverable data loss.

Virtualisation issues notwithstanding, there are potential bottlenecks with the x2 ASM1166 in itself, and because it shares the upstream link to the CPU with the NIC and the PCH SATA controller. Ideally, you should have a (server) motherboard with at least two x8 electrical PCIe slots from the CPU, and internal and external drives on SAS HBAs.
Maybe a -8e8i HBA could help here?
Oh, I think this issue has been gone over enough by now that we can all be sure there is no problem running TrueNAS in Hyper-V. A lot of people virtualize TrueNAS, and Hyper-V is definitely my preferred solution. There is no greater risk of data loss than on bare metal. I can take my drives out and put them into any instance of TrueNAS, whether virtualized or bare metal, and the data will be there. I pass through the physical drives, which is fine. I think the idea that Hyper-V is "non-recommended" is a little outdated. Because of this misconception I actually had to move an instance out to bare metal, just to find out that it was a bug in SMB for that version of TrueNAS, which was fixed later. I have moved my drives all over the place, and that's the beauty of TrueNAS: as long as you have the disks intact, the hardware is largely irrelevant.
 