New setup and looking for an idiot's guide to suitable tests.

GForce2010

Cadet
Joined
Nov 9, 2023
Messages
4
I've had a search on the forum and the internet for this, but all the results I've found have been a little above my head.

I've been setting up a homelab over the past few months and I've just added a JBOD to set up a NAS. I have TrueNAS Core installed on my Dell R630 as a VM within Proxmox. I've passed through the PCIe HBA and set up a new ZFS pool that is working.

What I want to be able to do is run some tests to make sure I'm getting the expected performance. I've had poor performance using passthrough in the past, though I've set it up using IOMMU this time, so it should be better.

I am still very new to all this and trying to learn as I go. I've seen several posts saying I can use fio to test, but I'm struggling to find anything I can currently understand on how to get it installed and run the tests.

I am hoping that someone can give me a quick step-by-step on how to do this.

Thanks.
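For anyone landing here with the same question: fio is typically already available in the TrueNAS Core shell (check with `which fio`), so a minimal sequential test might look like the sketch below. The path and sizes are placeholders, not anything from this thread; on a real pool you would cd into `/mnt/<pool>/<dataset>` and use a test size of at least 2x the VM's RAM so the ARC cache can't absorb the whole run.

```shell
# /tmp is a placeholder so this runs anywhere; on TrueNAS use a dataset
# on the pool being tested, e.g. /mnt/<pool>/<dataset>.
cd /tmp

# Sequential write test. --size=64M keeps this a quick dry run; for a
# meaningful number use at least 2x the VM's RAM (e.g. --size=16G).
fio --name=seqwrite --rw=write --bs=1M --size=64M --ioengine=posixaio --end_fsync=1

# Sequential read of the file the write test left behind.
fio --name=seqwrite --rw=read --bs=1M --size=64M --ioengine=posixaio

# Clean up the test file (fio names it <jobname>.<jobnum>.<filenum>).
rm -f seqwrite.0.0
```

The `bw=` figure in fio's output is the throughput to compare against expectations.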
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I can't help with the tests, but some things can bite you on performance:
  • SMR disks don't play well with ZFS
  • Hardware RAID controllers don't play well with ZFS (ZFS was designed to be the RAID controller)
  • Network ports can make a difference, as Realtek and other lesser brands tend to have reduced performance
  • ZFS loves RAM for read caching, so running with the minimum hurts read performance
  • Too few CPUs: 1 core & 2 threads is probably not enough
So, if you want to share your disk makes & models, we can check that they are not SMR. (Or you may already know they are not SMR...) Then, let us know the make & model of the HBA so we can verify it's not a hardware RAID one. Last, did you pass through the network controller, or let Proxmox handle it?

Basically, the whole hardware.
 

GForce2010

Cadet
Joined
Nov 9, 2023
Messages
4
So, if you want to share your disk make & models
I'm using 6 Dell-branded HUS723030ALS640 3TB SAS 6Gbps drives.

Then, let us know the make & model of the HBA
I'm using a Dell PERC H200E PCIe 6Gbps dual-port SAS controller reflashed to IT mode.

did you pass through the network controller, or let Proxmox handle it?
Proxmox is handling that.

I've currently given the VM 2 cores and 8GB RAM based on some recommended specs I saw online.

As this is the first time I've set anything like this up I really don't know what kind of performance I should be seeing.

As a basic test I set up an SMB share and transferred a single large file over from my Win 10 PC on a gigabit LAN, getting a stable 60 MB/s transfer speed. This is a lot better than I was getting with a previous network share that used a USB 3 external HDD.
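For a sense of scale (editor's arithmetic, not a figure from the thread): gigabit Ethernet carries at most 125 MB/s of raw bytes, and TCP/IP plus SMB overhead usually leaves somewhere around 110 MB/s usable, so a stable 60 MB/s is well below the wire's ceiling.

```shell
# 1 Gbit/s link, 8 bits per byte -> raw byte ceiling in MB/s:
echo $(( 1000000000 / 8 / 1000000 ))    # prints 125
# Protocol overhead typically leaves roughly 110 MB/s of that usable,
# so at 60 MB/s the bottleneck is the storage or the VM, not the wire.
```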
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I'm using 6 Dell branded HUS723030ALS640 3TB SAS 6Gbps drives
...
Those should be CMR, not SMR, so good.

...
I'm using a Dell PERC H200E PCIE 6GBS DUAL SAS PORT RAID Controller reflashed to IT mode.
...
Perfect, IT mode is what we want. Just make sure the firmware version matches what the TrueNAS software expects; I don't have that handy.

...
Proxmox is handling that.
...
Your network speed may be less than ideal; I don't know. Perhaps others will be able to give some hints.

...
I've currently given the VM 2 cores and 8GB RAM based on some recommended specs I saw online.
...
That may be enough for simple, general-purpose storage. But don't count on BSD jails or nested VMs in TrueNAS Core, as 8 GBytes of RAM is the bare minimum.

...
As this is the first time I've set anything like this up I really don't know what kind of performance I should be seeing.

As a basic test I setup a SMB share and transferred a single large file over from my Win 10 PC on a gigabit LAN and had a stable 60MB's transfer speed. This is a lot better than I was getting with a previous network share that was using a USB3 external HDD.
A network speed of 60 MBytes/s is probably reasonable. You haven't said how your ZFS pool layout is configured.

Good luck
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
A network speed of 60 MBytes/s is probably reasonable. You haven't said how your ZFS pool layout is configured.

Good luck
You think so?
Writing a large file is basically a streaming workload, and even a single HDD should be able to saturate a gigabit connection no matter the pool layout.
Then again, I have no experience with virtualized TrueNAS; the overhead might explain it.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I too have limited virtualization experience, but it seems to me that having the hypervisor handle the network, both in and out, for the TrueNAS VM would impose a penalty. How big a penalty, speed-wise, I don't know.

Plus, you still haven't listed how your 6 disks are configured in the pool layout. That makes a difference.
 

GForce2010

Cadet
Joined
Nov 9, 2023
Messages
4
You don't specify how your ZFS pool layout is configured
My drives are in a RAIDZ2 configuration.

I found an old post that gave a couple of example commands here:

And then this post that says you should disable compression before running the above tests to get accurate readings.
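Pieced together, the procedure from those posts looks roughly like the sketch below. The pool/dataset names are placeholders, the `zfs` lines are left as comments so nothing is changed by accident, and the `dd` size here is tiny for a dry run; the thread's actual tests used `count=50k` (100 GiB) so the file dwarfs RAM and the ARC can't flatter the numbers.

```shell
# On the dataset under test, disable compression first; otherwise the
# stream of zeros from /dev/zero compresses away and inflates the result:
#   zfs set compression=off myTestPool/iocage    # placeholder names
#   ... run the tests ...
#   zfs set compression=lz4 myTestPool/iocage    # restore afterwards

TESTDIR=${TESTDIR:-/tmp}    # on TrueNAS: a directory under /mnt/<pool>/<dataset>

# Sequential write. count=50 writes only 100 MiB for a quick dry run;
# use count=50k (100 GiB) for a test that exceeds the VM's RAM.
dd if=/dev/zero of="$TESTDIR/tmp.dat" bs=2048k count=50

# Sequential read of the same file.
dd if="$TESTDIR/tmp.dat" of=/dev/null bs=2048k

rm "$TESTDIR/tmp.dat"
```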

The results of the write test:
Code:
root@truenas[/mnt/myTestPool/iocage]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 1465.673313 secs (73259287 bytes/sec)


The results of the read test:
Code:
root@truenas[/mnt/myTestPool/iocage]# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 234.001674 secs (458860745 bytes/sec)


If those tests are accurate, then it looks like I'm currently getting a local write speed of approx. 73 MB/s and a read speed of approx. 459 MB/s.
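The conversion from dd's raw bytes/second figures is just integer division (plain shell arithmetic, nothing TrueNAS-specific):

```shell
# dd reported bytes/second; divide by 1,000,000 for decimal MB/s:
echo $(( 73259287 / 1000000 ))     # write: prints 73
echo $(( 458860745 / 1000000 ))    # read:  prints 458
```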

This seems slow. Is it slow?

Thanks.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Well, the read speeds seem in the ballpark for 4 HDDs (the other 2 are parity). That is beyond what 1 Gbit/s Ethernet can do (about 120 MByte/s). Plus, read-ahead is likely quite limited because of the 8 GBytes of RAM assigned to the VM.
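As a back-of-the-envelope check (the per-disk figure is an assumption, not a measurement): a 6-wide RAIDZ2 streams data from 4 disks, and a 7200 rpm SAS HDD typically manages somewhere around 150 MB/s sequentially, which puts the best-case pool throughput near the observed read speed.

```shell
DISKS=6; PARITY=2
PER_DISK_MB=150    # assumed sequential speed of one 7200 rpm SAS HDD
echo $(( (DISKS - PARITY) * PER_DISK_MB ))   # prints 600 (MB/s, best case)
```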

As for the write speeds, I just don't know. Perhaps someone with more knowledge of this will respond.
 

GForce2010

Cadet
Joined
Nov 9, 2023
Messages
4
Just a quick update.

After doing a lot more testing, both within the TrueNAS VM and within Proxmox directly, I managed to trace the problem back to a failing drive. A quick email to the supplier and an overnight delivery of a replacement later, I now have write speeds in excess of 300 MB/s and read speeds in excess of 600 MB/s locally, and network transfers hold a stable 110 MB/s up and down.

Thanks for the help.
 