Read performance decrease when creating zpool through UI


dudemcbacon

Cadet
Joined
Nov 24, 2018
Messages
2
Hello all, I tried searching the forums but couldn't find anything related to this particular issue, so I'm hoping someone here has some insight. I've noticed that when I create a zpool via the FreeNAS UI, I see much worse read performance than when I create it from the command line with the zpool command. The pool is two raidz vdevs (4 disks each) striped together, and it's configured the same way through the UI as with the zpool command. The disks are eight 10GB WD drives.
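
For reference, test.sh looks roughly like this (reconstructed from the bash -x traces below; I'm only showing the commands the traces cover):
Code:
#!/bin/sh
# test.sh - simple sequential write/read test, reconstructed from the traces

# Write then read back a ~5 GB file with 512-byte blocks
rm /mnt/poop/file.out
dd if=/dev/zero of=/mnt/poop/file.out bs=512 count=10000000
dd if=/mnt/poop/file.out of=/dev/null bs=512

# Write then read back a ~41 GB file with 4 KiB blocks
rm /mnt/poop/file.out
dd if=/dev/zero of=/mnt/poop/file.out bs=4096 count=10000000
dd if=/mnt/poop/file.out of=/dev/null bs=4096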

Performance when configured via the FreeNAS UI:
Code:
root@freenas:~ # bash -x ./test.sh
+ rm /mnt/poop/file.out
rm: /mnt/poop/file.out: No such file or directory
+ dd if=/dev/zero of=/mnt/poop/file.out bs=512 count=10000000
10000000+0 records in
10000000+0 records out
5120000000 bytes transferred in 59.497709 secs (86053734 bytes/sec)
+ dd if=/mnt/poop/file.out of=/dev/null bs=512
10000000+0 records in
10000000+0 records out
5120000000 bytes transferred in 154.247230 secs (33193465 bytes/sec)
+ rm /mnt/poop/file.out
+ dd if=/dev/zero of=/mnt/poop/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 79.862158 secs (512883712 bytes/sec)
+ dd if=/mnt/poop/file.out of=/dev/null bs=4096
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 191.289554 secs (214125649 bytes/sec)


When configured with zpool (zpool create -f -m /mnt/poop poop raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3 raidz /dev/da4 /dev/da5 /dev/da6 /dev/da7):
Code:
root@freenas:~ # bash -x ./test.sh
+ rm /mnt/poop/file.out
+ dd if=/dev/zero of=/mnt/poop/file.out bs=512 count=10000000
10000000+0 records in
10000000+0 records out
5120000000 bytes transferred in 62.070406 secs (82486974 bytes/sec)
+ dd if=/mnt/poop/file.out of=/dev/null bs=512
10000000+0 records in
10000000+0 records out
5120000000 bytes transferred in 25.573671 secs (200205910 bytes/sec)
+ rm /mnt/poop/file.out
+ dd if=/dev/zero of=/mnt/poop/file.out bs=4096 count=10000000
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 90.895055 secs (450629574 bytes/sec)
+ dd if=/mnt/poop/file.out of=/dev/null bs=4096
10000000+0 records in
10000000+0 records out
40960000000 bytes transferred in 59.633328 secs (686864236 bytes/sec)


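Side by side, it's the reads where the two pools diverge (throughput copied from the dd output above):
Code:
                    UI-created pool    zpool-created pool
512B  seq. write       ~86 MB/s            ~82 MB/s
512B  seq. read        ~33 MB/s           ~200 MB/s
4KiB  seq. write      ~513 MB/s           ~451 MB/s
4KiB  seq. read       ~214 MB/s           ~687 MB/s
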
Here are my system specs:
OS Version: FreeNAS-11.2-RC2 (Build Date: Nov 15, 2018 20:21)
Processor: Intel(R) Core(TM) i5-2405S CPU @ 2.50GHz (4 cores)
Memory: 16 GiB

I also attached the output of zdb from both the FreeNAS-configured pool and the zpool-created pool. I don't know if those are helpful or not; they didn't help me figure it out, but maybe there are some gurus here who know more than I do. I'm not sure what other info to provide, but if you give me a command I'll run it!

Thanks for the help!

Attachments

  • zpool.txt (3.5 KB)
  • freenas.txt (3.8 KB)

dudemcbacon

Cadet
Joined
Nov 24, 2018
Messages
2
Figured it out. I did a diff between the output of 'zfs get all <pool_name>' on the FreeNAS-configured pool and the zpool-created pool. One of the only major differences was that lz4 compression is enabled by default on the FreeNAS-configured pool but not on the zpool-created pool. I recreated my pool with the zpool command, enabled lz4 compression, and was able to replicate the read performance issue seen on the FreeNAS-configured pool. Guess it's time to upgrade my CPU. :)
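
In case it helps anyone else, the check looked roughly like this (the output file names are just illustrative):
Code:
# Dump all properties from each pool and diff them
zfs get all poop > cli-pool.txt    # pool created with zpool create
zfs get all poop > ui-pool.txt     # pool recreated through the FreeNAS UI
diff cli-pool.txt ui-pool.txt
# -> compression: off (CLI) vs lz4 (UI default)

# Enabling lz4 on the CLI-created pool reproduces the slow reads
zfs set compression=lz4 poop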
 