Full disclosure: I am virtualizing under ESXi with passthrough of LSI2008 cards in IT mode. I have five new Ultrastar He12 disks, which I'm migrating to via another LSI2008 card.
I have been testing a couple of things, including SCALE, and I noticed slow performance compared with Core on sequential writes, where I would normally expect memory to be the bottleneck. The results below use the same 5 disks in RAIDZ2, same-spec VMs with the same RAM (32GB), clean builds, no load:
SCALE RAMdisk write performance:
Code:
dd if=/dev/zero of=TestingSpeed bs=1G count=20 && rm TestingSpeed
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 10.0064 s, 2.1 GB/s
SCALE ZFS write performance:
Code:
dd if=/dev/zero of=TestingSpeed bs=1G count=20 && rm TestingSpeed
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 14.1847 s, 1.5 GB/s
Debian VM RAMdisk for Comparison:
Code:
dd if=/dev/zero of=TestingSpeed bs=1G count=20 && rm TestingSpeed
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 10.2462 s, 2.1 GB/s
Core RAMdisk write performance:
Code:
dd if=/dev/zero of=TestingSpeed bs=1G count=20 && rm TestingSpeed
20+0 records in
20+0 records out
21474836480 bytes transferred in 7.279682 secs (2949969040 bytes/sec)
Core ZFS write performance:
Code:
dd if=/dev/zero of=TestingSpeed bs=1G count=20 && rm TestingSpeed
20+0 records in
20+0 records out
21474836480 bytes transferred in 6.475183 secs (3316483142 bytes/sec)
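For anyone wanting to reproduce the RAM disk numbers, something like this should work as a target on each platform; the mount point and size here are illustrative, not copied from my runs:

Code:
# SCALE / Debian VM (Linux): tmpfs-backed RAM disk, sized to hold the 20 GiB test file
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=24g tmpfs /mnt/ramdisk
cd /mnt/ramdisk

# Core (FreeBSD): memory-backed filesystem via mdmfs
mkdir -p /mnt/ramdisk
mdmfs -s 24g md /mnt/ramdisk
cd /mnt/ramdisk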
I normally expect around 2.6 GB/s (e.g. 8.255009 secs, 2601431003 bytes/sec, on my existing pool), but I am unsure why SCALE's ZFS drops to 1.5 GB/s. I get the impression this is down to Debian/OpenZFS versus FreeBSD, but the Core performance is significantly better. Any ideas?
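If it helps narrow this down, I can post the pool and dataset properties from both builds, since a dd of /dev/zero is very sensitive to compression and sync settings; something like the following, where the pool/dataset names are just placeholders:

Code:
# dataset properties that most affect a sequential dd of /dev/zero
zfs get compression,recordsize,sync,atime tank/test
# pool-level sector alignment
zpool get ashift tank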
Thanks,