Slower performance in TrueNAS vs FreeNAS

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
Hi all,

Just wondering if this is a known issue or if there is a setting I am missing somewhere? I just spun up a new TrueNAS host with 10k SAS drives, and the rest of the settings are like for like with my FreeNAS box (11.8u2, 6 x 6 WD Red EFRX drives). Running the same dd tests on both, TrueNAS was noticeably slower. I downgraded the same VM to FreeNAS 11.8u2 and performance was better than on the original FreeNAS host.
  • Both have write cache enabled
  • Sync = disabled
  • No SLOG
  • 64GB RAM
  • 4 vCPU
Just wanted to see if I am running into something known with TrueNAS.

Cheers
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
11.8 U2.. you are ahead of your time...

Are you running TN BETA2.1?

It's worth documenting the detailed setup, virtualization environment, test methodology and results..... can you tell whether it's CPU limits or disk limits that you are hitting?
 

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
Hi,

Yup BETA2.1 is the one.

Setup one, on the Dell R710;
  • Host CPU: dual X5670s
  • 240GB DDR3 RAM
  • 6 x 6 WD Red EFRX drives in mirrors (handled by FreeNAS)
  • 1 x 1TB NVMe that FreeNAS is installed on, along with AD and vCenter
FreeNAS VM config on R710;
  • 4 vCPU
  • 64GB RAM
  • Single mirrored storage pool of the WD drives, sync disabled
  • No other tuning done
Dell R730;
  • Host CPU: dual E5-2640v3s
  • 256GB DDR4 RAM
  • 8 x 10k Seagate SAS drives
  • 2 x 200GB SAS SSDs
  • USB stick used as a temp datastore for TrueNAS (now FreeNAS, after downgrading and testing)
    Noticed with TrueNAS I had to manually enable write cache on the disks
TrueNAS / FreeNAS VM Config;
  • 4 vCPU
  • 64GB ram
  • Mirrored vdev storage pool
  • no other tuning
Memory shares set to high on both VMs across both hosts. Tests were done using dd directly on each VM (to reiterate, these are virtualized FreeNAS / TrueNAS instances).
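For anyone wanting to reproduce this, a typical dd sequential-throughput test looks something like the following. The path and sizes are illustrative, not the exact commands from my runs; also note that /dev/zero data is highly compressible, so with ZFS compression enabled the write figures can be inflated.

```shell
# Point TESTFILE at a file on the pool's dataset, e.g. /mnt/tank/ddtest
TESTFILE=/tmp/ddtest

# Sequential write: 1 MiB blocks (GNU dd syntax; FreeBSD's dd uses bs=1m).
# Use a count well beyond the VM's RAM for a sustained number.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256

# Sequential read of the same file; the ARC will serve much of this from
# RAM unless the file is larger than the cache.
dd if="$TESTFILE" of=/dev/null bs=1M

rm "$TESTFILE"
```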

More of my testing and commentary can be found in this thread; https://www.ixsystems.com/community/threads/building-new-host-after-some-opinions.87220/

Feel free to ask for any more information and I'd be happy to get it for you.

Thanks!
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Key point I'm seeing above: {FreeNAS VM}

Run it on iron and see what you get. Running virtualized in general gives false/inconsistent performance numbers due to the number of layers between hardware and the VM: resource contention, bottlenecks, etc/etc/etc/etc...

I've been doing a **LOT** of performance testing and comparisons across a number of factors, only to toss my results when I try them again and the numbers are significantly different due to outside influences, or when I realize my test window was too small to capture a sustained load.

For example, the box I am testing has a 3008-IR controller onboard, and I have a 3108 controller card that has cache and can do all kinds of RAID. Tried the CacheCade function and compared numbers, only to see numbers that didn't line up; ran the test again and got nearly double the performance. Then I'm doing comparisons between CentOS 8 as a SAN using targetcli, comparing block vs file storage; Windows Server 2019 with file storage (since it doesn't do block); testing with VMs under VMware with a VMDK on a LUN from Free/TrueNAS, vs at the host as a generic Windows Server 2019 with iSCSI LUNs, vs running the same test locally on the box running Windows... so many numbers; so much variation.

Also noted that your older box has an NVMe stick vs a USB stick on the other box. Not even a close comparison on that aspect for running the OS. Also consider that the more vdevs you have, the more IOPS you get: a single mirrored vdev will perform about as fast as a single disk, while a set of 3 mirrored pairs will perform (roughly) as fast as 3 disks.
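To put the vdev point concretely, here's a sketch of the two layouts (device names are placeholders; don't run these against disks holding data):

```shell
# One mirror vdev: pool throughput/IOPS ~ one disk
zpool create tank1 mirror da0 da1

# Three mirror vdevs: ZFS stripes writes across all three,
# so throughput/IOPS scale to roughly three disks
zpool create tank2 mirror da0 da1 mirror da2 da3 mirror da4 da5
```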

Point blank, come back when you have an apples-to-apples comparison to complain about. My testing has shown that TrueNAS is actually faster than FreeNAS by a fairly significant amount; but I'm capping out my network (2 x 10Gb links) in my tests, so I can't follow that claim through all the testing; it just tops out and I can't squeeze more out of it, at least with the same tool/profiles (IOMeter), as there isn't a version of it that I know of that can run natively on FreeBSD.
 

Shankage

Explorer
Joined
Jun 21, 2017
Messages
79
Unfortunately for my use case I can't run it on bare metal; however, I have now converted the old host to bare-metal FreeNAS for pure storage.
As it is old vs new hardware I cannot make all the tests completely like for like, for the simple reason that the new box is newer, more powerful hardware. All tests were done on the VMs directly with dd, and the closest like-for-like test was downgrading the newer host's VM to FreeNAS, performing the exact same tests on it, and getting a better result.

Also the newer host is running more mirrored vdevs than the original.

Just to note, this post in no way was a complaint letter of sorts, I was just seeking opinions to see if it was a known issue or if anyone had faced something similar.
 