I would like to share my feedback on FreeNAS and NFS. I currently run several FreeNAS systems, mainly used to store data from astrophysical simulations. I am going to focus on the one with the following specifications:
FREENAS system:
- Motherboard: Supermicro X10DRi-T (http://www.supermicro.com/products/motherboard/xeon/c600/x10dri-t.cfm)
- RAM: 128GB (reported as 130941MB)
- Hard disks: 30 x 6TB (SATA III, 6Gb/s)
- Option tested: 1 x SSD (Kingston HyperX Savage 120GB) to store the SLOG
- FreeNAS: FreeNAS-9.3-STABLE-201511040813
- LSI cards: 2 options tested
  - LSI 2308 MPT-Fusion (HBA)
  - LSI 2208 MegaRAID (JBOD mode)
- Network card: Intel Ethernet Controller 10-Gigabit X540-AT2
I ran several benchmarks on 4 different configurations, varying the LSI controller (pure HBA vs. RAID card with memory cache) and the presence of a SLOG:
- C1: LSI 2308 MPT-Fusion, pure HBA card, WITHOUT SLOG
- C2: LSI 2308 MPT-Fusion, pure HBA card, WITH SLOG
- C3: LSI 2208 MegaRAID (1GB cache memory), JBOD mode, WITHOUT SLOG
- C4: LSI 2208 MegaRAID (1GB cache memory), JBOD mode, WITH SLOG
Configurations C2 and C4 use an extra SSD to hold the SLOG.
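For reference, attaching the SSD as a log device is a single zpool command. A minimal sketch, assuming the pool is called "tank" and the SSD shows up as ada30 (both names are just examples, not my actual setup):

  zpool add tank log ada30

After that, zpool status tank lists the SSD under a separate "logs" section.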
The LSI 2208 card is configured in JBOD mode, meaning FreeNAS itself manages the RAID/filesystem. Note also that this card has 1GB of cache memory.
All tests were run from a single Linux machine (Dell, 20-core i7, 64GB RAM, Intel 10-Gigabit X540-AT2). During the tests only this machine mounted an NFS share from the FreeNAS box, so the full network bandwidth was available to the test.
My tests focus mainly on NFS write speed from the Linux client to the FreeNAS system.
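For the record, the client side is a plain NFS mount, something like the following (the server name and export path are examples, not my exact setup):

  mount -t nfs freenas:/mnt/tank/data /mnt/data

All timings below were taken inside that mount point.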
TEST #1: bandwidth test using the dd command to write from the Linux client to FreeNAS over NFS
- dd if=/dev/zero of=qq bs=64M count=500 (34GB transferred from Linux to Freenas)
C1 : 723 MB/s -> yes, that really is 723 megabytes per second, very fast :)
C2 : 564 MB/s
C3 : 874 MB/s
C4 : 908 MB/s
Overall throughput is quite good whatever the configuration.
The best performance comes from C4: at 908 MB/s we are close to the maximum speed of 10Gb Ethernet (~1250 MB/s theoretical)! With this configuration there is almost no penalty for writing such a big file to the FreeNAS zpool through NFS.
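One caveat on dd: without a sync flag it mostly measures buffered, asynchronous throughput. To force the final flush to storage to be included in the timing, GNU dd accepts conv=fdatasync, e.g. (I did not use this flag for the numbers above):

  dd if=/dev/zero of=qq bs=64M count=500 conv=fdatasync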
- dd if=/dev/zero of=qq bs=6M count=500 ( 3.1GB transferred from Linux to Freenas)
C1 : 594 MB/s
C2 : 507 MB/s
C3 : 722 MB/s
C4 : 605 MB/s
This time we transfer a smaller amount of data, 3.1GB, via NFS. Overall performance is still good.
In both tests, with the pure HBA card (C1 vs C2), we notice that the transfer is faster without the SLOG, presumably because these large sequential writes gain little from it.
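This makes sense if the big sequential dd writes are mostly asynchronous and therefore barely touch the SLOG. Whether a dataset issues synchronous writes at all is governed by its sync property, which can be inspected with (dataset name is an example):

  zfs get sync tank/data

sync=standard is the default; the development workloads in TEST #2 below are where the synchronous path really gets exercised.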
TEST #2: development environment
This test is very important because it reveals some non-obvious behaviours of FreeNAS+NFS in specific situations.
I am currently working on a fairly big C++ project, using Qt5, OpenGL, and a lot of source files and libraries. Depending on the FreeNAS configuration (see C1, C2, C3 and C4), the time to complete a given development task varies considerably.
All of the following tests were run several times from a Linux machine with an NFS share mounted from the FreeNAS system (C1, C2, C3 or C4).
- time svn co http://svn.lam.fr/repos/glnemo2/trunk
C1 : 212 sec !!!
C2 : 23 sec
C3 : 36 sec
C4 : 2 sec
As you can see, the C1 configuration took 212 seconds to complete a simple svn co command! I suspect that svn performs a lot of small updates on files (tons of syncs), which drastically increases the time. This bad behaviour comes from the lack of any cache in the C1 configuration (pure HBA and no SLOG): the ZIL has to be written to the data disks many times during the transaction. zilstat indeed reports a lot of I/O operations.
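For anyone who wants to reproduce this observation: zilstat ships with FreeNAS and can be left running on the FreeNAS box during the checkout. A minimal invocation (one-second interval, as far as I know) is:

  zilstat 1

Each output line then shows the ZIL operations and bytes written during that second.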
C2 (pure HBA + SLOG) and C3 (RAID controller with memory cache) complete the command in comparable times.
C4 (RAID controller with memory cache + SLOG) is by far the fastest.
- cd trunk/build && time cmake ..
Here we measure the time to complete the cmake command, a tool that generates Makefiles.
C1 : 22 sec
C2 : 4 sec
C3 : 7 sec
C4 : 2 sec
Once again, for this kind of operation, the lack of a cache and/or SLOG increases the completion time.
- time make -j 6
C1 : 95 sec
C2 : 42 sec
C3 : 48 sec
C4 : 38 sec
Once again, C1 took more than twice as long to complete the compilation.
- rm bin/glnemo2 && time make
C1 : 35 sec
C2 : 4 sec
C3 : 8 sec
C4 : 2 sec
This is a really surprising result. Although the created file is very small (24 MB), it seems the linking step performs tons of synchronous write operations, which increases the time by a factor of 10! Once more, the cache matters.
I have noticed the same slowdown when compiling/linking Fortran and C programs of any kind.
CONCLUSION:
As we can see, FreeNAS+NFS needs a caching mechanism to speed up synchronous writes. For some people this is obvious, but it was not for me.
Until now I was only using FreeNAS to copy and read files through simple read/write operations from simulations and/or analysis programs. That worked fine and fast without any caching mechanism. But once I started using my FreeNAS to store and compile my programs, it became impossible to work (waiting 30 seconds for every link step is just unbearable!).
So a caching mechanism is mandatory:
FreeNAS recommends using a pure HBA controller + a SLOG on a fast SSD (like the C2 configuration).
It looks like a RAID controller with memory cache used in JBOD mode also gives good performance (the C3 configuration). However, with this configuration your FreeNAS system will not be as reliable as C1 in case of a power loss: you might lose operations stored in the RAID memory cache that have not yet been flushed to the ZIL. Since I am not a bank handling financial transactions, this is not a big deal for me :). The main advantage of the C3 configuration is that one more disk is available for data, because no disk is dedicated to the SLOG.
I may well be wrong about some of what I said, so please, ZFS experts, correct me. What I would like to know is whether the C3 configuration is a good choice for FreeNAS. I have noticed that the SMART monitoring daemon does not work in this configuration (it fails to start), and I am wondering what happens in case of a disk failure: will I still get an email from FreeNAS?
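A side note on the SMART issue: behind a MegaRAID controller the disks are usually not visible as plain devices, so smartctl has to be told the controller type and the disk index. On Linux the syntax is the following (disk 0 as an example; I have not checked the exact device path under FreeNAS/FreeBSD):

  smartctl -a -d megaraid,0 /dev/sda

If someone knows the equivalent that works with the 2208 under FreeNAS 9.3, I am interested.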
Useful links: