Hello FreeNASers,
Short story: I upgraded my NAS from a 3-disk RAIDZ1 to a 5-disk RAIDZ2 (adding 2 disks and rebuilding my pool). Transfer speed dropped from 110 MB/s (the gigabit Ethernet limit) to 70 MB/s.
Long story:
I built a first FreeNAS box with low-cost hardware (16 GB of RAM, a G4400 processor, 5x 4 TB Red and IronWolf drives in RAIDZ2, no SSD cache) and got 100-110 MB/s on rsync transfers over Samba.
I then built a second one with better hardware (32 GB of RAM, an i5-7400, 3x 8 TB IronWolf in RAIDZ1, no SSD cache) and got similar performance, limited by the Ethernet link.
Then I decided to upgrade this second NAS by adding 2 more disks and an SSD cache, which meant rebuilding the pool and running a lot of rsync (backup and restore). With this new setup the transfer rate is significantly slower. I tried removing the SSD cache, with no significant effect.
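For reference, the transfers are plain rsync runs against the SMB share mounted on the client, roughly like this (the paths here are placeholders, not my real ones):

    rsync -avh --progress /mnt/local-backup/ /mnt/nas-share/backup/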
I also use encryption on all three configurations.
I've tried to make sense of the reporting provided by FreeNAS: the CPU stays above 40%, the disks top out around 40 MB/s (they can do much more than that), and the interface traffic peaks at 1000 Mb/s, but not continuously during a transfer.
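To try to separate the disk side from the network side, I was planning to run something along these lines on the NAS and on the client (dataset names and addresses below are made up, and the dd test writes zeros, so with lz4 compression it will look optimistic):

    # sequential write straight to the pool, run on the NAS
    dd if=/dev/zero of=/mnt/tank/dd-testfile bs=1m count=10000
    # per-disk throughput while a transfer is running
    zpool iostat -v 1
    # raw network throughput: iperf3 server on the NAS, client on my PC
    iperf3 -s              (on the NAS)
    iperf3 -c <nas-ip>     (on the client)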
I know that RAIDZ2 carries more parity overhead than RAIDZ1, but with more disks it shouldn't be slower: a 5-disk RAIDZ2 stripes data across 3 data disks per stripe versus 2 for a 3-disk RAIDZ1, so the bandwidth is shared across more disks and sequential throughput should, if anything, go up. Any clue or advice on tests to perform would be much appreciated.
Pierre