iSCSI storage using double the space in TrueNAS

csardoss

Cadet
Joined
Dec 16, 2020
Messages
6
Hi, this is my first post and my first build. I have a large TrueNAS build with over 600TiB of storage, and I am using it to record video footage from an AxxonNext system hosted on 3x Windows 10 servers. The problem I am having is that the AxxonNext server shows about 4TB currently stored on an iSCSI drive connected to my TrueNAS system, but when I look at the dataset behind that iSCSI share, it is consuming approximately double that space.

I am currently using a dataset with a file-based iSCSI extent because that is the last thing I have tried to resolve the issue. I originally configured the iSCSI drive as a 120TiB zvol, but after it started to fill, the space consumed grew to over 220TiB. I removed that and ran a small-scale test with a new dataset with a 10GiB quota and a 2GiB iSCSI zvol to see what would happen. It seemed to show promising results, as it was holding at the 2GiB size. After the test, I increased the dataset and zvol to 120TiB just like before. As the drive started to fill, I could see the same doubling of space on the TrueNAS side. I then removed all of those changes and set up a new dataset with an iSCSI extent backed by a file instead of a device, which is still showing the same problem. I am not sure what will happen now that I have a quota on the root dataset, but from what I can read, it will probably throw errors in Windows.

I have also tried changing the block size of both the iSCSI extent and the dataset to 4K, and it had no effect.

Since this was my first build, I set all 48 drives to RAIDZ1 under one vdev (+ 2x hot spares). I know this is not the smartest move and will change it if needed.

I am at a loss, and there is a lot riding on getting this to work, so I would appreciate any help anyone can provide. We are still in the beta stage of the project, so I can make major changes if needed. I have also attached the current settings of the dataset and what Axxon is showing.

TrueNAS Build:
Version: TrueNAS-12.0-U1
CPU: 2x Intel(R) Xeon(R) Silver 4208
Drives: 50x Seagate Exos SAS 16TB
RAID: HBA controller
 

Attachments

  • Screen Shot 2020-12-16 at 10.39.31 PM.png
  • Screen Shot 2020-12-16 at 10.41.22 PM.png
  • Screen Shot 2020-12-16 at 10.41.37 PM.png
  • Screen Shot 2020-12-16 at 10.41.44 PM.png

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Since this was my first build, I set all 48 drives to RAIDZ1 under one vdev (+ 2x hot spares). I know this is not the smartest move and will change it if needed.
Wow, that is a very bad idea. Not the widest anybody has tried, but far too wide. Plus, RAIDZ2 or 3 are much better than hot spares, so keep that in mind.

As for the write amplification: with small enough blocks, ZFS cannot distribute the data over multiple disks, but it still needs to write the parity. So, your single block just turned into two because RAIDZ1 always has one extra block (not strictly parity, but you can think of it that way for the current discussion). So, you have all the disadvantages of RAIDZ with all the disadvantages of Mirrors. Switch to using mirrors and at least you'll have better performance.
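
To put rough numbers on it, here is a minimal Python sketch, assuming 4 KiB sectors (ashift=12, typical for these drives), of what a single 4 KiB block costs on RAIDZ1. It is a simplification, not an exact model of the allocator:

```python
# Minimal sketch: space taken by one small block on RAIDZ1, assuming
# 4 KiB sectors (ashift=12). Not an exact model of the ZFS allocator.
SECTOR = 4096          # bytes per sector with ashift=12
block = 4096           # a single 4 KiB volblocksize write from iSCSI

data_sectors = -(-block // SECTOR)   # ceiling division -> 1 data sector
parity_sectors = 1                   # RAIDZ1 adds one parity sector per row

allocated = (data_sectors + parity_sectors) * SECTOR
print(f"logical: {block} B, allocated: {allocated} B "
      f"({allocated / block:.1f}x)")   # -> 8192 B, i.e. 2.0x
```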
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Go for smaller RAIDZ vdevs. Go for a larger block size. Both will help.
 

csardoss

Cadet
Joined
Dec 16, 2020
Messages
6
As for the write amplification: with small enough blocks, ZFS cannot distribute the data over multiple disks, but it still needs to write the parity. So, your single block just turned into two because RAIDZ1 always has one extra block (not strictly parity, but you can think of it that way for the current discussion). So, you have all the disadvantages of RAIDZ with all the disadvantages of Mirrors. Switch to using mirrors and at least you'll have better performance.

Thanks for the reply; I will be rebuilding the zpool as multiple RAIDZ2 vdevs. As for what you were saying about small blocks, we are not so much looking for performance with this build as for write efficiency, so we are not taking up double the space. Is there a different block size we can use instead of moving to a mirror? I definitely do not want them to lose half of their storage. We just need writes to land on disk at 1-to-1 instead of 1-to-2.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Is there a different block size we can use instead of moving to a mirror?
You can't get away from small blocks if you're doing iSCSI. Even if you go for larger-than-normal block sizes, like 8k, you've only moderately improved your situation.
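
To show how slowly the overhead falls off, here is a rough sketch of the allocation ratio on the current 48-wide RAIDZ1 for a few block sizes, assuming 4 KiB sectors and including RAIDZ's padding of allocations to a multiple of (parity + 1) sectors. It approximates the allocator rather than modelling it exactly:

```python
import math

SECTOR = 4096   # ashift=12
WIDTH = 48      # disks per vdev (the original layout)
PARITY = 1      # RAIDZ1

def allocated_bytes(block_size):
    """Approximate space one block occupies on RAIDZ, including parity
    and the padding to a multiple of (PARITY + 1) sectors."""
    data = math.ceil(block_size / SECTOR)
    # one set of parity sectors per "row" of (WIDTH - PARITY) data sectors
    parity = math.ceil(data / (WIDTH - PARITY)) * PARITY
    total = data + parity
    # RAIDZ rounds allocations up to a multiple of (PARITY + 1) sectors
    total = math.ceil(total / (PARITY + 1)) * (PARITY + 1)
    return total * SECTOR

for kib in (4, 8, 16, 32, 64, 128):
    size = kib * 1024
    alloc = allocated_bytes(size)
    print(f"{kib:>3} KiB block -> {alloc // 1024:>4} KiB allocated "
          f"({alloc / size:.2f}x)")
```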
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Get away from iSCSI for this. Set up the SMB protocol instead and configure your AxxonNext software to record to a network archive.


That will help with the space amplification on small blocks. You can also crank up the recordsize depending on how large the writes are that the recording software sends.
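
If you do go the SMB route, recordsize is a per-dataset property and can be changed at any time; it only affects newly written data. A minimal sketch, using a hypothetical dataset name tank/axxon-archive (the zfs commands themselves are standard OpenZFS, adjust the names to your system):

```python
import subprocess

# Hypothetical dataset name -- substitute your own pool/dataset.
DATASET = "tank/axxon-archive"

# recordsize only caps the maximum block size; large sequential video
# writes will then land as large records with little parity overhead.
subprocess.run(["zfs", "set", "recordsize=1M", DATASET], check=True)

# Verify; only data written after the change uses the new record size.
out = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "recordsize", DATASET],
    capture_output=True, text=True, check=True)
print("recordsize is now", out.stdout.strip())
```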

For your vdev layout, it depends on how you want to balance space vs. redundancy, but some options are (a rough capacity sketch follows the list):

8x6-wide Z2 = 450TiB
6x8-wide Z2 = 480TiB
4x12-wide Z2 = 513TiB
5x10-wide Z2 = 535TiB (No spares set up here, but you have good coverage from Z2 itself. Have cold spares ready and burnt-in separately.)
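
For a rough sanity check on those layouts, a back-of-the-envelope calculation with 16 TB (~14.55 TiB) drives looks like this. It ignores metadata, RAIDZ padding, and slop space, so the real usable figures (like the ones quoted above) come out lower:

```python
# Back-of-the-envelope RAIDZ2 capacity for 48 data-bearing drives.
# Ignores metadata, RAIDZ padding, and slop space, so actual usable
# space reported by ZFS will be lower than these numbers.
DRIVE_TIB = 16e12 / 2**40   # a "16 TB" drive is roughly 14.55 TiB

for vdevs, width in [(8, 6), (6, 8), (4, 12), (5, 10)]:
    data_disks = width - 2                     # RAIDZ2: two parity disks per vdev
    raw_tib = vdevs * data_disks * DRIVE_TIB
    print(f"{vdevs}x{width}-wide Z2: ~{raw_tib:.0f} TiB before overhead")
```

Note that layouts with the same number of data disks still differ in the quoted figures because RAIDZ padding efficiency depends on vdev width relative to the block size.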
 

csardoss

Cadet
Joined
Dec 16, 2020
Messages
6
I found the issue. When I created the zpool, everything was set to 128K, and when I created the zvol in the dataset, it defaulted to a 32K block size. After changing this to 4K to match the iSCSI drive, everything started working as normal.
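
For anyone hitting the same thing: volblocksize is fixed when a zvol is created and cannot be changed afterwards, so "changing" it really means creating a new zvol and migrating the data. A rough sketch using a hypothetical zvol name tank/axxon-zvol (the zfs commands are standard, adjust names and sizes to your setup):

```python
import subprocess

ZVOL = "tank/axxon-zvol"   # hypothetical name -- substitute your own

# volblocksize is read-only after creation, so it can only be inspected here.
out = subprocess.run(
    ["zfs", "get", "-H", "-o", "value", "volblocksize", ZVOL],
    capture_output=True, text=True, check=True)
print("current volblocksize:", out.stdout.strip())

# To change it, create a new zvol with the block size you want, e.g.:
#   zfs create -V 120T -o volblocksize=4K tank/axxon-zvol-new
# then migrate the data and point the iSCSI extent at the new zvol.
```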

I also see that you can set up an iSCSI extent as a file in a dataset, which also resolved my issue.

Get away from iSCSI for this. Set up the SMB protocol instead and configure your AxxonNext software to record to a network archive.
I talked to the developers at Axxon, and they recommended that I stick with iSCSI for my setup; it offers more features for future expansion and redundancy.

8x6-wide Z2 = 450TiB
6x8-wide Z2 = 480TiB
4x12-wide Z2 = 513TiB
5x10-wide Z2 = 535TiB (No spares set up here, but you have good coverage from Z2 itself. Have cold spares ready and burnt-in separately.)
I also reorganized my zpool into 5x 10-wide Z2, just as you recommended, and everything is running smoothly. The scrub task takes about 1/8th the time, and I can relax knowing I have plenty of redundancy. Thank you for the suggestion.

Thanks for all the feedback; I will look to the forum for more suggestions in the future.
 