Hey all, I'm new to TrueNAS and I've just started using TrueNAS Scale, which I'm loving so far. I just built a new custom NAS w/TrueNAS Scale that I'm planning to use as my primary NAS going forward (once it's "officially" released and stable). For now it's a test bed for me to learn the intricacies of ZFS and the TrueNAS system. I've been using Synology devices w/BTRFS for a long time and will continue to use them as my backup targets. While I wait for the official TrueNAS release I'd like to start setting up and configuring everything, which includes:
- Docker containers for plex, video editing and encoding, software development
- VMs for Windows and other Linux distros
- A multitude of SMB shares for the family (media, backups for PCs, etc)
- Automatic backups from the TrueNAS
The first 3 I feel like I have a pretty good handle on. It's the last one I'm fuzzy about: I'm not sure what the best way to handle it would be.
To give you some understanding of the capability of the NAS I threw together:
- AMD 3950 w/128GB DDR4 RAM
- 1x 1TB Samsung 980 Pro as the boot device (which seems like a bit of a waste at this point), w/7,000 MB/s (read) and 5,000 MB/s (write)
- 1x 1TB Sabrent Rocket 4.0 (currently unused) w/3,300 MB/s (read) and 3,300 MB/s (write)
- 5 ZFS datasets on a pool w/1 RAIDZ1 vdev made up of 4 Seagate x16 16TB drives (each rated for ~261 MB/s max sustained transfer rate), covering media, backups, software development storage, VMs, and scratch/tmp
- 2x 10GbE SFP+ NICs
I have tried to optimize each dataset according to its use (disable atime for perf, set recordsize to 1M for media, etc). There are SMB shares that expose these to various family members.
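For anyone following along, that per-dataset tuning looks something like this. The pool/dataset names (`tank/...`) and the VM recordsize are illustrative placeholders, not my actual config:

```shell
# Media: large sequential files benefit from a 1M recordsize
zfs set recordsize=1M atime=off compression=lz4 tank/media

# VM storage: smaller recordsize to better match guest I/O patterns
# (zvols use volblocksize instead of recordsize)
zfs set recordsize=64K atime=off tank/vms

# Scratch/tmp: skip atime updates, leave everything else at defaults
zfs set atime=off tank/scratch
```

Properties apply to new writes going forward; existing files keep the recordsize they were written with.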
So far in my tests I've managed about 550 MB/s writing to the array and maybe 350 MB/s reading. From my rudimentary ZFS understanding that seems to be roughly what other folks are getting, so I guess it's configured optimally? I mostly care about read speed, since we'll be streaming large files more than writing them, but as far as I can tell the only way to improve read speed would be to add more drives to the pool, or better yet add another vdev. Unfortunately I've about hit my maximum spend, at least for a little while :)
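If you want repeatable local numbers instead of eyeballing SMB copies, sequential read speed is easy to script. A minimal sketch (caveat: read a file much larger than your ARC, or a second run will be served from cache and inflate the result):

```python
import time

def read_throughput_mb_s(path, block_size=1 << 20):
    """Sequentially read `path` in 1 MiB blocks and return MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return (total / (1 << 20)) / max(elapsed, 1e-9)

if __name__ == "__main__":
    # Hypothetical path; point this at a large file on the pool.
    print(f"{read_throughput_mb_s('/mnt/tank/media/bigfile.mkv'):.0f} MB/s")
```

Tools like `fio` give more control (queue depth, parallel jobs), but this is enough to compare before/after a tuning change.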
As for backups: I understand ZFS has a nice snapshot system, which I've already configured differently for each dataset, but my Synology machines run BTRFS. My understanding is that simply doing a zfs send of a snapshot won't be very useful if I want to log into the Syno machines and actually "do stuff" with the files, since the stream would be in a ZFS-specific format... is that right? If so, what's the best way to perform backups? Something like:
- Create the baseline zfs snapshot
- Do stuff
- Create another snapshot
- zfs diff the snapshots to determine what changed, parse out what files need to be backed up
- rsync those changes to the synology backup
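A sketch of the diff-parsing step in that plan, assuming `zfs diff`'s default output of a change type (`M` modified, `+` created, `-` removed, `R` renamed) followed by a tab and the path, with renames shown as `old -> new`:

```python
def parse_zfs_diff(diff_text):
    """Turn `zfs diff snap1 snap2` output into (to_copy, to_delete) path lists.

    Renames contribute the old path to to_delete and the new path to to_copy.
    """
    to_copy, to_delete = [], []
    for line in diff_text.splitlines():
        if not line.strip():
            continue
        kind, rest = line.split("\t", 1)
        if kind in ("M", "+"):
            to_copy.append(rest)
        elif kind == "-":
            to_delete.append(rest)
        elif kind == "R":
            old, new = rest.split(" -> ")
            to_delete.append(old)
            to_copy.append(new)
    return to_copy, to_delete
```

The `to_copy` list could then be fed to `rsync --files-from=-` (rsync expects paths relative to the source root, so they'd need the mountpoint prefix stripped). That said, a simpler variant of the same idea may work too: rsync directly from the read-only `.zfs/snapshot/<name>` directory on the dataset and let rsync's own delta detection find the changes, skipping the diff-parsing entirely.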
If anyone has any suggestions on things to check to improve configuration or performance even more I'm all ears! Thanks!