Your sustained (vdev-level) write performance will take a hit, as the allocation thread will have to work harder to find free areas on the disk, and your disks in turn will have to seek more frequently in order to write to those scattered free areas. Depending on how much is written to the share at a given time, it may be totally inconsequential (all of your writes are small, and the buffering nature of ZFS transaction groups saves your bacon) or absolutely crippling (a large file comes in and chokes up all I/O to the array).
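If you want to see how scattered the free space already is, `zpool list` can report free-space fragmentation per pool ("tank" below is just a placeholder pool name):

```shell
# FRAG shows how fragmented the pool's *free* space is (a percentage);
# high values mean the allocator has to hunt for scattered gaps.
zpool list -o name,size,capacity,fragmentation tank
```

A pool that's mostly full and shows high FRAG is the scenario where those large incoming files will hurt the most.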
With regards to the cache; L2ARC isn't nearly as good at its job as ARC is. If you have lots of small files in play, using it for metadata might be worthwhile though:
zfs set secondarycache=metadata poolname/dataset
Unlike a special vdev, L2ARC is a read-only copy, so if you lose the single SSD, your data isn't affected (other than access getting slower due to loss of the cache)