awaiting a situation where the system is required to work on swap,
That is not what happens with ARC contents. They are simply evicted, and that then forces the DDT data to be re-fetched from the pool. When this happens at scale, it is like hitting a brick wall. You are suddenly thrashing on DDT metadata reads for every block. It is not fun, it is super-ugly, you will want to commit seppuku.
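To see why the DDT outgrows the ARC so easily, here is a back-of-the-envelope sketch. The ~320 bytes per DDT entry and the 128K average block size are commonly quoted rules of thumb, not measured values; check "zdb -DD yourpool" for real numbers on an existing pool.

    # Rough DDT sizing sketch. The constants below are assumptions
    # (commonly quoted rules of thumb), not exact figures.
    BYTES_PER_DDT_ENTRY = 320          # assumed core size of one DDT entry
    AVG_BLOCK_SIZE      = 128 * 1024   # assumed average record size

    def ddt_ram_estimate(pool_data_bytes):
        """Estimate RAM needed to keep the whole DDT resident in ARC."""
        unique_blocks = pool_data_bytes / AVG_BLOCK_SIZE
        return unique_blocks * BYTES_PER_DDT_ENTRY

    for tb in (1, 4, 10):
        need = ddt_ram_estimate(tb * 2**40)
        print(f"{tb} TB of deduped data -> ~{need / 2**30:.1f} GiB of DDT")

That works out to roughly 2.5 GiB of DDT per TB at 128K blocks, and it gets dramatically worse with small blocks (8K zvols need about 16 times as much). Since the DDT has to share the ARC with everything else you want cached, the usual forum rule of thumb of around 5GB of RAM per TB of deduplicated data is not as crazy as it sounds.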
The system is not sufficient for the amount of stored data, but the tasks are not user-critical (streaming).
Streaming ... what, exactly? Only certain types of data are amenable to significant benefits from deduplication. Storing VM images or uncompressed backups is an example. If you are streaming video and you are hoping that
/mnt/yourpool/Video/Incoming/somemovie.mp4
/mnt/yourpool/Video/Movies/somemovie.mp4
happen to deduplicate because they're the same file contents, well, yes, that should deduplicate, but the better solution is to remove one of them, or use hardlinks, so that both filenames point to the same file data.

Hardlink-based "deduplication" is virtually free in the UNIX environment. You just have to analyze your filesystem, which burns up some I/O, but the clever implementations compare file sizes first and only then check file contents, hardlinking files that turn out to be identical. We old-timers like Phil Karn's dupmerge tool, but several more recent options exist; you can search for "dupmerge" on the forum to find other threads where people chime in with alternative tools. This is not quite as awesome as dedup, but it comes without the terrible ARC requirements.
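As a rough sketch of that size-first, content-second approach (this is not dupmerge itself, just a minimal illustration; the hashing choice and the lack of error handling are my own simplifications):

    #!/usr/bin/env python3
    # Minimal sketch of hardlink-based dedup: group files by size, then
    # confirm identical contents by hash, then hardlink duplicates together.
    # Not dupmerge; just an illustration. It ignores edge cases such as
    # permissions, sparse files, and concurrent writers.
    import hashlib
    import os
    import sys
    from collections import defaultdict

    def sha256_of(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    def hardlink_dupes(root):
        by_size = defaultdict(list)
        for dirpath, _, names in os.walk(root):
            for name in names:
                p = os.path.join(dirpath, name)
                if os.path.isfile(p) and not os.path.islink(p):
                    by_size[os.path.getsize(p)].append(p)

        for size, paths in by_size.items():
            if size == 0 or len(paths) < 2:
                continue                        # unique size: cannot be a dupe
            by_hash = defaultdict(list)
            for p in paths:
                by_hash[sha256_of(p)].append(p) # only now read file contents
            for dupes in by_hash.values():
                keep = dupes[0]
                for p in dupes[1:]:
                    if os.path.samefile(keep, p):
                        continue                # already hardlinked
                    tmp = p + ".dedup-tmp"
                    os.link(keep, tmp)          # link first, then replace atomically
                    os.replace(tmp, p)

    if __name__ == "__main__":
        hardlink_dupes(sys.argv[1])

Remember that hardlinks only work within a single filesystem, so point it at one dataset at a time; it cannot merge copies that live on different datasets.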
other datasets are not too much affected, I can live with it.
The I/O load is placed on the pool as a whole, so other datasets can definitely be affected.