ZFS and very large files (TrueCrypt)

Hi.

My system:
SuperMicro X9SRI-3F
Xeon E5-2603 (AES-NI support)
32 GB DDR3-1600 ECC
10x WD Black 2 TB (one RAIDZ2 vdev)
(Using onboard Intel disk controllers and onboard Intel NIC)
No read or write cache drives.

I'm wondering if I can place one enormous TrueCrypt file container on the RAIDZ2 vdev, filling roughly 90 to 95% of the available space. The container would be mounted from only one machine at a time.

I have heard ZFS has some trouble when >80% of the available space is used on the file system. What is a good rule of thumb for reserved free space?

Is this feasible? If so, what sort of options (dedup, scrub, etc.) should be turned on or off? Would FreeNAS whole-disk encryption interfere in some way with this setup?
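For concreteness, here is roughly the dataset setup I'm picturing, as a sketch (the pool name tank and dataset name tank/tc are placeholders, not anything FreeNAS-specific):

    # Placeholder names: pool "tank", dataset "tank/tc".
    zfs create tank/tc
    # The container contents are TrueCrypt ciphertext, so identical blocks
    # should essentially never occur; dedup would spend RAM for no savings.
    zfs set dedup=off tank/tc
    # Same reasoning for compression: ciphertext doesn't compress.
    zfs set compression=off tank/tc
    # Scrubs verify checksums regardless of content, so keep scheduling them.
    zpool scrub tank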
 
I don't know if I asked a dumb question or if no one has tried this before.

Once I have the system up and the ~14-15 TB TrueCrypt file container created, I'll post some network speed benchmarks in this thread.
 

jgreco

Resident Grinch
No, you haven't asked a dumb question; there's just stuff going on, a release getting prepped, and not that many people hanging around the forum right now. Normally I kind of hang out here and talk performance, but it's been a busy week here too.

The good news is it'll work. The bad news is that performance will degrade noticeably and rapidly as you write content. What's going to happen is that you write the initial container with straight-line sequential writes, and that'll be fine and fast. But then you go and update a few dozen blocks. Because ZFS is copy-on-write, it allocates NEW space, writes the data there, and then frees the old space, so blocks that used to be sequential get scattered and reads of the container turn into random I/O. This leads to fragmentation. It leads there RAPIDLY if you are abusive to the free space recommendations. I've actually found that 80% is way high for things like iSCSI, at least if you want to retain some semblance of performance, and I've been suggesting more like 60% (but this is situation-dependent too). The fragmentation is inevitable as you write updates; stressing free space just makes it worse faster.
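One way to hold yourself to that kind of headroom, as a minimal sketch (the pool name tank, the dataset name tank/tc, and the 60% figure are carried over from the discussion, not fixed rules), is to give the container its own dataset and cap it with a quota:

    # Check real usable capacity first; 10x 2 TB in RAIDZ2 leaves 8 data
    # disks, so roughly 14.5 TiB usable before overhead.
    zfs list tank
    # Cap the dataset at ~60% of usable space so copy-on-write always has
    # plenty of contiguous free space to allocate from; 60% of ~14.5 TiB
    # is roughly 8.7 TiB, written here as 8700G to keep the value simple.
    zfs set quota=8700G tank/tc

A quota only limits that one dataset, so overall pool utilization still needs watching if anything else lives on the pool.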

If you didn't need good read performance for the data, this is less of an issue. And if the amount of data you need to be able to read rapidly is small enough to fit in ARC/L2ARC, that is also a possibility.
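If you go the ARC route, you can watch whether reads are actually being served from cache. A quick sketch for FreeBSD-based FreeNAS, using the standard ZFS kstat sysctls:

    # Current ARC size in bytes:
    sysctl kstat.zfs.misc.arcstats.size
    # Cumulative hit/miss counters; a consistently high hit ratio means the
    # readable working set is fitting in RAM:
    sysctl kstat.zfs.misc.arcstats.hits
    sysctl kstat.zfs.misc.arcstats.misses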
 