ZFS + iSCSI + File Extent and Disk Space

Status
Not open for further replies.

jeebs01

Dabbler
Joined
Jun 23, 2012
Messages
16
Hello,

I, too, am just setting up a new system. I have 24GB of memory in the box and would like to use it as a cache. From my research, the way to do that is to create a ZFS volume and use a file extent under the iSCSI menu (please correct me if I'm wrong). Similar to the post below, I am having trouble determining how much space to use.

Post: http://forums.freenas.org/showthrea...ZVOL-for-iSCSI-use&highlight=file+extent+size


This has generated two questions:

Question 1: If my installation is on a separate 8GB USB drive, do I still need to worry about the upgrade space referenced above?

Question 2: I have tested that I can over-allocate the available space. Will data be corrupted, or will the system simply halt if that occurs? Why is this even possible?


While the two volumes are 8x2TB and 8x500GB, each configured as a hardware RAID5, I still want to be as efficient as possible with the resources while keeping performance ideal.

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A1: Yes, you still need to worry about the upgrade space referenced - if you plan to do upgrades. Basically, FreeNAS needs somewhere to temporarily stuff working files for the upgrade process. It can't be the upgrade target device, and since this is a NAS product, the assumption is that there ought to be at least a little available space on the NAS filesystem. This may not be strictly true for iSCSI uses.

A2: As with anything UNIX, you have the power to shoot yourself in the foot. UNIX allows what are known as "sparse files". If you have a 1TB hard drive, you can create a 2TB file and have it take only a tiny percentage of the space - the remainder reads as zeroes. When you seek to some location and write some data, the data is stored at that position. Once you've written about 1TB of data in this manner, the system can no longer find free disk blocks to expand your file, and the write returns an error - disk full - and you find yourself unable to write data at position 234567, despite there being data at 230000 and also at 240000. This seems counterintuitive at first, but sparse files are damn handy. Just not for iSCSI.

The reality is that you don't want or need your NAS's UNIX heritage to be "helping" you in this manner. If you plan to use a file extent, go to the command prompt and use "dd" to create it. If you're using UFS, you can safely ram it out to the maximum size dd can write. If you're using ZFS, be cautioned that ZFS may not perform well on a full filesystem; I would expect things to go sour if you're using more than 90% of the space.
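
To make that concrete - this is just a sketch, and the pool path /mnt/tank/iscsi is a made-up example - here's the difference at the shell. The first dd preallocates for real; the truncate shows the sparse trap you want to avoid:

Code:
# Preallocate a 100GB file extent by actually writing zeroes.
# Every block gets allocated up front, so the extent can never
# "run out" of backing store behind the initiator's back.
dd if=/dev/zero of=/mnt/tank/iscsi/extent0 bs=1m count=102400

# By contrast, this creates a 100GB *sparse* file almost instantly -
# blocks only get allocated as data is written. Handy elsewhere,
# but exactly what you don't want for iSCSI.
truncate -s 100g /mnt/tank/iscsi/sparse0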

Note that this doesn't mean that you CAN'T use sparse files for iSCSI, it just means *I* think you SHOULDN'T. Data doesn't "corrupt", it just can't get written. So think about how your initiator will react when it is writing data and suddenly it starts getting failures. Maybe it can't cope. Maybe if it's writing files on a vmfs filesystem, the filesystem gets hosed. The FreeNAS system isn't going to care, it'll just report the same thing UNIX boxes have been reporting since the beginning: disk full (or some variation on that).
 

jeebs01

Dabbler
Joined
Jun 23, 2012
Messages
16
Hello,

I certainly appreciate the detailed explanation and the time you took to write it. I have not used dd for tasks like this before and will certainly look into it.

That's a great tip on the 90% suggestion. Just so I am clear, does that 90% refer to the iSCSI extent allocation, or to actual disk utilization within the initiator's formatted filesystem?

Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ZFS is a filesystem that implements copy-on-write, which means that when data is edited and updated, the old data is left intact and the edits are stored in a new place on disk. I haven't really looked too closely at the implications of this combined with using iSCSI in file extent mode, but it suggests that - kind of like BSD FFS, which degrades substantially when a filesystem is near capacity - one should be mindful of making sure that new blocks can be allocated in the neighborhood of old blocks, or else performance will start to suck in a massive way as the extent ages and small iSCSI block writes cause severe fragmentation.

So what you want, then, is to make sure that the ZFS filesystem that you stick your iSCSI extent on has some free space. I don't know exactly how much, or even (quite) whether my thinking is completely valid, but... well, anyways, it seems valid.
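
As a rough illustration (the pool and dataset names here are made up), you can keep an eye on pool utilization and fence the extent off in its own dataset, so the pool never quietly fills past your comfort point:

Code:
# Watch overall pool utilization - the CAP column is the one to mind.
zpool list tank

# Give the extent its own dataset and cap it with a quota, leaving
# the rest of the pool free so new blocks can land near old ones.
zfs create tank/iscsi
zfs set quota=900G tank/iscsi
zfs list -o name,used,avail tank/iscsi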

The flip side is the implication that minimizing writes to a ZFS-based iSCSI file extent is probably a great way to optimize for performance. So if you think of your iSCSI target as if it were an SSD and set up your initiators accordingly - doing the usual SSD tweaks like setting noatime, not storing stuff you don't need, and making sure you're not running your filesystems near capacity - you're probably more likely to experience success than if you fill every byte of available space at each level (ZFS on the NAS, vmfs on the ESXi side).
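
On a Linux initiator, for instance (a sketch only - the device and mount point are hypothetical), the noatime tweak is a one-word mount option:

Code:
# /etc/fstab entry for a filesystem that lives on the iSCSI LUN.
# noatime keeps reads from generating access-time metadata writes,
# each of which would otherwise be a fresh copy-on-write allocation
# inside the file extent back on the ZFS pool.
/dev/sdb1  /mnt/iscsi-lun  ext4  noatime,_netdev  0  2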

Now, of course, this is all keying in on your saying "performance ideal". If you don't give a darn about performance and just want to be able to store things, then you can cut corners.

Incidentally: I'm unconvinced of the wisdom of running ZFS-based iSCSI file extents. It's one of the reasons we started experimenting with FreeNAS, but we've not been entirely successful. My latest FreeNAS box on the bench has come closest to meeting my expectations out of a ZFS-based system. I've got our ESXi nodes accessing a 1TB extent on a RAIDZ2 filesystem (4x7200 RPM disks, E3-1230 w/ 32GB RAM, 60GB SSD L2ARC) and maxing out at around 800Mbps sequential access. I'm trying to identify what the bottlenecks are, because there's really nothing slow anywhere in the system and it should be lightning fast. I'm hoping to try some direct connections to the ESXi hosts later if I get a chance.
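
If anyone wants to reproduce that kind of measurement, here's roughly how I'd go about it (a sketch; host names and paths are placeholders) - rule out the network first, then test sequential throughput from a Linux initiator:

Code:
# Raw TCP throughput between initiator and target with iperf:
iperf -s                      # on the FreeNAS box
iperf -c freenas.example.com  # on the initiator

# Then a sequential write against the iSCSI-backed filesystem;
# fdatasync makes dd report honest, flushed-to-disk throughput.
dd if=/dev/zero of=/mnt/iscsi-lun/testfile bs=1M count=8192 conv=fdatasync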
 