ZFS capacity issue


Heire

Dabbler
Joined
Jul 8, 2011
Messages
12
Well, to further analyse my issue I have been testing on a virtual machine to simulate my situation and try a few things (not related to performance).
At this moment my results are as expected, although there are some parts where I do not understand why it behaves that way; perhaps someone can clarify that bit.
What I did was to see how much I can write onto the iSCSI disk until ZFS stops working (because the pool is full).

So the test setup on my virtual machine: 12x 53.7 GB disks, 8 GB RAM and a dual-core CPU.
1) 12 disks RAIDZ2, total raw: 429GB --> actually written volume: 185GB (==> 56% of storage "lost")
2) 10 disks RAIDZ2, total raw: 357GB --> actually written volume: 155GB (==> 56% of storage "lost")
3) 2x 6 disks RAIDZ2, total raw: 375GB --> actually written volume: 185GB (==> 50% of storage "lost")
4) 12 disks mirrored, total raw: 282GB --> actually written volume: 282GB (==> 0% of storage "lost")

Note: 12 disks or 2x 6 disks give the same end result, so even the more optimal layout does not really help capacity-wise, although I would expect better performance from the 2x 6-disk setup (a quick check of the percentages is below).
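
For what it's worth, here is the plain arithmetic behind the "lost" percentages above, computed straight from the raw vs. written figures in my list (nothing ZFS-specific in it):

# Recompute the "lost storage" percentage from the figures reported above.
tests = {
    "12-disk RAIDZ2":   (429, 185),
    "10-disk RAIDZ2":   (357, 155),
    "2x 6-disk RAIDZ2": (375, 185),
    "12-disk mirror":   (282, 282),
}

for layout, (raw_gb, written_gb) in tests.items():
    lost = 100.0 * (raw_gb - written_gb) / raw_gb
    print(f"{layout}: {lost:.1f}% lost")

That comes out around 57%, 57%, 51% and 0%, so in line with the rounded numbers above.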

To me it is a RAIDZ issue rather than a ZFS issue, which somewhat explains it. I understand that small blocks would lead to the ~50% "loss", but I have changed the logical block size of the iSCSI target to 4096 and even formatted the iSCSI disk with 4096-byte blocks, and it made no difference. I would have expected that changing the format/logical block size would more or less reduce the "loss", but no result so far. Any ideas on this?
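
To show what I mean by small blocks being expensive: as far as I understand it, RAIDZ adds parity sectors per block and then pads each allocation up to a multiple of (nparity + 1) sectors, roughly what vdev_raidz_asize() in the ZFS source does. Below is a small Python sketch of that formula; the block sizes and ashift values are just example assumptions on my part, I have not verified what my pool actually uses:

import math

def raidz_asize(psize, ndisks, nparity, ashift):
    """Approximate on-disk allocation for one block on a RAIDZ vdev,
    following the logic of ZFS's vdev_raidz_asize()."""
    sector = 1 << ashift
    data_sectors = math.ceil(psize / sector)
    # nparity parity sectors for every (ndisks - nparity) data sectors
    parity_sectors = nparity * math.ceil(data_sectors / (ndisks - nparity))
    total = data_sectors + parity_sectors
    # each allocation is padded up to a multiple of (nparity + 1) sectors
    total = math.ceil(total / (nparity + 1)) * (nparity + 1)
    return total * sector

# Example: a 12-disk RAIDZ2 with 512-byte vs 4 KiB sectors
for ashift in (9, 12):
    for blocksize in (4096, 8192, 131072):   # small zvol blocks vs 128 KiB records
        alloc = raidz_asize(blocksize, ndisks=12, nparity=2, ashift=ashift)
        print(f"ashift={ashift} block={blocksize // 1024}K -> "
              f"allocates {alloc / 1024:.1f}K ({100 * blocksize / alloc:.0f}% efficient)")

The pattern I get from this is that small blocks on a wide RAIDZ2 spend a large share of the space on parity and padding, while 128 KiB records do much better; a mirror just writes every block twice with no parity or padding, which would explain why the mirror test shows no extra loss.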

Regarding file vs. device extents: I have found that a file extent on a RAIDZ2 system gives about 1/5 of the write performance of the device extent, and I even had iSCSI errors which I think were related to the high data bursts. I limited the bit rate towards FreeNAS, which helped a bit, but the write performance was still 1/5 of before.
Currently running a file extent on a mirrored ZFS pool, which I expected to give the same result but with 1/2 of the write performance; it is a bit better and even runs without the iSCSI errors. Strange that the device extent shows no iSCSI issues and better performance compared to the file extent, yet the file extent has no issues when it sits on mirrored vdevs. Strange stuff.
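
For reference, this is roughly how I compare write performance from the initiator side: just a timed sequential write to a file on the iSCSI disk. The mount point below is made up for the example, and note that ZFS could compress the zeros away if compression is enabled on the dataset:

import os, time

TEST_FILE = "/mnt/iscsi-test/throughput.bin"   # hypothetical mount point of the iSCSI disk
CHUNK = b"\0" * (1 << 20)                      # write 1 MiB at a time
TOTAL_MB = 2048                                # 2 GiB in total

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                       # make sure the data reached the target
elapsed = time.time() - start

print(f"{TOTAL_MB / elapsed:.1f} MB/s sequential write")
os.remove(TEST_FILE)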

Now I'm wondering why my RAIDZ2 is using small blocks in every test I have done so far.

Doing some more simulations during the night, but so far file and device extents over iSCSI only seem to work well on mirrored vdevs.
 

Heire

Dabbler
Joined
Jul 8, 2011
Messages
12
So after some final tests I have come to the following conclusions:

1) RAIDZ2 (even without following the rule of thumb) + ZFS + zvol + iSCSI (device extent) = very bad idea
2) Mirrored vdevs + ZFS + zvol + iSCSI (device extent) = good, no issues here and good performance
Mirrored vdevs + ZFS + iSCSI (file extent) = good, no issues here; adding a ZIL offload increases write performance
3) RAIDZ2 + ZFS + iSCSI + file extent = good, but half the write performance
4) RAIDZ2 (even without following the rule of thumb) + ZFS + iSCSI + file extent + ZIL = perfect, with even better write performance than the zvol
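
Just to put rough numbers on the choice (ignoring ZFS metadata, reservations and the per-block RAIDZ padding from my earlier post, so a real pool will report somewhat less), the nominal capacities of the layouts work out like this:

DISK_GB = 53.7   # size of each disk in my test VM

def raidz2_capacity(ndisks, disk_gb=DISK_GB):
    # RAIDZ2 keeps two disks' worth of parity per vdev
    return (ndisks - 2) * disk_gb

def mirror_capacity(ndisks, disk_gb=DISK_GB):
    # mirrored pairs store every block twice
    return (ndisks // 2) * disk_gb

print(f"12-disk RAIDZ2  : {raidz2_capacity(12):.0f} GB nominal")
print(f"2x 6-disk RAIDZ2: {2 * raidz2_capacity(6):.0f} GB nominal")
print(f"6x 2-disk mirror: {mirror_capacity(12):.0f} GB nominal")

So option 4 keeps the capacity advantage of RAIDZ2 over mirrors, as long as the file extent keeps the allocation overhead under control.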

So I think I will go with option 4 for now :)
Does anyone else have any comments on my results?

Would a ZIL offload really help, or could it be that there is too little RAM to handle the ZIL logs?
 