Finally, some test data. First, thank you to PaleoN for a much-needed pointer on how to kick it all off, without which these results would not have appeared. Thank you also to the OP for raising this question at what was, for me, a very pertinent time: as a noob commissioning a new system, my upgrade/maintenance path is important to me, and this came just in time for me to take 4K sectors into account.
The following system was used; the next paragraph details the constants:
Supermicro X9SCM-F (v2.0b BIOS) with an Intel Xeon E3-1220L v2 @ 2.3GHz. 16GB of ECC DDR3 UDIMMs, with a FreeNAS hardware tunable limiting it to 4GB as per the OP (see the sketch below). The SATA PCIe HBA is an LSI 9211-8i (LSI00194) flashed with P15 IT firmware with the BIOS disabled, and it is the sole card in the motherboard. For the 8-disk tests all drives hang off this card; for the 10-disk test the extra 2 are plugged into the motherboard.
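For anyone wanting to reproduce the RAM limit: I believe the relevant knob is the FreeBSD loader tunable hw.physmem, set via System -> Tunables in the GUI (or in /boot/loader.conf); a minimal sketch, assuming a 4GB cap:
Code:
# /boot/loader.conf (or add as a Tunable in the FreeNAS GUI)
# Caps the physical memory FreeBSD will use at 4GB
hw.physmem="4G"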
FreeNAS version: 8.3.0-RELEASE-p1 x64 (r12825). Compression = off. Dedup = off.
All HDDs used are identical: Western Digital WD5002ABYS drives (512 bytes/sector native, 500GB, RE3, 7200rpm, 16MB cache) with their original and still-current firmware v02.03B02. Their sustained large-file single-drive transfer rate (datasheet and observed) is 112MByte/sec. The drives have no jumpers installed to limit their speed.
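If you want to confirm what sector size your own drives report, FreeBSD's diskinfo will show it (the device name below is just an example):
Code:
# 'sectorsize' in the output is the logical sector size the drive reports
diskinfo -v /dev/da0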
IOZONE command line: iozone -r 4k -r 8k -r 16k -r 32k -r 64k -r 128k -s 6g -i 0 -i 1 -i 2, always run from the command line.
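Spelled out in full, each run looked something like this (the dataset mountpoint is an example; iozone writes its test file in the current directory, and redirecting the output keeps the results if the session dies):
Code:
cd /mnt/tank/testset    # mountpoint of the dataset under test (example path)
iozone -r 4k -r 8k -r 16k -r 32k -r 64k -r 128k -s 6g -i 0 -i 1 -i 2 > /root/iozone_run.txt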
Now for the variables, and some notes on the specific tests:
A couple of aborted tests suggested a small variation in results when re-running the same test, but a quick look suggested it was usually minor (i.e. no worse than the 3rd most significant digit). I leave that to IOZONE.
Tests #1 & #2 hopefully address the OP's question. Tests #3, #4 & #5 are variations to answer my own questions, and may be of use to others.
For tests #1 through #4, 'atime=on'. For test #5, 'atime=off'. Further reading during testing suggested that enabling 'atime' may significantly impact read performance, so I ran test #5 at the end with it off. This last test compares directly with test #2.
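For reference, atime is a per-dataset ZFS property; toggling it for test #5 was along these lines (pool/dataset name is an example):
Code:
# Disable access-time updates on the dataset under test
zfs set atime=off tank/testset
# Confirm it took effect
zfs get atime tank/testset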
Test #3, the 10-disk RAIDZ2 test, was to satisfy my own curiosity as to what performance impact an optimal (2/4/8 data disks + 2 parity) ZFS Z2 configuration has over what may be deemed a non-optimal Z2 configuration, i.e. one where the number of data disks is not a power of 2.
Tests #3, #4 & #5 were not performed in 512 bytes/sector mode due to lack of time, and also because 512 bytes/sector drives will likely become a thing of the past.
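One check worth making if you repeat this: confirm that 'Force 4K' really gave the pool ashift=12 (2^12 = 4096-byte sectors; ashift=9 means 512 bytes). I believe zdb will report it, though on FreeNAS you may need to point it at the non-default cachefile:
Code:
# ashift=12 => 4K sectors, ashift=9 => 512-byte sectors
# FreeNAS keeps its zpool cache at /data/zfs/zpool.cache, hence -U
zdb -U /data/zfs/zpool.cache -C tank | grep ashift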
The IOZONE test results.
Code:
Test #1. ZFS configuration: 8 disks in (6+2) RAIDZ2. Force 4K sectors disabled. (2.6TB volume reported size.)
                                                     random   random
      KB  reclen   write  rewrite    read   reread     read    write
 6291456       4  344044    90934  139417   149958      648      554
 6291456       8  382535    95971  147316   158259     1274     1091
 6291456      16  406700   103767  145246   152340     2507     2140
 6291456      32  468023   101219  154310   156147     5421     4164
 6291456     128  429458   336491  152566   158256    14551   410136
Code:
Test #2. ZFS configuration: 8 disks in (6+2) RAIDZ2. Force 4K sectors enabled. (2.5TB volume reported size.)
                                                     random   random
      KB  reclen   write  rewrite    read   reread     read    write
 6291456       4  335485    87068  135491   145476      646      557
 6291456       8  385966    92489  150878   148586     1184     1092
 6291456      16  388087    96794  141227   148081     2518     2154
 6291456      32  423178    97766  145405   148634     5103     4137
 6291456     128  399557   318293  147168   161375    15142   421706
Code:
Test #3. ZFS configuration: 10 disks in (8+2) RAIDZ2. Force 4K sectors enabled. (3.7TB volume reported size.)
                                                     random   random
      KB  reclen   write  rewrite    read   reread     read    write
 6291456       4  335500    97198  148482   159965      636      557
 6291456       8  435407    99014  145315   148184     1267     1100
 6291456      16  473380   105504  149504   172862     2415     2141
 6291456      32  389674   108700  148647   158314     5375     4182
 6291456     128  562781   372510  161167   169586    14058   552917
Code:
Test #4. ZFS configuration: 8 disks in ((2+2)x2) RAIDZ2. Force 4K sectors enabled. (1.7TB volume reported size.)
                                                     random   random
      KB  reclen   write  rewrite    read   reread     read    write
 6291456       4  291322    75182  112847   120848      728      611
 6291456       8  308598    76628  118592   135213     1371     1208
 6291456      16  307889    84520  119768   128726     2845     2355
 6291456      32  304571    82985  120782   123981     6101     4597
 6291456     128  327061   304220  126189   130475    15924    20443
Code:
Test #5. ZFS configuration: 8 disks in (6+2) RAIDZ2. Force 4K sectors enabled. (2.5TB volume reported size.)
N.B. atime = off, for this test only.
                                                     random   random
      KB  reclen   write  rewrite    read   reread     read    write
 6291456       4  331861    91151  135052   145138      643      554
 6291456       8  339839    93089  140074   144900     1187     1091
 6291456      16  286186    94953  154224   154341     2498     2142
 6291456      32  363436    98116  147924   161909     4726     4096
 6291456     128  407814   330602  146010   152179    14640   380474
A note to others running the same test: run it from the command line, as the GUI crashed once after being left overnight. Total test time was about 10-12 hours, and 7+ of those were the 4K random reads/writes.
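If you run it over SSH rather than the console, wrapping the run in nohup means a dropped session (or a crashed GUI) won't kill it; a sketch, with example paths:
Code:
# Survives a disconnected session; check progress with 'tail -f /root/iozone_run.txt'
cd /mnt/tank/testset
nohup iozone -r 4k -r 8k -r 16k -r 32k -r 64k -r 128k -s 6g -i 0 -i 1 -i 2 > /root/iozone_run.txt 2>&1 &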
It would be great if somebody could graph this lot, and perhaps even the other thread results, to present the data in a more effective way.
My conclusions:
1) Tests #1 & #2 suggest that 'Force 4K' on these 512 bytes/sector HDDs slows throughput a little, but not by much: at a guess, about 2%-3% worst case.
2) I'm not worried about the impact (if any) of 'atime' on speed; test #5 versus test #2 shows little difference.
3) Test configuration #4 didn't make the system faster, as I thought producing 2 vdevs would; I'm wondering if that was down to a boo-boo by me (a sketch of the intended layout follows this list)? Unfortunately I can't repeat these tests as the system is now in use.
4) Neither did the 10-disk 'optimal' configuration speed things up much, apart from writes and random writes at the largest record size.
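Re point 3: for anyone wanting to check my working, the two-vdev layout I was aiming for in test #4 would be built along these lines from the CLI (pool and device names are examples; the GUI volume manager does the equivalent):
Code:
# Two 4-disk RAIDZ2 vdevs (2 data + 2 parity each) striped into one pool
zpool create tank \
    raidz2 da0 da1 da2 da3 \
    raidz2 da4 da5 da6 da7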
Your thoughts please.
I'm sure that, after all this, the platters on my hard drives are that little bit thinner.
