I/O Performance

Status
Not open for further replies.

voona

Cadet
Joined
Jun 6, 2014
Messages
2
Hi Guys,

New to FreeNAS ZFS and wanted to get a sanity check on my array:

Supermicro chassis, 32 GB RAM, dual-socket CPU

First zpool up is 4 x Samsung EVO SSDs, to be served over iSCSI to my ESX box.

Is this performance up to scratch or subpar? (These are raw iozone numbers from the FreeNAS box itself.)

I have tried to read as much as possible on the async/sync debate and whether I should use a ZIL, etc.

Happy to hear any suggestions.

[root@freenas] /# zpool status
pool: Virtual_Machines
state: ONLINE
scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	Virtual_Machines                                ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/39b4d279-f09f-11e3-a182-000c29e896ae  ONLINE       0     0     0
	    gptid/39e3d5ee-f09f-11e3-a182-000c29e896ae  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/3a10cdcf-f09f-11e3-a182-000c29e896ae  ONLINE       0     0     0
	    gptid/3a3f05f5-f09f-11e3-a182-000c29e896ae  ONLINE       0     0     0

Run began: Wed Jun 11 02:28:05 2014

Auto Mode
File size set to 3145728 KB
Using Minimum Record Size 64 KB
Command line used: iozone -a -s 3g -y 64
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                                             random   random     bkwd   record   stride
              KB  reclen    write  rewrite     read   reread     read    write     read  rewrite     read   fwrite frewrite    fread  freread
         3145728      64   571586  1114927  4522225  4514231  4075975   394982  4285674  4648266  4077053   990118   732990  3482588  3506567
         3145728     128  1206218   732964  4770555  4918019  4729928   987347  4736001  6274438  4726649   935261   683131  3400761  3479271
         3145728     256  1094530   763820  5011587  4939263  4945055  1021026  4897994  6440753  5016308  1039422   731569  3405231  3445254
         3145728     512  1191133   833010  5072788  5125506  4997034  1096738  4892807  6409583  5086217   974885   750841  3665610  3688710
         3145728    1024  1194204   741001  5348630  5348032  5208029  1084966  4434405  6623351  5171351   952706   750480  3548901  3597420
         3145728    2048  1218085   745733  5372620  5465240  5383137  1131260  4643861  6512974  5246404   903068   868688  3549941  3575205
         3145728    4096  1211865   820769  5265574  5448053  5362381  1149280  4769853  6648128  5383432   880030   882687  3163796  3251654
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sorry, but your tests mean nothing. A 3 GB test file will be easily cached with 32 GB of RAM, so all you did was measure how fast the ZFS ARC is, which is far, far higher than what the disks can do.

This is yet another example of why I tell people not to run iozone on their pools. Unless you are a pro at ZFS, you have no chance of running tests that actually produce meaningful numbers.
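A minimal sketch of the point being made here (the sizing rule is a common rule of thumb, not something stated in this thread): for numbers that reflect the disks rather than the ARC, the test file must be much larger than RAM, typically at least twice its size.

```shell
# Rule of thumb (an assumption, not from this thread): benchmark with a
# file at least twice the size of RAM so the ARC cannot hold it all.
ram_gb=32                      # RAM in the box above
test_gb=$((ram_gb * 2))        # 2x RAM
echo "iozone -a -s ${test_gb}g -r 64"   # prints the resized command
```

On this 32 GB box that means a 64 GB test file instead of the 3 GB one used above.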
 

voona

Cadet
Joined
Jun 6, 2014
Messages
2
cyberjock said:
"Sorry, but your tests mean nothing. A 3 GB test file will be easily cached with 32 GB of RAM, so all you did was measure how fast the ZFS ARC is, which is far, far higher than what the disks can do.

This is yet another example of why I tell people not to run iozone on their pools. Unless you are a pro at ZFS, you have no chance of running tests that actually produce meaningful numbers."

O.K. Good point. How would you suggest testing raw disk performance?

Can you confirm the pool is set up correctly?
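One common approach to the raw-disk question (a sketch, not an answer given in this thread): stream incompressible data larger than RAM onto the pool with dd and time it, so neither the ARC nor ZFS compression can flatter the number. The path `/mnt/Virtual_Machines/testfile` is hypothetical.

```shell
# Build a dd command sized at 2x RAM; /dev/urandom defeats ZFS
# compression (though urandom itself can bottleneck fast SSDs, so
# treat the result as a floor, not an exact figure).
size_gb=64                    # 2x the 32 GB of RAM above
count=$((size_gb * 1024))     # number of 1 MiB blocks
echo "dd if=/dev/urandom of=/mnt/Virtual_Machines/testfile bs=1M count=${count}"
```

Run the printed command, then divide bytes written by elapsed time for a sustained write rate.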
 