Mirror vdevs use more CPU on FreeNAS than RAIDZ2?

Status
Not open for further replies.

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Why do Mirror vdevs use more CPU on FreeNAS than RAIDZ2?
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Something I have noticed.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
I agree. I have neither heard of, nor noticed that.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Hmmm, I will keep an eye out. I have 2 test boxes right now: one running RAIDZ2 and another with 3 mirror vdevs.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Why do Mirror vdevs use more CPU on FreeNAS than RAIDZ2?
Do they? If so, here's a hypothesis.
<handwaving>
There might be situations where I/O is bottlenecked on the disk subsystem with RAIDZ2, leaving the CPU idle. On a system with mirror vdevs, the same I/O workload might not be bottlenecked on the disk subsystem, and thus might deliver higher throughput. In turn, this could perhaps put more load on the CPU.
</handwaving>
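If you wanted to test that hypothesis, one rough way is to watch both the disks and the CPU while a write test runs. The path below is just a placeholder; gstat and top ship with FreeBSD/FreeNAS.

Code:
# terminal 1: generate a sustained write load (placeholder path, ~20GB)
dd if=/dev/zero of=/mnt/tank/testfile bs=1048576 count=20480

# terminal 2: per-disk busy% -- disks pinned near 100% point at the disk subsystem
gstat -p

# terminal 3: CPU usage -- lots of system time while the disks loaf points at the CPU
top -SP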
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Right, that's possible. The testing workloads would need to be very similar or ideally identical.
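For example, running the same fixed-size dd on each box (path and size here are placeholders) would at least make the CPU numbers comparable:

Code:
# run the identical command on each box, against its own pool
dd if=/dev/zero of=/mnt/testpool/ddfile bs=1048576 count=40960

# and watch CPU on each box while it runs
top -SP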
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Not seeing it.

Code:
# dd if=/dev/zero of=file bs=1048576
load: 0.34  cmd: dd 73823 [running] 113.58r 0.03u 19.77s 0% 2616k
41663+0 records in
41663+0 records out
43686821888 bytes transferred in 113.590115 secs (384600561 bytes/sec)


Code:
last pid: 73880;  load averages:  0.48,  0.27,  0.11    up 8+19:16:06  17:21:56
42 processes:  1 running, 41 sleeping
CPU:  0.1% user,  0.0% nice,  2.7% system,  0.4% interrupt, 96.7% idle
Mem: 594M Active, 177M Inact, 48G Wired, 3132K Cache, 13G Free
ARC: 42G Total, 14G MFU, 25G MRU, 2578M Anon, 689M Header, 555M Other
Swap: 28G Total, 28G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 4043 root         12  20    0   190M 37764K uwait   0  39:06   0.10% collectd
 4000 root          1  52    0   234M   107M select  0   4:16   0.00% python2.7
 3996 root          6  22    0   384M   173M usem    9   0:43   0.00% python2.7
73823 root          1  28    0  9912K  2632K dmu_tx  1   0:25   0.00% dd


Code:
# zpool iostat storage3 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage3    1.88T  9.00T      3     43  53.3K   496K
storage3    1.88T  9.00T      0  4.77K      0   599M
storage3    1.88T  9.00T      0  1.45K      0   180M
storage3    1.88T  8.99T      0   1015      0  99.0M
storage3    1.88T  8.99T      0  5.09K      0   648M
storage3    1.88T  8.99T      0  4.77K      0   607M
storage3    1.88T  8.99T      0  4.61K      0   586M
storage3    1.88T  8.99T      0  2.90K      0   352M
storage3    1.88T  8.99T      0  1.36K      0   173M
storage3    1.88T  8.99T      0    994      0   122M
storage3    1.88T  8.99T      0    668      0  56.9M
storage3    1.88T  8.99T      0  4.11K      0   521M
storage3    1.88T  8.99T      0  5.19K      0   660M
storage3    1.88T  8.99T      0  4.80K      0   609M
storage3    1.88T  8.99T      0  4.20K      0   518M


The thing's pretty much yawning, although admittedly the CPU is massively oversized for the task (E5-1650 v3).

The pool is built from six three-way mirror vdevs of 2.5" 2TB drives, laptop-ish grade (the Toshibas are technically not laptop drives but have similar performance characteristics). Fragmentation is 11% on a 17% full pool.
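For reference, a layout like that (six three-way mirror vdevs) would be created roughly like this from the command line; the device names below are made up, and on FreeNAS you would normally build the pool from the GUI instead:

Code:
# six vdevs, each a three-way mirror (hypothetical device names)
zpool create storage3 \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  mirror da6 da7 da8 \
  mirror da9 da10 da11 \
  mirror da12 da13 da14 \
  mirror da15 da16 da17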

It isn't really designed for write performance, but since the component devices top out around 100MBytes/sec on writes, the ~350MBytes/sec isn't bad against a theoretical max of ~600MBytes/sec.
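The arithmetic behind that, assuming each three-way mirror vdev writes no faster than a single member disk:

Code:
per-vdev write ceiling:  ~100 MB/s   (every member of a mirror writes the same data)
pool write ceiling:      6 vdevs x ~100 MB/s = ~600 MB/s
observed sustained:      ~340-385 MB/s from the dd runs above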

Ten minutes later it's still going:

Code:
last pid: 74042;  load averages:  0.24,  0.24,  0.16    up 8+19:24:10  17:30:00
43 processes:  1 running, 41 sleeping, 1 stopped
CPU:  0.0% user,  0.0% nice,  2.4% system,  0.1% interrupt, 97.5% idle
Mem: 595M Active, 177M Inact, 48G Wired, 3132K Cache, 13G Free
ARC: 44G Total, 11G MFU, 30G MRU, 2509M Anon, 688M Header, 420M Other
Swap: 28G Total, 28G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
 4043 root         12  20    0   190M 37784K uwait   9  39:08   0.00% collectd
 4000 root          1  52    0   234M   107M select  4   4:16   0.00% python2.7
73823 root          1  26    0  9912K  2632K dmu_tx  1   1:18   0.00% dd


Code:
219601+0 records in
219600+0 records out
230267289600 bytes transferred in 678.051150 secs (339601650 bytes/sec)


And then reading it back:

Code:
last pid: 74136;  load averages:  0.38,  0.26,  0.18    up 8+19:27:58  17:33:48
43 processes:  1 running, 41 sleeping, 1 stopped
CPU:  0.0% user,  0.0% nice,  3.9% system,  0.2% interrupt, 95.8% idle
Mem: 595M Active, 177M Inact, 50G Wired, 3132K Cache, 11G Free
ARC: 46G Total, 8435M MFU, 36G MRU, 34M Anon, 709M Header, 421M Other
Swap: 28G Total, 28G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
74088 root          1  27    0  9912K  2632K zio->i  9   0:13  12.60% dd


Code:
# dd if=file of=/dev/null bs=1048576
219601+0 records in
219601+0 records out
230268338176 bytes transferred in 263.750597 secs (873053334 bytes/sec)



That's pretty cool considering it had to hit the pool to get that. I get much better performance out of L2ARC...
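If anyone wants to see how much of a read-back like that actually comes off the pool rather than ARC, the ZFS counters are one place to look (these sysctls exist on stock FreeBSD; pool name as above):

Code:
# ARC hit/miss counters, sampled before and after the read
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# or just watch what the pool itself serves during the read
zpool iostat storage3 1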
 