Chris LaDuke
Dabbler
- Joined: Jan 17, 2014
- Messages: 18
Hey all. I am a relative n00b when it comes to ZFS. I have a Supermicro 6025B-3 server with 12 GB RAM, two dual-port Intel Gb NICs, six 300 GB 15k SAS drives, and two Intel DC S3700s. I am running it as a datastore for ESXi 5.1, connecting via iSCSI with MPIO. The six SAS drives are arranged as three mirrors, and I added the two Intels as a mirrored SLOG. I am currently getting what I would consider poor performance (about 30 MB/s writes). In an effort to determine why, I began collecting some data.
Here is a snapshot of my pool
[root@NAS1 ~]# zpool status
        NAME                                            STATE     READ WRITE CKSUM
        SASPOOL                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/1d9a8ab3-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
            gptid/1e0188cd-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/1e68aba7-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
            gptid/1ecedb74-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/1f3879fb-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
            gptid/1fa97039-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
        logs
          mirror-3                                      ONLINE       0     0     0
            gptid/1fdeef65-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
            gptid/2006de8a-7e1f-11e3-a5c6-003048d5067c  ONLINE       0     0     0
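For reference, the layout above is equivalent to something like this (a sketch only, not the exact commands; the da/ada names below are placeholders for the gptid partitions shown above):

# Rough sketch of the pool layout: three data mirrors plus a mirrored log (placeholder device names)
zpool create SASPOOL \
    mirror da0p2 da1p2 \
    mirror da2p2 da3p2 \
    mirror da4p2 da5p2 \
    log mirror ada0p2 ada1p2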
A local dd write test gave the following results:
dd if=/dev/zero of=/mnt/sata01/test1 bs=8k count=2000000
2000000+0 records in
2000000+0 records out
16384000000 bytes transferred in 37.685992 secs (434750397 bytes/sec)
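That works out to roughly 415 MiB/s (16,384,000,000 bytes / 37.69 s). I realize dd from /dev/zero is an async, highly compressible write, so it may not reflect what the iSCSI traffic sees. If it matters, something like this should show whether compression and sync writes are in play (the zvol name below is just a placeholder for whatever backs the iSCSI extent):

# Check compression and sync settings on the pool and the iSCSI-backing dataset (placeholder name)
zfs get compression,sync SASPOOL
zfs get compression,sync,volblocksize SASPOOL/iscsi-extent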
While copying a 10 GB file, gstat showed the %busy on the six SAS drives jumping around inconsistently: from 0 to 5 to 26 to 50 and back to around 30. It hit 80% a couple of times but mostly sat in the 20s, and it often dropped to 0. The Windows VM doing the copy was seeing about 25 MB/s.
dT: 1.001s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 0 0 0 0.0 0 0 0.0 0.0| ada0
0 0 0 0 0.0 0 0 0.0 0.0| da2p1
0 0 0 0 0.0 0 0 0.0 0.0| ada1
0 105 0 0 0.0 103 10973 4.8 45.9| da0
0 101 0 0 0.0 99 10462 4.5 43.0| da1
0 110 0 0 0.0 108 11553 4.6 47.2| da2
0 110 0 0 0.0 108 11553 4.6 47.1| da3
0 115 0 0 0.0 113 12252 4.7 50.6| da4
0 115 0 0 0.0 113 12124 4.6 49.5| da5
0 0 0 0 0.0 0 0 0.0 0.0| cd0
0 0 0 0 0.0 0 0 0.0 0.0| da3p1.eli
0 110 0 0 0.0 108 11553 4.6 47.3| da2p2
0 110 0 0 0.0 108 11553 4.6 47.4| gptid/1d9a8ab3-7e1f-11e3-a5c6-003048d5067c
0 0 0 0 0.0 0 0 0.0 0.0| da3p1
0 0 0 0 0.0 0 0 0.0 0.0| da4p1.eli
0 110 0 0 0.0 108 11553 4.6 47.1| da3p2
0 110 0 0 0.0 108 11553 4.6 47.2| gptid/1e0188cd-7e1f-11e3-a5c6-003048d5067c
0 0 0 0 0.0 0 0 0.0 0.0| da4p1
0 0 0 0 0.0 0 0 0.0 0.0| da5p1.eli
0 115 0 0 0.0 113 12252 4.7 50.7
My two ZIL drives (ada0 and ada1) are not being touched at all. Any idea why?
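For what it's worth, this is what I plan to check next (I believe zilstat ships with FreeNAS, so hopefully it is on this build; the gstat -f filter just limits output to the two SSDs, and the zvol name is a placeholder):

# Watch only the two SSDs to see whether the log devices ever take writes
gstat -f 'ada[01]'
# Check the sync setting on the pool and on the dataset/zvol backing the iSCSI extent (placeholder name)
zfs get sync SASPOOL
zfs get sync SASPOOL/iscsi-extent
# zilstat should show ZIL/SLOG activity directly, sampling once per second
zilstat 1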