loader/drivers

Status
Not open for further replies.

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
Hi,

I'm not sure if I'm asking the question correctly, or even comparing apples to apples....

I'm getting siis timeouts in dmesg. After a LOT of Googling, many of the answers boil down to "use the latest driver." So, is a module the same thing as a driver? Is the driver even included in FreeNAS? And if I specify a "module" (which does end up in /boot/loader.conf.local), where would it load from?

According to http://www.unix.com/man-page/freebsd/4/SIIS/, the following should be in loader.conf: siis_load="YES".

Even though cat loader.conf.local confirms the line is there, another thread here said I'd still have to download the module. siis is not in /boot/kernel, nor in /boot/modules.

I have 3 Syba cards, each with 4 SATA ports. They do show up in dmesg....
siis0: [ITHREAD]
siisch0: <SIIS channel> at channel 0 on siis0
siisch0: [ITHREAD]
siisch1: <SIIS channel> at channel 1 on siis0
siisch1: [ITHREAD]
siisch2: <SIIS channel> at channel 2 on siis0
siisch2: [ITHREAD]
siisch3: <SIIS channel> at channel 3 on siis0
siisch3: [ITHREAD]

Because siis is there, does this mean that the driver is there?
Same goes for here...
siisch0: siis_timeout is 00040000 ss 7fffff00 rs 7fffff00 es 00000000 sts 801e2000 serr 00000000
siisch0: ... waiting for slots 6fffff00
siisch0: Timeout on slot 27
siisch0: siis_timeout is 00040000 ss 7fffff00 rs 7fffff00 es 00000000 sts 801e2000 serr 00000000
siisch0: ... waiting for slots 67ffff00

What does all of this mean?

Also, I have 9 5-port port multipliers connected to the cards. Each card has 4 ports, but I only use 3 per card, and I have 3 cards.
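For context on why port multipliers can bottleneck a build like this, here is a back-of-the-envelope sketch; the ~300 MB/s usable-per-link figure is an assumption for a 3 Gbit/s SATA link after encoding overhead, not a number from this thread:

```shell
# Rough per-drive bandwidth behind a 5-port multiplier.
# Assumption: one 3 Gbit/s SATA link carries ~300 MB/s of payload
# (after 8b/10b encoding), shared by all drives on the multiplier
# when they are active simultaneously.
link_mbs=300      # usable MB/s on one SATA link (assumed)
drives_per_pm=5   # drives sharing that one link
echo "$((link_mbs / drives_per_pm)) MB/s per drive, worst case"
```

With all 45 drives busy at once (a scrub, say), every drive behind a multiplier is capped well below what a single modern disk can stream.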

45 3TB Hitachi drives
16GB memory
8.0.3p1
i3 540 @ 3.07GHz

All the drives are seen fine. For "testing and burn-in" I have 3 7-drive pools and 3 8-drive pools.

Thanks,

Rich
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441

Yes, you can think of a driver and a module as the same.

Modules in the base FreeBSD are in /boot/kernel. Third-party modules would be in /usr/local/etc/modules/.

Correct: just because you specify that you want to load something doesn't mean it's there. You should check for the existence of the module.
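A minimal way to do that check, sketched as a small shell function; the two directories are the standard FreeBSD module locations, and the function name is made up for illustration:

```shell
#!/bin/sh
# check_module NAME: look for NAME.ko in the usual FreeBSD module
# directories. If no file is found, the driver may instead be compiled
# into the kernel itself (it would still show up in dmesg).
check_module() {
    for dir in /boot/kernel /boot/modules; do
        if [ -e "$dir/$1.ko" ]; then
            echo "found: $dir/$1.ko"
            return 0
        fi
    done
    echo "no $1.ko file found; driver may be compiled into the kernel"
    return 1
}

check_module siis || true   # "not found" is information, not an error
```

A device showing up in dmesg with no siis.ko anywhere on disk would be consistent with the driver being built into the kernel image.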

It looks like it's seeing your cards, so I would expect you would not need to modify config files or add extra modules.

I would guess that the timeouts happen when it's trying to detect ports which don't have devices connected to them. If that's true, you should see 3 timeouts, 1 per card (since you're using 3 of the 4 ports on each of your 3 cards).
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77

Thanks for the reply....

I'm getting a LOT of timeouts (for 23 slots; I didn't want to list them all for space reasons). Would it help if I posted the full dmesg output?

Again, these may not be related, but posts from the siis author in other forums as late as mid-2011 say that users should update to the latest driver. So I think of this a couple of ways, with possibly unrelated questions: if a driver/module is part of the FreeNAS kernel release (apparently siis is, because it shows up in dmesg), does loading it as an additional module create conflicts, i.e. having 2 pieces? Does loading a module do anything to what's currently loaded from the kernel? Does a module load after the kernel, and if so, does it override a part that may be "older"?

How do I get/find a siis(4) module?

In my release, /usr/local/etc/modules doesn't exist.

Rich
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
It could be that siis is compiled into the kernel, in which case it would not show up in /boot/kernel/.

You are correct that trying to load the module more than once will make it squawk, but it won't actually hurt (or accomplish) anything.

Hmm, you may have to wait for something based on 8.3. It looks like 8.2 was released just before the newer siis driver.
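Since the driver here ships with the FreeBSD base rather than as a separate package, one quick check (a standard command, nothing thread-specific) is to see which FreeBSD release the install is built on:

```shell
# The kernel release string indicates which FreeBSD base (and hence
# which in-tree siis driver) a FreeNAS build carries.
uname -rm
```

An 8.2-based build would report an 8.2 kernel, which would line up with carrying the older in-tree siis.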

You mention that all drives are seen, and you've been able to get them put into various vdevs and into a pool. Do they work properly?
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
So, to answer the question: it seems they work properly. Besides the dd tests, I can copy and retrieve files just fine.

This does not seem very good compared to others here (http://forums.freenas.org/showthread.php?981-Notes-on-Performance-Benchamarks-and-Cache), which is concerning considering the amount of memory and the newness of all the hardware.

[pacs@pod2 /]$ dd if=/dev/zero of=/mnt/Store4/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 1224.831318 secs (87664465 bytes/sec)

[pacs@pod2 /]$ dd if=/mnt/Store4/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 631.636189 secs (169993715 bytes/sec)
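For comparison with other posted benchmarks, dd's bytes/sec figure converts to MiB/s with plain shell arithmetic, using the two numbers above:

```shell
# Convert dd's bytes/sec to MiB/s (1 MiB = 1048576 bytes).
write_bps=87664465    # from the write test
read_bps=169993715    # from the read test
echo "write: $((write_bps / 1048576)) MiB/s"
echo "read:  $((read_bps / 1048576)) MiB/s"
```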

[pacs@pod2 /]$ zpool list
NAME     SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Store4   19T    141G   18.9T  0%   ONLINE  /mnt
Store5   21.8T  2.14M  21.7T  0%   ONLINE  /mnt
Store6   19T    2.06M  19.0T  0%   ONLINE  /mnt
Store7   21.8T  1.98M  21.7T  0%   ONLINE  /mnt
Store8   19T    2.06M  19.0T  0%   ONLINE  /mnt
Store9   21.8T  1.98M  21.7T  0%   ONLINE  /mnt

For Store4 only....
[pacs@pod2 /]$ zpool status
pool: Store4
state: ONLINE
scrub: none requested
config:

NAME        STATE   READ WRITE CKSUM
Store4      ONLINE     0     0     0
  raidz2    ONLINE     0     0     0
    ada0p2  ONLINE     0     0     0
    ada1p2  ONLINE     0     0     0
    ada2p2  ONLINE     0     0     0
    ada3p2  ONLINE     0     0     0
    ada4p2  ONLINE     0     0     0
    ada5p2  ONLINE     0     0     0
    ada6p2  ONLINE     0     0     0

errors: No known data errors

Supermicro X8SIL-F
Intel i3 540 @ 3.07GHz
16GB DDR3-1333
(For this test, on Store4) 7x 3TB Hitachi 5400 Deskstar
Overall: 45x 3TB Hitachi 5400 Deskstar, in 3 volumes of 7 drives + 3 volumes of 8 drives

Rich
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
After some looking, it seems there is no 8.3 yet, per your post; 8.2 was released in early 2011.
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
[root@freenas] /mnt/tank# dd if=/dev/zero of=/mnt/tank/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 356.698135 secs (301022551 bytes/sec)
[root@freenas] /mnt/tank# zpool list
NAME  SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
tank  7.25T  4.32T  2.93T  59%  ONLINE  /mnt
[root@freenas] /mnt/tank# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:

NAME           STATE   READ WRITE CKSUM
tank           ONLINE     0     0     0
  mirror       ONLINE     0     0     0
    gpt/mfid0  ONLINE     0     0     0
    gpt/mfid1  ONLINE     0     0     0
  mirror       ONLINE     0     0     0
    gpt/mfid2  ONLINE     0     0     0
    gpt/mfid3  ONLINE     0     0     0
  mirror       ONLINE     0     0     0
    gpt/mfid4  ONLINE     0     0     0
    gpt/mfid5  ONLINE     0     0     0
  mirror       ONLINE     0     0     0
    gpt/mfid6  ONLINE     0     0     0
    gpt/mfid7  ONLINE     0     0     0
logs
  mfid8        ONLINE     0     0     0
cache
  mfid9        ONLINE     0     0     0

errors: No known data errors
[root@freenas] /mnt/tank#

I should note that the cache is a 32G SSD; it would make some improvement, but I would not expect a factor of 2x.
 

stereoa

Cadet
Joined
Mar 30, 2012
Messages
6
Did you ever fix your problem with this? I have this exact same problem with a setup extremely similar to yours (Backblaze recipe).

Thanks!
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
Not yet. I'm trying a different configuration soon (we've already modified the pod in several ways, now on version 3). I'll let you know.
 

stereoa

Cadet
Joined
Mar 30, 2012
Messages
6
Awesome, thanks for the response. I switched to Openfiler for now, which, after the initial learning curve, is faster and runs more smoothly.
 

Chewie71

Cadet
Joined
Sep 26, 2012
Messages
9
Just a followup: did you ever get this figured out? I'm having similar timeout issues with my Backblaze.

Matt
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
Kind of came up with a workaround: trash the current Backblaze model :D

I've reconfigured the 3rd pod with NO port multipliers, and added a single redundant PSU as well. I'm using 3 LSI 9201-16i cards with cables direct-attached to each drive. I had to Dremel out the PEM nuts where the nylon standoffs go. I reconfigured some other stuff which I'm not sure I can talk about, but I can say the drives are now all upside down; I had to relabel each drive (physically) with its serial number so it's visible from the top.

Now we're getting another company to make a reconfigured pod case with those and other modifications. I'll say this: it's MUCH faster than using the port multipliers, which, for me, was really important for scrub and resilver times.

This is 6 3TB drives per pool, 7 pools per pod, plus 3 spares (45 drives total)

[pacs@freenas /]$ dd if=/dev/zero of=/mnt/Store1/tmp3.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 231.435799 secs (463948028 bytes/sec)

[pacs@freenas /]$ dd if=/mnt/Store1/tmp3.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 160.692649 secs (668195981 bytes/sec)
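Set against the earlier port-multiplier runs in this thread, the direct-attach numbers work out to roughly a 5x write and 4x read improvement; a quick check with awk:

```shell
# Speedup of direct-attach (LSI) vs. port-multiplier throughput,
# computed from the bytes/sec figures posted earlier in the thread.
awk 'BEGIN {
    printf "write: %.1fx\n", 463948028 / 87664465
    printf "read:  %.1fx\n", 668195981 / 169993715
}'
```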
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
Kind of came up with a workaround--- trash the current Backblaze model :D

I've reconfigured the 3rd pod with NO port multipliers - and added a single redundant PSU also. Using 3 LSI 9201-16i cards with direct attached cables to each drive.
+1

I was going to suggest something like this myself.
 

Chewie71

Cadet
Joined
Sep 26, 2012
Messages
9
I just wonder how Protocase (and Backblaze) can promote the default hardware configuration if it doesn't work. It apparently works for Backblaze... unless they don't use the same parts they advertise on their blog or that Protocase sells. :(

Thanks for the info....

I've upgraded to the latest FreeNAS 8.3-RC1, and it has somewhat minimized the port-multiplier errors. I've still not configured all the drives in my ZFS pool, though. We'll see what happens when I have 6-disk raidz2 arrays across all trays inside the ZFS pool.
 

Chewie71

Cadet
Joined
Sep 26, 2012
Messages
9
Apparently it doesn't work with Debian Linux either, even though that's the default install from the factory. Debian was even worse...

I guess I'll see how it goes with 8.3. Didn't I read, though, that you had two of these things and the problems didn't appear until you got your second unit? Maybe that was someone else. If not, I'd be curious to hear what the difference between the two units was.
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
From the Backblaze blog (http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/):

"We upgraded the Linux 64-bit OS from Debian 4 to Debian 5, but we no longer use JFS as the file system. We selected JFS years ago for its ability to accommodate large volumes and low CPU usage, and it worked well. However, ext4 has since matured in both reliability and performance, and we realized that with a little additional effort we could get all the benefits and live within the unfortunate 16 terabyte volume limitation of ext4. One of the required changes to work around ext4’s constraints was to add LVM (Logical Volume Manager) above the RAID 6 but below the file system. In our particular application (which features more writes than reads), ext4’s performance was a clear winner over ext3, JFS, and XFS."

I'm starting our 4th pod soon, with the previous 3 getting the upgrades/mods so they match. I found a manufacturer locally (not in Canada) and a really good CAD person, so the modification costs are not too bad in the overall scheme for my application. The price for a bare pod is hundreds less than the Canadian manufacturer's in a small quantity (5), and our increase in performance (especially with scrubs and resilvering) and the stability/reliability of parts are worth a small increase in overall cost.

Rich
 