ChrisH
Dabbler
This is new Supermicro hardware, and my first ZFS box. I've got 36 NL-SAS drives attached to six LSI SAS2308 controllers (one onboard, five HBAs) in a point-to-point configuration.
 
Specs:
* Supermicro 6047R-E1R36L
* Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz (12 real cores)
* FreeNAS 9.2.0 64-bit
* 256GB RAM
* 10Gb Ethernet: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15
* SSD SLOG
I did read about a potential mps driver issue in "Known Issues", which might be the cause:

- The mps driver for 6Gbps LSI SAS HBAs is version 13, which requires phase 13 firmware on the controller. This is a hard requirement, and running *older* firmware can cause many woes, including the failure to probe all of the attached disks, which can lead to degraded or unavailable arrays.

My LSI controllers are running version 17 firmware, so I emphasized "older" in the quote above - I am hoping that anything >= 13 is okay.
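For reference, the driver and firmware revisions are easy to double-check from the shell (sas2flash is LSI's flash utility; it should be on the FreeNAS image, and if not it can be grabbed from LSI):
Code:
# mps driver and controller firmware versions are printed at boot
dmesg | grep -i mps

# lists firmware/BIOS versions for every LSI controller in the box
sas2flash -listall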
 
I created the zpool at the command line, because it seemed a lot easier than the web interface. Basically, there are 34 drives in mirrored vdevs (details below), plus two hot spares. Once all the vdevs were the way I wanted them, I did a zpool export and then an automatic import from the web GUI. This seemed to work fine. Also, this configuration may not be final - I plan to benchmark raidz* vdevs as well.
Code:
[root@lens] ~# zpool status -v
  pool: datapool
 state: ONLINE
  scan: none requested
config:

    NAME          STATE    READ WRITE CKSUM
    datapool      ONLINE       0    0    0
      mirror-0    ONLINE       0    0    0
        da2       ONLINE       0    0    0
        da3       ONLINE       0    0    0
      mirror-1    ONLINE       0    0    0
        da4       ONLINE       0    0    0
        da5       ONLINE       0    0    0
      mirror-2    ONLINE       0    0    0
        da6       ONLINE       0    0    0
        da7       ONLINE       0    0    0
      mirror-3    ONLINE       0    0    0
        da8       ONLINE       0    0    0
        da9       ONLINE       0    0    0
      mirror-4    ONLINE       0    0    0
        da10      ONLINE       0    0    0
        da11      ONLINE       0    0    0
      mirror-5    ONLINE       0    0    0
        da12      ONLINE       0    0    0
        da13      ONLINE       0    0    0
      mirror-6    ONLINE       0    0    0
        da14      ONLINE       0    0    0
        da15      ONLINE       0    0    0
      mirror-7    ONLINE       0    0    0
        da16      ONLINE       0    0    0
        da17      ONLINE       0    0    0
      mirror-8    ONLINE       0    0    0
        da18      ONLINE       0    0    0
        da19      ONLINE       0    0    0
      mirror-9    ONLINE       0    0    0
        da20      ONLINE       0    0    0
        da21      ONLINE       0    0    0
      mirror-10   ONLINE       0    0    0
        da22      ONLINE       0    0    0
        da23      ONLINE       0    0    0
      mirror-11   ONLINE       0    0    0
        da24      ONLINE       0    0    0
        da25      ONLINE       0    0    0
      mirror-12   ONLINE       0    0    0
        da26      ONLINE       0    0    0
        da27      ONLINE       0    0    0
      mirror-13   ONLINE       0    0    0
        da28      ONLINE       0    0    0
        da29      ONLINE       0    0    0
      mirror-14   ONLINE       0    0    0
        da30      ONLINE       0    0    0
        da31      ONLINE       0    0    0
      mirror-15   ONLINE       0    0    0
        da32      ONLINE       0    0    0
        da33      ONLINE       0    0    0
      mirror-16   ONLINE       0    0    0
        da34      ONLINE       0    0    0
        da35      ONLINE       0    0    0
    logs
      ada1p1      ONLINE       0    0    0
    spares
      da0         AVAIL
      da1         AVAIL

errors: No known data errors
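For reference, the pool was built with plain zpool commands, roughly along these lines (sketch only and abbreviated - the real command spells out all 17 mirror pairs):
Code:
zpool create datapool \
    mirror da2 da3 \
    mirror da4 da5 \
    ...
    mirror da34 da35 \
    log ada1p1 \
    spare da0 da1

zpool export datapool
# then the automatic import from the web GUI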
I can then use the zpool, and it seems to work (I have not done thorough testing yet, but basic NFS read/write works fine).
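(By "basic" I mean roughly this kind of smoke test from an NFS client - the export path and mount point below are just placeholders:)
Code:
mount -t nfs lens:/mnt/datapool /mnt/test
dd if=/dev/zero of=/mnt/test/testfile bs=1M count=4096
dd if=/mnt/test/testfile of=/dev/null bs=1M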
After I reboot, the zpool is UNAVAILABLE:
Code:
[root@lens] ~# zpool status
  pool: datapool
state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
    replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
  see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:
 
    NAME                      STATE    READ WRITE CKSUM
    datapool                  UNAVAIL      0    0    0
      mirror-0                UNAVAIL      0    0    0
        3592302713242575290  UNAVAIL      0    0    0  was /dev/da2
        2021266809768084791  UNAVAIL      0    0    0  was /dev/da3
      mirror-1                UNAVAIL      0    0    0
        5112586467922550107  UNAVAIL      0    0    0  was /dev/da4
        6419444988605564983  UNAVAIL      0    0    0  was /dev/da5
      mirror-2                UNAVAIL      0    0    0
        8652468216450752575  UNAVAIL      0    0    0  was /dev/da6
        14819342351271755606  UNAVAIL      0    0    0  was /dev/da7
      mirror-3                UNAVAIL      0    0    0
        15880097202006874895  UNAVAIL      0    0    0  was /dev/da8
        16907735780769163509  UNAVAIL      0    0    0  was /dev/da9
      mirror-4                UNAVAIL      0    0    0
        3071978661370149652  UNAVAIL      0    0    0  was /dev/da10
        1791407592833643381  UNAVAIL      0    0    0  was /dev/da11
      mirror-5                UNAVAIL      0    0    0
        14867928338641177295  UNAVAIL      0    0    0  was /dev/da12
        3150936061178659772  UNAVAIL      0    0    0  was /dev/da13
      mirror-6                UNAVAIL      0    0    0
        8316830207967157625  UNAVAIL      0    0    0  was /dev/da14
        6685696694650249293  UNAVAIL      0    0    0  was /dev/da15
      mirror-7                UNAVAIL      0    0    0
        17794825240028426542  UNAVAIL      0    0    0  was /dev/da16
        14952460444424399977  UNAVAIL      0    0    0  was /dev/da17
      mirror-8                UNAVAIL      0    0    0
        12167009551578455686  UNAVAIL      0    0    0  was /dev/da18
        2627324427696886329  UNAVAIL      0    0    0  was /dev/da19
      mirror-9                UNAVAIL      0    0    0
        4625506170048938841  UNAVAIL      0    0    0  was /dev/da20
        11965429751940287398  UNAVAIL      0    0    0  was /dev/da21
      mirror-10              UNAVAIL      0    0    0
        12210899300310530724  UNAVAIL      0    0    0  was /dev/da22
        12988575474745012328  UNAVAIL      0    0    0  was /dev/da23
      mirror-11              UNAVAIL      0    0    0
        17984353124639465830  UNAVAIL      0    0    0  was /dev/da24
        8748366759598853076  UNAVAIL      0    0    0  was /dev/da25
      mirror-12              UNAVAIL      0    0    0
        10315855113324936583  UNAVAIL      0    0    0  was /dev/da26
        8172850504687767722  UNAVAIL      0    0    0  was /dev/da27
      mirror-13              UNAVAIL      0    0    0
        918384368181134623    UNAVAIL      0    0    0  was /dev/da28
        14501400028025044371  UNAVAIL      0    0    0  was /dev/da29
      mirror-14              UNAVAIL      0    0    0
        11784271588938812269  UNAVAIL      0    0    0  was /dev/da30
        4281149748709097750  UNAVAIL      0    0    0  was /dev/da31
      mirror-15              UNAVAIL      0    0    0
        18083932246763186878  UNAVAIL      0    0    0  was /dev/da32
        4983784078936673443  UNAVAIL      0    0    0  was /dev/da33
      mirror-16              UNAVAIL      0    0    0
        13178871247697575703  UNAVAIL      0    0    0  was /dev/da34
        13024903489373090391  UNAVAIL      0    0    0  was /dev/da35
I have to manually delete the zpool from the web interface and then run an automatic import to get the zpool functional again. What's going on here? I'm fairly certain the geom names (e.g. da34) are not changing across reboots. The problem recurs after every reboot.
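A quick way to sanity-check the device naming, for anyone curious, is to compare the CAM device list across reboots and peek at the ZFS label on a member disk:
Code:
# shows which scbus/target/lun each daX sits on; run before and
# after a reboot and compare the listings to see if names moved
camcontrol devlist

# dumps the ZFS label straight off a member disk (pool and vdev GUIDs)
zdb -l /dev/da2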
I have searched for answers to this problem, but have not found anything yet.
About Me: New to ZFS and FreeNAS/FreeBSD, but otherwise tech savvy.