
16 Bay Supermicro Chassis

Status
Not open for further replies.

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
Hi guys, I'm back. FreeNAS has been working great since I got it up and running 2 years ago, and I've been updating it throughout. But I've hit a limit.
I'm down to 6TB free and getting the yellow light that I'm running out of space. As some people here know (they helped me early on), I have a 16 bay chassis (3U Supermicro 16-bay Storage Server Chassis SC836E26-R1200, SAS2-836EL2 6Gb/s backplane). I have 12 WD Red 6TB drives inside, and I just purchased 4 more WD 8TB drives to fill the remaining bays.

I'm running into a problem: I can't see the 4th new drive. No matter which drive I swap into that slot, no drive shows up there. I only see da0-da14. What could it be?

Also, how can I add these drives to expand my current Plex storage? That's basically what I built this server for.



Thanks
 

Attachments

  • drives.JPG

snaptec

Senior Member
Joined
Nov 30, 2015
Messages
502
Does your backplane have an expander, or individual SATA cables?
If the latter, change it!


Sent from my iPhone using Tapatalk
 


BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
The EL2 backplane is an expander backplane, IIRC.
The board may simply have a blown component
in the data or power path to that hot-swap connector.
The good news is the backplane is easy to replace.
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
I was thinking it could be something physically wrong with the hardware itself, because I had the same issue when I was installing the first 12 drives 2 years ago. Is the backplane worth replacing for 1 drive?
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I was thinking it could be something physically wrong with the hardware itself, because I had the same issue when I was installing the first 12 drives 2 years ago. Is the backplane worth replacing for 1 drive?
I was asking myself that exact question while typing my
earlier answer, and I don't think so. The reason I say this is
you have managed to fill your pool in two years! I think it might
be time for you to consider adding another machine. What comes
to mind is the old phrase: don't put all your eggs in one basket...

Your backplane has the ability to expand further but all the I/O
will be going through the single HBA and having to share that
bandwidth would suck! Further expansion of the machine you
have is beyond my ken, so maybe someone else will respond...

When I got my 16 bay chassis and switched out the SAS backplane for a SAS2 backplane, I went out and purchased 16
old POS 40 GB hard drives to test it out. I got lucky and all the
bays were working fine. I have yet to move my main system
hardware to this chassis, but will do so next week.

You can read my older posts regarding more specific
details of my experience with the chassis/backplane.
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
I was asking myself that exact question while typing my
earlier answer, and I don't think so. The reason I say this is
you have managed to fill your pool in two years! I think it might
be time for you to consider adding another machine. What comes
to mind is the old phrase: don't put all your eggs in one basket...

Your backplane has the ability to expand further but all the I/O
will be going through the single HBA and having to share that
bandwidth would suck! Further expansion of the machine you
have is beyond my ken, so maybe someone else will respond...

When I got my 16 bay chassis and switched out the SAS backplane for a SAS2 backplane, I went out and purchased 16
old POS 40 GB hard drives to test it out. I got lucky and all the
bays were working fine. I have yet to move my main system
hardware to this chassis, but will do so next week.

You can read my older posts regarding more specific
details of my experience with the chassis/backplane.
Yes, that was my idea. Most of the data that filled my server came from my old Drobo file server. I built a FreeNAS because of the scalability, and I thought of adding a JBOD in the future. Now I'm down to 6TB of free space, so I need to expand.

So what I need to know is: is a JBOD worth it for what I want to do at this point? I really think the 4 new 8TB Reds I just purchased will hold me for another 2 years, but since one bay is not working, I can only use 3 of them. Is there a way I can add these three 8TB Reds to the pool I have now? I'm currently running 2 RAIDZ2 vdevs of six 6TB Reds each = 12 drives.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
You cannot just add the three new drives to your existing pool without rebuilding it. What I would recommend at this time is to begin replacing the 6TB drives with the 8TB drives one at a time until all the drives in Vdev #1 are replaced. This will result in an initial expansion of 6.3TB of "usable" space and the recovery of the original six drives for future reuse. You could then replace the drives in Vdev #2 in the same manner as your needs arise. By the time you need more space, you could use the twelve 6TB drives in a JBOD enclosure.

In a nutshell, for the cost of two more 8TB drives now, you can shortly have an additional 6TB, which should give you some breathing room.
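The arithmetic behind that recommendation can be sketched as follows. This is a simplified model (it ignores ZFS metadata, slop space, and the TB-vs-TiB conversion, so real usable figures will be lower):

```python
def raidz_usable_tb(n_drives, parity, drive_tb):
    """Approximate usable capacity of a RAIDZ vdev in TB (raw, pre-overhead)."""
    return (n_drives - parity) * drive_tb

# One 6-drive RAIDZ2 vdev, before and after swapping 6TB drives for 8TB:
before = raidz_usable_tb(6, 2, 6)   # 24 TB raw
after  = raidz_usable_tb(6, 2, 8)   # 32 TB raw
print(after - before)               # 8 TB raw gain per vdev
```

That 8TB raw gain is about 7.3TiB, which after ZFS overhead lands in the same ballpark as the 6.3TB "usable" figure above.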
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
Ohh, I see what you mean. I guess that's what I'll do for the moment.
A few questions:
1. Can I set these 3 drives (8TB) up as a RAIDZ1 (2 for data + 1 for parity)? Then can I add it to my Plex media storage so I can grow the volume?
2. If I can't do number 1, how can I tell which drives in my chassis are in Vdev #1 and which are in Vdev #2?
3. If I can only do number 2, do I just pull a drive while running and it'll rebuild itself automatically?

Thanks for your help by the way!
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
8TB drives in a RAIDz1
NEVER, NO, NOT EVER!

If I can't do number 1, how can I know which drives in my chassis is from Vdev #1 and Vdev #2?
You have to go by the serial numbers of each drive. First recommendation: shut down your server, pull the drives one by one, and either draw a map of the bays or
place a label on the front of each drive tray (there's a little space on the release handle for this purpose).
Go into the GUI and click on Storage, then click on your pool name. Once it's highlighted, three icons appear at the bottom of the window; click the Volume Status icon.
Each vdev will be listed with its drive labels; write them down.
Next click on Storage, then find View Disks; the serial numbers are listed there.
Read the manual section on how to replace drives, and if you have questions after some study, post back for more help. The forum's search feature is your friend - use it!
I shut my server off to pull and add drives. I avoid "hot swap" because no one yells at me when the server is down, and I feel it is safer to do this.
 

Scharbag

Senior Member
Joined
Feb 1, 2012
Messages
509
IMG_1701.jpg


I use a little bit of painters tape to label the drive bays. Very helpful when replacing drives. (Yes, those are milk crates...)

You can also use your LSI HBA and expander to confirm drive positions:

Code:
sas2ircu list                 # lists the SAS controllers on your system
sas2ircu 0 display            # lists the drives connected to controller 0
sas2ircu 0 locate 2:11 ON     # turns the locate LED on for enclosure 2, slot 11
sas2ircu 0 locate 2:11 OFF    # turns the light off :)


I have had really good luck with the above commands with my 9211-8i cards and the SAS2 expanders in my 36 bay server case.

And yeah, Z1 with 8TB drives is not a great idea as the resilver will take forever once you have any amount of data on there. Best to use Z2 as a minimum with large drives these days.

Cheers.
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
Thank you, guys. So since I have 2 RAIDZ2 vdevs and have (4) 8TB drives to replace the 6TB ones, can I replace 2 at a time (one in vdev 1 and the other in vdev 2), let them resilver, and then do it again with the other two 8TB drives?

I know it's probably not good to do 2 at a time in one vdev, right?
 

Scharbag

Senior Member
Joined
Feb 1, 2012
Messages
509
If you are not off-lining the disk (i.e. you have spare bays, so the pools are never degraded), then go for it. I have replaced multiple drives at the same time more than once when growing my production pool (3TB drives replaced with 4TB drives). With your hardware being server grade, there should be no real risk: replacement effectively adds the new drive to the vdev before removing the drive being replaced, so the pool is never put in a degraded state.

You will be punishing your drives during the resilver, but that happens regardless of how many drives you replace at a time. Statistically, you could even argue that replacing multiple drives at once exposes you to less risk, since it is far faster than replacing them one at a time: the drives in the pool are worked at 100% for a shorter duration, which should reduce the risk.

One caveat is that I have backups of everything (even the trailers from Emby) so I may be a little more cavalier with drive replacements than others. I choose to not resilver in both my production and backup pools at the same time, unless required due to an unexpected failure :)

Cheers,
 

Scharbag

Senior Member
Joined
Feb 1, 2012
Messages
509
Your backplane has the ability to expand further but all the I/O
will be going through the single HBA and having to share that
bandwidth would suck! Further expansion of the machine you
have is beyond my ken, so maybe someone else will respond...
The bandwidth of a single HBA will be more than adequate to run ANY home-use server. The SAS2 adapter I use (a 9211-8i, which can address 256 drives; I have 2 but only use 1) provides eight 6Gbps channels. In simplistic terms, the card should allow about 6GB/s of throughput (real-world tests show about 2GB/s). Given that typical spinning-rust drives can handle about 120MB/s MAX, you can safely run 50 drives on one HBA before the HBA could even be considered a bottleneck. Furthermore, if you are accessing your data at 6GB/s continuously, you will need one HELL of a network and a boatload of data :)
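The back-of-the-envelope math above can be sketched like this. It is deliberately rough: it ignores SAS 8b/10b encoding overhead (which is why real-world numbers come in lower) and just uses the figures from the post:

```python
# Rough bandwidth budget for an LSI 9211-8i style SAS2 HBA.
lanes = 8                  # SAS channels on the HBA
gbps_per_lane = 6          # line rate per channel, Gbps
hba_gbps = lanes * gbps_per_lane   # 48 Gbps aggregate line rate
hba_gbs = hba_gbps / 8             # ~6 GB/s, ignoring encoding overhead

drive_mbs = 120            # sustained throughput of a typical spinning disk, MB/s
drives_at_saturation = (hba_gbs * 1000) / drive_mbs
print(round(drives_at_saturation))  # 50 drives before the HBA is the bottleneck
```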

When I bought my used Supermicro 36-drive server, the 9211-8i was connected using only 1 of the 2 available SFF connectors (HBA -> rear -> front). That must have worked fine for whatever it was built for. I changed it to use both connectors because I am a strange duck, but for my purposes it makes no difference.

If I were the OP, I'd buy another chassis (Norco + expander, or Supermicro) to use as a JBOD only, and connect it to the existing HBA via an external SFF cable.

Cheers,
 

Stux

Wizened Sage
Joined
Jun 2, 2016
Messages
4,222
The LSI 9211-8i is limited by its 8 PCIe 2.0 lanes to about 32Gbps, whereas its SAS ports expose 48Gbps of bandwidth.

Still not really a problem.

I'd suggest upgrading a vdev, then adding a JBOD chassis.

Be careful not to add a vdev with a poor data layout to your pool. You can't undo it.

E.g. 3x8TB in Z1 is a poor data layout.

You should post your system specs for those who don't know.
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
If you are not off-lining the disk (have spare bays so pools are never degraded) then go for it. I have replaced multiple drives at the same time more than once when growing my production pool (3TB drives replaced with 4TB drives). With your hardware being server grade, there should be no real risk as replacement effectively adds the additional drive to the vDev prior to removing the drive being replaced, so the pool is never put in a degraded state. You will be punishing your drives during the resilver but that will happen regardless of how many drives you replace at a time. Statistically, you may be able to argue that you are exposing yourself to less risk as replacing multiple drives at the same time is far faster than replacing them one at a time. This means the drives in the pool will be worked at 100% for a shorter duration which should reduce the risk.

One caveat is that I have backups of everything (even the trailers from Emby) so I may be a little more cavalier with drive replacements than others. I choose to not resilver in both my production and backup pools at the same time, unless required due to an unexpected failure :)

Cheers,
Perfect. So I didn't take them offline. All I did was hit Replace and choose the 2 new drives I had already inserted in the system. All the research I had done said to offline the disk before removing it and then insert the replacement. I didn't know you could do it the way you mentioned. Now I know! Both drives are resilvering. Thanks.

Questions
1. Once they're done, can I remove the old 6TB drives and move the 8TB drives into their slots?
2. Is there a progress bar to see how the resilvering is doing? I can only see HD activity under Reporting.
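For what it's worth, resilver progress is also visible from the shell via zpool status; the pool name below is a placeholder:

```shell
# Show pool health and scan/resilver progress; "tank" is a placeholder pool name.
zpool status tank
# The "scan:" line reports something like:
#   scan: resilver in progress since ..., 42% done, 13h06m to go
```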
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
Lsi 9211-8i is limited by 8 PCIe2 lanes to about 32gbps, whereas it exposes 48gbps of bandwidth.

Still not really a problem.

I'd suggest upgrading a vdev. Then adding a jbod chassis.

Be careful not to add a vdev which has a poor data layout to your pool. You can't undo it.

Ie 3x8TB in z1 is a poor data layout.

You should post your system specs for those who don't know.
Yes, I'm upgrading the 6TB drives to 8TB and putting the 6TB drives aside, so when I build a new vdev I'll start it in a JBOD, since I don't have any space left in my 16 bay chassis.
I didn't know that losing a new 3-drive vdev could affect my whole pool. I thought if it went down, only the data on those 3 drives would be lost.

My signature has most of the info, but I'll post it below:
Motherboard - Supermicro MBD-X10SRH-CLN4F-O ATX Server Motherboard, LGA 2011
Processor - Intel Xeon E5-1650 v3 Six-Core Processor, 3.5GHz, 15MB, LGA 2011-v3 CPU, Retail
Heatsink - Supermicro SNK-P0048AP4 CPU Heatsink for LGA 2011
Memory - Crucial 64GB DDR4 2133 MT/s (PC4-2133) CL15 DR x4 ECC Registered DIMM 288-Pin
Flash Drive - Super Talent 16GB USB 3.0 Express ST4 Flash Drive (MLC)
Hard Drives - (12) WD Red WD60EFRX 6TB 64MB Cache SATA 6.0Gb/s 3.5" NAS Hard Drive
Chassis - 3U Supermicro 16-bay Storage Server Chassis SC836E26-R1200, SAS2-836EL2 6Gb/s
Rack - Tripp Lite 12U SmartRack Wall-Mount Enclosure Rack Cabinet SRW12US33
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
So the resilvering finished with no problem. It took 23 hours to do the two 8TB drives. But I still see the same amount of free space after resilvering the two 8TB drives in vdev 1. What?

I started resilvering the other 2 drives in Vdev2.
 

Stux

Wizened Sage
Joined
Jun 2, 2016
Messages
4,222
You should've resilvered all the 8TB drives into the same vdev.

You won't see a gain until you replace all six drives in the vdev with 8TB drives.
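The reason: a RAIDZ vdev's capacity is governed by its smallest member, so a mixed 6TB/8TB vdev still counts every drive as 6TB. A minimal sketch (simplified, ignoring ZFS overhead):

```python
def raidz_usable_tb(drive_sizes_tb, parity=2):
    """Approximate usable TB of a RAIDZ vdev: every member counts as the smallest."""
    n = len(drive_sizes_tb)
    return (n - parity) * min(drive_sizes_tb)

mixed = raidz_usable_tb([8, 8, 6, 6, 6, 6])   # only two drives upgraded
full  = raidz_usable_tb([8] * 6)              # all six upgraded
print(mixed, full)  # 24 32 -> no gain until the last 6TB drive is replaced
```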
 

AltecBX

Senior Member
Joined
Nov 3, 2014
Messages
259
You should've resilvered all the 8TB drives into the same vdev.

You won't see a gain until you replace all six drives in the vdev with 8TB drives.
WHAT! So once the other two 8TB drives finish, I'll still have the same amount of free space?
I thought I was replacing the 6TB with 8TB so I'd gain another 4TB.
 