Need help installing FreeNAS with Dell PERC H200. So very stuck and so very frustrated.

Status
Not open for further replies.

mikey1004

Cadet
Joined
Jun 13, 2017
Messages
5
First, a quick summary of my situation so you guys get a general idea of where my problem lies. Then I go into more details so that those of you who think you might be able to help have as much information as possible.



SUMMARY

So I think I’ve successfully installed FreeNAS onto two SSDs in my “new” Dell R510, but I’m having trouble booting. It’s set to boot into BIOS, and when I get into the boot manager, those SSDs are nowhere to be found. In fact, neither are any of my other 12 HDDs.

The problem might stem from the RAID controller card. I got the Dell PERC H200 because it was cheap and apparently people here have gotten it to work reliably. After some trials and tribulations, I think I successfully flashed it to LSI 9211-8i IT Mode by following this helpful guide (plus about a million blog and forum posts).

My evidence that the firmware flashing worked is that when I load the FreeNAS installer, it lists all my disks. So I select the two SSDs, install FreeNAS, reboot, and… uh... now I can’t find those disks anywhere in the BIOS. It's as if they don't exist. Help?



SETUP


CROSSFLASHING

I’ll go into all the details here in case it helps you help me. You can probably skip over most of this, though.

Step one is to create a USB drive that is bootable into FreeDOS and also has all these files in the root folder.

This took a while. First of all, I don’t have any PCs lying around the house, only Macs. Apparently this is not a thing that people with Macs ever have to do, because, as far as I can tell, you cannot make a bootable drive with a volume any bigger than the absolute minimum necessary for the disk image. Let me explain.

If you do this in Terminal using the dd command, the boot volume that’s created is exactly as big as the disk image. Which, I imagine, is fine in most circumstances for most people. But if you need to put a bunch of files in the root folder, you’re outta luck. And no, you can’t expand the volume in Terminal or Disk Utility—I tried.
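
To be concrete, here's the sort of dd invocation I mean (the image filename and disk numbers are placeholders; check diskutil list first, or you risk overwriting the wrong disk):

  diskutil list                          # find the USB stick, e.g. /dev/disk2
  diskutil unmountDisk /dev/disk2
  sudo dd if=freedos.img of=/dev/rdisk2 bs=1m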

Everyone says use Rufus. There is no Rufus for Mac. There’s Etcher, but that has the same limitation as dd. Same goes for Disk Utility.

Somehow a friend of mine, whom I dragged to my house and forced to help me, got it to work by just copying the FreeDOS img and the folder onto the USB drive and then booting the server via UEFI.

By some miracle, we got the shell up, so then we deferred to the guide: find the right drive, run listall, write down the SAS Address, and run sas2flash.efi -o -f 6GBPSAS.FW.
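
Roughly, the EFI shell sequence looked like this (the fs0: volume name and the exact output are whatever your system reports, so adjust accordingly):

  fs0:                          # switch to the USB stick's volume
  sas2flash.efi -listall        # note the SAS Address it reports
  sas2flash.efi -o -f 6GBPSAS.FW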

But then I got stuck. It said “Failed Reconnecting the EFI Driver. (EFI Error: Not Found)”, so then I panicked and called a different friend, who found this helpful blog post. He said reboot, so I rebooted. And the thing was bricked. I was in the exact same boat as this guy. Nothing was working; all the sas2flash.efi commands gave me the same error as before I rebooted.

From what I gathered, the best route was probably to try to get into FreeDOS via BIOS rather than UEFI so that I could run all those megarec.exe commands.

So I went back to the first friend and begged him to go find his old PC so we could download Rufus and make a bootable disk the right way this time. He obliged, downloaded stuff, ran stuff, and boom, we’ve got a bootable USB drive! And it worked! I went back to the original guide and flashed the thing. We did it! Woo hoo!



INSTALLING FREENAS

I picked the Fresh Install option, and waddaya know, all 14 drives showed up. I selected the two SSDs by hitting spacebar on both of them (that’s how you make a mirrored boot drive, right?), chose to boot via BIOS and then proceeded with the install. It said the installation was successful. So I shut down, removed the FreeNAS installer boot drive, turned it back on, and went into the BIOS boot manager, expecting my lovely new FreeNAS disks to show up.

Nuthin’. “No boot options.” Huh. So I go into system setup, boot sequence, etc., nothing. I hit every freaking button on the keyboard to try to add a drive to the boot sequence, nothing.

Maybe it doesn’t like booting into BIOS? So I plug the FreeNAS installer drive back in and start over, except this time I select UEFI. Fresh install, format drives, etc. Reboot into UEFI, nothing. UEFI boot manager, add drives, nothing. Absolutely nothing.



CROSSFLASHING, PART II

(You can skip this section, too, probably.)

At this point, I decided I'd post here. I started writing all this, but before I hit post, I decided I should dig around on the forums in case I missed something. Then I found this lovely guide. Though the instructions were essentially the same, I thought I’d give it a shot before posting. Besides, I already knew what I was doing; it should go smoothly, right?

Nope. This turned out to be just as difficult as the first go-around. First, the FreeDOS boot drive wouldn’t work with the files provided in the guide. I kept getting CONFIG.SYS errors, plus an error that said something like “Incorrect version of MS-DOS.” Any commands I tried just got spat back in my face.

I would hazard a guess that the cause was something to do with the keyboard something something in the CONFIG.SYS file, or maybe the COMMAND.COM file. I don’t know. All I know is that that folder renders the FreeDOS that Rufus puts on a USB drive totally unusable, at least on my system. After much fiddling, reading, trialing, and erroring, the only way I could get this to work was to redo Rufus, copy the files from the first guide, and then copy over the select few files that looked different and important from the second guide.

I booted into BIOS and then found MS-DOS sitting there, ready to go.
  • Megarec.exe -writesbr worked nicely; so did -cleanflash.
  • Reboot.
  • Sas2flsh.exe -c 0 -o -f 6GBPSAS.FW from the old guide worked.
  • Reboot.
  • Sas2flsh.exe -c 0 -o -f 2118p7.bin worked.
  • Reboot.
  • Sas2flsh.exe -c 0 -o -f LSI-P20-2118.bin from the new guide didn’t work, but 2118it.bin did, and, according to sas2flsh.exe -c 0 -list, successfully put it in P20.
  • Reboot.
  • Sas2flsh.exe -c 0 -o -sasadd [original SAS Address] worked.
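
For anyone following along later, here's my best reconstruction of the full sequence in one place. (The sbrempty.bin argument to -writesbr is how I remember the guide presenting it; verify against whichever guide you follow before running any of this.)

  megarec.exe -writesbr 0 sbrempty.bin
  megarec.exe -cleanflsh 0
  REM reboot
  sas2flsh.exe -c 0 -o -f 6GBPSAS.FW
  REM reboot
  sas2flsh.exe -c 0 -o -f 2118p7.bin
  REM reboot
  sas2flsh.exe -c 0 -o -f 2118it.bin
  REM reboot
  sas2flsh.exe -c 0 -o -sasadd [original SAS Address]
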
Worth noting: I couldn’t figure out how to get any flasher other than sas2flsh.exe to work, which I assume is P5 because it says “LSI Corporation SAS2Flash Utility. Version 5.00.00.00 (2010.02.10)" at the top. But what do I know? Either way, I don’t think I ever successfully used the P20 version of SAS2flash.

Also, something not particularly worth noting, but I'm going to note it anyway because it's interesting: it turns out that the only difference in procedure between the first guide and the second was that the second put the card on P7 firmware first before moving on to P20, whereas the first skipped straight to P20. It doesn't seem to have made much of a difference.



INSTALL FREENAS, PART 11 (get it?)

Then FreeNAS 11 was released(!), which, on top of my having completed the crossflashing redo just minutes earlier, gave me some hope. (Note to self: I really gotta stop with this whole “hope” thing.) So I Etchered the new ISO to a different USB, turned the ignition, and waited for something to go wrong.

It didn’t take long—in fact, before I'd even gotten to the FreeNAS Installer Console Setup home page thing, I saw error messages fly by. They went by too quickly to write down what they were, but I suppose I could record a video or something if you're curious.

We made it to that familiar screen where I could choose Install/Upgrade, so that’s what I did. And there were all my drives. Excellent! I feel like I've seen this movie before, but whatever.

Spacebar on the two SSDs, enter—WHOA! Everything went nuts. Lots of “Retrying Command” stuff. I thought it was going to be stuck there, but somehow it persevered and the “WARNING: THIS WILL ERASE YOUR ENTIRE SOUL” message popped up. I proceeded, picked a password, chose BIOS, and let it run.

It hung on “Retrying Command” again, and I thought it had a chance of working things out on its own again. So I went and made breakfast. When I came back, it was in a loop of retrying commands and something about a SCSI Status Error. So then I took a shower, and when I came back, it said installation had failed. But then it decided to try again all on its own, so now it’s been retrying some sad command for the past two hours. Somehow I doubt this will work.



POSSIBLE SOLUTIONS/EXPLANATIONS I’VE READ OR THOUGHT OF
  • These SSDs aren’t working. I tried installing to other HDDs, both with and without mirroring them, and nothing else worked.
  • The firmware and the FreeBSD driver don't match. I just read about this like 10 minutes ago on here and here. So I ran dmesg | grep “mps0: Firmware” and found that—lo and behold—they don't match. My controller is at P20 and FreeBSD wants... P21? Huh? If you thought P20 was the latest firmware, well, so did I. And we were right, it turns out. So this is not an issue. (The exact check is sketched just below this list.)
  • Move the card out of the Integrated Storage Controller Card slot and into one of the PCIe slots. I tried that but then noticed the two included SAS cables don’t reach all the way to the back where the PCIe slots are. I’ve got two new cables coming in the mail—I’ll give this a shot when those arrive.
  • The SAS cables that came with the R510 are bad. Again, we’ll try the new ones when they get here, which should be tomorrow (Friday).
  • The H200 is totally hopeless and I should have just spent an extra like $15 to get the LSI 9211-8i in the first place. Well, on Monday I caved and bought one, so now instead of spending an extra $15, I spent an extra $50 to buy a RAID card that’ll end up just sitting around. Lesson learned.
(Side note: Why on earth does anyone bother with the Dell? Just get the stupid LSI, stupid! They're almost the same stupid price!)
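
For anyone who wants to run the firmware-vs-driver check from the second bullet above, it's just this (the version strings in the comment are examples of the output format, not necessarily what you'll see):

  dmesg | grep mps0
  # look for a line like:
  #   mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd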



OTHER (PERHAPS) NOTEWORTHY DETAILS

  1. Before I even began the crossflashing process in the first place, the first thing I did when I put my new machine together was install FreeNAS. Because, like, why not? Just like in the two “Installing FreeNAS” sections above, all my disks showed up like they’re supposed to. But just like in the above sections, after installation, they never showed up in my boot options.
  2. I now have access to a PC. It doesn’t have any HDDs or anything, but if I need to play with a motherboard other than the one in the R510, I can do that.
  3. I am pretty far out of my league in setting this up. I might come across as reasonably knowledgeable from this post, but believe you me, I have no idea what I’m doing. I have friends who are smarter than me, but EVEN THEY are stumped. Either that, or they’re sick of me pestering them.
  4. The impetus for this endeavor is that I do a lot of video editing—some of which is collaborative—and a lot of traveling. Those two things don’t mix. I don’t really like dragging external drives with me wherever I go, I don’t like my stuff not being backed up all the time, and I don’t like having to drive across town or fly across the country to collaborate with someone. I decided to build a server that would serve (get it?) as a video hub: all my video files would live there, nicely organized, along with all the Final Cut Pro X library files. If I’m at home, I access it over LAN (via NFS? I like what these people did). If my coworkers are in town, they come to the house and also access it over LAN. If I need to work remotely or share files with a coworker, I can do that (also via NFS? I don’t really know how all this works—please refer to Noteworthy Detail 3).
  5. Yes, I know, I probably should have just gotten a Synology or a QNAP or something, but here we are nonetheless.


CLOSING REMARKS

If you need any more details (as you can probably tell, I don't mind writing waaaaay too many words), photos of my screen at various points, any other information, just let me know and I'll get that for you ASAP. If this is in the wrong section of the forum, I’ll happily move it. I greatly appreciate any help you can provide, in addition to the help you’ve already provided courtesy of dozens of old threads I’ve read through, which were what allowed me to get this far in the first place. Thanks so much!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
In order to boot off drives connected to an HBA, the HBA needs to have the option ROMs installed when you flash it.

It seems to me like you only installed the IT firmware and not the option ROMs. I could be wrong.

If you have onboard Sata ports can you connect the SSDs to those?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Way too much info! Plug the SSDs into the motherboard and it will boot.

 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hello, @mikey1004 -- welcome to the forums!

I'm sorry for your troubles...

@Stux makes a good point: it certainly sounds as though you haven't flashed the boot ROM on your Dell H200. Here is a screenshot from one of my systems booting from a Dell H200; you should see this when your system boots:
avago-boot-1.jpg


If you don't see this at boot-up then you're missing the boot ROM. There are two things you can do to get going:
  • Take @SweetAndLow's suggestion and move the 2 boot SSDs over to SATA ports on your motherboard. You'll need to configure your system's BIOS to boot from one of these SSDs. This is what I would do, BTW. Or you could...
  • Reflash the HBA to include the boot ROM. You'll need the boot ROM code (mptsas2.rom) that should've accompanied the 2118it.bin firmware file you used to flash your card. The flash command would be something like this: sas2flash -o -f 2118it.bin -b mptsas2.rom
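
If you're working from FreeDOS rather than the EFI shell, the equivalent with the DOS flasher you already used should be something like this (from memory, so double-check it against your guide):

  sas2flsh.exe -c 0 -o -f 2118it.bin -b mptsas2.rom
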
Good luck!
 

mikey1004

Cadet
Joined
Jun 13, 2017
Messages
5
Hi guys, thanks for the responses! The consensus is clear:
  1. Plug the SSDs straight into the motherboard. No can do! This is not supported by the R510 12-bay server, as far as I can tell. There are some SATA ports, but, from what I've been reading, I don't think they'll work.
  2. Enable the Boot/Option ROM. Good idea! I've just gone and done this. This ROM file was not included in the zip files downloaded from the two guides I was following, but I was able to find it on the Broadcom support page in the "9211-8i P20 IR IT Firmware BIOS for MSDOS Windows" firmware download.
  3. I wrote too much. I realize this. Yes, I included the insane level of detail partly so you could help me better, but also for people in the future who are just as confused/uninformed as I was. I cannot tell you how many threads I found that were almost helpful to me.
OK, so the Boot ROM has been enabled, and FreeNAS has been successfully installed (despite hanging on those same SCSI Status Errors again for a little while). Only problem is I still can't boot FreeNAS.

In the Avago Config Utility, I made sure the controller was enabled. Not much else I can do in there, I think. In the System Setup, I made sure it was set to boot from BIOS. Again, not a lot of choices. In the BIOS Boot Manager, my choices were:
  • Normal
  • Hard drive C:
If I select the former, boot fails. If I select the latter, I'm presented with only one option:
  • Slot 4 #0200 IDOC LUN0 ATA HGST HDN######
Boot fails if I select this one because this is one of the HDDs and not the SSDs—no duh. I don't see any option to boot from the SSDs. Any ideas?

So I'm getting closer, but I'm not quite there yet. But I can already smell the sweet, sweet aromas of RAID-Z3 tempting me from afar!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
You need to reboot the system and press Ctrl-C to enter the LSI configuration utility.

At the main screen make sure 'Boot Order' is set to 0 and 'Boot Support' is set to 'Enabled BIOS & OS'.

Then select 'SAS Topology', get a list of the attached devices, highlight the SSD you want to boot from, and press Alt-B to make it the preferred boot device. Press 'Alt+M' ('More Keys') to see other available options.

I'm working from memory here, so you may have to do a little self-guided exploration... but the gist of it is that you need to specify the device you want to boot from. Then your BIOS will boot from that drive ('Hard drive C:') in the expected way.
 

mikey1004

Cadet
Joined
Jun 13, 2017
Messages
5
Ah hah! Thank you, Spearfoot! Somehow it never occurred to me to hit Enter after selecting the controller (refer back to Noteworthy Detail #3).

So I selected SAS9211-8i, hit Enter, selected SAS Topology, hit Enter, selected my enclosure ("DP SAS2 R510 BP"), hit Enter, selected one of my SSDs, hit ALT + B, selected the other SSD, hit ALT + A, made sure all the options were as you said they should be, and saved and exited. Hooray, we got that figured out.

I rebooted, and got to a screen I'd never gotten to before: the FreeNAS GNU GRUB menu. Yippee! The excitement didn't last long, however, as some familiar errors reared their ugly heads and prevented me from reaching the FreeNAS Console Setup menu (AKA, The Promised Land). There were a lot of errors, but some of them hung around long enough for me to take a picture. See if you can make heads or tails of them:

IMG_2398.jpg

IMG_2396.jpg
IMG_2397.jpg

These are the same sorts of errors I was seeing when I installed FreeNAS (whether the install was successful or not). Unless you guys have any better suggestions, after dinner I'm going to try re-installing FreeNAS (maybe something went wrong), replacing the SAS cables (they arrived today!), and, if I'm feeling especially adventurous, replacing the Dell with the new LSI card that is sitting in my mailbox.



UPDATE

We're in! After what feels like 100 years wandering in the desert, I've finally made it to The Promised Land! I replaced the SAS cables (which, by the way, were impossible to detach—who designed these things?!) and it booted straight into the FreeNAS Console Setup Menu. After some trouble configuring the network settings (turns out the server's NIC was turned off), I actually loaded the FreeNAS GUI.

A million thanks to everyone who helped me get this darned thing off the ground. I cannot tell you how relieved I am, and it's all due to your kindness. I could kiss you!

P.S. While I have you here, any thoughts on the RAID setup? This will be primarily for editing with Final Cut Pro X, so the priorities for me would be:
  1. Read/write speed
  2. Redundancy
  3. Capacity
I see many threads with people recommending striping two 6-disk RAID-Z2s instead of one big 12-disk RAID-Z3, but I'm unclear as to why. The other option is an 11-disk RAID-Z3 with one hot spare, but that seems like a waste of resources.
 

Linkman

Patron
Joined
Feb 19, 2015
Messages
219
Congrats on seeing it through.

When you say "1. Read/write speed," are you talking bandwidth or IOPS? I'm assuming IOPS, based on your comment about editing, so you may want to look into mirror pairs (say, six 2-disk mirrors) for better IOPS than RAIDZ2/3.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Ah hah! Thank you, Spearfoot! Somehow it never occurred to me to hit Enter after selecting the controller (refer back to Noteworthy Detail #3).
You're very welcome!
P.S. While I have you here, any thoughts on the RAID setup? This will be primarily for editing with Final Cut Pro X, so the priorities for me would be:
  1. Read/write speed
  2. Redundancy
  3. Capacity
I see many threads with people recommending striping two 6-disk RAID-Z2s instead of one big 12-disk RAID-Z3, but I'm unclear as to why. The other option is an 11-disk RAID-Z3 with one hot spare, but that seems like a waste of resources.
There are problems with 'wide' arrays:
  • The write performance of a vdev is that of the slowest disk in the vdev
  • The wider an array the longer it takes to resilver if you ever have to replace a failed/failing disk.
These are some of the reasons why you've seen recommendations to use two 6-disk RAIDZ2 arrays vs. a single 12-disk RAIDZ3.

Mirrors always out-perform any kind of RAIDZ array, but you don't get the same level of redundancy, and at 50% they're very space-inefficient. FreeNAS offers 3-way mirrors, which improves redundancy -- you can lose 2 of the 3 disks in the mirror without losing your data -- but at an even lower space efficiency of 33.33%!

So at one extreme you can build a system of 4 x 3-disk mirrored vdevs for a grand total of 16TB of capacity. At the other extreme you can build a single 12-disk RAIDZ3 vdev and get 36TB of capacity (9 data disks x 4TB). Quite a bit of difference in capacity!

So what configuration will work best for your use-case? Probably a pair of 6-disk RAIDZ2 vdevs: this will give you 32TB of space; half the IOPS of the 4-mirror system but twice the IOPS of the single RAIDZ3 array; and good redundancy in that you can lose any two disks in both RAIDZ2 vdevs without losing your data.
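
Just to make the layout concrete, here's what that pool looks like at the zpool level. The FreeNAS volume manager builds this for you from the GUI; the pool name and the da0...da11 device names here are only placeholders:

  zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

ZFS stripes writes across the two vdevs automatically, which is where the doubled IOPS over a single RAIDZ3 vdev comes from.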

In the end, the design you choose is up to you...

Here is a good article about the subject: "A Closer Look at ZFS, Vdevs, and Performance".

Good luck!
 

mikey1004

Cadet
Joined
Jun 13, 2017
Messages
5
Linkman, thanks for dropping by. I did look into mirrors but decided against them: if I did six two-disk mirrors, I'd have 6 separate vdevs to deal with, right? That seems like a hassle, and then none of my vdevs could grow larger than 4 TB. I'd rather everything be unified into one big volume. And there's no way I'm striping those six mirrors together—that'd be far too big a risk of failure.

And yes, I think I did mean IOPS—although, to be honest, I don't entirely understand the difference.

Spearfoot, thanks for the info, as per usual. I had read that article already, but I think it might have been more helpful the second time through. Good stuff. By the time you responded, I had already set up an 11-disk RAID-Z3 array (with one hot spare) and started copying over my video files. While it is working nicely, I must say, I wouldn't mind if it were a little faster. It's not slow, but I have to imagine there's still room for improvement. Maybe it's just that working with Thunderbolt these past few years has spoiled me!

My data is still in the process of transferring, but write times for that transfer over LAN are somewhere in the neighborhood of 50 MB/s. One thing that surprised me was that the RAM filled up right away. The CPU seems like it's not doing much of anything, while the 64 GB of RAM are filled to the brim. I went and ordered another 4 x 16 GB sticks this morning (plus another CPU so that I can enable all eight RAM slots), so that should substantially widen that bottleneck.

Anyway, back to my array. Do you mean to say I should stripe two 6-disk RAID-Z2 vdevs? In which case losing three disks from one vdev or the other would cause me to lose the entire thing. That doesn't seem like a good idea. I just did some rudimentary probability calculations, and this setup would be WAY more prone to failure than one big RAID-Z3. Like, if I assume a 1% probability that any given disk will fail, two striped 6-disk RAID-Z2s are 13 times more prone to failure than an 11-disk RAID-Z3.

I think that's not what you meant. You mean to keep two separate RAID-Z2 vdevs, right? And I know I just said I'm not a huge fan of having multiple vdevs to work with, but two isn't a disaster. I feel like I could live with that—it's significantly faster, of course. Although it's still 6x more likely to fail than one Z3. So that's worth considering.

Here's a question: Can I mirror two vdevs? Say I made two 6-disk RAID-Z1s and then mirrored them. Then I'd have 20 TB, and I imagine it would have some pretty fast read/write times.

  Another thing worth considering: In my above post, I said I didn't get the point of having a hot spare. Well, now I get it. I was reading the User Guide, and it imbued me with the wisdom of having a spare drive port: apparently it makes it much easier/faster/less risky to replace a failing/failed drive if you can first add a drive to the vdev, rather than having to take the failing drive out to make room for the replacement.

As far as I can tell, there's no way to add an expander or connect a drive externally to the Dell R510 II 12-bay, so I think I'd like to keep a hot spare around to make things easier to replace (or, perhaps some day, upgrade). Maybe I do one 5-disk and one 6-disk RAID-Z2?

I apologize for doing so much thinking out loud here. I just want to get everything optimized, you know? Don't leave any leaf unturned!
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
When I had 1x 6 disk RAID-Z2 running at home with WD RED Pro 3TB drives, I could saturate 2x 1gig (LACP) network connections no problems.

See my signature for the Main Server for its specs

I then added another 6-disk RAID-Z2 vdev to the volume to make it bigger, but at the time I used an older combination of 2TB WD Green and WD Red drives, which are quite slow. I could still saturate a 1x 1gig network with it, but it wasn't always consistent.

I'm in the middle of replacing all the older, slower 2TB WD Green and Red drives with Seagate IronWolf 6TB drives; I'm hoping it will improve performance to the point where I can saturate 2x 1gig again. I will test this once I finish replacing the drives at the end of this week.

I personally think RAID-Z2 is plenty of redundancy for smaller vdevs, but it also depends on how critical uptime is.

Another thing I can't stress enough is having a backup system as well. I have a backup server, but when I initially built my server, I had no backup and I was quite nervous about it. Even with the best RAID redundancy, a server can still fail. Thankfully I got a couple of old Supermicro servers free from work that they were throwing out, perfect for backup duties.

Given decently spec'd hardware and hard drives, I would imagine FreeNAS could saturate a 10gig network connection; it really comes down to how much you want to spend.

Looking at your specs, I would have expected you to at least get 100MB/s with your transfer. But there are a lot of factors that can influence network speeds, for example: speed of the client machine's HDD, networking gear, etc.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
if I did six two-disk mirrors, I'd have 6 separate vdevs to deal with, right? That seems like a hassle, and then none of my vdevs could grow larger than 4 TB.
You'd have six vdevs, which would (if set up this way, which we'd generally recommend) be combined into a single pool. Any of those vdevs could grow to any size you wanted if you later replaced those disks with larger disks. Replace (the right) two disks, your pool grows. Replace (the right) two more disks, the pool grows again. Or you can expand the pool by adding pairs of disks.
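
At the zpool level, the growth path looks roughly like this (pool and device names are hypothetical, and in practice you'd do all of it through the FreeNAS GUI):

  zpool set autoexpand=on tank      # let vdevs grow once all members are bigger
  zpool replace tank da1 da12       # swap one disk of a mirror for a larger one
  zpool replace tank da2 da13       # after both resilver, that vdev expands
  zpool add tank mirror da14 da15   # or expand the pool with another pair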
 

mikey1004

Cadet
Joined
Jun 13, 2017
Messages
5
When I had 1x 6 disk RAID-Z2 running at home with WD RED Pro 3TB drives, I could saturate 2x 1gig (LACP) network connections no problems.

Hi there, Craig! Appreciate the advice. You see, I thought that would be the case for me, too.

And yet I'm getting write speeds of about 59 MB/s (i.e., copying a file to FreeNAS) and read speeds of about 63 MB/s (copying a file from FreeNAS). In the Reporting tab of the FreeNAS GUI, I'm seeing traffic on the primary network interface hovering around 500 Mb/s and a maximum of 613 Mb/s, which is in line with my hand-timed transfer tests.

But there are a lot of factors that can influence network speeds, for example: speed of client machine hdd, networking gear, etc

I would very much like to figure out where the bottleneck is. I can't imagine the issue is on the client computer side: for this test, I plugged a relatively new MacBook Pro with an internal SSD directly (via a USB-C Ethernet adapter) into the same switch as the server. And I don't think the limiting factor is the networking gear either: I have all brand-new Ubiquiti UniFi stuff, all Gigabit capable (the 10 GbE switches and routers were just out of my price range, although I wired the house with Cat6 500 MHz, so it should be good to go when I make the leap), so everything should be able to support at least 100 MB/s.

The server itself is currently configured with two mirrored SSDs for the OS, a 2.26 GHz six-core Xeon processor, 64 GB RAM, and 12 HGST Deskstar HDDs (see links in my first post for the exact models of everything). When I had these exact hard drives in a 4-disk RAID 5, I was getting read/write speeds over 500 MB/s (bytes, not bits!), so I know these drives are plenty capable. Is there a better way to test the speed of the RAID configuration? Maybe like an internal file transfer? Then we can narrow down the bottleneck even further.

I think it's certain that the problem lies somewhere in the server, but it could be one of several things besides the volume array: the network interface, the HBA, even the SAS cables themselves.

(By the way: When reading through the FreeNAS User Guide, I came across this link aggregation thing, which sounded like a no-brainer. So I plugged in my second interface, enabled LACP... and the whole thing froze. I did some googling and found this thread, which described my problem exactly. I wasn't up for the task of doing a complete format at the time, so I didn't try the proposed solution.

I did, however, try what the guy at the bottom suggested, but that didn't work either. Rebooting from the console just brought me back to the console with zero network uplink: no GUI, etc. I tried this with both Failover and LACP, same result. So I did some more googling and found a bunch of people saying, "Don't mess with link aggregation unless you really know what you're doing!" which I definitely do not, so I bailed even though I'm pretty sure my switch should support it. If anyone has any suggestions as to how to get LAGG working, I'd love to hear them.)

Another thing I can't stress enough is having a backup system as well. I have a backup server, but when I initially built my server, I had no backup and I was quite nervous about it. Even with the best RAID redundancy, a server can still fail. Thankfully I got a couple of old Supermicro servers free from work that they were throwing out, perfect for backup duties.

Yeah, I'm working on getting backup figured out. For now, everything important is living on several (redundant) external drives, although this will not be sustainable as I pick up new projects. I also have CrashPlan, and I see that it has a FreeNAS plugin that I'll test shortly, but I realize that's still somewhat insufficient. I'll figure something out.

I personally think RAID-Z2 is plenty of redundancy for smaller vdevs, but it also depends on how critical uptime is.

Can you expand a bit on this point? Specifically, what you mean by "uptime" (e.g., time powered on, time spent reading/writing, etc.).

You'd have six vdevs, which would (if set up this way, which we'd generally recommend) be combined into a single pool. Any of those vdevs could grow to any size you wanted if you later replaced those disks with larger disks. Replace (the right) two disks, your pool grows. Replace (the right) two more disks, the pool grows again. Or you can expand the pool by adding pairs of disks.

Gotcha. Thanks, Dan. Just to be clear: If I lose (the right) two disks from a pool of six mirrored pairs, I lose the whole pool, right? Assuming I understood Cyberjock's famous slideshow, pooling together multiple vdevs effectively stripes them.

So, for example, let's say I call my 12 disks A0, A1, B0, B1, etc., according to how the mirrors are paired. I could lose something like A1, B1, C1, D1, E1, and F1, and I'd be OK. But if I lose D0 and D1, I'm toast. Am I understanding this correctly?

Another thing. I took one look at this graph and assumed it was telling me my RAM was all filled up, but on second glance, maybe I'm misinterpreting it. What in the world am I looking at here? Does "wired" mean used or unused or something else?

Screen Shot 2017-06-20 at 10.08.18 PM.png
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
And yet I'm getting write speeds of about 59 MB/s (i.e., copying a file to FreeNAS) and read speeds of about 63 MB/s (copying a file from FreeNAS). [...] Is there a better way to test the speed of the RAID configuration? Maybe like an internal file transfer?

Sorry for getting back to you so late; I've been hammered with work and my own server problems :eek:

Based on the 12x HGST drives you have, I would have expected them to max out a 1-gigabit link, no problem.

One thing you can try that comes to mind is to copy a large file from one spot to another on your NAS within an SSH session (that way you're doing a file transfer within FreeNAS, not over the network), and whilst that is running, SSH in a second time and perhaps use iostat to see the performance of each of your HDDs.

This will in theory take the networking gear out of the equation.
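
Something like this, roughly (paths are placeholders, and use a real file rather than /dev/zero as the source, since ZFS compression can make all-zero writes look unrealistically fast):

  # SSH session 1: time a copy that stays inside the pool
  time cp /mnt/tank/somefolder/bigfile.mov /mnt/tank/scratch/
  # SSH session 2: watch per-disk activity while the copy runs
  iostat -x 1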

(By the way: When reading through the FreeNAS User Guide, I came across this link aggregation thing, which sounded like a no-brainer. So I plugged in my second interface, enabled LACP... and the whole thing froze. [...] If anyone has any suggestions as to how to get LAGG working, I'd love to hear them.)

I wouldn't worry about LACP for the moment; it's a nice-to-have after you get the performance issue sorted. I found it easy to set up on FreeNAS, but configuring my Cisco switch is a different story, I don't do it often so I forget how to do it :rolleyes:
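
For what it's worth, the FreeBSD side underneath boils down to something like this (interface names are placeholders, the switch ports have to be configured for LACP too, and it's safest done from the console rather than over the link you're reconfiguring):

  ifconfig lagg0 create
  ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1
  ifconfig lagg0 inet 192.168.1.10 netmask 255.255.255.0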

Can you expand a bit on this point? Specifically, what you mean by "uptime" (e.g., time powered on, time spent reading/writing, etc.).

What I meant was the likelihood of losing your RAID array and then having to rely on backups to recover.
I guess it's about balancing the amount of redundancy vs. capacity.

For example, my home server has a volume with 2 vdevs in it; each vdev is a 6-drive RAIDZ2, which gives me 2-drive redundancy in each vdev. It also means that later on I can upgrade the capacity of one of my vdevs by only having to replace 6 drives (which I just completed recently).

My original theory behind going with 6-drive vdevs was that later on I could upgrade to a 24-bay case and add another 2 vdevs of 6 drives each as I need the capacity, whilst maintaining a level of redundancy I deem enough to have the least fear of total RAID failure :D
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
So, for example, let's say I call my 12 disks A0, A1, B0, B1, etc., according to how the mirrors are paired. I could lose something like A1, B1, C1, D1, E1, and F1, and I'd be OK. But if I lose D0 and D1, I'm toast. Am I understanding this correctly?
That's exactly right. You can lose one of the drives in each mirrored pair without losing your pool, but if you lose both drives in any pair you will lose your pool. ZFS (and therefore FreeNAS) provides for 3-way mirrors... but at the expense of lost capacity, as these would only have a storage efficiency of 33 1/3%.
Another thing. I took one look at this graph and assumed it was telling me my RAM was all filled up, but on second glance, maybe I'm misinterpreting it. What in the world am I looking at here? Does "wired" mean used or unused or something else?
Wired memory is more-or-less a proxy for your ARC cache and it's perfectly normal to see most of your RAM allocated the way you're seeing. Check your ARC Size under Reporting->ZFS and I'll bet it matches pretty closely with the wired memory allocation you're seeing under Reporting->Memory.
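
You can also confirm this from a shell; something like:

  sysctl kstat.zfs.misc.arcstats.size   # current ARC size, in bytes
  sysctl vfs.zfs.arc_max                # the configured ARC ceiling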
 

Charlie G

Cadet
Joined
Jul 18, 2017
Messages
4
We're in! After what feels like 100 years wandering in the desert, I've finally made it to The Promised Land!

For what it's worth, thank you for posting this. I'm finding myself in a very similar situation. I wasn't aware of the drive size limitation on the SAS 6/iR (which is what is in the 12-bay R510 I just ordered), and in looking further into it, it sounds like those machines never used them in the first place :mad:.
It's looking like I'm going to have to do more or less exactly what you did to get my 3TB drives registering. Did you end up using that LSI 9211-8i instead of a re-flashed H200? All the listings I see refer to either one as an 8-drive card, while the backplane + internal trays add up to 14. Maybe that's only if you're actually using it for RAID? I want to make sure I'm shopping the right part (I don't want to make unnecessary mistakes!).

I'm also seeing those cards with the ports in different places...
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Welcome to the forum!

Here is a forum reference you may want to study: "Don't be afraid to be SAS-sy... a primer on basic SAS and SATA"

The only difference between the LSI-9210 and LSI-9211 HBAs is the port location: they're located on the 'top' of the 9210 and on the 'rear' of the 9211. Functionally they're identical and the port location makes no difference... unless your particular system has physical/geometrical constraints that dictate using one or the other port location.

These cards can run 8 drives directly using a pair of SFF-8087 forward breakout cables. They can run up to 256 drives when connected to an expander backplane.

The IBM M1015 and Dell H200 (among others!) are OEM versions of the LSI-9210, i.e., they're based on the same LSI 2008 chip and have the 2 ports on 'top'.

So you may very well be able to use any of the HBA cards (Dell H200, LSI-9210/9211, or IBM M1015) in your R510... depending on the capabilities of the backplane installed in your R510:

Is it a direct attached backplane? Then you'll need 2 HBAs. You'll connect a breakout cable for each set of 4 drives on the backplane, using 2 ports from one of the cards and 1 from the other.

Or does the backplane have an expander? If so, then you can run all 12 drives with a single port from the HBA.

Is the expander an older SAS model? These may or may not support drives larger than 2TB. If the expander is a SAS2 (or better) model, then you're in good shape as these do support the newer, larger disks.

You'll have to do some homework and research your R510 system before you can proceed.
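
Once the machine arrives, a couple of quick checks from a FreeNAS shell will answer most of this (a sketch; exact output varies by system):

  camcontrol devlist    # everything the HBA sees; an expander backplane
                        # usually shows up as an enclosure ("ses") device
  dmesg | grep mps0     # controller model, firmware, and driver versions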

If you will post your full system specifications, per the Forum rules, perhaps others with more Dell experience will be able to chime in and help out.

Good luck!
 

Charlie G

Cadet
Joined
Jul 18, 2017
Messages
4

Interesting, thank you!
Yes, it must be using an expander based on what I've seen from poking around the net. All the configurations I've come across are using single H200 or H700 cards. Definitive answers get hard to find because the R510 was sold with several bay configurations. The listing wasn't specific regarding the backplane, and the unit won't arrive until the end of the week, so I can't post a more detailed question as yet.
I was responding to the post in an attempt to preempt difficulties, as this machine sounds very similar. My current ancient NAS is making me very nervous, and I want to minimize downtime in getting my data moved over to a FreeNAS setup. If I could determine what card I'd need beforehand and have it en route, that would be ideal.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
If I could determine what card I'd need beforehand and have it en route, that would be ideal.

I suspect in reality, the pragmatic thing to do is wait until you have the hardware and take a look at what you have ;)
 

Charlie G

Cadet
Joined
Jul 18, 2017
Messages
4
I suspect in reality, the pragmatic thing to do is wait until you have the hardware and take a look at what you have ;)
You're probably right, but what I actually did was decide the backplane expander is probably fine, so I ordered an H200 I found listed for $25 ;)
 