Is Dell R510 a good option?

Status
Not open for further replies.

johan56

Dabbler
Joined
Mar 18, 2017
Messages
23
Hi!

So I'm thinking of changing my server setup because my current setup is running low on space.

I found what seems to me a pretty cheap 12-bay Dell R510 with the following specs (400 USD here in Europe):

  • Xeon E5606 4c/4t 2.13GHz (1/2)
  • 8GB DDR3 RAM (2/8)
  • 4x 300GB 6G SAS 10k
  • 2x 500GB SAS 7.2k (6/8 LFF)
  • PERC H700
  • 2x PSU

(I can't make sense of half of it, btw)

And currently I'm running the following machine:

  • Intel Xeon E3-1240V2 / 3.4 GHz Processor
  • Supermicro X9SCM-F (proshop)
  • Kingston ValueRAM memory - 8 GB - DIMM 240-pin x 2
  • Western Digital WD40EFRX Red NAS 4TB IntelliPower 64MB 3.5" SATA-3 x 3
  • IBM M5015 (LSI MegaRAID 9260-8i) 46M0829 512MB cache 6Gb/s (eBay)
  • Fractal Design Define R4 Black Pearl (komplett)
  • Noctua NF-A14 FLX 140mm fan (komplett)
  • IBM OEM Intel PRO/1000 PT Quad Port PCIe Gigabit Ethernet server adapter

It seems I'd be lowering my ability to transcode because of the lower CPU frequency? I was thinking of populating the "new" R510 with 8x 6TB disks so that I don't need to worry about space for a couple of years. But can this machine handle it?
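For rough numbers, here's a back-of-the-envelope sketch (my assumptions only: RAIDZ2, roughly 3% ZFS overhead, and the old 1GB-of-RAM-per-TB rule of thumb):

```python
# Back-of-the-envelope sizing for 8x 6TB in RAIDZ2 - a sketch, not exact:
# real usable space also depends on ashift, recordsize and metadata overhead.

TB_TO_TIB = 1000**4 / 1024**4    # a "marketing" terabyte expressed in TiB

drives = 8
size_tb = 6                      # per-drive size as sold (decimal TB)
parity = 2                       # RAIDZ2 gives up two drives' worth to parity

raw_tib = drives * size_tb * TB_TO_TIB
usable_tib = (drives - parity) * size_tb * TB_TO_TIB * 0.97   # ~3% ZFS overhead (assumption)

print(f"raw:    {raw_tib:.1f} TiB")
print(f"usable: {usable_tib:.1f} TiB in RAIDZ2, before the ~80% fill guideline")

# Old rule of thumb: roughly 1GB of RAM per TB of raw storage, 8GB minimum.
ram_gb = max(8, drives * size_tb)
print(f"rule-of-thumb RAM: ~{ram_gb} GB (the R510 above only comes with 8GB)")
```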
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The R510 is a bit older than what you have. That said, why not just get a Supermicro 12-bay chassis with a SAS2 expander backplane and move your parts over?
 

Jason Evans

Explorer
Joined
May 18, 2015
Messages
78
It seems I'd be lowering my ability to transcode because of the lower CPU frequency? I was thinking of populating the "new" R510 with 8x 6TB disks so that I don't need to worry about space for a couple of years. But can this machine handle it?
When you say transcode, can we assume you mean Plex transcoding? If that's the case, then yes, you would be dropping your transcoding power significantly. You might be better off reusing that mobo and CPU in another chassis, or maybe getting an R5 case - those can house a pretty good number of drives, around 10 or more I believe. Socket 1366 is dated; I'm currently in the process of replacing my dual-CPU setup with a single-CPU setup that has over 3x the transcoding power, and it will use less power as well.
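For rough numbers, a sketch using the commonly quoted ~2000 PassMark per 1080p stream guideline (the CPU scores below are made-up round numbers purely to show the arithmetic; look up the real ones on cpubenchmark.net):

```python
# Very rough Plex transcoding estimate - a sketch only. The ~2000 PassMark per
# 1080p/10Mbps software transcode is the commonly quoted guideline; the scores
# used below are made-up round numbers, NOT real benchmark results.

PASSMARK_PER_1080P_STREAM = 2000   # rough guideline (assumption)

def max_streams(passmark_score: int) -> int:
    """Approximate number of simultaneous 1080p software transcodes."""
    return passmark_score // PASSMARK_PER_1080P_STREAM

# Illustrative placeholders - substitute the real scores for your CPUs:
for label, score in [("faster CPU", 6000), ("slower CPU", 3000)]:
    print(f"{label}: PassMark {score} -> ~{max_streams(score)} simultaneous 1080p streams")
```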

Sent from my SM-N950U using Tapatalk
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
You're a better candidate for a chassis swap.

The other hidden thing about the socket 1366 chips... they're not getting Spectre/Meltdown microcode fixes. This may or may not matter to you, depending on your use case (e.g. VMs on the NAS, etc...)
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Just at a glance, two things. One, you will need to use low-profile PCIe cards. Two, they don't mention a SAS expander, just a backplane - that could be part of why it's less expensive. Without one you would need ports from your HBA to each drive, or you'd have to add your own expander and then cable from that to the backplane.
 
jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just a note on the R510's... the 12-bay units in particular are a bit crappy because there's no good way to have a boot device that doesn't take up valuable space. The internal SATA ports are disabled by the current BIOS; USB thumb drives are an option, but don't try to install them internally because the slots are difficult to get to, etc.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Just a note on the R510's... the 12-bay units in particular are a bit crappy because there's no good way to have a boot device that doesn't take up valuable space. The internal SATA ports are disabled by the current BIOS; USB thumb drives are an option, but don't try to install them internally because the slots are difficult to get to, etc.

Funny you should mention that... I just picked up a SAS HBA that has been flashed to IT mode. It disables booting from the onboard SATA ports on my PowerEdge SC1430 test box. I have to put the boot device on the HBA in order to boot.
 

johan56

Dabbler
Joined
Mar 18, 2017
Messages
23
Just at a glance, two things. One, you will need to use low-profile PCIe cards. Two, they don't mention a SAS expander, just a backplane - that could be part of why it's less expensive. Without one you would need ports from your HBA to each drive, or you'd have to add your own expander and then cable from that to the backplane.
Aha, a SAS expander.

Isn't my current "LSI MegaRAID 9260-8i" enough?
 

charlie89

Explorer
Joined
Dec 26, 2013
Messages
55
Hey.
The LSI MegaRAID 9260-8i is a true hardware RAID card, and it is not recommended to use it with FreeNAS.
What you want is an HBA (something like an IBM M1015 or a Dell H310) flashed to IT mode, because ZFS itself handles the RAID functionality.

And it is advised to get an expander backplane, because then you just need a single SAS cable from one HBA to the backplane to connect all 12 drives. Without an expander backplane you would need two 8-port HBAs and either 3 SAS cables or 3 SAS breakout cables running to your 12 disks (depending on the backplane).
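The port math as a quick sketch, assuming the usual 4 lanes per mini-SAS (SFF-8087) connector and two connectors on an "8i" HBA:

```python
# Quick port math for a 12-bay chassis without an expander backplane (sketch).
# One mini-SAS (SFF-8087) connector carries 4 lanes, i.e. 4 direct-attached drives.

import math

drives = 12
lanes_per_connector = 4
connectors_per_hba = 2            # a typical "8i" HBA has two SFF-8087 connectors

connectors_needed = math.ceil(drives / lanes_per_connector)       # -> 3
hbas_needed = math.ceil(connectors_needed / connectors_per_hba)   # -> 2

print(f"{drives} drives direct-attached: {connectors_needed} mini-SAS connectors, "
      f"{hbas_needed} x 8-port HBAs")
print("With an expander backplane: 1 cable from a single HBA reaches all 12 bays.")
```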
 

johan56

Dabbler
Joined
Mar 18, 2017
Messages
23
Hey.
The LSI MegaRAID 9260-8i is a true hardware RAID card, and it is not recommended to use it with FreeNAS.
What you want is an HBA (something like an IBM M1015 or a Dell H310) flashed to IT mode, because ZFS itself handles the RAID functionality.

And it is advised to get an expander backplane, because then you just need a single SAS cable from one HBA to the backplane to connect all 12 drives. Without an expander backplane you would need two 8-port HBAs and either 3 SAS cables or 3 SAS breakout cables running to your 12 disks (depending on the backplane).
Alright, thanks! I think my card has been flashed, or that I have it in IT mode - I can see all my disks in FreeNAS.

What expander would you recommend?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Funny you should mention that... I just picked up a SAS HBA that has been flashed to IT mode. It disables booting from the onboard SATA ports on my PowerEdge SC1430 test box. I have to put the boot device on the HBA in order to boot.
It should not disable the SATA ports, but I have seen problems with certain system boards. You might need to erase the BIOS from the SAS card (not the firmware), because your system board may not be smart enough to let you configure the boot order once the BIOS from the SAS controller loads.


Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
It should not disable the SATA ports, but I have seen problems with certain system boards. You might need to erase the BIOS from the SAS card (not the firmware), because your system board may not be smart enough to let you configure the boot order once the BIOS from the SAS controller loads.

OK, I can't say it disables them. I ended up disabling them because the system BIOS throws a boot warning dialog if nothing is connected to them. There are a couple of config scenarios I need to cycle through before concluding it can't boot from the onboard SATA ports. Part of the problem was that FreeNAS was installed on the SSD, but I had two old 250GB test drives attached to the HBA, and one contained a bootable CentOS image. The boot order was definitely HBA first. But I haven't gone back to see if it will fall back to the motherboard if there are no bootable drives on the HBA.

In an SC1430 it's better off on the HBA anyway... the motherboard is 1.5Gb/s SATA I.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
OK, I can't say it disables them. I ended up disabling them because the system BIOS throws a boot warning dialog if nothing is connected to them. There are a couple of config scenarios I need to cycle through before concluding it can't boot from the onboard SATA ports. Part of the problem was that FreeNAS was installed on the SSD, but I had two old 250GB test drives attached to the HBA, and one contained a bootable CentOS image. The boot order was definitely HBA first. But I haven't gone back to see if it will fall back to the motherboard if there are no bootable drives on the HBA.

In an SC1430 it's better off on the HBA anyway... the motherboard is 1.5Gb/s SATA I.
I can't recall if the hard drives I boot my NAS from are SATA I or SATA II, but they don't need to be fast; it is just reliability that matters. If you remove the BIOS from the SAS controller, none of the drives attached to it will show up as bootable devices. That is how I run mine: all the data drives are on the SAS controller and the boot drives are on the SATA controller. Works like a champ.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I can't recall if the hard drives I boot my NAS from are SATA I or SATA II, but they don't need to be fast; it is just reliability that matters. If you remove the BIOS from the SAS controller, none of the drives attached to it will show up as bootable devices. That is how I run mine: all the data drives are on the SAS controller and the boot drives are on the SATA controller. Works like a champ.
I still boot my R510 from USB and have all drives running from my LSI 2008. This seems to be rock stable even if dreadfully slow for updates. All of my storage is in the front and SLOG/L2ARC SSDs in the internal cage.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I still boot my R510 from USB and have all drives running from my LSI 2008. This seems to be rock stable even if dreadfully slow for updates. All of my storage is in the front and SLOG/L2ARC SSDs in the internal cage.
That is fine, but @rvassar is trying to get away from USB sticks for booting because of the failures he has had. I know that some people have not had problems with them - good for you - but other people have USB drives that are croaking, sometimes after less than a month. I don't know why some people have success with USB and other people don't, but there it is.
Not everyone wants to use USB and there are reasons.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Funny you should mention that... I just picked up a SAS HBA that has been flashed to IT mode. It disables booting from the onboard SATA ports on my PowerEdge SC1430 test box. I have to put the boot device on the HBA in order to boot.

The R510's a little nastier than that. Because it's a custom system, there are really no good alternatives for booting. While the mainboard does sport some SATA ports, they're actually disabled in the PCH by the BIOS, so they're not usable for SATA DOM's etc., and the internal USB slots are buried on the drive backplane in a hard-to-get-to location. There's no reasonable way to tap power for SATA SSD's etc. even if the SATA ports were enabled, so you can neither use the mainboard ports nor easily add in an HBA for some extra internal SSD's beyond the two bays.

So what I've been doing is using an H310 controller and an Addonics card, along with some M.2 SSD's. The Addonics card is powered by the PCIe bus, and the H310 in IR mode provides a redundant boot option (independent of ZFS). The downside of this is that it's a costly solution, once you add it all up, but when you're refurb'ing donated gear for a nonprofit, it isn't always possible to pick and choose. The upside to the Addonics card is that it even leaves open some space for an NVMe M.2 SSD...

I don't know why some people have success with USB and other people don't, but there it is.

Some combination of not moving the system dataset off the USB and the use of cheap USB. They don't generally have wear leveling or much intelligence of the sort that keeps SSD's functioning longer.

Not everyone wants to use USB and there are reasons.

Yeah, such as running gear at data centers where the service call to "fix" a problem because you cheaped out is more expensive than the marginal additional cost to do it "better."

People don't blink at a $200 HDD for their storage, or $500 for memory, or a $100 PSU, but they feel compelled to use a 99c USB thumb drive they got out of the dollar store bargain bin. I don't really pretend to understand. ;-)
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Some combination of not moving the system dataset off the USB and the use of cheap USB. They don't generally have wear leveling or much intelligence of the sort that keeps SSD's functioning longer.

People don't blink at a $200 HDD for their storage, or $500 for memory, or a $100 PSU, but they feel compelled to use a 99c USB thumb drive they got out of the dollar store bargain bin. I don't really pretend to understand. ;-)

Just to be clear... In my case, the USB thumb drives I was using format and test fine after dropping out of the boot pool. The problem is in the USB HCI, or possibly in the way ZFS interprets the results of write attempts.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The R510's a little nastier than that. Because it's a custom system, there are really no good alternatives for booting. While the mainboard does sport some SATA ports, they're actually disabled in the PCH by the BIOS, so they're not usable for SATA DOM's etc., and the internal USB slots are buried on the drive backplane in a hard-to-get-to location. There's no reasonable way to tap power for SATA SSD's etc. even if the SATA ports were enabled, so you can neither use the mainboard ports nor easily add in an HBA for some extra internal SSD's beyond the two bays.

That is unusual, having SATA ports and disabling them; I don't recall seeing that before. I have some systems with SATA ports and no power connections, so they are not usable, but going the extra mile to actually disable them is something else.
Good to know. Thanks for the advice.


Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just to be clear... In my case, the USB thumb drives I was using format and test fine after dropping out of the boot pool. The problem is in the USB HCI, or possibly in the way ZFS interprets the results of write attempts.

The same could be said of Realtek ethernets, random non-HBA/PCH SATA, etc., etc. Many things appear to "test fine" under idealized conditions, but tend to show their true colors when pressed into doing Real Work, and then Fail Spectacularly(tm). PC stuff is often designed with the average consumer application in mind, so when you run into an application where 98%-good isn't good enough, such as pretty much any FreeNAS that runs 24/7 and runs ZFS on the boot pool, you have a greater possibility of hitting a failure.

Your test was "it worked fine for the short time I tested it." But the problem was "there was some point during continuous operation where it failed, even if momentarily." There are people who wonder why we normally burn in gear here in the shop for at least a month, or why I advocate long-term validation of things such as virtualizing FreeNAS, and it's precisely because it isn't the 98% or 99% I worry about... it's the failure cases that matter.

We know that USB devices don't work anywhere near as well. This is probably not a problem with a general solution, alas.
 