Advice on buying a Dell R720 - different controllers?

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
I'm looking at the Dell R720 as a host to which I will add an HBA and 10GbE and as much RAM as I can fit in the damn thing. Future expandability is a major consideration.
I'm not too worried about how many drives it supports internally, but I want at least 8 slots just in case, though I will likely only use 5 or 6 of them:
2 in a mirror for the TrueNAS OS and 3 or 4 in RAID-Z for jails and maybe a VM or two. Like I said, LOTS of RAM, think 512 GB.
The big question I have is in regard to the controllers they ship with. As far as I can tell there are at least three:

1) H310
2) H710
3) H710P

I'm pretty sure I want to steer clear of the H710P, but I'm unsure of the other two. I've seen notes about cross-flashing to IT or IR mode in a few places, and I know you can't use a strict RAID controller with ZFS.
So what I hope is an easy question: does it matter whether the controller is 1 or 2, and what do I do to it once I get the wee little bugger in my hands to set up TrueNAS on it?
The HBA will be an LSI 9206-16e and the NIC an Intel X520, unless someone can point me differently in either regard.
The end goal is a 45-drive pool in RAID-Z3, drive size to be decided at a later date. I may make it 3 x 15-drive pools, but that's a lot to sacrifice in redundancy.

So take your time and get back to me with a well thought out answer, and maybe a guide to cross-flashing, which I'm presently unsure how to do but comfortable enough to attempt once I have a basic guide. I've flashed more than enough devices in my time, just not for this specific use case.

Thanks in advance
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
OK, I found the cross-flash guide with a little searching, so that's out of the road, and it covers all the controllers I listed. That leaves me the problem of how to assign the drives.
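For anyone following along, the H310 (SAS2008) portion of the guide boils down to something like this once you're booted into the flashing environment. I'm sketching the general shape, not the exact steps, and the firmware file names (2118it.bin, mptsas2.rom) come from the generic LSI 9211-8i IT package, so treat everything here as a placeholder and follow the real guide:

    # Rough shape of the LSI IT-mode flash (H310 / SAS2008 class only).
    # File names are placeholders from the stock 9211-8i IT package.
    sas2flash -listall                           # confirm the controller is visible
    sas2flash -o -e 6                            # erase the existing flash; do NOT power off here
    sas2flash -o -f 2118it.bin -b mptsas2.rom    # write IT firmware plus boot ROM
    sas2flash -listall                           # verify it now reports IT firmware

From what I've read, the H710/H710P (SAS2208) need extra preparation steps before the LSI tools will touch them, so follow the guide to the letter for those.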

I'm back to the idea of 3 x 15-drive Z3 'sets'. Is there a way to combine the 3 sets together under a single /mnt so I don't have to have 3 'drives'? I'd prefer a single contiguous volume if possible, but reading informed me that a 45-drive pool would never finish resilvering, so that's out as an option.
I'll also likely pare back the HBA to an LSI SAS 9207-8e, as the other two ports wouldn't be doing anything important. Undecided on the 10GbE NIC, but it's more for future proofing than allowing for heavy simultaneous access. I also need to look into rsync to a remote site, basically a mate on the other side of town as off-site storage; maybe not the entire array, but the most important files from it. So that's something else I need to look into, plus OpenVPN research, from what I've read.
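From what I've read, the off-site sync itself would be a one-liner over SSH once the VPN is up. Something like this is what I have in mind (untested, and the host name and paths are made up):

    # Push only the important dataset to the mate's box; re-runs copy only what changed
    rsync -avh --delete /mnt/tank/important/ backup@mates-place:/mnt/backup/important/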
 

Lix

Dabbler
Joined
Apr 20, 2015
Messages
27
Are you going to connect it to a JBOD shelf? The R720 works well with Core, though it's a bit old. I have two running Core, an 8x 3.5" and a 24x 2.5", both with cross-flashed controllers.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You can stripe as many vdevs as you wish in one pool, but 15-wide is a bit too wide. It is generally advised not to go wider than 10-12 drives per raidz# vdev.
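For reference, striping vdevs is just a matter of listing several raidz groups in one pool; it all mounts as a single filesystem (/mnt/tank on TrueNAS). A sketch with placeholder device names, shown 5-wide per vdev to keep it short:

    zpool create tank \
        raidz3 da0 da1 da2 da3 da4 \
        raidz3 da5 da6 da7 da8 da9 \
        raidz3 da10 da11 da12 da13 da14
    zpool status tank    # shows the three raidz3 vdevs under the one pool

On TrueNAS you would build the same layout from the GUI rather than the shell, but the structure is the same.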
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Even with RAID-Z3? Though I guess I could go 4 x 11 in raidz2 or raidz3.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Even in raidz3, 15-wide means 12 "data drives" for only 3 "parity drives" (it doesn't work like this, of course, but that's the idea).
4*11 is certainly a safer layout than 3*15.
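To put rough numbers on it: 3*15 in raidz3 is 45 drives with 9 of them parity, so roughly 36 drives' worth of usable space; 4*11 in raidz3 is 44 drives with 12 parity, roughly 32 drives' worth; 4*11 in raidz2 is 44 drives with 8 parity, roughly 36 drives' worth. The 4*11 raidz3 option costs you about four drives of capacity but buys a fourth parity group and narrower vdevs that resilver faster.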
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Given that this is a long-term box and will likely go through 2 or 3 drive upgrades, I'm gonna go with RAID-Z3,
as you cannot migrate RAID levels yet, or maybe ever.
If memory serves, RAID-5 has a volume limit of something like 12 TB before you're likely to get an unrecoverable error. Anyone know what it is for RAID-Z3?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
There is no such "limit" per RAID type; it's all a matter of probability…
The thinking behind "raid-5 is dead" first assumes that one drive fails, and then considers how likely a URE is during the rebuild. Under this assumption, the raid-5 array has no redundancy. A raidz3 (or raidz2, or raid-6) where one drive fails still has redundancy and can absorb a URE. You need to lose three drives in a raidz3 vdev before UREs come into play (but then, indeed, with 8+ large HDDs and no redundancy left, you're in trouble…).
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
So I guess the question is, should I use RAID-Z2 or RAID-Z3 for 11-drive arrays? I do plan on implementing a secondary backup solution as well, but this will come later in the piece, especially IF I can work out a way to hook up a tape drive to the TrueNAS server; there are plenty of old posts bemoaning the lack of this feature, but I didn't see anything recent.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
I have such a Supermicro JBOD with 44 x 18 TB drives. I have configured it as 7 x 6-wide raidz2 + 2 hot spares,
because larger vdevs lead to very long resilvering times when the pool is full and busy.

TrueNAS is very versatile, so it is important to keep enough IOPS to be able to add more workloads to the server over the years.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
larger vdevs lead to very long resilvering times when the pool is full and busy.

TrueNAS is very versatile, so it is important to keep enough IOPS to be able to add more workloads to the server over the years.


Quick question: is the resilver 'problem' an issue with the number of drives or the size of the drives?
I've not decided on drive size just yet, but it will be between 10 and 18 TB, depending on costs and the current budget allowance. I do intend to be able to replace the drives 'in place' at least twice before I roll over to a new server. Load won't be that high, but I do need a considerable amount of space.

With the server itself I'll be starting with an old Dell R720 or similar, but the JBOD enclosure and the drives that are going to go in it will all be brand new, as I expect to get at least 10 years out of that hardware. I'll cycle the drives every 5-ish years, as that's the warranty period on the data centre drives I intend on using; it also allows an appropriate expansion schedule for the use case. I need to get at least 2 upgrades out of that hardware. I figure after that there will be a good argument for a full replacement of the entire unit, and finding 'new' drives of that standard will be nigh on impossible, but the hardware will likely be placed off-site as a backup-only unit by that stage. Maybe using SSDs, with very few reads/writes actually needed but full uptime required; MTBF will be an important factor for that.

I do intend to replace the host system within a few years, but based on budget I can't warrant buying a brand new host off the bat, and as it looks simple enough to just re-import an 'old' pool, I don't see any reason not to do it this way. Total required bandwidth is unlikely to exceed 10 GbE, but who knows what new tech will allow for.
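From what I've seen, moving the pool is just an export on the old host and an import on the new one (the GUI has the same operations); 'tank' is a placeholder pool name:

    zpool export tank    # on the old host, before pulling the drives
    zpool import         # on the new host: lists pools found on the attached disks
    zpool import tank    # import it; add -f only if it wasn't exported cleanly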

I need to be able to do both SMB / Domain and iSCSI on this server, and I'm not sure how to do both at the same time. Do they need separate pools / vdevs? If so, I may add an SSD vdev just for that part of the server.

With the hot spares, the only configs I've seen posted were those connected as hot spares directly to individual vdevs. I don't like this idea, but I definitely need hot spares. Can they 'float'? And how hard is that e-mail notification automation thing to set up? I definitely need that. I'm too lazy to log in every day or so and check the server manually, though I will likely do this every other week depending on other schedules.

One thing this server will be doing is offering off-site storage to other file servers, one of the reasons it needs to be as big as possible while still keeping within an acceptable budget. How hard is it to 'silo' these backups, and are there tools other than rsync to make this happen? I'm also keen to back up limited areas of this server off-site. Just the mission-critical stuff till I can get an LTO-8/9 going.

LTO-9 is preferable as it can write both LTO-8 and LTO-9 tapes, and it is the 'current' generation, so tapes will be available for quite some time to come. Their price per TB is acceptable, and if need be I'll build a dedicated box with an internal drive, if I can actually find one. I have a 4RU box to put this hardware in, but most of the software I've seen for this type of thing is Mac-only, except for a single drag-and-drop one I've seen for Windows. I'd really like one that I can run with a few clicks, say once a month or so, and have it make incremental copies that I can access individually, with a database that tells me which files are on which tape. Willing to spend a bit on that, as I see it as a worthwhile investment for the project.

I'm planning on putting together a half-height rack; I have a unit that's around 22/24RU, I haven't counted the height just yet, but the space budget is pretty tight when I look at everything I want to put in it. Pretty sure there's little to no space left in the rack when I account for all the hardware that's planned. It will house nearly all the infrastructure for the site other than the modem, which I can't move, but I will put in an up-link to the rack unit when I install all the other wiring throughout the building. It's only 12 wired connections; everything else will be Wi-Fi or kept within the rack itself. It's only a small site with a handful of links needed, and that part of the infrastructure is already decided on.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Quick question: is the resilver 'problem' an issue with the number of drives or the size of the drives?

It is an issue with both the number of drives and their size. When you resilver a disk, you basically have to read all the data from the other disks in the vdev and then write it to the new disk. This process generates a lot of I/O.
On my server, I started with 12-wide raidz3 vdevs, but after 2 years I switched to 6-wide raidz2, because the resilvering times increased too much as the pool filled up.
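In practice a resilver is simply what happens after a disk replacement, and you can watch it crawl (device names below are placeholders):

    zpool replace tank da7 da45    # swap the failed da7 for the new da45
    zpool status tank              # "scan: resilver in progress", with % done and ETA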

I've not decided on drive size just yet, but it will be between 10 and 18 TB, depending on costs and the current budget allowance. I do intend to be able to replace the drives 'in place' at least twice before I roll over to a new server. Load won't be that high, but I do need a considerable amount of space.
I suggest buying the largest disks you can, because the bottleneck for a backup server is the number of free disk slots.


I need to be able to do both SMB / Domain and iSCSI on this server, and I'm not sure how to do both at the same time. Do they need separate pools / vdevs? If so, I may add an SSD vdev just for that part of the server.
For iSCSI, you need a stripe of mirrors or an SSD vdev.
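The two services can share the server: SMB shares live on datasets in the big raidz pool, while the iSCSI extent is typically backed by a zvol on the fast pool. A sketch with a placeholder pool name and size:

    # A sparse 500 GB zvol on an SSD pool to use as the iSCSI extent
    zfs create -s -V 500G -o volblocksize=16K ssd/iscsi-vm01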

With the hot spares, the only configs I've seen posted were those connected as hot spares directly to individual vdevs. I don't like this idea, but I definitely need hot spares. Can they 'float'? And how hard is that e-mail notification automation thing to set up? I definitely need that. I'm too lazy to log in every day or so and check the server manually, though I will likely do this every other week depending on other schedules.

As far as I know, hot spares are attached to a pool. A cold spare is also a good solution: it avoids wearing the disks and it frees up disk slots.
The email notification works well, so you do not have to worry too much about that.
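A hot spare is added to the pool as a whole, so it 'floats' across all the vdevs in that pool. Device names below are placeholders; the alert emails are configured in the TrueNAS UI, but a manual health check is a single command:

    zpool add tank spare da45 da46    # two pool-wide hot spares
    zpool status -x                   # prints "all pools are healthy", or the problem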


One thing this server will be doing is offering off-site storage to other file servers, one of the reasons it needs to be as big as possible while still keeping within an acceptable budget. How hard is it to 'silo' these backups, and are there tools other than rsync to make this happen?
I'm also keen to back up limited areas of this server off-site. Just the mission-critical stuff till I can get an LTO-8/9 going.
LTO-9 is preferable as it can write both LTO-8 and LTO-9 tapes, and it is the 'current' generation, so tapes will be available for quite some time to come. Their price per TB is acceptable, and if need be I'll build a dedicated box with an internal drive, if I can actually find one. I have a 4RU box to put this hardware in, but most of the software I've seen for this type of thing is Mac-only, except for a single drag-and-drop one I've seen for Windows. I'd really like one that I can run with a few clicks, say once a month or so, and have it make incremental copies that I can access individually, with a database that tells me which files are on which tape. Willing to spend a bit on that, as I see it as a worthwhile investment for the project.
TrueNAS does not support tape. Indeed, tape is not really suitable for backing up a 100 TB pool, because a full backup will take a week to run.
Incremental block replication seems a better solution for such large pools.
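Under the hood it is zfs send/receive between snapshots, which TrueNAS wraps in its replication tasks; only the blocks changed since the previous snapshot cross the wire. A sketch with made-up snapshot and host names:

    zfs snapshot -r tank@monday
    # Send only the delta between sunday and monday to the remote box
    zfs send -R -i tank@sunday tank@monday | ssh backup-host zfs recv -F backup/tank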
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
TrueNAS does not support tape. Indeed, tape is not really suitable for backing up a 100 TB pool, because a full backup will take a week to run.
Incremental block replication seems a better solution for such large pools.


That's kind of what I was thinking. I know the first backup will take ages, but the idea is to run incrementals regularly and off-site the tapes as each one is filled; that sort of thing needs software to coordinate it, though. I've only seen Mac software that has that sort of functionality. Right now I can't remember the name of it, but I don't want to buy a Mac just for that single purpose.

I do however have space and a plan for an I/O computer in a 4RU case that will run Linux or Windows, normally operated remotely, but I've got a plan for a KVM in the rack, just a 1RU all-in-one. The 8-port was the one I liked, and it's overkill for that cabinet, but it's what I found as a single unit. There will only be 3 or 4 physical machines there, then a reasonable amount of virtualisation.

I need to look further into the jail system of TrueNAS to see if I can put a bit of functionality there, maybe an FTP server and a few other things. The net here is fast enough for reasonable-speed remote access to some of the data, though most of the proper VMs will exist on an HPE unit that I got way cheaper than it should have been. So I don't really need to put VMs on the TrueNAS box; as I understand it, it has that capacity, but it is limited and may cause issues when compared to a more traditional hypervisor environment.

I saw a message that adding RAM to a production server may cause issues, so I plan to max out the RAM on the R720 before I put TrueNAS on it. Are you aware of such issues? It may delay the build of the server a little more than I want. Right now my files are shared from an old Win 10 rig that I really need to retire, and only ~15% of the space has redundancy in case of drive failure; other than that it's just single drives, which is a recipe for disaster.

I know tape has its limits, but my only other option is to convince someone else to build a second server and rsync to that as off-site, and I don't know anyone with a budget that will allow for that sort of investment in dollars, physical space, or net capacity. Though I do know one person who could, with a little cajoling and help. They already do a small amount of hosting, but it is only for web development, so not on the same space scale that my project is aimed at. But they are my best bet for an off-site, even if it's not for 100% of the data I need to house.

As far as the iSCSI is concerned, I may just put those drives in the Dell box, as what I'm looking at will have enough free bays to put in a couple of good-quality SSDs, and they won't be going over the SAS cable to the JBOD unit, as I'm not sure how much load/latency/how many failure points that would add. So there would then be 3 pools on that box: the first for TrueNAS itself, one for jails/VMs, and the last one as the iSCSI target location. It also means I'm not taking up valuable bays in the bulk storage area of the server when there's free space in the host.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
That's kind of what I was thinking. I know the first backup will take ages, but the idea is to run incrementals regularly and off-site the tapes as each one is filled; that sort of thing needs software to coordinate it, though. I've only seen Mac software that has that sort of functionality. Right now I can't remember the name of it, but I don't want to buy a Mac just for that single purpose.
The closest solution to your idea is a virtual full backup to tape. It is supported by Veeam even in the free version, so if you already have a tape drive you can give it a try without purchasing software.

I need to look further into the jail system of TrueNAS to see if I can put a bit of functionality there, maybe an FTP server and a few other things. The net here is fast enough for reasonable-speed remote access to some of the data, though most of the proper VMs will exist on an HPE unit that I got way cheaper than it should have been. So I don't really need to put VMs on the TrueNAS box; as I understand it, it has that capacity, but it is limited and may cause issues when compared to a more traditional hypervisor environment.
FreeBSD jails are very powerful and nice, but most software is much easier to install on Linux than on FreeBSD, so you often end up with Linux virtual machines to host the workload anyway. Fortunately, TrueNAS Core can run virtual machines with bhyve, but it is not as powerful as VMware ESXi.
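If you do try a jail for FTP, the GUI drives iocage underneath; the shell equivalent looks roughly like this (release, jail name, address, and package are all placeholders):

    iocage fetch -r 13.1-RELEASE
    iocage create -r 13.1-RELEASE -n ftpjail vnet=1 ip4_addr="vnet0|192.168.1.50/24"
    iocage start ftpjail
    iocage exec ftpjail pkg install -y vsftpd    # install the FTP daemon inside the jail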


I know tape has its limits, but my only other option is to convince someone else to build a second server and rsync to that as off-site, and I don't know anyone with a budget that will allow for that sort of investment in dollars, physical space, or net capacity. Though I do know one person who could, with a little cajoling and help. They already do a small amount of hosting, but it is only for web development, so not on the same space scale that my project is aimed at. But they are my best bet for an off-site, even if it's not for 100% of the data I need to house.
If you succeed in finding a suitable setup that involves tapes, please post your results on the forums; it may interest many folks.
 