Requirements for an archive server, 12 drive volume

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Looking to build an archive server. It will only be accessed by one or two people at a time, and nothing faster than a gigabit network.
I've read the recommendation of 1GB of RAM per TB of space, but this will not be possible with the hardware I have in mind. It has a maximum of 32GB on a quad-core Xeon chip.
As stated, it will be a 12-drive volume and speed is not a desired feature. Even 50MB/s is fast enough for this use case.
What I want to know is the largest raw capacity I can add to this system. I plan on using RAIDZ2 or 3. The smallest drive I want to use is 12TB and the largest is 18TB, which gives a maximum raw capacity of around 200TB. I'm pretty new to this TrueNAS thing, but I've been using computers since the Apple II was the latest and greatest.
Like I said, archive usage only. It will be a warm backup of an existing file server; it just needs to be online 24/7, but access will be occasional. I also plan to build on a much newer platform with a maximum of 128GB RAM, but that too will need to be a larger volume, maxing out at 400TB raw capacity. I'm investing in the drives; current plans are for UltraStar drives as the media.

This will be a repository over broadband (100/40 Mbit is the fastest I can get here) for other file servers (both units) as off-site backup for other organisations, so I need to get very familiar with rsync too. But that will be later, after I've got all the TrueNAS stuff nailed down.

Of course I'll be mirroring the SSDs for the TrueNAS install. I also need a way of backing up the requisite details of those, most likely to a USB drive hanging out the back. I know I can just import to a 'new' install later, but I want to include all the 'groups' etc.

So, I'm guessing the important question is: is 32GB of RAM enough for a 'slow' server with 150-200TB raw capacity, and which RAID level should I be looking at? Eventually this will all also go on tape, but this is an interim step.
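On backing up the install details to that USB drive: assuming TrueNAS CORE, which keeps its configuration (users, groups, shares, etc.) in a single SQLite database at /data/freenas-v1.db, a date-stamped copy is enough — the USB mount point below is a placeholder for wherever the stick actually mounts:

```shell
# Copy the TrueNAS CORE config DB to a date-stamped file on the USB stick.
CONFIG_DB=/data/freenas-v1.db   # CORE's config database (assumption: CORE, not SCALE)
USB_MOUNT=/mnt/usb              # placeholder: wherever the USB drive is mounted
STAMP=$(date +%Y%m%d)
if [ -f "$CONFIG_DB" ]; then
    cp "$CONFIG_DB" "$USB_MOUNT/truenas-config-$STAMP.db"
fi
```

The web UI also offers a manual export of the same database, which you can re-upload to a fresh install to get the groups and shares back.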
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
recommendation of 1GB/TB of space but this will not be possible with the hardware I have in mind. it has a maximum of 32GB on a quad core Xeon chip.

So, I'm guessing the important question is. Is 32 GB Ram enough for a 'slow' server with 150 - 200 TB raw capacity, and which raid level should I be looking at? eventually this will all also go on tape but this is an interim step.
It's only a rule of thumb and it becomes less relevant with larger sizes of pool. You should be fine for your use case with 32GB.

RAIDZ2 should be fine as it's only a backup (and interim) anyway.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Thanks for the quick reply.
It's very unlikely that there will be any jails or other VM-type issues; those will come with the second deployment.
Can you offer any suggestions as to where I should be concentrating my research for that unit? It will be a 24-drive system plus the TrueNAS install.
I'm pretty sure that at that level I'll need Z3. How much RAM should I be using there, as a minimum? Projected minimum raw capacity, as I said, would be 400TB. This device will also need to do things such as dedupe and snapshots. I'd also like it to have 10GbE networking and be able to saturate that sort of connection at times, but not all the time.
Another question I've not seen an answer to: how do you identify a drive when it fails? Serial number or something else? I've seen that 'port' may not be a reliable way of doing this.
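On the identification question: the usual approach is to go by serial number, not port. A sketch, assuming TrueNAS CORE (so FreeBSD device names — da5 below is a placeholder) and the bundled smartmontools:

```shell
# zpool status reports a failed member by its gptid label; on FreeBSD,
# glabel maps those labels back to daN device names.
glabel status

# Then read the serial number off the suspect device so you can match it
# to the sticker on the physical drive before pulling it.
smartctl -i /dev/da5 | grep -i serial
```

Recording each bay's serial number when you first build the pool makes this much less stressful later.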
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should be fine for your use case with 32GB.

For "minimum raw capacity 400TB"? No way, bad idea.

Discard any idea of dedupe. For 400TB you would need a massive amount of RAM (on the order of three terabytes), assuming a block size of 64KB and slightly pessimistic assumptions. It could be less depending on the specifics. It's also complicated by the new special metadata device capabilities, but we're still talking about nothing less than 512GB of RAM under the best of conditions, I think.
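The arithmetic behind that, as a sketch — the 320 bytes of RAM per unique block is a commonly quoted ballpark, and real dedupe-table entries can cost more, which is where the pessimistic three-terabyte figure comes from:

```shell
# Rough dedupe-table RAM estimate for a 400 TB pool with 64 KiB blocks.
POOL_BYTES=$((400 * 1000000000000))   # 400 TB raw
BLOCK_SIZE=65536                      # 64 KiB average block size
RAM_PER_BLOCK=320                     # ballpark bytes of RAM per unique block
BLOCKS=$((POOL_BYTES / BLOCK_SIZE))
echo "$((BLOCKS * RAM_PER_BLOCK / 1000000000)) GB"   # prints "1953 GB", i.e. ~2 TB
```

Smaller average block sizes make it dramatically worse, since the entry count scales inversely with block size.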

For a non-dedupe pool, I imagine 96GB of RAM would be possible, or maybe 64GB *might* be okay. The ultimate issue you'll run into is that ZFS has to be able to cache a lot of metadata to find free space. If this is a true archival system, where you write once to the pool, and keep on going 'til it's full, and then never touch it again, you can get by on somewhat less RAM (like the 64GB).

You will not be reliably saturating 10GbE networking with RAIDZ3 and 24 hard drives. Disabuse yourself of any such notions. Especially if you are strangling the system for ARC.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Yeah, the 400TB idea is for a completely different system. The specs of that are dual Xeons @ 3GHz and 96GB RAM; it's an old HP DL380 G7 that has a couple of RAID cards in it. I know I have to convert them to 'IT' mode; still not there on that. That system will hold 'live' data, modified on a fairly regular basis. I just have to find a SAS expander case that I can put in my rack. I've seen a few Supermicro ones, right up to 45 drives, and they're attractive.
So I have the first third of the hardware for that project (the DL380); the second third will be the expander case, and the last third will be the drives. A much longer-term project, think two years from now. I only mentioned the 400TB idea to get a feel for it. If need be, I'll expand the RAM in the HP unit up to the max of 192GB, from memory. It will also do jails and VMs, though, so the RAM is most likely a really good idea. Right now that system is running old Windows Server OSes as a practice environment. The two RAID cards each have 4 SAS ports out the back, SAS2 I think, definitely not 3. I'm building this on second-hand hardware due to cost constraints; the only new parts from the get-go are the HDDs. But feel free to give me pointers; asking questions is one of the best ways to learn.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
For "minimum raw capacity 400TB"? No way, bad idea.
If you read the OP, that's referring to the future system with 128GB of RAM.

The 32GB refers to a 200TB-capacity server to be used as a "slow" backup server with occasional access only, by one or two users, with 50MB/s as the expectation.

I must have some eye trouble as I don't see any mention of dedupe. (EDIT: Ok I see it in the second post now...)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hazards of scanning backwards rapidly through a thread. :smile: I don't generally remember specifics of a thread between visits.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I plan on using raidZ 2 or 3. the smallest drive I want to use is 12TB and the largest is 18TB
Are you saying that the plan is to "wildly" mix different drive sizes (basically like you would do with unRAID)? If so, please be aware that it will not work this way. The size of the smallest drive limits the usable space of all other drives.
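As a worked example of why: in a RAIDZ vdev, every member contributes only as much space as the smallest drive, so mixing 12TB and 18TB drives wastes the difference. Rough usable-space arithmetic (ignoring ZFS overhead):

```shell
DRIVES=12
PARITY=2          # RAIDZ2
SMALLEST_TB=12    # capacity of the smallest member
# Every drive contributes only SMALLEST_TB, even an 18 TB one.
echo "$(( (DRIVES - PARITY) * SMALLEST_TB )) TB usable"   # prints "120 TB usable"
```

With all twelve drives at 18TB the same formula gives 180TB, which is why keeping vdev members uniform matters.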
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
ChrisRJ, I'm not planning on mixing drive sizes, just saying that the drives will be some size between 12 and 18TB.
Not my first rodeo building a RAID, just my first time doing it software-defined in ZFS, let alone on TrueNAS.

OK, did a little fact-checking, and that HP system that may form the basis of the long-term large-scale storage plan can take 384GB RAM, so the 192 that I remembered was per socket.

I'd have to pull the cards out to see what they are, but they're HP-branded RAID cards with battery-backed cache; 1GB RAM on the card, from memory. It's a while since I opened this thing up, and it's not in a place where I can do that easily.

What would you suggest as the maximum raw size of a volume, either 24 or 45 drives, with that 384GB RAM? Nominal access is at most 5 clients, 2 local and 3 over the net. Definitely going RAIDZ3, and maybe a hot spare or two for the 45-drive version. It will gain 10GbE networking when I find a card I like, but to begin with it will be 4x1GbE teamed, if TrueNAS supports that sort of thing; I have no idea if it does or how it would do it at present. Max expected access speed is 250MB/s.

One of the purposes of this much beefier system may be to serve iSCSI to VMs on another system in the same rack, also likely 10GbE, but this is only a maybe and would depend a lot on the performance of said RAIDZ volume. That's to go along with a handful of jails for other services, which reminds me: how well do those integrate with VPNs?

Drives will be data-centre-style SATA, most likely; no need for SAS drives in the enclosure. But I'm looking to get the maximum volume size I can, especially with the 45-drive enclosure: likely 2 volumes with maybe a hot spare, all drives the same. (I know I have to wait for RAIDZ expansion in ZFS.)
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Yeah, I'm aware you can't use a straight RAID controller, but what's this about cross-flashing to IT mode?
Worst case, I'll just get a reasonable-quality HBA with 4 SAS2 ports on the back, or maybe SAS3 for a more current enclosure. I know I can step down from 3 to 2 with the right cable(s). Later on I'll strip the controllers out of the chassis, see what they are, and post here to see what people think of them.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Which card would you recommend? Both are IT-mode flashed:
LSI 9201-16e
or
LSI 9206-16e
Prices are reasonable to me for the purpose of building a very large pool in an enclosure.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, the 9206 is based on the 2308 and supports PCIe 3.0, so it's the somewhat better choice of the two.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Thanks. Do you have any opinions on the Supermicro JBOD enclosures? I'm looking at anywhere from 24 to 45 drives.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm not a fan of the high density ones. These things are usually deployed in a data center, where heat and rack cost are competing against each other. If you can afford the rack space, using a 4U/24 bay chassis results in your drives running a bit cooler, which can in turn extend their lifespan and reliability. This isn't a huge factor, though.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Cool. Just realised that the server I want to use as the TrueNAS host uses a RAID controller for its primary HDDs, as stated below.

Smart Array P410i Controller with 256 MB, 512 MB battery-backed write cache (BBWC), 512 MB flash-backed write cache (FBWC), and 1 GB FBWC options

So I thought maybe I could run TrueNAS as a VM in Proxmox; I remember seeing a video doing just that somewhere on YouTube.
The idea would be to install Proxmox on the RAID 1 146GB drives, with other VMs on the RAID 5 300GB drives.

Do you have any experience with Proxmox?

It's just an idea right now. The main thought was to max out the RAM in the server and point the majority of it at the TrueNAS VM. If I max out the server, it will hold 384GB of RAM, if I've read the manual right. I need to do a LOT more research before I go much further with that particular idea.
 

VypR

Dabbler
Joined
Oct 15, 2021
Messages
19
Looking at the forums a bit, I see many people using the Dell PowerEdge R720 server. This is within my budget envelope. Can you suggest any particular pitfalls of this unit as it pertains to a TrueNAS installation?

As far as rack space is concerned, that is fairly limited, to around 20U. But that will house 2 other 4U 'regular' computers, a 1U server, a 2U server, a 3U UPS, 2U of network equipment, and a few other things, so yes, rack space is very limited. There are probably a few other things I've forgotten that need to go in there, but those are the main units for the cabinet. There's enough stuff that I'm installing a 15A circuit to handle the power requirements for the Tardis, as it is affectionately referred to.

thanks for taking the time to talk to me about these ideas
 