64GB RAM on 120TB system? ARC cache issue


madtsoia

Cadet
Joined
Sep 24, 2016
Messages
7
Hi, I am building a NAS for the business I work for. We want to have a volume close to 100TB.

The initial use will be to store 20TB of files and it will slowly fill from there. Performance is not that important.

The hardware is already purchased:

- 15 x Seagate ST8000NE0001 8TB SATA 6Gb/s; 8TB x 15 = 120TB raw disk space.
- ASRock X99 Extreme11 motherboard
- Intel Core i7-5820K, 3.30GHz (3.6GHz turbo), six cores with Hyper-Threading, 15MB cache, LGA2011-v3
- 4 x Corsair Vengeance LPX 16GB (2x8GB) DDR4 3200MHz C16 1.35V memory kits (64GB total)
- Network cards: integrated on the motherboard.

I'd like to use FreeNAS with this hardware if at all possible, even though the RAM totals 64GB, which is below the 8GB + 1GB per TB rule of thumb. (The RAM is also non-ECC, but I decided non-ECC is acceptable for me since the RAM passed a memory test.)

So I am testing at the moment whether FreeNAS works with this amount of memory. The issue I'm running into:

1) Created a RAIDZ2 volume using all 15 of the 8TB disks, resulting in 94.5TB as per the setup dialog.
2) Created a dataset on it (87TB available on it initially).
3) Shared the dataset via NFS.
4) Mounted that NFS share from another server and started rsync to copy files to the new dataset.
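
(For reference, all of the above was done through the FreeNAS web UI. Very roughly, the CLI equivalent would look something like the sketch below - the pool/dataset names, device names and paths are just placeholders, and FreeNAS manages its own mountpoints and NFS exports, so treat it purely as an illustration.)

Code:
# 1) one 15-disk RAIDZ2 vdev (da0..da14 are placeholder device names)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 da14
# 2) a dataset on the pool
zfs create tank/data
# 3) export it over NFS (FreeNAS normally handles this under Sharing -> NFS)
zfs set sharenfs=on tank/data
# 4) on the other server: mount the share and start copying (hypothetical host/paths)
mount -t nfs nas:/mnt/tank/data /mnt/nas
rsync -a --progress /srv/olddata/ /mnt/nas/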

After around 4 hours, about 50GB has been written (that speed is fine; it's only the initial population of the filesystem), but memory is basically fully used, mostly by the ARC, and I have the impression rsync slows down at the same time. top shows that only a few MB are free now. Is that a problem? I rebooted the box (which took a long time to shut down), and when it came back it had 61GB free. Resuming rsync causes the ARC to consume RAM again, however (df -h shows 80GB used now, and the ARC is using 31GB again). Is this high ARC RAM usage a problem?
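
(For anyone who wants to check the same thing, the ARC numbers can be read with something like the following - these sysctl names exist on FreeBSD, but the exact output varies by version, and FreeBSD's top also shows an "ARC:" summary line in its header.)

Code:
sysctl kstat.zfs.misc.arcstats.size        # current ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses   # hit/miss counters
sysctl vfs.zfs.arc_max                     # the ceiling the ARC may grow to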

I am still in the testing process, but I wanted to know the chances of getting FreeNAS to work with this hardware. Which settings are worth adjusting? I can't change the hardware at this stage. Would you recommend using OpenMediaVault instead, since it doesn't need as much RAM?

Any advice is much appreciated,

Madtsoia
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
This could be good :D
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Who took my popcorn........
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Sounds like some of the regulars are expecting fireworks... Your hardware doesn't match what we recommend around here very well, but it sounds like you already know that. Your pool configuration really isn't what we'd recommend either--you don't want to go wider than 10-12 disks in a single vdev. That can cause performance problems, and when a disk fails, resilvering will take half of forever.

A good memtest isn't a guarantee that problems won't crop up later, as I can show from my own experience--when I bought my server a few months ago, I ran memtest for a few days straight, with no errors reported. Then, a month or so ago, I noticed very poor system performance. Checked the IPMI logs, and determined that there was a flood of memory errors on one of my (ECC) DIMMs. So things do fail, even if they were good to begin with.

But none of that's what you asked about. To your direct question, you'll likely be safe with 64 GB of RAM and a 120 TB pool. You can expect reduced performance the more data you have on your pool, and the more heavily you're using it, but it shouldn't eat your data or cause system instability (barring memory errors; see that discussion above).

To your other question, it's perfectly normal for the RAM to fill up while you're using the system. After all, you paid for it, you might as well use it. In particular, the ARC will expand to consume most of your available RAM, which is a large part of why ZFS loves lots of RAM.
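
(If you ever did want to hold the ARC back - say, to leave headroom for jails or other services - you could lower its ceiling with a loader tunable. The value below is purely an example; on FreeNAS you'd set it under System -> Tunables rather than editing loader.conf by hand.)

Code:
# cap the ARC at 48 GiB (48 * 1024^3 bytes) - illustrative value only
# FreeNAS GUI: System -> Tunables, Variable "vfs.zfs.arc_max", Type "Loader"
vfs.zfs.arc_max="51539607552"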
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I would suggest considering adding another 8TB drive and going for a pool consisting of 2 x 8-drive RAIDZ2 vdevs. Better performance. Better reliability. Still 96TB (2 vdevs x 6 data disks x 8TB).

You're not using ECC. You probably should be. Check whether your motherboard will support ECC if you use an ECC-enabled CPU.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Intel's high-end platforms don't seem to make ECC support artificially dependent on the chipset, but I have my doubts about ASRock's X99 boards and their ECC support. And this is coming from someone who bought an ASRock X99-WS plus a Xeon E5-1650 v3.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Your memory will be fine. Having it at 100% is the best way to use it.

Your hardware, on the other hand, should be returned, since you seem to have bought it specifically for FreeNAS. Why not buy the correct hardware from the start? You spent lots of money on your HDDs; the right motherboard, CPU, and memory are cheap compared to the overall system. You should also buy one more HDD so you can make 2 vdevs of 8 disks in RAIDZ2. That's going to give you a little under 96TB usable, which is still really good.

 

madtsoia

Cadet
Joined
Sep 24, 2016
Messages
7
Thank you all very much. Re: the warning about the vdev size - I have to work with 15 drives - would an 8-drive RAIDZ2 vdev + a 7-drive RAIDZ2 vdev also work?

Your concerns regarding non-ECC memory got me thinking. However, changing the hardware at this stage would be difficult.

The new NAS has to be at least as reliable as the old one, a QNAP TS-1269U-RP, which also does not have ECC memory.

The decision whether or not I can go ahead with non-ECC depends in this case on how big the risk of losing all data on the NAS is (side note: backups exist). ECC makes the risk lower, but the question is:

(Risk of losing the entire ZFS + non-ECC pool) = (Risk of losing the entire ext4 + non-ECC filesystem)?

If the answer is yes, then I can justify the decision to stick with non-ECC. If the answer is no, then I have to lower the risk of loss, I guess by using ext4 on OpenMediaVault (OMV).
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
ECC or no ECC doesn't matter too much as long as you know the risks. The risk is the same in your example, and ZFS will at least tell you that there is data loss, whereas ext4 will not.

What I'm pointing out is that you spent over $363 more than you had to on your CPU and motherboard. You could spend way less money and get a much better system that performs better, is more reliable, and uses ECC. On a server, things like onboard audio (which your motherboard has) are worthless, while things like IPMI (which a server motherboard has) are very important. If you want, send me that money instead of throwing it away on components that don't work well for servers.

You should read the suggested threads in my signature. These will all come in handy on your adventures with FreeNAS.

EDIT: Closer numbers
You can build an X10SRL + E5-1620 v3 + 64GB ECC memory for $905, or throw in the E5-1650 v3 (6-core) for an extra $307.

Your setup (ASRock X99 Extreme11, i7-5820K, 4 x 16GB Corsair Vengeance LPX) runs ~$1,240.

My X10SRL build is much more solid and comes in well under yours with the 1620, or about the same with the 1650. It also lets you upgrade to 128GB of memory if needed.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I have to work with 15 drives - would an 8-drive RAIDZ2 vdev + a 7-drive RAIDZ2 vdev also work?

It will work.

Why do you have to work with 15 drives? Budget limitation? Or a physical limitation?

If it's a budget limitation, I'd suggest starting with the 8 drives, and then adding the next 8-drive vdev when the budget allows.

BUT, there is no real reason to go with 2x8 vs 1x8 + 1x7, except that the symmetry is better (I'd expect matched vdevs to help overall performance), and the extra disk gets you an extra 8TB of storage, which makes it good value.

Without the extra disk, you end up with roughly (6+5) x 8TB = 88TB, instead of 96.
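
(Extending the pool later is a single step. On FreeNAS you'd do it through the Volume Manager, but under the hood it amounts to something like the line below, with a placeholder pool name and device names - FreeNAS itself addresses disks by gptid labels.)

Code:
# add a second 8-disk RAIDZ2 vdev to an existing pool named "tank"
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15

Existing data stays on the first vdev; new writes are spread across both.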

Someone made a mistake speccing a server-class system with consumer-grade RAM & CPU. Pity you can't rectify it. System already built? Boxes already opened? No returns possible?
 

madtsoia

Cadet
Joined
Sep 24, 2016
Messages
7
@Stux: Regarding the vdev sizes: I can set up the first vdev with 8 drives and add the second vdev with another 8 drives a little later, when the budget allows getting one more drive. Thank you.

Thanks @SweetAndLow for the alternative recommendation - no, it wasn't my intention to waste money :) just insufficient planning. But changing everything, e.g. for the X10SRL board, would cause problems at this point: it would be time-consuming to get an X10SRL board and compatible parts including ECC memory (and we need the NAS ASAP), and the store will take a 15% restocking fee if they even accept the open boxes. The ASRock X99 also offers the advantage of 18 SATA ports out of the box - while I found in the meantime that expanding with a PCIe expansion card isn't a problem, it still adds CAD$400 if I go with the recommended M1015 card (and if I re-do it, then I should probably follow the recommendations this time...). So I'll go that route only if it's absolutely necessary.

Bottom line: I'll make this hardware work, and if it performs well after the initial data is populated, I'll manage the risk from non-ECC memory.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Btw, performance will improve when you add the second vdev.
 