FreeNAS on high end home built platform


titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
True, if it saves a bunch of seeks, then even a small hit percentage is worth it. Especially if the 'cost' of caching the extra data is almost zero, since the heads are 'there' anyway.

The pool isn't busy enough to show the savings from those seeks, though, if any were actually saved. So I don't think this machine is going to be much help. I just need to start hitting it harder, that's all. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Actually it's per vdev based on what I've seen.

I'm not so sure...

Check out this thread: http://forums.freenas.org/showthread.php?1076-Notes-on-zfs-prefetch (from 2011.. what's old is new again :P ). He discusses vfs.zfs.vdev.cache.size and says it's per disk, not per vdev.

Also check out this video on youtube: https://www.youtube.com/watch?v=PIpI7Ub6yjo (skip to about 30:00 for the part on this)

He mentions the same parameter, says it's a per-disk setting, and explains a little about why it's disabled (zpools with a large number of disks can turn a small value into a large amount of RAM).

I thought it was a per-vdev thing, since it has 'vdev' in the parameter name, but I don't know. Other forums said it was per disk too. It could be that they're saying 'disk' in the context of disk = vdev.

I could try setting up my system with a 100MB setting and see what it does. My 18-drive zpool would definitely show more RAM usage on bootup if 1800MB were allocated versus the setting being disabled.
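
If anyone wants to poke at it themselves, something like this should work on FreeBSD (a sketch only; it's a boot-time loader tunable, so it takes a reboot, and the 100MB figure is just for illustration):

Code:
# Check the current device-level read-ahead cache size (0 = disabled, the default)
sysctl vfs.zfs.vdev.cache.size

# To experiment, add a line to /boot/loader.conf and reboot:
#   vfs.zfs.vdev.cache.size=104857600   # 100 MB
# If the setting really is per device, an 18-disk pool should then
# reserve roughly 18 x 100 MB = 1800 MB of RAM at boot.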


FYI, device-level prefetch will only prefetch metadata now.

Well, that may ruin the fun of it.
 

bazagee

Cadet
Joined
May 2, 2013
Messages
8
Best damned hijacked thread I've ever started :D

Very educational - thanks guys... going way deeper than I've ever cared (or had the time) to think about in the past.
Got my new M1015 sitting on my desk right now, so I'm looking forward to cross-flashing. Can you link me to a good config doc that would be appropriate for a 36-drive box (currently only 12 drives installed)? Use: Acronis image store, near-line archive of company data, images, etc. More writes than reads. Needs to be rock solid and redundant rather than a blazing performer..

@Cyberjock - if it's in your guide, I'll take one up the side of the head now before heading over to read it... :o

Thanks again - great thread.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I happen to like the idea of RAIDZ3 with 11 drives plus one warm spare for 12 total. Multiple sets of those seem like an attractive way to incrementally grow a high-reliability datastore.
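
In zpool terms that layout might look something like this (just a sketch; pool and device names are placeholders, and the spare is shown declared to ZFS here, though a warm spare could also just sit in a bay undeclared):

Code:
# One 11-disk raidz3 vdev plus a spare, 12 drives total (names are placeholders)
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 spare da11

# Growing the datastore later means adding another identical set:
zpool add tank raidz3 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22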
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I'm not so sure...
I looked into this further today and you are right. It's per disk, or more precisely, per device.

Also check out this video on youtube: https://www.youtube.com/watch?v=PIpI7Ub6yjo (skip to about 30:00 for the part on this)

He mentions the same parameter, says it's a per-disk setting, and explains a little about why it's disabled (zpools with a large number of disks can turn a small value into a large amount of RAM).
I haven't seen the video, but the associated slide presentation PDF states "consumes constant RAM per vdev."

It could be that they're saying 'disk' in the context of disk = vdev.
Like in this blog:
Best damned hijacked thread I've ever started :D
Heh, I wondered what this had to do with FreeNAS on high end home built platform.

I happen to like the idea of RAIDZ3 with 11 drives plus one warm spare for 12 total. Multiple sets of those seem like an attractive way to incrementally grow a high-reliability datastore.
Or multiple 6 x raidz2 vdevs, depending on the size of the drives. Have some tested spares around either way, warm or cold. If you go with jgreco's suggestion, decide whether you are going to have the warm spares just sitting unused in the box or declared as ZFS spares. Then make sure to test disk replacement either way.
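
A replacement rehearsal might look roughly like this (hypothetical pool and device names):

Code:
# Declare a spare so ZFS knows about it:
zpool add tank spare da12

# Rehearse a failure: take a disk offline and replace it with the spare
zpool offline tank da3
zpool replace tank da3 da12

# Watch the resilver finish before calling the test a success
zpool status tank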
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm glad I was right, only because it means the people giving the presentations aren't confusing terminology. I'd hate it if some people said vdev but meant disk, and vice versa. There are enough contradictions in the technical stuff already; we don't need to make it worse by mixing terminology. :P

I can't believe I taught paleoN something. Someone record this day on the calendar ;)
 

bazagee

Cadet
Joined
May 2, 2013
Messages
8
Or multiple 6 x raidz2 vdevs, depending on the size of the drives. Have some tested spares around either way, warm or cold. If you go with jgreco's suggestion, decide whether you are going to have the warm spares just sitting unused in the box or declared as ZFS spares. Then make sure to test disk replacement either way.

Cheap commercial 3TB drives are likely to fail after 12 to 36 months, I'd say... So if possible, maybe some of each, warm and cold? I still have some reading to do on the capabilities of ZFS and spares.
 

bazagee

Cadet
Joined
May 2, 2013
Messages
8
Reading through Cyberjock's guide, I saw that for each 1TB of storage using RAIDZ2 it's best to use double that in RAM. I currently have about 16GB and could possibly max that out at 32GB, but the Supermicro case can accommodate 36 drives, each at 3TB at the moment...

I don't think I'll need a ZIL or L2ARC, but now I'm worried this thing will be slow when using Samba shares?
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
The general rule of thumb is 6 gigs of RAM for the system, plus 1 gig of RAM for every TB of (usable) space. But remember, it's just a guideline. Depending on intended workload, some systems will need lots more, and some will get by on less.

I've got a machine with 8 TB of usable space. The rule of thumb says 6 + 8 = 14 gigs of RAM. I only have 8 gigs of RAM in the machine, as that's the board maxed out. And it works just fine.

If you're going to be populating all 36 drive bays right away, I'd definitely look at 32 gigs of RAM. You may find that's all you need.
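
To put rough numbers on a full 36-bay box (a back-of-the-envelope sketch; the six 6-disk raidz2 layout is just one hypothetical way to fill it):

Code:
# Rule of thumb: 6 GB base + 1 GB per TB of usable space.
# 36 x 3TB drives as six 6-disk raidz2 vdevs:
#   usable ~= 6 vdevs x 4 data disks x 3 TB = 72 TB
usable_tb=72
echo "rule-of-thumb RAM: $((6 + usable_tb)) GB"   # prints 78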
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
For example, my system has 18x2TB drives in a RAIDZ3. I'm the only user, and I only access it from one machine at any given time. But with 12GB of RAM it performed very poorly. I upgraded to 20GB and it's blazingly fast.

How much RAM you need depends on your workload, your hardware, how many users you have, etc. It's pretty much something you'll have to take an educated guess at, but be ready to add more if necessary. For that many drives I wouldn't even think about a system with less than 32GB of RAM, and you might need 64GB+.
 

bazagee

Cadet
Joined
May 2, 2013
Messages
8
How much RAM you need depends on your workload, your hardware, how many users you have, etc. It's pretty much something you'll have to take an educated guess at, but be ready to add more if necessary. For that many drives I wouldn't even think about a system with less than 32GB of RAM, and you might need 64GB+.

Hmmm, that's where software vs hardware RAID really comes into play then... Upfront I don't expect too many users, as that type of frequently stored data can stay on our FC and iSCSI SANs. That, and I don't know how reliable this thing will be. I'm also a bit put off by your guide when you state that recovery options are a little on the thin side. But as you point out, it's all about the backups... Maybe I gotta start looking at tape again as another layer... :rolleyes:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I added that comment about recovery options because people would do things that made their zpool unmountable and then want to know what recovery options they had. If you screw up badly enough to make the zpool unmountable, you don't have many options with a realistic chance of success.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I'm also a bit put off by your guide when you state that recovery options are a little on the thin side. But as you point out, it's all about the backups... Maybe I gotta start looking at tape again as another layer... :rolleyes:
Recovery as in disaster recovery. Like your hardware RAID controller died and in the process wrote junk to the drives. Then you send the drives out to a recovery company, pay a lot of money, and wait a long time for a result, if any. You don't have that option with ZFS. That option also involves significant downtime, all of which can be avoided by simply having backups. Given that no form of RAID is a substitute for backups, have backups. It sounds like you know that already. Tape is a valid option. You could also set up a second FreeNAS to back up to.
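
Backing up to a second box can be as simple as snapshots plus zfs send over ssh, roughly like this (hostnames, pool, and dataset names are placeholders):

Code:
# Take a snapshot and ship the whole dataset to the backup host:
zfs snapshot tank/archive@2013-05-02
zfs send tank/archive@2013-05-02 | ssh backupnas zfs recv -F backup/archive

# Later runs only need to send the changes since the last snapshot:
zfs snapshot tank/archive@2013-05-09
zfs send -i tank/archive@2013-05-02 tank/archive@2013-05-09 | ssh backupnas zfs recv backup/archive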
 