Will More Memory Help?

Status
Not open for further replies.

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
I've got a new FreeNAS box with 64GB of ECC RAM backed by 14 spinning SAS drives. It will take more memory and I was thinking of bumping it up to 128GB of ECC RAM (or more), but I'm unsure whether more is always better and whether I'll realize any gains at all. Mixed workloads, but mostly Microsoft operating systems running as virtual machines within VMware vSphere.

Thank you,
Jas
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
It won't hurt. How's your ARC hit rate? Is this backing 1000's of VMs or 10s of VMs?
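
Once it's under load you can get a rough idea from the shell; something along these lines should do it (the full report script differs between FreeNAS versions, e.g. arc_summary.pl vs arc_summary.py, so treat this as a sketch):

Code:
# Raw ARC counters straight from the kernel (current size, hits, misses)
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# Fuller ARC report; the script name depends on the FreeNAS release
arc_summary.py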
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
It won't hurt. How's your ARC hit rate? Is this backing 1000's of VMs or 10s of VMs?
I haven't "put it into production" yet running the various workloads so I'm really unsure on that. Thus far I've been running an Iometer VM against a few different vdev configurations (Mirror, RaidZ2, RaidZ3) and getting confusing, yet very pleasing results based on rule of thumb IOPS capabilities of the 7200 RPM SAS drives.

Based on the replies I've seen here I've added the additional RAM so I'm up to 128GB ECC now. I'm going to put it into production in the next few days and slide my VMs over. I'll reply back with some ARC reporting data.
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Where ZFS is concerned, "more RAM" is essentially never a bad idea.

Post the output of arc_summary.pl in code tags if you can?
See above. I'll get back to you on this. Thanks so much.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The short answer is "for VMs, use mirrors"

Thus far I've been running an Iometer VM against a few different vdev configurations (Mirror, RaidZ2, RaidZ3) and getting confusing, yet very pleasing results based on rule of thumb IOPS capabilities of the 7200 RPM SAS drives.

My immediate thought here is "are you using iSCSI, and if you are, did you enable sync=always on your dataset/zvol?"
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
The short answer is "for VMs, use mirrors"



My immediate thought here is "are you using iSCSI, and if you are, did you enable sync=always on your dataset/zvol?"
I am using iSCSI and I did not enable sync=always. Furthermore, I do not have an SSD SLOG installed and configured.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I am using iSCSI and I did not enable sync=always. Furthermore, I do not have an SSD SLOG installed and configured.

Right now you're seeing fantastic performance because all of your writes are being acknowledged as soon as ZFS commits them to RAM (async writes), which is masking the performance deltas of the underlying pool configurations (mirrors will be way faster for small, random, block I/O than RAIDZ-anything).

If you're going to be running production VM workloads on this pool, you need sync writes - otherwise, a sudden power outage/system crash/other fault could lead to lost data, or even to the entire datastore becoming damaged (if it was updating metadata or otherwise doing some critical operation against the filesystem).

Without a fast SLOG device (see my signature for a benchmark comparison), sync writes are brutally slow on spinning disks, to the point where it's effectively not a usable configuration. Hence you need a fast SLOG (or a mirrored pair of them if you really want to reduce your risk window), which then becomes the bottleneck for your write speed.
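
For reference, the commands involved look roughly like this; the pool, zvol, and device names here are placeholders rather than anything from your box:

Code:
# Force synchronous write semantics on the zvol backing the iSCSI extent (placeholder names)
zfs set sync=always tank/vmware-zvol

# Add a mirrored SLOG so those sync writes hit fast flash instead of the spinning vdevs
zpool add tank log mirror da14 da15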
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
What model SSD are you using for the SLOG devices?
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
After much testing, reading, and chin scratching, here's what I ended up with for my vSphere home lab on FreeNAS. Like many others, I'm sure, it's a balancing act between data integrity first for the important things, capacity, and some decent throughput when needed for lab work (the per-zvol sync split can be sanity-checked as sketched below the list):

Boot: 2 x 300GB 10k RPM SAS
Pool: 12 x 1TB 7.2k RPM SAS
vdevs: 2 x 6 x 1TB Mirrors
SLOG: 2 x 1 x 200GB SSD Mirror
Important infrastructure VMs go on shared iSCSI block zvols with sync=always (writes: SLOG)
Lab testing VMs go on shared iSCSI block zvols with sync=standard (writes: 128GB ECC RAM)

In the future I plan to replace the 16-bay Dell PowerEdge R720 with a 24-bay R720xd because I am a little uncomfortable on capacity.
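
For anyone following along, that per-zvol sync split can be double-checked from the CLI with something like this (run against my pool name):

Code:
# List the sync property for every dataset/zvol in the pool
zfs get -r sync Mirror2x6x1TB_slog_pool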
 
Last edited:

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
What model SSD are you using for the SLOG devices?
Dell Enterprise Class 200GB SSD 12Gbps SAS
Model: PX02SMF020
Part No. SDFCP93DAA01
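
Those numbers are off the drive label; if it's useful, the same identity info should be available in-band with something like this (the device node is just an example):

Code:
# Print drive identity details (vendor, model, firmware) for one of the SSDs; example device node
smartctl -i /dev/da15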
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Toshiba PX02SM SSD, older but still good. 12Gbps SAS and has the important power-loss protection; you certainly won't be saturating your 10GbE pair on writes, but it should deliver good numbers.

If you're able to remove one from the mirror and benchmark it with the diskinfo one-liner and add your results to the SLOG thread it would be appreciated:

https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/

Thanks!
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Toshiba PX02SM SSD, older but still good. 12Gbps SAS and has the important power-loss protection; you certainly won't be saturating your 10GbE pair on writes, but it should deliver good numbers.

If you're able to remove one from the mirror and benchmark it with the diskinfo one-liner and add your results to the SLOG thread it would be appreciated:

https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/

Thanks!

I will do that in the near future. I ran into a problem over the weekend I need to sort first. FreeNAS took a dirt nap and I'm trying to troubleshoot.
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Toshiba PX02SM SSD, older but still good. 12Gbps SAS and has the important power-loss protection; you certainly won't be saturating your 10GbE pair on writes, but it should deliver good numbers.

If you're able to remove one from the mirror and benchmark it with the diskinfo one-liner and add your results to the SLOG thread it would be appreciated:

https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/

Thanks!

Code:
root@freenas1:~ # diskinfo -wS /dev/da15
diskinfo: /dev/da15: Operation not permitted
root@freenas1:~ #

 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
If the disk is part of an active mirror/GEOM set, diskinfo won't do the write test, as it's destructive. You'd have to zpool remove it from its current use. If that's too much effort or poses uptime/reliability issues then don't risk it just to give us numbers. :)
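
Roughly, the sequence would be something like this; the pool/vdev/device names are placeholders, so check zpool status for the real ones on your system:

Code:
# Find the log vdev's name (e.g. mirror-6) and its member disks
zpool status tank

# Detach the whole log mirror, then run the destructive write-latency test on the now-idle SSD
zpool remove tank mirror-6
diskinfo -wS /dev/da15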
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
If the disk is part of an active mirror/GEOM set, diskinfo won't do the write test, as it's destructive. You'd have to zpool remove it from its current use. If that's too much effort or poses uptime/reliability issues then don't risk it just to give us numbers. :)

It is part of a SLOG mirror but I took my production workloads off the FreeNAS last week to troubleshoot a SAS cabling issue which I think I have solved now.

I don't mind removing the SLOG mirror, running the test on a single SSD drive, and then re-extending the pool with the SLOG mirror if that's easy to do in the UI or CLI. I'd love to help out this community if I can. You have given me a bunch of help already.
 

jasonboche

Dabbler
Joined
Oct 2, 2018
Messages
25
Found it.

Code:
root@freenas1:~ # zpool list
NAME                      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Mirror2x6x1TB_slog_pool  5.44T   681G  4.77T         -     7%    12%  1.00x  ONLINE  /mnt
freenas-boot              278G   862M   277G         -      -     0%  1.00x  ONLINE  -
root@freenas1:~ #
root@freenas1:~ # zpool remove Mirror2x6x1TB_slog_pool mirror-6

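Once the benchmark is done I should be able to re-add the mirrored SLOG with something along these lines (the device names are placeholders for whatever the two SSDs enumerate as):

Code:
# Re-attach the mirrored SLOG after benchmarking (placeholder device names)
zpool add Mirror2x6x1TB_slog_pool log mirror da14 da15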
 