When to upgrade RAM?

Status
Not open for further replies.

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
I have been searching around and I am not sure of the answer, but how do you know when your system is RAM starved?

I currently have:
Supermicro X10SDV-6c+
Xeon D-1528
24GB DDR4 ECC Memory
6x 3TB WD Reds

Currently running 2 VMs using about 4GB of RAM combined.

Everything has been pretty happy. However, I have recently been trying to use FreeNAS as an NFS mount for my hypervisor's VMs (second machine, very similar build), and I was experiencing some awful performance. I did some research and found:
https://forums.freenas.org/index.ph...xi-nfs-so-slow-and-why-is-iscsi-faster.12506/

Turns out ZFS is pretty awful for sync writes, but a SLOG would help with this. However, I only have a single M.2 port and a PCIe slot remaining (no more SATA ports), so I don't know if I could use a single NVMe SSD, or whether I should try to get an NVMe PCIe adapter and add a second device to mirror.

However I tried to simply disable sync, and I got some better performance, but not much. The transfers will start at 200-400 MB/s, and then tumble to about 10-60 MB/s. And it will ramp up, and ramp back down repeatedly. After reading the article I created an iSCSI target and moved my VMs to this target, which seemed to improve my performance slightly once again, but it is still pretty poor. Transfers over the same 10Gb network start fast, but then topple to about 30-70 MB/s.

I am running out of ideas, but I know that iSCSI can be RAM hungry. Is there a way to tell if my system is resource starved? Sorry for the long post!
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Any system with ZFS and more data than RAM can be "starved".
Turns out ZFS is pretty awful for sync writes
You mean to say "Turns out ZFS is pretty safe for sync writes";)
I don't know if I could use a single NVMe SSD
Sure can! @Stux has a fantastic thread about SLOGs and their performance.
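If you end up going the PCIe-adapter route, attaching the SLOG afterwards is a one-liner from the shell (the GUI can do it too). A rough sketch, assuming your pool is called tank and the NVMe devices show up as nvd0/nvd1 (those names are just placeholders):

Code:
# attach a mirrored SLOG to an existing pool (device names are examples)
zpool add tank log mirror nvd0 nvd1
# confirm it shows up under "logs"
zpool status tank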
After reading the article I created an iSCSI target and moved my VMs to this target, which seemed to improve my performance slightly once again, but it is still pretty poor. Transfers over the same 10Gb network start fast, but then topple to about 30-70 MB/s.
You listed your drives but did not mention how they are configured. This can affect performance drastically! I run 8 disks as striped 2-way mirrors and can do a solid 1 GigaBYTE per second on reads in high-queue-depth benchmarks. Writes are a bit slower and depend on my constantly changing SLOG config. The big catch here is that I'm using 8Gb Fibre Channel.
However I tried to simply disable sync
Great for testing, BAD for production! Keep in mind that you will never be faster than with sync disabled. A SLOG will improve sync writes, but it will never* be faster than sync disabled.
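For testing, sync is a per-dataset (or per-zvol) property and changes take effect immediately for new writes, no reboot needed. Roughly, assuming a dataset called tank/vmstore (the name is just an example):

Code:
# check the current setting
zfs get sync tank/vmstore
# disable for testing ONLY
zfs set sync=disabled tank/vmstore
# put it back when you're done
zfs set sync=standard tank/vmstore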
Is there a way to tell if my system is resource starved?
If you look at the monitoring section, do you see any swap usage? What does your ARC look like?
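If you prefer the shell over the reporting graphs, something like this should tell you (assuming stock FreeBSD/FreeNAS tools):

Code:
# any swap actually in use?
swapinfo -h
# current ARC size and its ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max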
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Thanks for the quick reply! Yeah, I have disabled sync temporarily just to test, but this tells me a SLOG is not currently my issue. I am unsure, though: you do not need to reboot for that to take effect, do you? Simply disabling/enabling sync from the volume manager is sufficient?

I went with RAIDZ2 for more usable capacity, so I unfortunately do not get the benefit of striped mirrors.

Swap utilization is pretty much all green, around 6GB.
ARC size is about 4GB, a pretty horizontal line.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
If you're not swapping and your ARC is stable at 4GB... how are you testing the disk speed on the iSCSI side? Are you routing your iSCSI traffic? MPIO? Do you know what you have selected for your PSP (path selection policy) on ESXi? Back on the FreeNAS side, how's the ARC hit ratio? Have you tried CrystalDiskMark in a Windows VM? For the iSCSI setup on FreeNAS, did you use a file-backed LUN or a zvol?

Sorry the questions are all over the place.
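For the ARC hit ratio, if the GUI graph isn't clear, the raw counters are exposed via sysctl (they are cumulative since boot, so sample twice and diff for a current ratio):

Code:
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# hit ratio = hits / (hits + misses)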
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
However I tried to simply disable sync, and I got some better performance, but not much. The transfers will start at 200-400 MB/s, and then tumble to about 10-60 MB/s.
The problem is the number of drives you have. Also, I am guessing you have them in RAIDZ2? You didn't say. To boost the speed of your pool, you need more drives. I put 16 drives in mirrored pairs to get high IOPS from my iSCSI pool, and I have both a SLOG and an L2ARC as well.
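For reference, a striped-mirror pool is laid out at creation time; you can't reshape an existing RAIDZ2 vdev into mirrors in place. A sketch with made-up device names:

Code:
# six disks as three striped 2-way mirrors (example device names)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5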
And it will ramp up, and ramp back down repeatedly.
That is the drive's internal cache filling and flushing.
After reading the article I created an iSCSI target and moved my VMs to this target, which seemed to improve my performance slightly once again, but it is still pretty poor.
A SLOG is still needed for iSCSI; it is also sync, especially when doing VMs. What hypervisor are you using?
I am running out of ideas, but I know that iSCSI can be RAM hungry.
You probably want to max out your RAM before adding a SLOG, but you will probably need a SLOG as well.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167

Cool post about using a ramdisk to test. I did some testing and tried this, but my speeds/results were about the same.

Here is a picture of what I am seeing. It starts out strong at 200+ MB/s. Maybe it is because the VM is running on the RAIDZ2 volume and pulling the ISO over NFS from the same RAIDZ2 volume? However, I have tested this with only my Windows server running, and I do not believe the I/O of the server itself is a great deal. It is currently just running AD and DNS as a BDC.

[Screenshot attached: Untitled.png]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The transfers will start at 200-400 MB/s, and then tumble to about 10-60 MB/s. And it will ramp up, and ramp back down repeatedly.

That is the drive's internal cache filling and flushing.

Close, but not quite. This is the ZFS write throttle in action: (http://dtrace.org/blogs/ahl/2014/02/10/the-openzfs-write-throttle/)

This specific kind of "breathing" throughput results from a combination of a fast network, slow vdevs, and fast SLOG (or async writes) - basically, your network pipe is capable of filling a transaction group faster than your drives can write it.

Because you're writing asynchronously, the initial burst of writes goes into RAM - which gobbles it all up as fast as your source will send it.

Then you trip the threshold of vfs.zfs.dirty_data_max (10% of your RAM or 4GB, whichever is lower; in your case 2.4GB) and ZFS starts writing it out to disk. But you've got a single RAIDZ2 vdev, and there are always other operations going on, whether it's other network traffic, your VMs, or ZFS doing housekeeping/metadata updates. Your vdev can only sustain a much slower write speed.

ZFS says "whoa, hold on there" to your incoming writes and inserts an artificial delay. This gets larger and larger until the pool is able to "catch up" with the pending transaction groups.

Then you start it all over again.
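If you want to see the actual numbers on your box, the relevant tunables are readable via sysctl (just read them; the defaults are usually sane):

Code:
# how much dirty (not-yet-flushed) data ZFS will buffer
sysctl vfs.zfs.dirty_data_max
# derived from this percentage of RAM, capped by dirty_data_max_max
sysctl vfs.zfs.dirty_data_max_percent vfs.zfs.dirty_data_max_max
# delays start once dirty data passes this percentage of dirty_data_max
sysctl vfs.zfs.delay_min_dirty_percent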
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
That's some great information.

Is there a way to combat this? Or am I limited by my RAIDZ2? If that is the case, I will stop using my 10Gb link and move back to my 1Gb link. It is a shame, but everything seems to be more stable, and surprisingly, faster.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
More vdevs.

 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
That's some great information.

Is there a way to combat this? Or am I limited by my RAIDZ2? If that is the case, I will stop using my 10Gb link and move back to my 1Gb link. It is a shame, but everything seems to be more stable, and surprisingly, faster.
Since it sounds like you're out of space in your case, you could use a single larger PCIe SSD for your VM datastore, and do snapshots with local replication back to the Z2 pool for some semblance of a "backup."
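The snapshot-plus-replication part can be a periodic snapshot task plus a replication task in the GUI, or done by hand with something roughly like this (dataset and snapshot names are made up for the example):

Code:
# snapshot the SSD-backed VM dataset
zfs snapshot ssd/vms@snap1
# first copy onto the RAIDZ2 pool
zfs send ssd/vms@snap1 | zfs recv -F tank/vms-backup
# later runs only send the differences
zfs send -i @snap1 ssd/vms@snap2 | zfs recv tank/vms-backup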
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
I have plenty of space, about 8TB free actually.

I could also add a second NVMe drive using a PCIe adapter and mirror the two. However, I think I will be fine with a 1Gb link instead of 10Gb. Just seems like a waste!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I have plenty of space, about 8TB free actually.

I meant "physical space for drives" - if I was wrong in my guess then by all means just drop in a mirror of two SSDs and go to town.

I could also add a second NVMe drive using a PCIe adapter and mirror the two. However, I think I will be fine with a 1Gb link instead of 10Gb. Just seems like a waste!
The other option would be a moderate-speed SLOG device (like an Intel S3700) which could act as a throttle for the write side, but still let you have the 10Gbps connection for reads.
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
You were correct! My Node 304 is pretty much full!

However, I could add a PCIe card to mirror a couple of NVMe drives, but that sounds expensive, haha.

I think I'm just going to try iSCSI over a 1Gbps port. Wish there was an easy way to limit the 10Gb port to 2Gbps or something.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You were correct! My Node 304 is pretty much full!

However, I could add a PCIe card to mirror a couple of NVMe drives, but that sounds expensive, haha.

I think I'm just going to try iSCSI over a 1Gbps port. Wish there was an easy way to limit the 10Gb port to 2Gbps or something.
You can use pf for this, but I have no idea if pf is still included or enabled in FreeNAS. That would be a cool feature request! Outbound QoS configurable by service or client IP!
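If pf/ALTQ isn't available, dummynet via ipfw is the other stock FreeBSD option. A rough, untested-on-FreeNAS sketch, assuming iSCSI on its default TCP port 3260:

Code:
# NOTE: loading ipfw defaults to deny-all unless net.inet.ip.fw.default_to_accept
# is set as a loader tunable first, so try this from the console, not over SSH
kldload ipfw dummynet
# cap matching traffic at roughly 2 Gbit/s
ipfw pipe 1 config bw 2000Mbit/s
# push iSCSI traffic (TCP port 3260) through the pipe
ipfw add 100 pipe 1 tcp from any to any 3260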
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
However, I could add a PCIe card to mirror a couple of NVMe drives, but that sounds expensive, haha.

It doesn't have to be NVMe; you can use cheaper M.2 SATA drives like the $100 WD Blue 500GB [1] with something like the $55 StarTech PEX2M2 [2], which has the ASMedia 106x chipset (and fits two M.2 cards on one slot as well).

[1] https://www.amazon.com/Blue-NAND-500GB-SSD-WDS500G2B0B/dp/B073SBX6TY
[2] https://www.amazon.com/StarTech-com-M-2-SATA-Controller-Card/dp/B017IM54GM
 