I know this isn't technically a FreeNAS question, but it relates to ZFS.
I'm currently running Linux Mint 19.1, but I built a custom Ubuntu kernel package with kernel 5.1.9 and ZFS 0.8.1, which I installed. So I have the latest ZFS on the latest Linux kernel.
The system has a 32-core EPYC CPU and 192 GB of RAM.
Here's the deal.
I created a dataset with lz4 compression, no dedup, and 512 KB records (recordsize). Onto this dataset I copied 140 GB of data from an NVMe SSD, 8 times over. The data consists of almost 73k files, each approximately 2 MB, so almost 600k files in total.
The zpool consists of a single 10 TB WD Gold drive.
During the copy the system managed an almost constant 175 MB/s from the SSD to the dataset. 140 GB times 8 is a bit over 1 TB total, and each 140 GB pass took between 10 and 12 minutes.
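For reference, the raw numbers roughly check out (assuming 1 GB = 1024 MB here; the observed 10–12 minutes per pass is a bit faster than the naive math, presumably because lz4 shrinks what actually hits the disk):

```shell
# Naive time estimate for one 140 GB pass at the observed rate
total_mb=$((140 * 1024))   # one 140 GB copy, in MB
rate=175                   # observed throughput, MB/s
secs=$((total_mb / rate))
echo "$((secs / 60)) min $((secs % 60)) s per 140 GB pass"   # ~13 min 39 s
```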
During the copy I watched RAM usage climb by a total of 44 GB. (That's the amount that remained occupied after the copy was done and I had killed all RAM-consuming apps.)
Even though the copy ran at ~175 MB/s, I could barely hear the disk spinning, which is very strange. Moreover, after I finished copying everything, the RAM stayed occupied. The only way to "flush" it was to restart Linux.
Alas, after the restart I had to import the zpool and mount the datasets again. (I still need to figure out how to make them mount automatically at startup.)
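From what I've read, on Ubuntu-based systems ZFS on Linux ships systemd units that handle import and mount at boot, so something like the following should do it (with `tank` standing in for the actual pool name):

```shell
# Record the pool in the cache file so it is found at boot
# ('tank' is a placeholder for the real pool name)
sudo zpool set cachefile=/etc/zfs/zpool.cache tank

# Enable the ZoL systemd units that import pools and mount datasets at startup
sudo systemctl enable zfs-import-cache.service zfs-mount.service zfs.target
```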
Now the ram was free.
What gives? If I had waited longer, would the RAM have been cleared? How much RAM does ZFS really need? If I had 2 TB of RAM, would all of that have been occupied as well?
Is there a way to manually flush the occupied RAM after such an intensive write session? If the dataset is constantly being written to, should I expect all the remaining RAM to get occupied? I need between 130 and 160 GB of RAM for my workload, and I don't really have that much to spare. If it's really necessary I could add more RAM; my motherboard has 16 slots and only 8 are populated.
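In case it helps others with the same question: here's what I've been using to look at the cache (the ARC), and, if I understand the module docs right, to cap it so it can't eat into the RAM my workload needs. The 64 GiB limit below is just an example value:

```shell
# Current ARC size and target size, in bytes
awk '/^size|^c / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 64 GiB right now (takes effect immediately)
echo $((64 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots
echo "options zfs zfs_arc_max=$((64 * 1024 * 1024 * 1024))" | sudo tee /etc/modprobe.d/zfs.conf
```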
But hell, I've watched the whole RAM fill up. I've never had such a RAM-intensive load anywhere before.