Quick SLOG question I need some help with.

Status: Not open for further replies.
Joined: Apr 9, 2015 · Messages: 1,258
So I have a couple of datasets so that I can easily snapshot data and track how much is used in a particular area. The downside is that transferring data between datasets is very slow, since it is reading from and writing to the same pool. Is this a situation where a SLOG can help? Now that I have a case that will support 48 drives, I have more than enough room to toss in a couple of SAS SSDs. Often I am transferring 2 to 4 GB files, sometimes in batches of up to 50 GB at a time, but since it is a read/write operation on the same disks it runs really slowly unless the data is already in ARC. If not, I will just look at getting a couple of small drives, setting up a small pool with them, and using that as the staging point for pool-to-pool transfers.
 

m0nkey_ (MVP)
Joined: Oct 27, 2015 · Messages: 2,739
No, a SLOG will not help in this case. A SLOG is only useful for synchronous writes. Copying files from one dataset to another uses asynchronous writes.
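To illustrate the distinction m0nkey_ is drawing (a hedged sketch, assuming GNU dd on the test box, nothing ZFS-specific): a normal write returns as soon as the data is cached in RAM, while a synchronized write (`oflag=sync`) must reach stable storage before returning. Only the second kind goes through the ZIL, which is all a SLOG accelerates:

```shell
#!/usr/bin/env bash
# Hedged illustration: the same 1 MiB written buffered vs. synchronized.
# oflag=sync is a GNU dd flag; the timings, not the flag, make the point.
set -euo pipefail
f=$(mktemp)

# async: dd returns once the data is in the page cache
time dd if=/dev/zero of="$f" bs=4k count=256 2>/dev/null

# sync: every 4k write waits for stable storage before returning
time dd if=/dev/zero of="$f" bs=4k count=256 oflag=sync 2>/dev/null

rm -f -- "$f"
```

On a pool, the synchronized case is where a SLOG would sit; a plain file copy between datasets is the buffered case, so it never touches the SLOG at all.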
 

Arwen (MVP)
Joined: May 17, 2014 · Messages: 3,611
Depending on your use case, you might create a mirrored pool of SSDs to use as temporary storage, meaning a script that does something like this:

mv "$SOURCE" SSD_DATASET/
mv SSD_DATASET/* "$DESTINATION"/


This eliminates reading while writing on the same disks.
Careful scripting can make it look transparent: you simply give the command just like a regular "mv" and it does the work:

mv_ssd FILE1 FILE2 {FILE3 ...} DEST

And if you are really careful with the scripting, you can safely use a single SSD (meaning not mirrored): you simply don't erase the source files until they are safely back in the protected pool.
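Arwen's mv_ssd idea could be fleshed out along these lines. A hedged bash sketch, where the staging path and the copy-then-delete ordering are my assumptions on top of the post (nothing here is an established tool):

```shell
#!/usr/bin/env bash
# mv_ssd SRC... DEST -- two-hop move through an SSD staging dataset.
# STAGING is a hypothetical mountpoint; override it for your pool layout.
set -euo pipefail

STAGING="${STAGING:-/mnt/ssd_pool/staging}"

mv_ssd() {
    local dest="${@: -1}"               # last argument is the destination
    local srcs=("${@:1:$#-1}")          # everything before it is a source
    cp -p -- "${srcs[@]}" "$STAGING"/   # hop 1: read main pool, write SSD
    cp -pR -- "$STAGING"/. "$dest"/     # hop 2: read SSD, write main pool
    rm -f -- "${srcs[@]}"               # sources go only after both hops
    rm -rf -- "${STAGING:?}"/*          # clear the staging area
}
```

Because the originals are removed only after the second hop completes (and `set -e` aborts on any failed copy), an unmirrored staging SSD dying mid-transfer loses nothing, which is exactly the "really careful" property Arwen describes.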
 
Joined: Apr 9, 2015 · Messages: 1,258
Nah, it's all good; just wanted to check. I can easily get a bunch of cheap small drives and use them in a pool, since once I am done with the data there and have transferred it over I can clear it off. The data isn't really important; I just want a faster way to make the transfer, but since it is written to often I don't want to burn through an SSD as a throwaway. With the smaller case this wasn't much of an option, but now I have 41 bays sitting empty that are just begging to be put to use.
 

Chris Moore (Hall of Famer)
Joined: May 2, 2015 · Messages: 10,080

Joined: Apr 9, 2015 · Messages: 1,258
I was trying to decide how to use mine also.
LOL, I noticed you and someone else pulling EVERY drive available to drop in and fill it up.

For the data I have stored in the other dataset, I have about 1 TB right now, and I will be shuffling a lot of it around soon and clearing it out. I will probably just pick up some 1 TB drives and throw them into a RAIDZ1 to hold what I have, or maybe get a bunch of little refurbs and use those instead. As long as I have about 2 TB of storage I will be fine there. If I didn't do some renaming and drop files into multiple datasets, having a script make the transfer would be fine. I did that at one point, where it would just transfer from one folder and move things across, but now I have multiple folders in a dataset and that will not work as well.
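For what it's worth, the multiple-folders case can still be scripted once the folder-to-dataset mapping is written down. A hedged bash sketch; every folder name and path below is made up for illustration:

```shell
#!/usr/bin/env bash
# Sketch only: a fixed map of staging folders to destination datasets,
# copied one folder at a time. All names and paths are hypothetical.
set -euo pipefail

declare -A DEST_FOR=(
    [movies]="/mnt/tank/media/movies"
    [shows]="/mnt/tank/media/shows"
)

transfer_all() {
    local stage="$1" folder src
    for folder in "${!DEST_FOR[@]}"; do
        src="$stage/$folder"
        [ -d "$src" ] || continue                   # nothing staged for this folder
        mkdir -p "${DEST_FOR[$folder]}"
        cp -pR -- "$src"/. "${DEST_FOR[$folder]}"/  # copy contents to the dataset
        rm -rf -- "$src"                            # clear the staging copy
    done
}
```

Renames would still have to happen in the staging area before `transfer_all` runs, but the per-folder map removes the "one folder, one destination" limitation the earlier script had.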
 

Chris Moore (Hall of Famer)
Joined: May 2, 2015 · Messages: 10,080
LOL, I noticed you and someone else pulling EVERY drive available to drop in and fill it up.
No. I am not trying to fill mine. I still only have the original 16 drives I transplanted from the old chassis. I am scheming about how best to implement the pool I want to create for iSCSI. I have a random assortment of leftover drives in the 500GB and 1TB size and I even have half a dozen 2TB drives lying about doing nothing, but I have not decided how I want to configure it yet.
 
Joined: Apr 9, 2015 · Messages: 1,258
I have a couple of 250s I pulled from my desktop that are still working OK; honestly, they will probably outlast the drives in my FreeNAS box, and they have been running for a long time now. But I would need to buy some more to make that work, probably at the same cost as some new 1 TB drives. Now it's just deciding what exactly I want to do to achieve what I need. I will probably end up grabbing some WD Blues and throwing them into a RAIDZ1 to hold the temp data before I transfer it to my main pool; that would be a performance increase for the main pool and for transfer speed anyway. If I could find a dozen or so smaller drives for dirt cheap, I would just throw them together in a larger pool and keep a hot spare or two. It will probably take me a long time to fill the case, and by the point where I am almost able to fill it I will be replacing old drives with something larger anyway.

I'm just tired of the transfers taking so long since they go from the same pool to a separate dataset, and I want to figure out a way to make them a little faster; right now it is tedious to say the least.
 

Stux (MVP)
Joined: Jun 2, 2016 · Messages: 4,419
More vdevs and more IOPS :)
 
Joined: Apr 9, 2015 · Messages: 1,258
Yeah, I know I could add a second vdev for what I already have, but that costs money, and I am already thinking about going to a 12-drive vdev and replacing the 7 I have now. It will take some time before that can happen, though. I am dreading the bill from when the wife was in the ICU for a couple of days and then a regular room for a week, not to mention two trips into the ER and two ambulance rides with 30 miles of transport each way. When those bills come in, I may have a kidney, a piece of liver, and a lung for sale.

Right now I will be happy to just offload the stuff onto a second small pool and go from there.
 