Extra Performance?

skyline65

Explorer
Joined
Jul 18, 2014
Messages
95
A quick question I hope.

I have a FreeNAS 24-bay server comprising four pools:
Media 1 – 6 x WD Red 3TB RAIDZ2
Media 2 – 4 x WD Red 3TB RAIDZ2
Server – 4 x WD Red 3TB RAIDZ2 – general stuff that I have backed up elsewhere and don't want cluttering my Mac
Backup – 4 x WD Red 3TB RAIDZ2 – backup of my computer and all my work

I use QRecall backup software on my Mac, which creates large archives of my 4 work drives of between 400GB and 1TB. Backing up is pretty quick as it de-dupes on the fly; the issue is that a verify or compact takes ages... although it's much quicker now I'm on 10GbE! Is there any way of adding extra speed to the process? I'm not in desperate need, as it can run while I'm asleep, but as with all FreeNAS users there is an itch to see if I can improve the performance without breaking the bank.



X11SSL-CF motherboard, i3-6100, 64GB ECC RAM, Chelsio T420, LSI 9201
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I am not familiar with the backup software you're using, but if it is doing incremental merges, then your pool design is wrong. Most incremental merges are bottlenecked by the available random IOPS of the backing storage. You would be better off rebuilding your array as a single pool comprised of 3 x 6-disk RAIDZ2 vdevs, as that would roughly triple your random IOPS and increase your sequential write speeds.
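To illustrate the suggested layout, a single pool built from three 6-disk RAIDZ2 vdevs could be sketched like this. This is only a sketch: the pool name `tank` and device names `da0`–`da17` are placeholders, and on FreeNAS you would normally create the pool through the GUI, which also handles partitioning and ashift for you.

```shell
# Hypothetical layout: one pool, three 6-disk raidz2 vdevs
# (da0..da17 are placeholders for the 18 x 3TB WD Reds)
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17

# Confirm the vdev layout afterwards
zpool status tank
```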
 

skyline65

Explorer
Joined
Jul 18, 2014
Messages
95
QRecall works by scanning the folder to be backed up, comparing it with the archive, and only adding files that aren't duplicates. It normally only takes a minute or two to back up any data. Usually throughput is around 2–5GB/min, but a recent system upgrade somehow changed permissions on the source folder, and now that it has been corrected QRecall is re-checking all the files (a good thing!).


I should have mentioned the total size of the backups are only about 4TB.

I'm not too fussed about the speed of the backups, as they are pretty quick and don't impact my work. It's the verify, compact or merge that tends to take time, as you'd expect with 500GB+ archives, although they are scheduled for 3am so I should be asleep. The merge is a rolling merge.


For me it's the verification I would like to speed up, and if the other aspects improve that would be a bonus, but not a game changer as it runs at 3am.

As I only have 4 x WD Red 3TB in RAIDZ2 for the backup pool (the other drives won't be used, as they are the general storage and media pools), could I rebuild by adding a few more disks, and/or would an L2ARC or SLOG etc. (dirty words) help? I have read the horror stories of them slowing the system down. However, as it is a home setup for my media and for backing up my freelance work, I can live with limitations, although it's always good to try and improve it.

As always I appreciate the words of wisdom.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Unless you have some unstated reason for breaking your storage into 4 separate pools, you are only decreasing the performance and capacity of your array. Again, you would be better served by rebuilding the array into a single pool of 3 x 6-drive RAIDZ2 vdevs. You can still separate the single pool into the four uses you listed above; it would just be done with datasets. Datasets allow you to set discrete permissions, quotas for space consumption, recordsize, compression, etc.
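As a sketch, the old four-pool split could be recreated as datasets on a single pool along these lines. The pool name `tank`, the dataset names, and the quota/recordsize values here are placeholders for illustration, not recommendations:

```shell
# Hypothetical datasets mirroring the previous four-pool layout
zfs create tank/media1
zfs create tank/media2
zfs create tank/server
zfs create tank/backup

# Per-dataset tuning examples: quota, recordsize, compression
zfs set quota=12T tank/media1
zfs set recordsize=1M tank/media1     # suits large sequential media files
zfs set compression=lz4 tank/backup   # cheap on CPU, often a net win
```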
For me it's the verification I would like to speed up, and if the other aspects improve that would be a bonus
The verification will be faster with the 3 x 6-drive RAIDZ2 design, as both random and sequential reads will be faster on the 3-vdev pool. As an added bonus, you will also gain back two drives' worth of storage that is currently devoted to parity, without sacrificing the RAIDZ2 resiliency.
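The parity saving works out as follows, counting data (non-parity) disks in each layout:

```shell
# Current layout: four raidz2 vdevs of 6, 4, 4 and 4 disks,
# each raidz2 vdev losing 2 disks to parity
echo $(( (6-2) + (4-2) + (4-2) + (4-2) ))   # 10 data disks today

# Proposed layout: 3 x 6-disk raidz2, still 2 parity disks per vdev
echo $(( 3 * (6-2) ))                        # 12 data disks, i.e. 2 x 3TB more usable
```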
 

skyline65

Explorer
Joined
Jul 18, 2014
Messages
95
I couldn't afford 18 drives. When I made the pools I bought what I could afford at the time, and worried that, knowing my luck, too many drives would die in a vdev and destroy the pool. Or have I misinterpreted Cyberjock's comment that "If any vdev in a zpool fails, then all data in the zpool is unavailable" as meaning the whole pool is dead?

I think I could go 12 x 3TB for Media in one pool (2 x 6-drive Z3?) and 10 x 3TB for Server (2 x 5-drive Z2?).
Just to keep work and pleasure apart; plus the Server is backed up to Google Drive, whereas the media is all on DVD, CD and BD. Maybe I should buy a couple of externals for media backup.
 

skyline65

Explorer
Joined
Jul 18, 2014
Messages
95
I have both but not mixed. SMB seems to be a bit of a mess in Mac OS X, but for media etc. I use SMB with no issues with my Kodi box or when transferring from the Mac to the media server.
I switched to AFP for Time Machine and the Backup/Server shares, as it is quicker over 10GbE and seemed to suffer less stutter in transfers, although both protocols do to some degree. It seems that when transferring large files it will move 3–4GB quickly, then pause... then slowly speed up again.

I do have another 16GB of RAM to take FreeNAS to 64GB, but I haven't installed it yet, just in case my Samsung PM863 SSD would be of use. It's a pain getting the server out and installing components; I'd rather do it all at once.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I couldn't afford 18 drives. When I made the pools I bought what I could afford at the time, and worried that, knowing my luck, too many drives would die in a vdev and destroy the pool. Or have I misinterpreted Cyberjock's comment that "If any vdev in a zpool fails, then all data in the zpool is unavailable" as meaning the whole pool is dead?

I think I could go 12 x 3TB for Media in one pool (2 x 6-drive Z3?) and 10 x 3TB for Server (2 x 5-drive Z2?).
Just to keep work and pleasure apart; plus the Server is backed up to Google Drive, whereas the media is all on DVD, CD and BD. Maybe I should buy a couple of externals for media backup.
For a quick overview: a pool is made of vdevs, and vdevs are made of disks. A 6-disk RAIDZ2 vdev has 4 drives for data and 2 for parity, so the vdev can withstand losing up to 2 drives without failing. When a pool contains more than one vdev, the data is striped across the backing vdevs, which means that if one of the vdevs were to fail, the pool would be lost. That said, if you're performing regular SMART tests and have set up notifications, it's rather unlikely you'll lose a RAIDZ2 vdev. From what you've stated above, there are 18 x 3TB drives, which would divide up nicely into 3 x 6-drive RAIDZ2 vdevs. The reason I'm suggesting the pool with 3 vdevs is the significant performance advantage over a single-vdev pool, which, based on this thread, is what you were looking for: increased performance. ;)
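On the SMART-testing point: FreeNAS schedules these from the GUI, but the equivalent one-off checks can be sketched with smartctl from the shell (the device name `da0` is a placeholder for one of your drives):

```shell
# Kick off a long (full-surface) self-test on one drive
smartctl -t long /dev/da0

# Later, check the overall health verdict and the self-test log
smartctl -H /dev/da0
smartctl -l selftest /dev/da0
```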
 

skyline65

Explorer
Joined
Jul 18, 2014
Messages
95
I do appreciate your advice. I didn't realise that vdev data is striped across the backing vdevs; I'm not sure, off the top of my head, if it is mentioned in Cyberjock's noob guide.

I'm quite anal about SMART tests and notifications. I followed Cyberjock's suggestions and have regular notifications and also config backups.

I did forget to mention the reason I have 4-disk pools. When I started getting into FreeNAS I bought an N54 Microserver, hence 4 disks; I then bought another one... and then bought a G8 Microserver. Hence so many 4-disk pools. When I decided to go the Supermicro 24-bay way, all I did was import the pools, then decided to buy 6 more drives and make another pool.

I still keep thinking about my LaCie 5 x 4GB SCSI RAID 5 setup at work from the 90s... it was so slow, and that has stuck in my mind when thinking about more drives in a vdev making things slow. I know, it was the 90s!

Okay, I may buy some more drives, copy the data off, and make either one big pool or a personal and a work one (better for tax!).
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I do appreciate your advice. I didn't realise that vdev data is striped across the backing vdevs; I'm not sure, off the top of my head, if it is mentioned in Cyberjock's noob guide.

I'm quite anal about SMART tests and notifications. I followed Cyberjock's suggestions and have regular notifications and also config backups.

I did forget to mention the reason I have 4-disk pools. When I started getting into FreeNAS I bought an N54 Microserver, hence 4 disks; I then bought another one... and then bought a G8 Microserver. Hence so many 4-disk pools. When I decided to go the Supermicro 24-bay way, all I did was import the pools, then decided to buy 6 more drives and make another pool.

I still keep thinking about my LaCie 5 x 4GB SCSI RAID 5 setup at work from the 90s... it was so slow, and that has stuck in my mind when thinking about more drives in a vdev making things slow. I know, it was the 90s!

Okay, I may buy some more drives, copy the data off, and make either one big pool or a personal and a work one (better for tax!).
As far as too many drives in a vdev goes, the rule of thumb is don't go "too wide", which I've read as meaning no more than 10–12 drives per vdev.
 