Backup Solution? HDD vs M-DISC vs Other?

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Apologies, I have no such setup. However, I’d also like to think that my kids will be smart enough to figure it out if I leave them the disks with a note. If not, they don’t really deserve the pictures I took of them! :smile:

I’ll have to check out those multi-layer disks some day. For now, I’m using the ones with fewer layers, but multi-layer disks certainly make a lot of sense for bulk archival storage. The price point for the 100GB disks is somewhat eye-watering though - it makes you want to throw out the RAW files and otherwise reduce bulk before archiving.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Apologies, I have no such setup. However, I’d also like to think that my kids will be smart enough to figure it out if I leave them the disks with a note. If not, they don’t really deserve the pictures I took of them! :smile:

I’ll have to check out those multi-layer disks some day. For now, I’m using the ones with fewer layers, but multi-layer disks certainly make a lot of sense for bulk archival storage. The price point for the 100GB disks is somewhat eye-watering though - it makes you want to throw out the RAW files and otherwise reduce bulk before archiving.

Agree,

I was just looking at common (not just the best deal possible) prices.

Verbatim 100Gb Branded Surface BDXL M-Disc; 5x pack (500Gb) x $58 ($0.116/Gb)
Verbatim 50Gb Branded Surface BDDL M-Disc; 25x pack (1250Gb) x $140 ($0.112/Gb)
Verbatim 25Gb Branded Surface BD-R M-Disc; 5x pack (125Gb) x $16 ($0.128/Gb)

The 50Gb is the best cost per capacity, but not by enough to worry over. The 100Gb is barely more expensive, and the difference is only noticeable when you're talking several terabytes. But if your goal is only 1~2TB of archival storage, it's very reasonable considering it will last a very long time, survive the environment in case it is exposed, doesn't take up much physical space to store, and will be readable by common, inexpensive hardware with common connection standards. The 100Gb is more convenient in that it means fewer discs, less swapping, and less fuss for a rebuild or for someone else to access if needed.
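Just to sanity-check those numbers, the cost-per-capacity figures can be reproduced in a couple of lines (prices as quoted above):

```python
# Cost per Gb for the three Verbatim M-Disc packs quoted above.
packs = {
    "100Gb BDXL, 5-pack": (500, 58),    # (total Gb in pack, pack price in $)
    "50Gb BDDL, 25-pack": (1250, 140),
    "25Gb BD-R, 5-pack":  (125, 16),
}

cost_per_gb = {name: price / gb for name, (gb, price) in packs.items()}
for name, cost in cost_per_gb.items():
    print(f"{name}: ${cost:.3f}/Gb")
```

As the post says, the 50Gb pack edges out the others, but only by fractions of a cent per Gb.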

I picked up some 100Gb ones; as soon as I have a written disc, I will test it on various devices to see what can read it.

Very best,
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I’d go M-Disc simply because of the small delta in net cost. The much bigger cost is time. With M-Disc, you’ll only have to do it once.
Now see, I love this forum because we don't have to agree, it's okay to have an opinion that is different.

After seeing that an M-Disc can be read in a standard drive (DVD/Blu-ray), I agree that the cost is not a huge determining factor. However, as for doing it once, nope, that is not reality. As you add, let's say, more photos and other data (because as a society we tend to hoard data), you will need to create more M-Disc media. How often you do that is a personal decision; I myself would do it every six months. That might mean writing a single new M-Disc to add to your collection, or, if your data is growing quickly, adding multiple pieces of media. So it's never a one-time thing. Of course, maybe I misunderstood the comment; very possible.

I'm curious how resistant the M-Disc is to scratches, on both sides of the media. That is typically the damage I find on CD/DVD/BR media, not sun-baked discs.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I'd wager it depends on the use case.

For example, you could "bin" these disks by category and then sequentially add content as the years pass until a disk is full, then start a second one. That way, you can accommodate data creep yet never have to throw away disks as obsolete. It's more expensive up front, as you'll likely have to buy more disks.

Alternatively, create entire backup sets that note the last date something was added, then only add newer content to the next set of disks, ideally in 25, 50, or 100GB chunks to match the media. You could use a blank text file stored in each directory of the stored data, assuming you name your files/folders chronologically (for example, I use year-month-day as the start of every file/folder name). Then you know without looking what was backed up and up to what point.
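Since `year-month-day` prefixes sort lexicographically in date order, picking out what is new since the last burn is almost a one-liner. A minimal sketch (the function name and layout are my own invention, not from the thread):

```python
# Sketch: list date-prefixed folders newer than the last archived date.
# Assumes folder names start with a YYYY-MM-DD prefix, as described above,
# so a plain string comparison doubles as a date comparison.
from pathlib import Path

def folders_to_archive(root: Path, last_archived: str) -> list[str]:
    """Return subfolder names whose date prefix sorts after last_archived."""
    return sorted(
        p.name for p in root.iterdir()
        if p.is_dir() and p.name[:10] > last_archived
    )
```

The returned list is what goes onto the next 25/50/100GB disc, and the `last_archived` string itself could live in the blank text file mentioned above.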

Single-sided DVDs and CDs usually consist of a single polycarbonate disk with a thin reflective/dye layer, followed by a thin coating of conformal epoxy or the like on top. That top layer is easily scratched and, unlike the polycarbonate layer below, it cannot be polished out. Most of my CDs were ruined by scratches on the side with the content description, not the polycarbonate side. That conformal coat is super soft and easy to penetrate.

M-Discs are built like double-sided DVDs, using two polycarbonate disks, so the data is better protected. Additionally, the layer getting burned is much more durable than the dye layers typically used in writable DVDs and CDs. That's why M-Discs need a stronger laser - they actually burn material away rather than changing a dye. The end result is closer to a commercially-made CD, where the pit pattern is pressed, not burnt.

MDisk.jpg

 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
While it's true that any optical media can be gouged and scratched, if it's used for archival backups then it's not meant to be some free-floating disc in a bag, handled commonly and carelessly. I imagine one would take care to create it, handle it, and store it carefully. Kind of like how no one copies a bunch of data to a large HDD and then tosses it, literally tosses it, into a bag with other stuff that will rub and bang around. I'm not saying we need to white-glove the optical discs, but scratching shouldn't be any more of a concern than dropping an HDD is a risk; butterfingers produce the same issue no matter the medium you're handling.

Very best,
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
No doubt, this media will likely handle conditions that no HDD would survive. Its longevity is unparalleled: cosmic rays are not going to cause unwanted bit flips, the discs shrug off water immersion, and so on. So M-Disc media has a very specific use for which it is VERY good, and it is priced accordingly. Other storage media (like 2.5" HDDs) can offer even higher storage density, read/write speeds, etc., though with the caveats that the data may not be as persistent and the carrier is more fragile (no baths, please), so the risk profile is different.

M Disc makes a lot of sense for long-term documents (high quality scans of birth certificates and like documentation) that your descendants might want for years to come. Ditto the top 50 pics per year for each child (that still comes out to over 1,000 pics by the time they graduate college, surely enough to satisfy the next set of in-laws at the rehearsal dinner?). Similarly, you may do high-quality scans of family heirlooms like important photographs that would allow your kids to each print a copy.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
M Disc makes a lot of sense for long-term documents (high quality scans of birth certificates and like documentation) that your descendants might want for years to come. Ditto the top 50 pics per year for each child (that still comes out to over 1,000 pics by the time they graduate college, surely enough to satisfy the next set of in-laws at the rehearsal dinner?). Similarly, you may do high-quality scans of family heirlooms like important photographs that would allow your kids to each print a copy.

This is a very good point; a lot of younger people totally neglect this kind of thing until tragedy strikes. Having documents ready and able to last through time is a big deal for all of us at one point or another. Having high-resolution copies of legal documents, certificates, wills, etc., is pretty important. While a physical copy is needed, depending on where you are in the world it is not always nearby, and sometimes circumstances don't allow physical travel or risking it in the mail; a digital version, reproduced and notarized, can be a nice option to avoid major headaches.

I think, again, the more elaborate the backup solution, the more potential issues. So far, just evaluating everything we have available that isn't crazy priced, not much comes close to optical in terms of survivability of the medium itself and total simplicity of accessing the data on it.

Flash/solid state would be even easier: higher capacity, faster, fairly priced, and simply the most convenient right now, I think. But there's just not enough data to show how this memory behaves over time. Even the enterprise parts are not there yet. I don't think anyone is ready to commit important data to 30+ years of cold storage on solid state and expect it to be error-free upon retrieval. But if this tech ever matures to the point that it becomes stable for a century under ideal conditions, that would be a huge deal.

Very best,
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
On this topic then:

What do you all use for backup in terms of software for scheduling or automation, be it local or over a network?

What precautions are used, if using automation, to avoid writing errors over good data onto the backups?

Very best,
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Snapshots enabled on the NAS.

Time Machine for hourly/daily/etc. backups.

Carbon Copy Cloner for the datasets to back up. On occasion, the backups run with a hash-generating function that ensures every file on each disk matches to the last bit.

Philosophy: the NAS is the reference; everything else is but a copy. Hence the focus on the Z3 array, etc.
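That hash-checking pass can be sketched in a few lines. This is just the concept (SHA-256 over every file, compared by relative path), not Carbon Copy Cloner's actual implementation:

```python
# Sketch: verify a backup tree matches the source tree to the last bit.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        f.relative_to(root).as_posix(): hashlib.sha256(f.read_bytes()).hexdigest()
        for f in root.rglob("*") if f.is_file()
    }

def backup_matches(source: Path, backup: Path) -> bool:
    """True only if both trees hold the same files with identical contents."""
    return tree_hashes(source) == tree_hashes(backup)
```

Comparing the two dicts catches missing files, extra files, and corrupted contents in one shot.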
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Snapshots enabled on the NAS.

Time Machine for hourly/daily/etc. backups.

Carbon Copy Cloner for the datasets to back up. On occasion, the backups run with a hash-generating function that ensures every file on each disk matches to the last bit.

Philosophy: the NAS is the reference; everything else is but a copy. Hence the focus on the Z3 array, etc.

Interesting, thanks. Are the snapshots available to be copied over the network, so that you could keep a backup of them separate from the NAS boot disk? Also, do you run redundancy on the boot disk (if non-USB, such as a mirrored boot device)? I'm thinking of running FreeNAS from an SSD, but I'm wondering if it should be redundant too, to avoid having to rebuild, and what happens to the snapshots if the host failed? I don't need insane uptime; I'm just thinking about it from the backup angle.

Very best,
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
One interesting concept that I have not tried is to burn a ZFS pool onto an M-Disc. It would be read-only, but you would be able to detect read faults. And if the data is not one monolithic file, recovery of all the other files is possible on a read error. Hmm, perhaps I need to do some testing using handy zvols (which I can set to read-only after filling).

If you do need to store large/huge files, breaking them up into smaller chunks, perhaps 250MB (back to the old UU-Split command?), would help. You create two or more M-Discs with the exact same data. Then if the first 250MB chunk is bad on disc 1, it may be good on disc 2.
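The chunk-and-duplicate idea can be sketched like this; the chunk size default and file naming are arbitrary choices of mine, not from the post:

```python
# Sketch: split a large file into fixed-size chunks and record a SHA-256
# per chunk. Burn the same chunk set to two discs; if a chunk fails its
# hash check on disc 1, pull just that chunk from disc 2 instead of
# losing the whole file.
import hashlib
from pathlib import Path

def split_with_hashes(src: Path, dest: Path, chunk_size: int = 250 * 1024 * 1024):
    """Write src as numbered .part files plus a manifest of chunk hashes."""
    dest.mkdir(parents=True, exist_ok=True)
    manifest = []
    with src.open("rb") as f:
        for i, block in enumerate(iter(lambda: f.read(chunk_size), b"")):
            part = dest / f"{src.name}.{i:04d}.part"
            part.write_bytes(block)
            manifest.append(f"{hashlib.sha256(block).hexdigest()}  {part.name}")
    (dest / f"{src.name}.sha256").write_text("\n".join(manifest) + "\n")
    return manifest
```

Reassembly is just concatenating the `.part` files in numeric order.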

Remember, even paranoid people can have enemies :smile:.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
The snapshots are local but it is possible to back them up between FreeNAS servers, using the replication process.

When I have too much time, I will try to set up a second FreeNAS server to do just that - create an initial copy of the pool on the main server, followed by replicating snapshots automatically via an SSH tunnel. That really gets you the best of all worlds, as being able to roll back snapshots is key to dealing with ransomware, for example. Plus, if the second server is off-site, you also address the off-site storage recommendation for backups without having to resort to the cloud.

Depending on the speed of the connection between the FreeNAS servers, block-level ZFS replication should be much faster than rsync. Additionally, based on painful personal experience, I'll note that latency has a huge impact on rsync performance, so unless you have really fat pipes connecting the servers, rsync may behave pretty dismally. ZFS replication is simply faster and more efficient in every respect.

That is really helpful should you suffer a local emergency, as you then have the option to drive to your backup server with a rebuilt main server and replicate everything back super fast, locally (say, over a 10GbE DAC). Or replicate everything back via the internet, though that may take ages and may trip data-cap limits at your ISP.

Thus, the combination of two servers at a distance and the ZFS replication feature set is a huge benefit over most cloud providers, who either provide the data at a relative snail's pace or charge $$$ to copy stuff to disk. Plus, ZFS gives you the benefit of bit-rot protection, which is a guarantee with neither data in the cloud (it depends on the provider) nor the disks they send you.

The only reason I currently do not use ZFS replication to my local backup arrays is that I want said backups to be readable by my home platform of choice. So that is why I suffer through the agony of exhaustive rsync-based, file-level replication instead of block-level, smart, ZFS replication.

Lastly, protecting that connection between servers is likely my biggest reason not to do this yet. I want to find a solution that does not expose the SSH port on the FreeNAS to the outside world, even if it's on a non-standard port number, only available during certain hours, etc. A port is a port, and a dedicated security appliance is likely called for.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Thanks,

I'm not sure I will be that deep into ZFS just yet; currently I plan to run FreeNAS with simple mirrors as pools, starting with one mirror to learn the system well enough before moving to a second mirror pool. So I shouldn't have to do much beyond replacing a drive and letting ZFS copy the mirror back. What I'm curious about is how to handle a mirror pool if the host server itself, like the boot drive, went kaput on me. Could I just attach the drives to a Linux distro that can read ZFS and copy the files off to another drive? Or could I simply spin up a fresh install of FreeNAS on a new boot drive (or reload via a snapshot?), import the same drives from the mirror pool, and be back up and running in a few minutes, basically?

Very best,
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Update on M-disc:

The drive & discs showed up in the mail today. I got right to work on it.

I just burned a 100Gb M-Disc (BDXL) with a typical drive, nothing costly or special, using free software (ImgBurn). It detected the full capacity (100Gb) and the layers of the BDXL without any effort or special settings, and the drive wrote the files to the 100Gb disc without any error, followed by a complete post-write verification at low speed (4x) to ensure accuracy of the data. The drive is an LG WH16NS40, manufactured May 2020 per its engraving, off Amazon ($65 new), running the latest firmware (nothing special needed for this, though). The disc is a Verbatim M-Disc BD-R branded-surface BDXL 100Gb multi-layer disc ($11 and change each; I bought a couple in a pack).

For this disc's content, I did our 2013 photos: just the RAWs and JPGs, that's it. It was 95Gb (after culling) so it fit just right on the 100Gb disc. I did not write on the disc with a marker, and will not, to avoid any chemical reaction over time. The file system is UDF, and it handled deep directory trees (8 levels) and long file and folder names without issue. This is read natively by any OS on the planet; I checked with Windows and Linux platforms just to satisfy my own curiosity.

This was successful and error-free, so this backup copy of the photos will now go into a fireproof/waterproof safe as the 3rd physical copy of the data (the other two physical copies live on redundant drives that are kept separated). It will be hardy to environmental conditions, won't suffer magnetic or dye/pigment issues, and has no mechanical failure modes. It should last my lifetime and then some in fairly well-controlled settings. $11 and change for the disc covers all our family photos for that year. I'll do each year separately until it's all archived; then it all goes into the safe.

I will test the disc in a separate drive, likely an external, to see how it is handled. Since these drives are so low-cost, keeping one or two new ones in their packaging to read the backups, and refreshing the hardware every decade or so, should handle file retrieval if needed.

So far this is about the cheapest true backup I can figure that will survive most issues, can be read with the most common and least expensive hardware, and has the least complexity, short of the cloud approach.

The capacity per cost is not the best compared to other media, but survivability of the medium was the goal, as a 3rd physical copy. Not all of my data will be treated this way, but things like family photos, documents, etc., will all be "engraved in stone", so to speak, for now, unless a better method comes up in the future.

LGDriveInstalled.jpg


Mdisc_100Gb_BDXL.jpg


ImgBurn_Success_Mdisc_100Gb.jpg


Very best,
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Update,

I just completed my 4th M-Disc burn; all data written and verified to be 1:1 bit-perfect copies. 400Gb down. This covers my family photos from 2013 to 2015 so far; I have a bit more to go. Each disc takes 1.5 hours to burn and another 1.5 hours to verify before I call it good and store it. Averaging 200Gb per year, this projection puts me at about $22 per year of discs for a 25+ year archival format for irreplaceable data (family photos, documents, scans, etc.) that can survive an EMP, has no chance of mechanical failure, and will live in a safe, being the 3rd physical copy of the information and completing the idea of a true backup. Amortized over the media's 25+ year life, each year's worth of discs works out to roughly $1 per year for most-of-life archival backups.
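For anyone checking the math, the projection is roughly as follows (figures from this and earlier posts; the 25-year amortization is my reading of it):

```python
# Back-of-envelope for the archival cost projection above.
gb_per_year = 200          # data archived per year, per the post above
cost_per_gb = 0.11         # ~$0.11/Gb for 100Gb BDXL M-Disc media
media_life_years = 25      # the 25+ year rating discussed in this thread

yearly_media_spend = gb_per_year * cost_per_gb
amortized_per_year = yearly_media_spend / media_life_years
print(f"${yearly_media_spend:.0f}/year of discs, ~${amortized_per_year:.2f}/year amortized")
```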

My server motherboard, CPU and RAM should arrive tomorrow. I hope to have it up and running by this week to report on that too and how it will work into this scheme as the 2nd physical copy of data with redundancy (mirror).

Very best,
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Great news and thanks for the update! If we experience an EMP, we'll likely have bigger fish to fry. :smile: But I take your point re: actually saving the files to a more permanent carrier than HDD or SSD.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Great news and thanks for the update! If we experience an EMP, we'll likely have bigger fish to fry. :smile: But I take your point re: actually saving the files to a more permanent carrier than HDD or SSD.

This is true; during the next solar maximum (2024~2026) it's possible we eat a juicy CME with a similar effect. It would be interesting if it did occur, as it would prompt awareness of this sort of real phenomenon.

Very best,
 

corponramp

Cadet
Joined
Sep 25, 2021
Messages
2
Any status update on this? Just curious, since I just found the thread - have you used any type of sequential or incremental backup methods/software?

thanks!!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I admit that I did not read all the posts in this thread. What strikes me, however, is that what I have read does not differentiate much between backup and archiving. Those are two very different things.

In that light, I would also suggest to anyone interested in archiving to first search the Internet. Seeing the many different aspects that fall into this category will certainly make it easier to develop a feeling for what is actually needed for one's requirements.
 

corponramp

Cadet
Joined
Sep 25, 2021
Messages
2
Thanks - I am more interested in the experience of archiving with the M-Disc solution. However, if we are looking at archiving documents that get updated (new and modified), videos, pics, etc., then one of the members alluded to a software solution in one of the posts ... so yeah ... I guess a hybrid approach to archiving data with copies of backups.
 