Wow, a lot of variation in this thread!
I get all my data to my server, then from there I have borgmatic do incremental backups to a backup drive on the same machine (nightly cronjob).
From there I use Rclone to get the encrypted borg backup to Backblaze B2 for cloud storage.
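In case it helps anyone, the nightly job is really just the two steps chained together. A rough Python sketch of the idea (the repo path, bucket name, and flags are placeholders, not my actual config):

```python
#!/usr/bin/env python3
"""Rough sketch of my nightly backup job (run from cron).

The paths, the B2 bucket name, and the flags are placeholders --
adjust for your own borgmatic/rclone setup.
"""
import subprocess
import sys

BORG_REPO = "/mnt/backup/borg-repo"          # local borg repository (placeholder)
B2_REMOTE = "b2:my-backup-bucket/borg-repo"  # rclone remote path (placeholder)

def run(cmd):
    """Run a command and stop the whole job if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # 1. Let borgmatic do the incremental (deduplicated) backup to the local drive.
    run(["borgmatic", "--verbosity", "1"])
    # 2. Push the (already encrypted) borg repo up to Backblaze B2 with rclone.
    run(["rclone", "sync", BORG_REPO, B2_REMOTE, "--transfers", "4"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(f"backup step failed: {err}")
```

Cron just calls this nightly; if either step fails the job bails out so I notice.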
So for 3-2-1, my 3 copies are the original, the local backup, and the cloud backup.
My 2 media are local hard drives and cloud storage (I think it’s fair to consider this a different kind of media).
And my 1 offsite is the cloud backup.
Now I’m dumb and have a fear of screwing something up, so I have also started burning M-Discs of my critical data (everything except TV/movie/music stuff I can redownload). Though this has been a lot more expensive than I was expecting: because of the aforementioned dumbness, I have already screwed up two discs (they are write once). I’m also doing two copies of each disc.
Also, I have photos/home videos additionally stored in ente; they are super important to me and I wanted a separate copy that someone else is looking after.
Thanks for the tip, very helpful since I’m looking for MAX safety!
What makes you say they have a shorter life span? The 25GB and 100GB both have the same “several hundred years” claim.
Thanks, I missed that post! Looks like the comment section would have answered a lot of my questions.
In the end I have pulled the trigger and bought an M-Disc capable burner and a stack of M-Discs, so I’m gonna give that a go and see how it works out.
🐑
Sweet, thanks, I think that’s a good plan. I am thinking duplicate discs, one on site and one off site. I do have a cloud backup, but if I die in a house fire then having the offsite discs is a much better solution than the random B2 bucket.
Thanks for the help 🙂
Seems the only M-Disc capable writer I can find locally is a portable one that connects over USB-C, so if I go with it I’ll probably just store it with the discs. In theory M-Disc is supposed to be resistant to the kinds of things that destroy regular optical discs, but making a second copy does sound like a good idea. I could even store the second copy somewhere else (another house) to protect against fire. I have a cloud backup, but you never know what’s going to happen over 50 years. Or if I die in the fire and no one knows I have the cloud backup.
Like this? https://www.pbtech.co.nz/product/DVWVER4618789/Verbatim-43888-External-Slim-Bluray-Writer-Ultra-H
That’s NZD by the way; conversion rates are terrible at the moment, so roughly halve it for USD. Seems to be in the price range you said.
The idea is that I’d swap out drives every 5 years or so. If USB-A is no longer in use, I’d swap at that point for something newer. Plus the drives would be powered on every year for the update; it’s only once I stop doing that (too old/hit by bus/etc.) that the clock would start ticking.
I do like the M-Disc idea though. Probably a similar price, and more in line with the shelf-stable solution I was looking for.
Everyone is saying to avoid flash memory. It doesn’t retain data well when left unpowered for long stretches.
Another suggestion given is M-Disc, which might be a better option because then anyone should be able to throw it in an optical drive and load it without having to worry about the format of the drive and things like that. And even if optical discs are not that common anymore, I think people will still know what they are and be able to find a way to read one even in 50 years. Like if someone found a cassette tape today (I know they were in common use less than 50 years ago, but it’s hard to think of a better example since vinyl records came back into fashion). Plus M-Discs are designed for long-term storage, so I could worry less about bitrot and files getting corrupted. They are write once, so I’m not going to write over existing content.
Yeah, it’s an interesting thought. They seem to go up to 100GB capacity, but the Wikipedia page claims (with a [dubious] qualifier) that you need some sort of special higher-powered burner to write to M-Disc.
I don’t have an optical drive at the moment. Would I just pick any rated for BDXL?
Yeah, since then I’ve been convinced I need two drives mirrored under zfs, which should handle that scenario.
You are the first person who has recommended SSD for cold storage. Everyone else (including what I’ve googled) says HDD for cold storage, just spin up every year or two and they will be fine. Can you point me at further reading?
Don’t worry, I’ll SMART check the drives each year as I do the update, and deal with any issues as required.
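The yearly check doesn’t need to be anything fancy; something like this rough sketch is what I have in mind (device paths are just examples, and it assumes smartmontools is installed):

```python
#!/usr/bin/env python3
"""Quick yearly SMART health check for the cold-storage drives.

Device paths are examples -- they will differ depending on how the
drives enumerate when plugged in. Needs smartmontools and root.
"""
import subprocess

DRIVES = ["/dev/sdb", "/dev/sdc"]  # the two cold-storage drives (placeholders)

for dev in DRIVES:
    # 'smartctl -H' prints the drive's overall SMART health assessment.
    # Drives behind a USB adapter may need an extra '-d sat' argument.
    result = subprocess.run(
        ["smartctl", "-H", dev], capture_output=True, text=True
    )
    status = "OK" if "PASSED" in result.stdout else "CHECK THIS DRIVE"
    print(f"{dev}: {status}")
    # If anything looks off, 'smartctl -a <dev>' gives the full attribute dump.
```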
As for types of drives dying out soon, I can reassess the situation every 5 years when I do drive replacement. I would be confident 2.5" drives will still be readable in 5 years.
I have automated backups including to the cloud, but I want a separate manual system that cannot get erased if I mess something up (accidentally sync a delete, lose an encryption key, forget to pay the cloud bill). I have 3-2-1, but it’s all automated and old backups eventually get replaced; if it’s not a critical failure I won’t necessarily know I’ve lost something.
Basically, I specifically want cold storage, not cloud. I will only add to it, never delete from it. And I don’t want it encrypted.
Based on other conversations I’m planning on using dual disks mirrored with zfs, annual updates and disk checks, and disks rotated out every 5 years (unless failing/failed). I’ll handle the need for layman retrieval of the data by including instructions with the hard drives.
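For anyone following along, the zfs side I’m picturing is pretty minimal. A rough sketch of the annual check, with the one-off pool creation shown in the comments (pool name and device IDs are made up):

```python
#!/usr/bin/env python3
"""Sketch of the annual check for the mirrored zfs cold-storage pool.

The pool name ('coldstore') and device IDs are placeholders. The
one-off pool creation (run once, by hand) would look roughly like:

    zpool create coldstore mirror /dev/disk/by-id/usb-DRIVE_A /dev/disk/by-id/usb-DRIVE_B
"""
import subprocess

POOL = "coldstore"  # placeholder pool name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Plug both drives in, then:
run(["zpool", "import", POOL])   # bring the pool online (skip if already imported)
run(["zpool", "scrub", POOL])    # start a scrub: reads and verifies every block,
                                 # repairing from the other mirror if needed
run(["zpool", "status", POOL])   # scrub runs in the background -- keep checking
                                 # status until it finishes with 0 errors, then
                                 # 'zpool export coldstore' before unplugging
```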
I’ve decided I should have a small number of physical prints, as extra redundancy. I’m thinking I’ll print 100 each year to store with the hard drive backup.
The printed photos are only there as an extra layer of redundancy in case everything else fails. It’s ok if they get discoloured a bit, it never put me off going through my grandparents’ suitcases of photos. Ideally the digital files survive, if not then at least there is something rather than nothing.
Is SSD really necessary? Everything I search up says SSDs have worse retention than HDD in cold storage. A couple TB of HDD is pretty cheap these days, and seems like a better cold storage option.
You can’t exactly make it fool-proof. Outside people will never know what you did to create your backup or what to do to access it. Who knows if the drive’s file system or file types will still be readable after 20 years? Who knows if SATA and USB connectors will still be around after that time?
Yes, so now I’m thinking a rotation cycle: about every 5 years, replace the drives with new ones and copy over all the data. If newer technology exists by then, I can move to it. This way I’m keeping it up to date for as long as I can.
For example, it is very likely that SATA will disappear within the next 10-15 years, as HDDs are becoming more and more an enterprise thing and consumers are switching to M.2 SSDs.
Does this matter if I have a SATA->USB cable stored with it? Other than if USB-A standards change or get abandoned for USB-C, but that should be covered by the review every 5 years.
I have a terrible track record with USB sticks, including completely losing a stack of photos because of a USB stick.
I’m now thinking the benefits of a nice error-correcting file system probably outweigh the benefits of using a widely supported one. So I might use a pair of mirrored hard drives with a SATA->USB cable, then include instructions along the lines of “plug into my Linux laptop to access, or take to a computer repair shop if you can’t work it out”.
Yip, I think this is the setup I will want (probably both - zfs plus a custom script for validation, just to be sure). Two mirrored drives. I do need to read up a bit more on zfs mirroring to understand it, but I think I have a path to follow now.
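The custom script part I’m imagining is just a checksum manifest on top of what zfs already verifies. A rough sketch (the paths and script name are placeholders):

```python
#!/usr/bin/env python3
"""Belt-and-braces validation on top of zfs: keep a manifest of SHA-256
hashes on the pool and re-verify it each year.

Usage (paths are placeholders):
    python3 verify.py build /coldstore/photos manifest.txt
    python3 verify.py check /coldstore/photos manifest.txt
"""
import hashlib
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build(root: Path, manifest: Path) -> None:
    # Record a hash for every file under root.
    with manifest.open("w") as out:
        for p in sorted(root.rglob("*")):
            if p.is_file():
                out.write(f"{sha256(p)}  {p.relative_to(root)}\n")

def check(root: Path, manifest: Path) -> None:
    # Re-hash everything and flag anything missing or changed.
    bad = 0
    for line in manifest.read_text().splitlines():
        digest, rel = line.split("  ", 1)
        p = root / rel
        if not p.is_file() or sha256(p) != digest:
            bad += 1
            print(f"MISMATCH OR MISSING: {rel}")
    print(f"done, {bad} problem file(s)")

if __name__ == "__main__":
    cmd, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    (build if cmd == "build" else check)(root, manifest)
```

Each year I’d run the check mode against both drives and eyeball the output before rotating anything.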
I have considered that exact message. It does seem making it easily plug and play may be out of the question if I want the error correction capabilities.
I don’t mind a call but I really appreciate the people who ask via message first!