Are you updating 1000s of stacks every week? I update a couple of critical things maybe once a month, and the other stuff maybe twice a year.
I don’t recommend auto updates, because updates break things and dealing with that is a lot of work.
Documentation is for onboarding other people. Why on earth would I need to onboard other people to something self-hosted?


I enjoyed the depth of this answer. That being said…
4 copies seems like a level of paranoia that is not practical for the average consumer.
3 is what I use, and I consider that already a more advanced use case.
2 is probably most practical for the average person.
Why do I say this? The cost of the backup solution needs to be less than (the value of the data itself) × (the effort to recover the incrementally missing data, valued at your time) × (the chance of failure).
In my experience, very few people have data so valuable that they need such a thorough backup solution. Honestly, a $2 thumb drive can hold most of the data the average user would actually miss and couldn't easily find again by scouring online.
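To make that comparison concrete, here's a minimal sketch of the expected-value reasoning above. It simplifies the product into "value of the data plus recovery effort, weighted by the chance of loss"; all the dollar amounts and probabilities are hypothetical illustrations, not recommendations.

```python
# Rough expected-value check: is a given backup tier worth its cost?
# All numbers below are made-up examples for illustration only.

def backup_worth_it(data_value, recovery_effort_cost, p_loss, backup_cost):
    """Return True if the expected cost of losing the data
    (its value plus the effort to re-create it, weighted by the
    chance of loss) exceeds the cost of the backup solution."""
    expected_loss = (data_value + recovery_effort_cost) * p_loss
    return backup_cost < expected_loss

# A $2 thumb drive for data you'd value at ~$500 plus ~$100 of
# recovery effort, with a 5% chance of loss:
print(backup_worth_it(500, 100, 0.05, 2))    # True: cheap insurance

# A $300/year offsite tier for low-value, easily re-downloaded data:
print(backup_worth_it(50, 10, 0.05, 300))    # False: not worth it
```

The point isn't the exact numbers; it's that for most people the expected loss is small enough that one cheap extra copy already clears the bar, while a third or fourth copy usually doesn't.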
I guess it depends what you run, and how the projects/containers are configured to handle updates and “breaking changes” in particular.
But also, I'm being a bit broad with the term "breaking changes". Other kinds of "breaking changes" don't strictly crash the software but still cause work: projects that demand a manual database migration before they're operational again, a config change, or just a UI change that will confuse users.
The point is, a lot of projects demand user attention that completely eclipses the effort of actually running a docker update.