Back in the day it was nice: `apt-get update && apt-get upgrade` and you were done.
But today every tool/service has its own way of being installed and updated:
- docker:latest
- docker:v1.2.3
- custom script
- git checkout v1.2.3
- same but with custom migration commands afterwards
- custom commands change from release to release
- expects updates to be run as a specific user
- update nginx config
- updates its own default config, and the service depends on those config changes
- expects newer versions of other tools
- etc.
I selfhost around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have some dependencies which need to be updated too.
And nowadays you can’t really keep running on an older version especially when it’s internet facing.
So anyway, what are your strategies for keeping your sanity while keeping all your self-hosted services up to date?
I’d definitely go with Renovate + ArgoCD, or any other GitOps-based tooling.
Ansible. Basically, if I need to upgrade something for the first time, I write or extend an Ansible script and run those periodically.
Renovate coupled with FluxCD if you’re in k8s land, or doco-CD if you’re on Docker. GitOps is the way.
Proxmox helper scripts - at least the ones I use - come with an "updateable" tag. Those tagged have an `update` command that runs everything necessary on containers, VMs, whatever. Makes life simple, mostly.
The only manual interaction I’ve had was upgrading some VMs’ Debian from 12 to 13.
Everything I run, I deploy and manage with ansible.
When I’m building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it’s time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.
It’s more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
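A rough sketch of that tagging pattern - the service name, image variable, and migration command here are all made up, not from any real role:

```yaml
# roles/myservice/tasks/upgrade.yml (hypothetical example)
- name: Pull the new container image
  containers.podman.podman_image:
    name: "docker.io/example/myservice:{{ myservice_version }}"
  tags: [upgrade]

- name: Run the service's migration step
  ansible.builtin.command: /opt/myservice/bin/migrate
  tags: [upgrade]

- name: Restart the service to load the new version
  ansible.builtin.systemd:
    name: myservice
    state: restarted
  tags: [upgrade]
```

With every role tagged like this, the infrastructure-wide run is a single `ansible-playbook site.yml --tags upgrade`.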
+1 for Ansible. There’s a module for almost everything out there.
Renovate + GitOps. Check out https://github.com/onedr0p/cluster-template
If you don’t like Kubernetes, you can get a similar setup with doco-CD. The only limitation is that doco-CD can’t update itself, but you can use SOPS and Renovate all the same for the other services.
That or Komodo when using docker. Renovate is really good, you always know which version you’re at, you can set it up to auto merge on minor and/or patch level, it shows you the release notes etc.
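The auto-merge behaviour mentioned above is a small `packageRules` block in `renovate.json`; a sketch (tune the match rules to your own risk tolerance):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ]
}
```

Major updates then still arrive as PRs with release notes attached, while minor/patch bumps merge on their own.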
This tutorial is good: https://nickcunningh.am/blog/how-to-automate-version-updates-for-your-self-hosted-docker-containers-with-gitea-renovate-and-komodo
I run most of my services in containers with Podman Quadlets. One of them is Forgejo on which I have repos for all my quadlet (systemd) files and use renovate to update the image tags. Renovate creates PRs and can also show you release notes for the image it wants you to update to.
I currently check the PRs manually as well as pulling the latest git commits on my server. But this could also be further automated to one’s liking.
- use APT repositories when possible -> then unattended-upgrades
- for OCI images that do not provide tagged releases (looking at you searxng…), podman auto-update
- for everything else, subscribe to the releases RSS feed, read the release notes when they come out, check for breaking changes and possibly interesting stuff, update the version in the ansible playbook, deploy the ansible playbook
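For reference, the unattended-upgrades part on Debian/Ubuntu boils down to enabling the periodic APT jobs, e.g.:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

This is the same file `dpkg-reconfigure unattended-upgrades` writes; which packages are eligible is controlled separately in `50unattended-upgrades`.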
Since all my services are dockerized I just pull new images sporadically. But I think I should invest some time into finding automatic update reminders, especially when I have to hear about critical security updates from some random person on mastodon.
I switched to dockhand and it handles that nicely, including scanning for vulnerabilities in new images.
Snapshots and

```shell
for i in $hosts; do ssh -tt "$i" "sudo apt update && sudo apt upgrade -y"; done
```

For docker/k8s: argocd, helm, etc.
Wow, that sounds like a nightmare. Here’s my workflow:
```shell
nix flake update
nixos-rebuild switch
```

That gives me an atomic, rollbackable update of every service running on the machine.
One of the reasons I switched to YunoHost (the other being backups).
All my services run in podman containers managed by systemd (using quadlets). They usually point to the :latest tag and I’ve configured the units to pull on start when there is a new version in my repository. Since I’m using openSUSE MicroOS, my server (and thus all services) restarts regularly.
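The "pull on start" behaviour can be expressed directly in the quadlet file; a rough sketch (the image name is a placeholder, and `Pull=` needs a reasonably recent podman):

```ini
# ~/.config/containers/systemd/myservice.container (hypothetical unit)
[Container]
Image=docker.io/example/myservice:latest
# pull a newer :latest at container start, if the registry has one
Pull=newer
# alternatively, let podman-auto-update.timer replace it while running
AutoUpdate=registry
```

With `AutoUpdate=registry`, enabling `podman-auto-update.timer` gives you the daily check described in the replies below.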
For the units that are configured differently, I update the versions in their respective ansible playbooks and redeploy (though I guess I could optimize this a bit, I’ve only scratched the surface of ansible).
Podman automatically updates my containers for me.
Because you point to :latest and everything is dockerized and on one machine? How does it know when it’s time to upgrade?
Yeah only for :latest containers, that’s true. It automatically runs a daily service to check whether there are newer images available. You can turn it off per container if you don’t want it.
One of the nice things about it is that I have containers running under several different users (for security reasons) so that saves me a lot of effort switching to all these users all the time.
It’s bad practice to use the :latest tag.
Depends on what you want to do. For production with sensitive data, yes it is. For my ytdl and jellyfin? Perfectly fine.
Depends. There are a few things I update by hand, but as long as you have proper backups it’s generally safer to run the latest versions of things automatically if you don’t mind the possibility of breakage (which is very rare in my experience). This is in the context of self-hosting of course, not a business environment.