Starting Podman Containers with Systemd
June 25, 2025
When I first started building my homelab, I’d spin up containers with Podman, configure my services, and feel pretty good about my setup. Then I’d reboot my server for updates and watch in horror as all my carefully configured containers just… stayed off. That’s when I learned the hard way that containers don’t magically restart themselves after a system reboot.
This led me down the rabbit hole of learning how to properly manage container lifecycles with systemd, and I want to share what I discovered about running Podman containers as both regular users and root.
Containers That Don’t Survive Reboots
Unlike Docker with its daemon that can be configured to restart containers automatically, Podman is daemonless by design. This is actually a feature, not a bug – it’s more secure and doesn’t require a privileged daemon running constantly. But it means you need a different approach to ensure your containers start with the system.
After some research (and a few frustrated evenings), I discovered that systemd is the perfect tool for managing Podman containers. Since most modern Linux distributions use systemd as their init system, we can leverage it to start, stop, and monitor our containers just like any other system service.
Method 1: User-Level Containers (Rootless)
I always prefer running containers as a regular user when possible – it’s more secure and follows the principle of least privilege. Here’s how I set up user-level container management in my homelab.
Step 1: Enable Lingering
First, I had to enable “lingering” for my user account. This tells systemd to start user services at boot even when the user isn’t logged in, and to keep them running after you log out.
sudo loginctl enable-linger username
Replace username with your actual username. This was a crucial step I initially missed, and I was left wondering why my containers weren’t starting after a reboot when I wasn’t logged in.
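You can verify that lingering actually took effect by querying logind:
# Confirm lingering is enabled (should print "Linger=yes")
loginctl show-user username --property=Linger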
Step 2: Generate the systemd Unit File
Podman has a feature that automatically generates systemd unit files for your containers. Here’s how I used it:
# First, create and run your container normally
podman run -d --name my-web-server \
-p 8080:80 \
-v /home/username/html:/usr/share/nginx/html:Z \
nginx:alpine
# Generate the systemd unit file
podman generate systemd --files --name my-web-server
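For reference, here’s roughly what the generated unit looked like on my machine. The exact contents vary between Podman versions, so treat this as an abridged sketch rather than something to copy verbatim:
# container-my-web-server.service (abridged)
[Unit]
Description=Podman container-my-web-server.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start my-web-server
ExecStop=/usr/bin/podman stop -t 10 my-web-server
Type=forking

[Install]
WantedBy=default.target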
Step 3: Install and Enable the Service
# Move the generated file to the user systemd directory
mkdir -p ~/.config/systemd/user
mv container-my-web-server.service ~/.config/systemd/user/
# Reload systemd and enable the service
systemctl --user daemon-reload
systemctl --user enable container-my-web-server.service
systemctl --user start container-my-web-server.service
Step 4: Verify It Works
# Check the service status
systemctl --user status container-my-web-server.service
# Test by stopping and starting
systemctl --user stop container-my-web-server.service
systemctl --user start container-my-web-server.service
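If the service fails to start, the container’s logs land in the journal. Note the --user flag again:
# Follow the logs for the user service
journalctl --user -u container-my-web-server.service -f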
Method 2: System-Level Containers (Root)
Sometimes you need containers to run as root – maybe for privileged operations or to bind to ports below 1024. Here’s how I handle system-level containers:
Step 1: Create the Container as Root
sudo podman run -d --name system-proxy \
-p 80:80 -p 443:443 \
-v /etc/nginx:/etc/nginx:Z \
nginx:alpine
Step 2: Generate and Install the System Service
# Generate the systemd unit file as root
sudo podman generate systemd --files --name system-proxy
# Move to system directory
sudo mv container-system-proxy.service /etc/systemd/system/
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable container-system-proxy.service
sudo systemctl start container-system-proxy.service
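To confirm the proxy actually came up and claimed its ports, I check the service status and the listening sockets (ss ships with iproute2 on most distributions):
# Verify the service is running and bound to ports 80/443
sudo systemctl status container-system-proxy.service
sudo ss -tlnp | grep -E ':(80|443)\b'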
Real-World Example: My Homelab Setup
Let me share how I applied both approaches in my homelab:
- Grafana (user-level): My dashboard system
- Nginx Proxy Manager (root-level): Reverse proxy for services
For Grafana, I created a user service:
# Create the container
podman run -d --name grafana \
-p 3030:3000 \
-v grafana-data:/var/lib/grafana \
grafana
# Generate and install the service
podman generate systemd --files --name grafana
mkdir -p ~/.config/systemd/user
mv container-grafana.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable container-grafana.service
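After enabling it, I start the service and hit Grafana’s health endpoint to make sure it’s actually serving (the container maps host port 3030 to Grafana’s default 3000):
systemctl --user start container-grafana.service
# Grafana answers on its built-in health endpoint
curl -s http://localhost:3030/api/health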
For Nginx Proxy Manager, since it needs to bind to ports 80 and 443, I ran it as root:
# Create the container
sudo podman run -d --name proxy-manager \
-p 80:80 -p 443:443 -p 81:81 \
-v proxy-data:/data \
-v letsencrypt:/etc/letsencrypt \
jc21/nginx-proxy-manager:latest
# Generate and install the service
sudo podman generate systemd --files --name proxy-manager
sudo mv container-proxy-manager.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable container-proxy-manager.service
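Nginx Proxy Manager serves its admin UI on port 81 by default, so a quick header check tells me it’s alive:
sudo systemctl start container-proxy-manager.service
# The admin UI should respond on port 81
curl -I http://localhost:81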
Lessons Learned and Best Practices
Through trial and error, I discovered several important practices:
Use named volumes or bind mounts for persistent data. If you generate your units with the --new flag, the container is deleted and recreated each time the service starts, so anything stored only in the container filesystem will be lost.
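To see the difference, regenerating the unit with --new produces an ExecStart that runs podman run instead of podman start. This is a sketch of the idea; the exact flags Podman emits depend on the version:
# Recreate the container from the image on every service start
podman generate systemd --new --files --name my-web-server
# The generated ExecStart then looks roughly like:
#   ExecStart=/usr/bin/podman run --rm -d --replace --name my-web-server -p 8080:80 ... nginx:alpine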
Test your services thoroughly. After setting up systemd services, always reboot your system to make sure everything comes back up correctly. I learned this the hard way when my Prometheus service failed to start after a reboot due to a missing volume mount.
Monitor your services. Use systemctl status and journalctl to keep an eye on your containers. For user services, don’t forget the --user flag. Next, I’ll be monitoring my container services with Grafana.
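Since systemctl accepts glob patterns, I can check every container service at once:
# List all container units and their states
systemctl --user list-units 'container-*'
# Inspect one service's recent logs
journalctl --user -u container-grafana.service --since today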
Troubleshooting Common Issues
Containers not starting after reboot: Make sure you’ve enabled lingering for user services and that your systemd unit files are in the correct directories.
Permission errors: Double-check your volume mounts and SELinux labels (use the :Z suffix on bind mounts).
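On an SELinux system you can check whether the relabel happened with ls -Z; after a :Z mount, the content should carry the container_file_t type:
# Inspect SELinux labels on the bind-mounted path
ls -Z /home/username/html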
Network issues: Remember that rootless containers use different networking. You might need to configure port forwarding or use host networking for some services.
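One option if a rootless container must bind a low port is lowering the unprivileged-port threshold with sysctl. It’s a system-wide change with security trade-offs, so weigh it before using it; the sysctl.d file name below is arbitrary:
# Let unprivileged processes bind ports from 80 upward
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
# Persist the setting across reboots
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-rootless-ports.conf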
Conclusion
Learning to properly manage Podman containers with systemd transformed my homelab. My containers now survive reboots, updates, and even the occasional power outage.