26 Commits

83c0547605  initial finished post  (2024-09-07 22:05:57 +00:00)
d2cdaa024d  add notice and update app link  (2023-10-15 20:54:10 +00:00)
b53db5ea29  Merge pull request #10 from skoobasteeve/podman-auto-update  (Ray Lyon, 2023-10-08 13:33:37 -07:00)
974e6abc64  final fixes  (2023-10-08 20:32:16 +00:00)
1677407e63  final draft for review  (2023-10-08 20:26:57 +00:00)
5069405ad6  fix email  (Ray Lyon, 2023-10-03 14:09:47 +00:00)
7149fca6e5  Merge pull request #9 from skoobasteeve/nextcloud-podman-part2  (Ray Lyon, 2023-10-03 09:21:34 -04:00)
4fd878bf88  final edits  (Ray Lyon, 2023-10-03 13:18:49 +00:00)
268a2688c2  updated dates  (Ray Lyon, 2023-10-03 12:56:23 +00:00)
b9479367aa  remove social share buttons  (Ray Lyon, 2023-10-03 00:50:35 +00:00)
7cad06cd58  consistent tabs vs spaces  (Ray Lyon, 2023-10-03 00:47:15 +00:00)
ba178e07c7  new avatar  (Ray Lyon, 2023-10-03 00:36:07 +00:00)
38548ac908  working final draft  (Ray Lyon, 2023-10-03 00:32:10 +00:00)
d52620607b  link to part 2 from part 1  (Ray Lyon, 2023-10-02 18:35:08 +00:00)
3db8970208  final draft before testing  (Ray Lyon, 2023-10-02 18:29:11 +00:00)
3ff11270fc  arm gems  (2023-10-01 22:56:48 +00:00)
b473a3297c  first draft complete  (2023-10-01 16:39:49 +00:00)
e744855dcf  save point - moving to production  (2023-10-01 13:55:38 +00:00)
9796edbd9c  save point - working systemd configs  (2023-09-30 21:54:47 +00:00)
e381e5adbf  post scaffold  (2023-09-24 22:40:41 +00:00)
aa4bba1b72  new blog email  (2023-08-27 15:42:16 +00:00)
01fa59afa6  update email address  (2023-08-27 15:34:19 +00:00)
021b15423a  fix repo url  (2023-08-27 14:41:32 +00:00)
55c9d64b8c  Merge pull request #8 from skoobasteeve/nextcloud-podman  (Ray Lyon, 2023-08-27 10:07:52 -04:00)
0bb96f6308  update date  (2023-08-27 14:07:26 +00:00)
8732edbd28  finish post  (2023-08-27 14:06:40 +00:00)
16 changed files with 843 additions and 56 deletions


@@ -15,6 +15,7 @@ GEM
     faraday-net_http (3.0.2)
     ffi (1.15.5)
     forwardable-extended (2.6.0)
+    google-protobuf (3.24.0-aarch64-linux)
     google-protobuf (3.24.0-x86_64-linux)
     http_parser.rb (0.8.0)
     i18n (1.14.1)
@@ -77,6 +78,8 @@ GEM
     rouge (4.1.3)
     ruby2_keywords (0.0.5)
     safe_yaml (1.0.5)
+    sass-embedded (1.64.2-aarch64-linux-gnu)
+      google-protobuf (~> 3.23)
     sass-embedded (1.64.2-x86_64-linux-gnu)
       google-protobuf (~> 3.23)
     sawyer (0.9.2)
@@ -88,6 +91,7 @@ GEM
     webrick (1.8.1)

 PLATFORMS
+  aarch64-linux
   x86_64-linux

 DEPENDENCIES


@@ -23,7 +23,7 @@ name : "Ray Lyon"
 description : "Linux, self-hosting, and privacy."
 url         : "https://rayagainstthemachine.net"
 baseurl     : # the subpath of your site, e.g. "/blog"
-repository  : "skoobasteeve/skoobasteeve.github.io.2"
+repository  : "skoobasteeve/rayagainstthemachine.net"
 teaser      : # path of fallback teaser image, e.g. "/assets/images/500x300.png"
 logo        : # path of logo image to display in the masthead, e.g. "/assets/images/88x88.png"
 masthead_title : # overrides the website title displayed in the masthead, use " " for no title
@@ -100,7 +100,7 @@ author:
     url: "https://keybase.io/scubasteve/pgp_keys.asc?fingerprint=2dc3a1066bba7040fe7963d9e20106cb86fe0b4d"
   - label: "Email"
     icon: "fas fa-fw fa-envelope-square"
-    url: "mailto:ray@raylyon.net"
+    url: "mailto:ray@rayagainstthemachine.net"
   - label: "Keybase"
     icon: "fab fa-fw fa-keybase"
     url: "https://keybase.io/scubasteve"
@@ -276,7 +276,7 @@ defaults:
       author_profile: true
       read_time: true
       comments: true
-      share: true
+      share: false
       related: true
       classes: wide
       show_date: true


@@ -10,13 +10,16 @@ comments: true
 ![kalendar01](/assets/images/screenshots/kalendar01.png){:class="img-responsive" .align-center}
+{: .notice--info}
+***UPDATE 2023-10-15**: Kalendar's name has changed to "Merkuro Calendar". Still a great app!*
 2022 was a great year for my Python skills. I had some unique problems to solve in my day job that got me over the hump of learning the language, and finally I was able to write comfortably without Googling syntax every five minutes. Quickly my team's Github repo filled up with borderline-unnecessary one-off scripts to solve all sorts of niche problems in our environment. Due to the nature of being a system administrator at a SaaS-heavy company, most of these scripts deal with third-party APIs: moving data from "service a" to "service b", pulling information about "service c" and correlating it with "service d", etc. These types of scripts are fun to write because they have narrow scopes and easily achievable goals, and I find completing them to be immensely satisfying.
 Filled with confidence in my Python skills, I set out to embark on my first GUI project: a desktop to-do application with CalDAV sync. This is an app I feel has been missing on Linux, something akin to Apple Reminders where I can use my own backend for sync. To get started, I built a local-only terminal client, bought a book to start learning PyQt, and I sat down today to write the first of a series of blog posts where I would document the project. I got to the part of the blog post where I confidently say that there are "currently no working Linux desktop apps with this functionality". Then I thought, *maybe I should Google this once more and confirm there really is nothing out there*. Well, shit.
 ## Enter Kalendar
-The last time I researched this space, there were no functional standalone to-do apps that supported CalDAV sync. The closest I could find was Thunderbird, my beloved email client, which is far more complex than what I was looking for. [Kalendar](https://apps.kde.org/kalendar/) didn't even pop up on my radar. Even today when I searched, I almost didn't find it. I ended up seeing it on the [Nextcloud Tasks Github page](https://github.com/nextcloud/tasks#apps-which-sync-with-nextcloud-tasks-using-caldav) in a list of compatible apps with sync. Within minutes, I had it installed and synced with my tasks in Nextcloud, and **wow**, this thing is good.
+The last time I researched this space, there were no functional standalone to-do apps that supported CalDAV sync. The closest I could find was Thunderbird, my beloved email client, which is far more complex than what I was looking for. [Kalendar](https://apps.kde.org/merkuro.calendar/) didn't even pop up on my radar. Even today when I searched, I almost didn't find it. I ended up seeing it on the [Nextcloud Tasks Github page](https://github.com/nextcloud/tasks#apps-which-sync-with-nextcloud-tasks-using-caldav) in a list of compatible apps with sync. Within minutes, I had it installed and synced with my tasks in Nextcloud, and **wow**, this thing is good.
 Kalendar bills itself mainly as a new calendar app, but my task lists feel right at home here. The app opens instantly, and the task view is designed almost exactly as I envisioned for my own app; toggleable lists on the left and tasks on the right. Type on the bottom and hit enter to quickly create a new task and it syncs right up to Nextcloud. Right click on a task to easily set priority and due date, or add a subtask. I hate how good this is.


@@ -1,52 +0,0 @@
---
layout: single
title: "Building a Reproducible Nextcloud Server, Part one: Choosing the stack"
date: 2023-08-15 11:59:00
excerpt: "After successfully hosting a Nextcloud instance on the same VPS for 7 years, I decided to rebuild it from scratch with modern tooling."
categories: [Self-Hosting, Linux Administration]
tags: linux nextcloud podman docker container vps
comments: true
---
Nextcloud was the first application I *really* self-hosted. I don't mean self-hosting like running the Plex app in the system tray on your gaming PC; I mean a dedicated VPS, exposed to the world, hosting my personal data. The stakes were high, and over the last seven years, it pushed me to grow my Linux knowledge and ultimately made me a far better sysadmin.
A lot happened during that seven years. Containers and infrastructure-as-code blew up and changed the IT industry. Nextcloud as a company and an application grew tremendously. I got married. Throughout all these changes, my little $5 DigitalOcean droplet running Nextcloud on the LAMP stack kept right on ticking. Despite three OS upgrades, two volume expansions, and fifteen(!) Nextcloud major-version upgrades, that thing refused to die. It continued to host my (and my wife's) critical data until the day I decommissioned it just under 60 days ago.
# Why change?
As a sysadmin and a huge Linux nerd, I'd been following the technology and industry changes closely, and every time I heard about something new or read a blog post I couldn't help but wonder "if I rebuilt my Nextcloud server today, how would I do it?". Everything is a container now, and infrastructure and system configuration is all defined as text files, making it reproducible and popularizing the phrase "cattle, not pets". I wanted a chance to embrace these concepts and use the skills I spent the last seven years improving. Plus, what sysadmin doesn't like playing with the new shiny?
# Goals
So what did I want to accomplish with this change?
1. **Cutting-edge technologies** - Not only did I want to play with the latest tools, I wanted to become proficient with them by putting them into production.
2. **Reproducibility** - Use infrastructure-as-code tooling so I could spin up the whole stack and tear it back down with only a few commands.
3. **Reliability** - Whatever combination of hardware and technologies I ended up with, it needed to be absolutely rock-solid. The only reason this thing should break is if I tell it to (intentionally or not)
# Hosting provider
I chose DigitalOcean back in 2016 mainly due to its excellent guides and popularity around the Jupiter Broadcasting community (got that sweet $100 promo code!). It was much easier to use than most other VPS providers and could have you up-and-running with an Ubuntu server and a public IP in minutes. In 2023, the VPS market is a bit more commoditized and there are some other great options out there. Linode initially came to mind, but their future became a bit murkier after they got acquired by Akamai in 2022, hyperscalers like AWS and Azure are too expensive for this use-case. I eventually landed on [Hetzner Cloud](https://www.hetzner.com/cloud) for the following reasons:
- Incredible value - for roughly $5 USD per month you get 2 vCPUs and 2GB of ram with 20TB of monthly traffic. That's basically double the specs of competing offerings.
- Great reputation - Hetzner has been around for 20+ years and has lots of good will in the tech community for their frugal dedicated server offerings. I wouldn't have chose them initially since their Cloud product didn't have offerings in the U.S., but recently they've expanded to include VPS's in Virginia and Oregon.
- Full-featured Terraform provider - This isn't unique to Hetzner, but it was a requirement for my new setup and their provider works great.
### Why not self host?
While I have a reliable server at home and 300mbps uploads, it's never going to match the bandwidth and reach of a regional data center. This wouldn't matter to me for most things, but I treat my Nextcloud server as a full Dropbox replacement, and it needs to perform as such. On that same note, I feel comfort knowing that it's separated from the more experimental environment of my homelab.
# Linux Distribution
One of the great benefits of containerized applications is that the host operating system matters much less than it used to, and the choice will likely come down to personal preferences. As long as it can run your chosen container runtime and you're familiar with the tooling, your choice will probably work as well as any other.
I've been running Ubuntu on my servers for years due to ease-of-use and my familiarity with it on the desktop. However, I've recently been using Fedora on my home computers and have gotten accustomed to Red Hat / RPM quirks and tooling in recent years. For this reason, and the ease of getting the latest Podman release (more below), I ended up choosing CentOS Stream 9.
# Docker vs. Podman


@@ -0,0 +1,76 @@
---
layout: single
title: "Building a Reproducible Nextcloud Server, Part one: Choosing the stack"
date: 2023-08-27 10:00:00
excerpt: "After successfully hosting a Nextcloud instance on the same VPS for 7 years, I decided to rebuild it from scratch with modern tooling."
categories: [Self-Hosting, Linux Administration]
tags: linux nextcloud podman docker container vps
comments: true
---
Nextcloud was the first application I *really* self-hosted. I don't mean self-hosting like running the Plex app in the system tray on your gaming PC; I mean a dedicated VPS, exposed to the world, hosting my personal data. The stakes were high, and over the last seven years, it pushed me to grow my Linux knowledge and ultimately made me a far better sysadmin.
A lot happened during that seven years. Containers and infrastructure-as-code blew up and changed the IT industry. Nextcloud as a company and an application grew tremendously. I got married. Throughout all these changes, my little $5 DigitalOcean droplet running Nextcloud on the LAMP stack kept right on ticking. Despite three OS upgrades, two volume expansions, and fifteen(!) Nextcloud major-version upgrades, that thing refused to die. It continued to host my (and my wife's) critical data until the day I decommissioned it just under 60 days ago.
# Why change?
As a sysadmin and a huge Linux nerd, I'd been following the technology and industry changes closely, and every time I heard about something new or read a blog post I couldn't help but wonder "if I rebuilt my Nextcloud server today, how would I do it?". Everything is a container now, and infrastructure and system configuration is all defined as text files, making it reproducible and popularizing the phrase "cattle, not pets". I wanted a chance to embrace these concepts and use the skills I spent the last seven years improving. Plus, what sysadmin doesn't like playing with the new shiny?
# Goals
So what did I want to accomplish with this change?
1. **Cutting-edge technologies** - Not only did I want to play with the latest tools, I wanted to become proficient with them by putting them into production.
2. **Reproducibility** - Use infrastructure-as-code tooling so I could spin up the whole stack and tear it back down with only a few commands.
3. **Reliability** - Whatever combination of hardware and technologies I ended up with, it needed to be absolutely rock-solid. The only reason this thing should break is if I tell it to (intentionally or not)
# Hosting provider
I chose DigitalOcean back in 2016 mainly due to its excellent guides and popularity around the Jupiter Broadcasting community (got that sweet $100 promo code!). It was much easier to use than most other VPS providers and could have you up-and-running with an Ubuntu server and a public IP in minutes. In 2023, the VPS market is a bit more commoditized and there are some other great options out there. Linode initially came to mind, but their future became a bit murkier after they got acquired by Akamai in 2022, while hyperscalers like AWS and Azure are too expensive for this use-case. I eventually landed on [Hetzner Cloud](https://www.hetzner.com/cloud) for the following reasons:
- Incredible value - for roughly $5 USD per month you get 2 vCPUs and 2GB of ram with 20TB of monthly traffic. That's basically double the specs of competing offerings.
- Great reputation - Hetzner has been around for 20+ years and has lots of goodwill in the tech community for their frugal dedicated server offerings. I wouldn't have chosen them initially since their Cloud product didn't have offerings in the U.S., but recently they've expanded to include VPSs in Virginia and Oregon.
- Full-featured Terraform provider - This isn't unique to Hetzner, but it was a requirement for my new setup and their provider works great.
### Why not self host?
While I have a reliable server at home and 300mbps uploads, it's never going to match the bandwidth and reach of a regional data center. This wouldn't matter to me for most things, but I treat my Nextcloud server as a full Dropbox replacement, and it needs to perform as such. On that same note, I feel comfort knowing that it's separated from the more experimental environment of my homelab.
# Linux Distribution
One of the great benefits of containerized applications is that the host operating system matters much less than it used to, and the choice mostly comes down to personal preference. As long as it can run your chosen container runtime and you're familiar with the tooling, your choice will probably work as well as any other.
I've been running Ubuntu on my servers for years due to ease-of-use and my familiarity with it on the desktop. However, I've recently been using Fedora on my home computers and have gotten accustomed to Red Hat / RPM quirks and tooling in recent years. For this reason, and the ease of getting the latest Podman release (more below), I ended up choosing [CentOS Stream 9](https://www.centos.org/centos-stream/).
# Docker vs. Podman
I've been using [Docker](https://www.docker.com/) to host a number of applications on my home server for the last few years with great success, and Docker is still far-and-away the most popular way to run individual containers. However, as the [OCI standard](https://opencontainers.org/) has become more widely adopted, other tools like [Podman](https://podman.io/) have started to appear. Podman, backed by Red Hat, offers near 1:1 command compatibility with Docker and has some lovely added benefits such as:
- Designed to run without root - Podman runs containers as a standard user, greatly reducing the risk to the server if one of the containers is compromised.
- No daemon required - On the same note, there isn't a continuously running daemon in the background with root access to your system. The risks of the Docker socket are [well-documented](https://docs.docker.com/engine/security/protect-access/), and this negates that risk entirely.
- Modern and lightweight - One of the benefits of not being first is that you can learn from everyone else's mistakes. Podman is built using lessons learned from Docker while creating an easy pathway to move from individual containers to full Kubernetes deployments.
Podman has been under rapid development recently, and there's a lot of excitement about it in Linux circles. While Docker would have worked just fine for my purposes, I decided to use this project as an opportunity to get familiar with Podman and see if it could potentially replace my other Docker-based applications.
# Deployment
Unlike my previous Nextcloud server which was like a zen garden that I tended carefully, I wanted my new server to be completely reproducible on a moment's notice. Using containers accomplishes part of this approach, but still leaves many parts of the server configuration to automate! Thankfully, there are a ton of tools available in 2023 to help with this.
## Terraform
To deploy the server itself, with associated volumes, firewall, etc, [Terraform](https://www.terraform.io/) was the obvious choice. While there are some competitors coming up like [Pulumi](https://www.pulumi.com/), Terraform is still the dominant player in the field and popularized the infrastructure-as-code concept. I had some experience using it at work, but I had never had the opportunity to build something from scratch with it. After reading the documentation for the [Hetzner Cloud provider](https://registry.terraform.io/providers/hetznercloud/hcloud/latest/docs), I was confident Terraform would be able to give me everything I needed.
## Ansible
Once the VPS is deployed and I have SSH access, Terraform's job stops. This is where I would typically connect to the server and start installing packages, configuring the webserver, and doing all the other server setup tasks I've done a thousand times over the years. If only there was a tool that could do all these steps for me while simultaneously documenting the entire setup!
Enter [Ansible](https://www.ansible.com/). Anything you could possibly think to do on a Linux box, Ansible can do for you. Think of it like a human-readable Bash script that handles all the rough edges for you. While writing the playbooks takes some work, once you have them written, you can run them again and again and expect (mostly) the same results each time. I chose Ansible due to its stateless, agentless architecture and the ability to run it from any computer with SSH access to the target hosts. Like Terraform, I love that the entire configuration is text-based and easily managed with Git.
# What's next?
This post talked about the ideas and goals I had going into this project, and in Part 2 I'll talk about the details of the implementation, and how sometimes things seem a lot easier in a blog post than they turn out to be in reality! If you're interested in the nitty-gritty of how these tools work for a project like this, stay tuned for the next post in the series.
[*Link to Part two*]({% link _posts/2023-10-03-nextcloud-podman-part2.md %})


@@ -0,0 +1,589 @@
---
layout: single
title: "Building a Reproducible Nextcloud Server, Part two: Podman containers and Systemd units"
date: 2023-10-03 08:00:00
excerpt: "In the second installment of my Nextcloud server rebuild, we'll get our containers set up with Podman and deploy them on a public-facing server."
categories: [Self-Hosting, Linux Administration]
tags: linux nextcloud podman docker container vps
comments: true
---
[*Link to Part one*]({% link _posts/2023-08-27-nextcloud-podman.md %})
## Overview
Now that I've established the stack, let's dive into setting up the Nextcloud application with Podman. In this post, we'll get our containers running on your local computer and generate Systemd service files that we can move to a production server. If all goes well, you'll have rootless Nextcloud running on a publicly accessible domain.
### Steps
* [Create a Podman Pod](#create-a-pod)
* [Create the containers](#create-the-containers)
* [Generate Systemd service files](#generate-systemd-files)
* [Move to production](#move-to-production)
* [Troubleshooting](#troubleshooting)
### Requirements
* Computer with [Podman](https://podman.io/) installed
* Linux server with a publicly routable IP address
* Domain name and the ability to add an "A" record
### Notes on rootless Podman
One of the big advantages of using Podman over Docker is that Podman was designed from the beginning to run without root privileges. This has many positive security implications, but there are also a few "gotchas" to be aware of, and I'll be pointing them out as I go through the instructions.
For more details, the Podman project maintains a helpful doc on their Github: [The Shortcomings of Rootless Podman](https://github.com/containers/podman/blob/main/rootless.md).
## Create a pod
Podman "pods" are logical groupings of containers that depend on one another. Think of a pod like a Service in Docker Compose; a group of containers that work together to run a single application or service. Once we have a pod that contains our containers, we can stop and start all of them with a single command. Containers within a pod also share a private network so they can exchange data freely with one another.
For a much more thorough explanation on what pods are and how they work, check out this [excellent post](https://developers.redhat.com/blog/2019/01/15/podman-managing-containers-pods) on the Red Hat developer blog.
**Rootless Gotcha #1**
In most Linux distributions, unprivileged applications are not allowed to bind themselves to ports below 1024. Before we get started, we'll need to update a system parameter via `sysctl` to solve this issue:
``` shell
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```
To make the change persist on reboot, create a new file under `/etc/sysctl.d/` named `99-podman.conf` and paste the line `net.ipv4.ip_unprivileged_port_start=80`. You'll need to use `sudo` privileges for this.
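The persistence step can also be done in one go; a minimal sketch, assuming you have `sudo` and the stock `tee` and `sysctl` utilities:

``` shell
# Write the persistent sysctl drop-in (path and value from the step above)
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-podman.conf

# Reload all sysctl config files so the setting applies without a reboot
sudo sysctl --system
```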
After that's done, let's create a new pod called "nextcloud".
``` shell
podman pod create \
--publish 80:80 \
--publish 443:443 \
--network slirp4netns:port_handler=slirp4netns \
nextcloud
```
\
You can see the newly created pod by running `podman pod ps`.
``` shell
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
d1b78054d6f4 nextcloud Created 2 minutes ago f4a80daae64f 1
```
#### Options explained
* `--publish 80:80` and `--publish 443:443` open ports 80 and 443 for the webserver. Containers within pods can communicate with each other fully on their own isolated network, but for outside traffic to reach the containers, we need to open the necessary ports at the pod level. If you plan to use different ports and put these containers behind a load balancer, you can use different values here.
* `--network slirp4netns:port_handler=slirp4netns` solves **Rootless Gotcha #2**. By default, the webserver in rootless mode sees all HTTP requests as originating from the container's local IP address. This isn't very helpful for accurate logs, so the above option changes the pod's port handler to fix the issue. There may be some performance penalties for doing this, but for low to medium traffic servers it shouldn't be a problem.
## Create the containers
To get Nextcloud up and running, we'll use the following containers:
* [nextcloud-fpm](https://hub.docker.com/_/nextcloud/) - A minimal install of Nextcloud that requires a separate webserver.
* [mariadb](https://hub.docker.com/_/mariadb) - Database officially supported by Nextcloud.
* [caddy](https://hub.docker.com/_/caddy) - The Caddy webserver, which I love for the simplicity of its config and the built-in automatic SSL via Let's Encrypt.
First, create a working directory structure where you'll store all the container data. For this project, I broke mine out like this:
``` shell
.podman
└── nextcloud
├── caddy
│   ├── config
│   └── data
├── mariadb
└── nextcloud
├── config
└── data
```
\
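If your shell is Bash, the whole tree above can be created with a single command using brace expansion (the `~/.podman` base path follows the post; adjust it to taste):

``` shell
# Create the full directory structure for caddy, mariadb, and nextcloud
mkdir -p ~/.podman/nextcloud/{caddy/{config,data},mariadb,nextcloud/{config,data}}
```
\
You can confirm the result matches the layout above with `find ~/.podman -type d`.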
Next, I'll go over each container, showing you the full command I used to create them and explaining each option.
{: .notice--info}
**Note on container image versions**
As general advice when using container images, use a major version tag (e.g. `mariadb:11`) instead of `:latest` or a specific point release. This is a happy medium where minor versions and security fixes get pulled automatically when you run `podman pull` or `podman auto-update`, but you still retain control on when to update to the latest major version.
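As a sketch of how that pairs with `podman auto-update` (assuming a recent Podman; the `io.containers.autoupdate` label is what the command looks for, and it only takes effect once the container is managed by a Systemd unit, which we generate later in this post):

``` shell
# Pin the major version tag and opt the container into auto-updates
podman run \
  --detach \
  --label io.containers.autoupdate=registry \
  --name mariadb \
  --pod nextcloud \
  docker.io/library/mariadb:11

# Later: preview, then apply, any newer images published under the same tag
podman auto-update --dry-run
podman auto-update
```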
### MariaDB
We'll create the database container first since it doesn't technically depend on either of the other containers.
``` shell
podman run \
--detach \
--env MYSQL_DATABASE=nextcloud \
--env MYSQL_USER=nextcloud \
--env MYSQL_PASSWORD=nextcloud \
--env MYSQL_ROOT_PASSWORD=nextcloud \
--volume $HOME/.podman/nextcloud/mariadb:/var/lib/mysql:z \
--name mariadb \
--pod nextcloud \
docker.io/library/mariadb:11
```
#### Options explained
* `--env MYSQL_DATABASE=nextcloud` - Name of the database Nextcloud will use, created the first time you run the `mariadb` container.
* `--env MYSQL_USER=nextcloud` - Database user Nextcloud will use, created the first time you run the `mariadb` container.
* `--env MYSQL_PASSWORD=nextcloud` - Password for the Nextcloud database user. Be sure to change this to something more secure and save it somewhere!
* `--env MYSQL_ROOT_PASSWORD=nextcloud` - Password for the database root user. Like the above, be sure to change this to something more secure and save it somewhere! Note that Nextcloud will not use this password, but you'll want it for any manual database maintenance you have to do in the future.
* `--volume $HOME/.podman/nextcloud/mariadb:/var/lib/mysql:z` - Creates a bind mount in the folder you created for MariaDB to store its database and configuration data. The `:z` option is needed to give the container access to the directory on SELinux systems.
* `--name mariadb` - Sets the name of the container so we can easily reference it later.
* `--pod nextcloud` - Attaches the container to the `nextcloud` pod we previously created.
* `docker.io/library/mariadb:11` - Container image we're going to download and run.
<br>
After you run the command, you can check if the container is running with the `podman ps` command.
``` shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4a80daae64f localhost/podman-pause:4.7.0-1695839078 About an hour ago Up 29 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp d1b78054d6f4-infra
c5961a86a474 docker.io/library/mariadb:11 mariadbd 29 seconds ago Up 29 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp mariadb
```
**Note:** The other container you see in the output, `d1b78054d6f4-infra`, is a helper container for the nextcloud pod.
### Nextcloud
``` shell
podman run \
--detach \
--env MYSQL_HOST=mariadb \
--env MYSQL_DATABASE=nextcloud \
--env MYSQL_USER=nextcloud \
--env MYSQL_PASSWORD=nextcloud \
--volume $HOME/.podman/nextcloud/nextcloud/config:/var/www/html:z \
--volume $HOME/.podman/nextcloud/nextcloud/data:/var/www/html/data:z \
--name nextcloud-app \
--pod nextcloud \
docker.io/library/nextcloud:27-fpm
```
#### Options explained
* `--env MYSQL_HOST=mariadb` - Name of the container hosting the database. Thanks to Podman's built-in DNS, container names will resolve to their private IP address, so all we have to do is point Nextcloud at `mariadb` and it will find the database on its internal pod network.
* `--env MYSQL_DATABASE=nextcloud` - Name of the database Nextcloud will use, the same that you created in the `mariadb` container.
* `--env MYSQL_USER=nextcloud` - Database user Nextcloud will use, the same that you created in the `mariadb` container.
* `--env MYSQL_PASSWORD=nextcloud` - Password for the Nextcloud database user, the same that you created in the `mariadb` container.
* `--volume $HOME/.podman/nextcloud/nextcloud/config:/var/www/html:z` - Creates a bind mount in the folder you created for Nextcloud to store its configuration files.
* `--volume $HOME/.podman/nextcloud/nextcloud/data:/var/www/html/data:z` - Creates a bind mount in the folder you created for Nextcloud's data directory.
* `--name nextcloud-app` - Sets the name of the container (container names can't be the same as a pod's name, hence the `-app` suffix).
* `--pod nextcloud` - Attaches the container to the `nextcloud` pod we previously created.
* `docker.io/library/nextcloud:27-fpm` - Container image we're going to download and run, `27` being the latest major version of Nextcloud as of this writing.
<br>
You should now have two containers running, plus the pod helper:
``` shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4a80daae64f localhost/podman-pause:4.7.0-1695839078 About an hour ago Up 18 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp d1b78054d6f4-infra
c5961a86a474 docker.io/library/mariadb:11 mariadbd 18 minutes ago Up 18 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp mariadb
13d5c43c0b4d docker.io/library/nextcloud:27-fpm php-fpm 5 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nextcloud-app
```
### Caddy
Before we start the Caddy container, we'll need to write a config in the form of a [Caddyfile](https://caddyserver.com/docs/caddyfile). Since we're just focused on getting the containers working locally, let's do a simple configuration without HTTPS.
Create a file named `Caddyfile` in `$HOME/.podman/nextcloud/caddy/config/` and paste the below contents.
```
http://localhost:80 {
root * /var/www/html
file_server
php_fastcgi nextcloud-app:9000 {
root /var/www/html
env front_controller_active true
}
}
```
The above is a bare-minimum configuration to run Nextcloud locally on port 80. We'll make lots of tweaks to this file before we move to production.
Assuming the Caddyfile is in place, run the below command to spin up the final container:
``` shell
podman run \
--detach \
--volume $HOME/.podman/nextcloud/nextcloud/config:/var/www/html:z \
--volume $HOME/.podman/nextcloud/caddy/config/Caddyfile:/etc/caddy/Caddyfile:z \
--volume $HOME/.podman/nextcloud/caddy/data:/data:z \
--name caddy \
--pod nextcloud \
docker.io/library/caddy:2
```
#### Options explained
* `--volume $HOME/.podman/nextcloud/nextcloud/config:/var/www/html:z` - Creates a bind mount in the folder you created for Nextcloud to store its configuration files. This is the content Caddy serves to the web, so it needs access.
* `--volume $HOME/.podman/nextcloud/caddy/config/Caddyfile:/etc/caddy/Caddyfile:z` - Creates a bind mount for the Caddyfile.
* `--volume $HOME/.podman/nextcloud/caddy/data:/data:z` - Creates a bind mount for Caddy's data folder.
* `--name caddy` - Sets the name of the container.
* `--pod nextcloud` - Attaches the container to the `nextcloud` pod we previously created.
* `docker.io/library/caddy:2` - Container image we're going to download and run.
Verify that all three containers are running with `podman ps`.
``` shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4a80daae64f localhost/podman-pause:4.7.0-1695839078 2 hours ago Up 45 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp d1b78054d6f4-infra
c5961a86a474 docker.io/library/mariadb:11 mariadbd 45 minutes ago Up 45 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp mariadb
13d5c43c0b4d docker.io/library/nextcloud:27-fpm php-fpm 26 minutes ago Up 26 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nextcloud-app
b29486a99286  docker.io/library/caddy:2                 caddy run --confi...  4 minutes ago   Up 4 minutes  0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  caddy
```
\
Go to http://localhost in your browser and...
![nextcloud-podman01](/assets/images/screenshots/nextcloud-podman01.png){:class="img-responsive"}
Ta-da! We have Nextcloud!
Since the containers are part of the `nextcloud` pod, you can stop and start all of them with one command. Run `podman pod stop nextcloud` to take them down and `podman pod start nextcloud` to bring them back up. Pretty cool huh?
## Generate Systemd files
Even better than starting and stopping your containers at the pod level is doing it with systemd! This will allow you to manage your Nextcloud pod the same way as any other systemd service, including enabling it to run at system start.
Instead of writing all the systemd unit files by hand, we're going to use a handy subcommand of the podman application, `podman generate systemd`.
First, make sure the pod and all its containers are running. Then, run the below command:
``` shell
podman generate systemd --new --files --name nextcloud
/home/raylyon/container-nextcloud-app.service
/home/raylyon/container-caddy.service
/home/raylyon/container-mariadb.service
/home/raylyon/pod-nextcloud.service
```
\
The output gives you the path to each file. We'll need to copy these files into the systemd user directory, `$HOME/.config/systemd/user/`. Create the directory if it doesn't already exist.
``` shell
mkdir -p $HOME/.config/systemd/user
```
\
Copy each of the files into the above directory.
``` shell
cp $HOME/*.service $HOME/.config/systemd/user/
```
\
Reload the systemd user daemon.
``` shell
systemctl --user daemon-reload
```
\
Start the service corresponding to the pod.
``` shell
systemctl --user start pod-nextcloud
```
\
`podman ps` should show that all your containers are running. If you have issues, you can troubleshoot the same way you would for any other systemd service.
Check the status of the pod.
``` shell
systemctl --user status pod-nextcloud
```
\
Check the status of an individual container.
``` shell
systemctl --user status container-nextcloud-app
```
\
Check the service output for errors (note that you need `sudo` for this one).
``` shell
sudo journalctl -xe
```
## Move to production
Up until now we've been working with our containers on localhost, but now it's time to move them to a public-facing server with a public IP and domain name. This step highlights one of the biggest selling points of containers: we can develop and configure locally, then push that exact working configuration to another server and it Just Works™. Beyond that, our systemd unit files save us the trouble of remembering the exact podman commands to run on the server, so we can simply copy the files and start the service.
First, copy the `*.service` files from your computer to the public-facing server with a tool like `scp` or `rsync`.
``` shell
scp $HOME/.config/systemd/user/*.service user@your.server.com:/home/user/
```
\
Then, on the **production server** recreate the folder structure you used locally.
``` shell
mkdir -p $HOME/.podman/nextcloud/nextcloud/config
mkdir -p $HOME/.podman/nextcloud/nextcloud/data
mkdir -p $HOME/.podman/nextcloud/caddy/config
mkdir -p $HOME/.podman/nextcloud/caddy/data
mkdir -p $HOME/.podman/nextcloud/mariadb
```
\
Also, create the systemd folder if it's not already there.
``` shell
mkdir -p $HOME/.config/systemd/user
```
\
Copy the service files into the systemd user directory and reload systemd.
``` shell
cp $HOME/*.service $HOME/.config/systemd/user/
systemctl --user daemon-reload
```
### Caddyfile
The Caddyfile we used earlier won't be suitable for production since it doesn't use an FQDN or HTTPS. Create a new Caddyfile on the server in `$HOME/.podman/nextcloud/caddy/config/` with the below contents, replacing the domain with one you've set up for the server.
```
your.server.com {
root * /var/www/html
file_server
php_fastcgi nextcloud-app:9000 {
root /var/www/html
env front_controller_active true
}
encode gzip
log {
output file /data/nextcloud-access.log
}
header {
Strict-Transport-Security "max-age=15768000;includeSubDomains;preload"
}
# .htaccess / data / config / ... shouldn't be accessible from outside
@forbidden {
path /.htaccess
path /data/*
path /config/*
path /db_structure
path /.xml
path /README
path /3rdparty/*
path /lib/*
path /templates/*
path /occ
path /console.php
}
respond @forbidden 404
redir /.well-known/carddav /remote.php/dav 301
redir /.well-known/caldav /remote.php/dav 301
}
```
The above configuration will use Caddy's built-in automatic HTTPS to pull a certificate from Let's Encrypt. It also blocks web access to certain directories in your Nextcloud folder and adds redirects for Nextcloud's CalDAV and CardDAV endpoints.
### MariaDB optimizations
After running this setup in production for a couple months and going through my first Nextcloud version upgrade, I had issues with Nextcloud losing access to the database during the upgrade process. I did some research and found this [helpful article](https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/linux_database_configuration.html) in Nextcloud's documentation which points to some MariaDB options we can use to fix these issues.
The MariaDB container allows us to pass any additional configuration options as command line arguments to the container run command. This makes it simple to tweak our systemd service file to enable the optimizations.
Open the `container-mariadb.service` file in a text editor and add the following arguments after `docker.io/library/mariadb:11` in the `ExecStart` block:
``` systemd
--transaction-isolation=READ-COMMITTED \
--log-bin=binlog \
--binlog-format=ROW \
--max_allowed_packet=256000000
```
The `ExecStart` block should look something like this when you're done:
``` systemd
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--pod-id-file %t/pod-nextcloud.pod-id \
--sdnotify=conmon \
--replace \
--detach \
--env MYSQL_DATABASE=nextcloud \
--env MYSQL_USER=nextcloud \
--env MYSQL_PASSWORD=nextcloud \
--env MYSQL_ROOT_PASSWORD=nextcloud \
--volume %h/.podman/nextcloud/mariadb:/var/lib/mysql:z \
--name mariadb docker.io/library/mariadb:11 \
--transaction-isolation=READ-COMMITTED \
--log-bin=binlog \
--binlog-format=ROW \
--max_allowed_packet=256000000
```
### Nextcloud maintenance cron job
Nextcloud has an ongoing [background task](https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html) that needs to run on a regular basis. There are a few different ways to schedule this, but the recommended method is using cron on the host server.
Edit your **user crontab** by running `crontab -e` (without `sudo`) and add the following line:
```
*/5 * * * * podman exec -u 33 nextcloud-app php /var/www/html/cron.php
```
The command runs Nextcloud's `cron.php` script inside the `nextcloud-app` container every 5 minutes. The `-u 33` option tells Podman to run the command as UID 33, which is the UID of the `www-data` user inside the Nextcloud container.
### (Optional) Use an env file for credentials in systemd files
Instead of pasting the database credentials and other secrets directly into the systemd unit files, we can use the `EnvironmentFile` parameter and store those secrets in a `.env` file with locked-down permissions.
Create the `.env` file somewhere on the system that makes sense. I recommend placing it in the `$HOME/.podman/nextcloud` directory and naming it `.nextcloud-env`. The syntax of the file should look like this:
``` shell
NEXTCLOUD_VERSION=27
MYSQL_PASSWORD=SuperSecretPassword
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
MYSQL_ROOT_PASSWORD=EvenMoreSuperSecretPassword
```
\
Update the permissions of the file so that only your user on the host system can read it. Replace `youruser` in the below command with the user running your containers.
``` shell
chown youruser:youruser $HOME/.podman/nextcloud/.nextcloud-env
chmod 0600 $HOME/.podman/nextcloud/.nextcloud-env
```
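A quick way to confirm the lockdown stuck — this `check_perms` helper is a hypothetical sketch, not part of the setup:

``` shell
# Hypothetical helper: report whether a secrets file is readable
# only by its owner (mode 600). Uses GNU stat, so Linux-specific.
check_perms() {
  mode=$(stat -c '%a' "$1")
  if [ "$mode" = "600" ]; then
    echo "OK: $1 is mode $mode"
  else
    echo "WARN: $1 is mode $mode (expected 600)"
  fi
}
```

Running `check_perms $HOME/.podman/nextcloud/.nextcloud-env` after the `chmod` above should print an `OK` line.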
\
Update each of your systemd unit files that need to access the file with the `EnvironmentFile` parameter in the `[Service]` block:
``` systemd
EnvironmentFile=%h/.podman/nextcloud/.nextcloud-env
```
`%h` in systemd lingo is a variable for your home directory. Likewise, `%t` resolves to your user runtime directory and `%n` to the full unit name, which you'll see throughout the generated files.
Lastly, replace the values in your systemd unit files with `${VARIABLE_NAME}`. In the end your files will look something like this, using the `container-mariadb.service` file as an example:
``` systemd
[Unit]
Description=Podman container-mariadb.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
BindsTo=pod-nextcloud-pod.service
After=pod-nextcloud-pod.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
EnvironmentFile=%h/.podman/nextcloud/.nextcloud-env
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--pod-id-file %t/pod-nextcloud-pod.pod-id \
--sdnotify=conmon \
--replace \
--detach \
--env MYSQL_DATABASE=${MYSQL_DATABASE} \
--env MYSQL_USER=${MYSQL_USER} \
--env MYSQL_PASSWORD=${MYSQL_PASSWORD} \
--env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} \
--volume %h/.podman/nextcloud/mariadb:/var/lib/mysql:z \
--name mariadb docker.io/library/mariadb:11 \
--transaction-isolation=READ-COMMITTED \
--log-bin=binlog \
--binlog-format=ROW \
--max_allowed_packet=256000000
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
```
### Start your service!
At this point, everything should be in place for your Nextcloud production server. Make sure of the following:
* A DNS A record exists pointing to the public IP address of your server.
* That domain matches the domain in your Caddyfile.
* The host firewall allows incoming traffic on ports `80` and `443`. This is usually `firewalld` on RHEL-based systems or `ufw` on Debian-based ones.
Before starting the service, reload the systemd user daemon.
``` shell
systemctl --user daemon-reload
```
\
Enable the pod service so it starts on boot.
``` shell
systemctl --user enable pod-nextcloud
```
\
**Rootless gotcha #3**: enable lingering for your user. This allows non-root users to start services at boot without a console login.
``` shell
sudo loginctl enable-linger youruser
```
\
If you haven't done so already, make the change to the unprivileged ports that I referenced [earlier](#create-a-pod) in the post.
``` shell
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```
Don't forget to create the file at `/etc/sysctl.d/99-podman.conf` so it persists on reboot!
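The file itself needs just the one setting, in standard `sysctl.d` key-value syntax:
```
net.ipv4.ip_unprivileged_port_start=80
```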
\
Finally, start the Nextcloud service!
``` shell
systemctl --user start pod-nextcloud
```
\
On the first run, it may take a few minutes for Podman to pull down the container images. Check the output of `podman ps` and you should see the containers appear there one after the other, eventually showing all three.
``` shell
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f4a80daae64f localhost/podman-pause:4.7.0-1695839078 2 hours ago Up 45 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp d1b78054d6f4-infra
c5961a86a474 docker.io/library/mariadb:11 mariadbd 45 minutes ago Up 45 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp mariadb
13d5c43c0b4d docker.io/library/nextcloud:27-fpm php-fpm 26 minutes ago Up 26 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nextcloud-app
b29486a99286 docker.io/library/caddy:2 caddy run --confi... 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp caddy
```
\
At this point you should have rootless Nextcloud accessible at your FQDN on the public internet with HTTPS!
![nextcloud-podman02](/assets/images/screenshots/nextcloud-podman02.png){:class="img-responsive"}
Walk through the first-time setup of Nextcloud to create your admin account and install apps.
![nextcloud-podman03](/assets/images/screenshots/nextcloud-podman03.png){:class="img-responsive"}
![nextcloud-podman04](/assets/images/screenshots/nextcloud-podman04.png){:class="img-responsive"}
I recommend navigating to **Administration Settings -> Overview** and reading the "Security & setup warnings". The Nextcloud app always has a few recommendations for fixes and changes to the configuration, with documentation to back it up.
![nextcloud-podman05](/assets/images/screenshots/nextcloud-podman05.png){:class="img-responsive"}
## Troubleshooting
If the Nextcloud page isn't loading as expected or you're getting an error when launching your service, the container output logs are your friends! Run `podman ps` to see if your containers are running. If they are, use `podman logs <container name>` to see the latest output from each container. It's usually pretty easy to spot red flags there.
If the containers aren't running, use `sudo journalctl -xe` to check the output of each service. You may have to scroll up a bit to get useful information, since services will often try to restart multiple times after an error and fill up the output. Make sure you scroll up past the messages that say "service start request repeated too quickly" and try to find the first messages shown from each container's service.
**Common problems**
* Directory or file referenced in the `*.service` file doesn't exist or is in the wrong location (your container directories and Caddyfile). Make sure the paths are consistent in all your files.
* Caddy can't get the certificate from Let's Encrypt. Make sure your A record points to the correct IP and that it's had time to propagate across the web. This can take 30 minutes or more after you add the record.
* Firewall blocking ports 80 and 443. Beyond `ufw` and `firewalld` on the system, make sure there aren't any additional firewalls set up in your VPS provider or home network that could be blocking the incoming ports.
* Nextcloud can't connect to the database. Make sure the `$MYSQL_HOST` value matches the container name of the MariaDB container. Make sure the same is true for the database username and password.
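For the first problem above, a small sketch can help: expand the `%h` specifier in your unit files and flag any host paths that don't exist (the `check_unit_paths` helper is hypothetical; adapt as needed):

``` shell
# Hypothetical helper: scan *.service files in a directory for
# %h-prefixed paths (bind mounts, env files) and report any that
# are missing on the host.
check_unit_paths() {
  for f in "$1"/*.service; do
    [ -e "$f" ] || continue
    grep -oE '%h[^:[:space:]]*' "$f" | sed "s|%h|$HOME|" | sort -u |
      while read -r p; do
        [ -e "$p" ] || echo "Missing: $p (referenced in $(basename "$f"))"
      done
  done
}
```

Run it as `check_unit_paths $HOME/.config/systemd/user`; no output means every `%h` path in your unit files exists on the host.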
**Helpful links**
* [Nextcloud documentation](https://docs.nextcloud.com/)
* [podman run](https://docs.podman.io/en/stable/markdown/podman-run.1.html)
* [podman generate systemd](https://docs.podman.io/en/stable/markdown/podman-generate-systemd.1.html)
## Next steps
Now that we have a working server, let's make sure we never have to do it by hand again! In Part 3 of the series, I'll go over how you can automate the entire configuration with an [Ansible playbook](https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html). Not only can you re-use that playbook to spin up multiple servers or re-deploy on a new hosting provider, it also acts as documentation that writes itself.
As always, feel free to leave a comment below with any questions or suggestions. You can also reach me by [email](mailto:ray@rayagainstthemachine.net) or [Mastodon](https://fosstodon.org/@skoobasteeve).
Happy hacking!

---
layout: single
title: "Easily Update Your Containers with Podman Auto-Update"
date: 2023-10-08 16:00:00
excerpt: "Use this handy built-in feature of Podman to update all your container images with a single command."
categories: [Linux Administration]
tags: linux nextcloud podman docker container update
comments: true
---
I've written previously about the joys of using Podman to manage your containers, including the benefits of using it over Docker, but one of my favorite quality-of-life features is the [podman auto-update](https://docs.podman.io/en/stable/markdown/podman-auto-update.1.html) command.
In short, it replaces the series of commands you would normally run to update containers, for example:
1. `podman pull nextcloud-fpm:27`
2. `podman stop nextcloud-fpm`
3. `podman rm nextcloud-fpm`
4. `podman run [OPTIONS] nextcloud-fpm:27`
5. Repeat for each container.
Not only does `podman auto-update` save you all these steps, it will also automatically roll back to the previous image version if there are errors starting the new version, giving you some peace of mind when updating important applications.
## Requirements
* Podman installed
* Containers [managed with systemd](https://docs.podman.io/en/stable/markdown/podman-generate-systemd.1.html)
* Containers you want to update must use the `--label "io.containers.autoupdate=registry"` run option
## Instructions
Recreate your existing systemd-managed containers with the `--label "io.containers.autoupdate=registry"` option. To do this, just edit your container's service file to include the option. See the below partial example for my Nextcloud container:
``` systemd
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--pod-id-file %t/pod-nextcloud-pod.pod-id \
--sdnotify=conmon \
--replace \
--detach \
--env MYSQL_HOST=mariadb \
--env MYSQL_DATABASE=nextcloud \
--env MYSQL_USER=${MYSQL_USER} \
--env MYSQL_PASSWORD=${MYSQL_PASSWORD} \
--volume %h/.podman/nextcloud/nextcloud-config:/var/www/html:z \
--volume /mnt/nextcloud-data/data:/var/www/html/data:z \
--label "io.containers.autoupdate=registry" \
--log-driver=journald \
--name nextcloud-app docker.io/library/nextcloud:${NEXTCLOUD_VERSION}-fpm
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
```
\
Once you're done, reload the systemd daemon and restart the service.
``` shell
systemctl --user daemon-reload
systemctl --user restart container-nextcloud-app.service
```
\
Next, run the auto-update command with the `--dry-run` option. With this option, you'll get a preview of which containers will be updated without the update taking place.
``` shell
podman auto-update --dry-run
UNIT CONTAINER IMAGE POLICY UPDATED
pod-nextcloud-pod.service 643fd5d3e2cb (nextcloud-app) docker.io/library/nextcloud:27-fpm registry pending
pod-nextcloud-pod.service 71e48b691447 (mariadb) docker.io/library/mariadb:10 registry pending
pod-nextcloud-pod.service 9ed555fecdfa (caddy) docker.io/library/caddy registry pending
```
### Output explained
* `podman auto-update` will show updates for every container that has the `io.containers.autoupdate=registry` label and apply them all at once.
* The `UNIT` column shows the same "pod" service for each container. This is because my containers are all managed by a single Podman pod.
* The `UPDATED` column shows `pending`, which means there is an update available from the container registry.
\
Once you're ready to update, run the command again without the `--dry-run` option.
``` shell
podman auto-update
```
\
Podman will begin pulling the images from the registry, which may take a few minutes depending on your connection speed. If it completes successfully, you'll get fresh output with the `UPDATED` column changed to `true`.
``` shell
UNIT CONTAINER IMAGE POLICY UPDATED
pod-nextcloud-pod.service 643fd5d3e2cb (nextcloud-app) docker.io/library/nextcloud:27-fpm registry true
pod-nextcloud-pod.service 71e48b691447 (mariadb) docker.io/library/mariadb:10 registry true
pod-nextcloud-pod.service 9ed555fecdfa (caddy) docker.io/library/caddy registry true
```
\
During this process, the containers were restarted automatically with the latest image. You can verify this with `podman ps`.
``` shell
podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c5523997648 localhost/podman-pause:4.6.1-1692961071 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp 0e075fb7b67b-infra
4ba992e83eeb docker.io/library/caddy:latest caddy run --confi... 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp caddy
2a7d448b1b6b docker.io/library/nextcloud:27-fpm php-fpm 2 minutes ago Up About a minute 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nextcloud-app
9ec017721f16 docker.io/library/mariadb:10 --transaction-iso... 2 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp mariadb
```
\
That's it! Your Podman containers were updated to the latest image version with a single command. This is a small feature, but one I've come to love in my time using Podman. If you get stuck, check out the project's [documentation for the auto-update command](https://docs.podman.io/en/stable/markdown/podman-auto-update.1.html). If you have broader questions about running Podman, I recommend reading my [series on building a reproducible Nextcloud server with Podman]({% link _posts/2023-08-27-nextcloud-podman.md %}).
Happy hacking!

---
layout: single
title: "Fix \"flicker\" problem in Dolby Vision MKVs made with MakeMKV"
date: 2024-09-07 12:00:00
excerpt: "Resolve this rare but frustrating issue using open-source tools."
categories: [Own Your Media]
tags: makemkv dv dolbyvision blu-ray shield rip 4k
comments: true
---
![saving private ryan as displayed in kodi on my nvidia shield](/assets/images/screenshots/saving_private_ryan_kodi.png)
[➡ **Skip to solution**](#the-solution)
While I consume most of my media like everyone else (an ever-expanding list of ever more expensive streaming services), I also have a carefully curated collection of Blu-rays that I watch from time to time. Being who I am, purchasing a Blu-ray player and plugging it into my TV was never a valid option for viewing these. No, I needed a way to rip the discs to files, store them on my server, and access them through a media-center-like interface. After expending far more mental energy than necessary, I came up with the following method of watching my Blu-ray collection:
1. Rip discs to uncompressed MKV files using [MakeMKV](https://www.makemkv.com/) and a Blu-ray drive [patched for ripping UHD discs.](https://forum.makemkv.com/forum/viewtopic.php?t=19634)
2. Copy them to a folder on my NAS available to the network via NFS.
3. Play them on my TV using an Nvidia Shield TV Pro (2019) and Kodi.
### Notes on the Nvidia Shield and Kodi
- The Shield Pro is one of the only Android TV devices that can properly recognize and play Dolby Vision MKV files encoded with Profile 7. No need to dig into what that means other than to say it's a special type of DV that's only used on Blu-ray discs. Generally, the only devices "licensed" to play DV Profile 7 are Blu-ray players.
- Until recently, I had to use a special [patched version of Kodi](https://www.kodinerds.net/thread/69428-maven-s-kodi-builds-f%C3%BCr-android/) to play Dolby Vision MKVs, but as of Kodi 21, it's supported natively! This is awesome and generally works very well.
## The problem
The other night when I started to watch my UHD copy of Saving Private Ryan, I noticed a strange "flickering" in the brighter spots of the image. This is especially noticeable in the opening scene in the cemetery where the sky is bright white. A quick [Kagi](https://kagi.com/) search of the problem led me to a few posts in the [MakeMKV forum](https://forum.makemkv.com/forum/viewtopic.php?p=135914) and [AVSForum](https://www.avsforum.com/threads/dune-hd-pro-vision-4k-solo.3180599/page-29) that also identified the issue, instantly making me feel better for not being crazy.
## Why?
After digging around in forums, I learned that UHD Blu-rays can apply Dolby Vision using either MEL (minimum enhancement layer) or FEL (full enhancement layer). The latter is more problematic for non-standard players and ripped MKV files, and it happens to be what was used for Saving Private Ryan. This is a rare issue because most DV Blu-rays use MEL.
## The solution
The fix for this problem involves ripping the Blu-ray disc in MakeMKV's backup mode, making changes to the files, and repackaging them into an MKV. **Credit to MakeMKV Forum user adamthefilmfan** who [originally posted the solution](https://forum.makemkv.com/forum/viewtopic.php?t=32107). I've cleaned up the instructions a bit and adapted them to work on Linux, though they should also work on macOS.
1. Install prerequisites
- [DGDemux](https://www.rationalqm.us/dgdemux/binaries/) - Unpackages or "demuxes" the `.mpls` file in the Blu-ray backup containing the film. Download the latest .zip file, extract it, and mark the `dgdemux` file as executable. I like to put files like this somewhere on my PATH, like `~/bin`.
- [dovi_tool](https://github.com/quietvoid/dovi_tool) - Extracts the Dolby Vision metadata and re-applies it to the new file. Download the latest .tar.gz, extract it, and mark the `dovi_tool` file as executable. DGDemux also includes a `dovi_tool` binary in its .zip, last time I checked, but it may not be the latest.
- [MKVToolNix](https://mkvtoolnix.download/downloads.html) - Repackages or "remuxes" the modified Blu-ray files into a playable MKV. Follow the installation instructions for your platform on their website.
2. Rip the disc via the MakeMKV backup function, being sure to check the "decrypt video" box.
3. Open the Blu-ray backup with DGDemux to list its titles.
``` shell
dgdemux -d ~/Videos/backup/SAVING\ PRIVATE\ RYAN
```
4. Locate the main film title in the list, then demux it. You'll get separate files for all audio and subtitle tracks (and chapters), as well as two separate video (.hevc) files. Rename the large video file to "BL.hevc" and the small one to "EL.hevc".
``` shell
dgdemux -i ~/Videos/backup/SAVING\ PRIVATE\ RYAN/BDMV/PLAYLIST/00800.mpls -o ~/Videos/demux/SPR/00800
```
5. Extract the Dolby Vision RPU from the EL.hevc using `dovi_tool`
``` shell
dovi_tool -m 2 extract-rpu -i ~/Videos/demux/SPR/EL.hevc -o ~/Videos/demux/SPR/RPU.bin
```
6. Inject the RPU into BL.hevc and save it out to a new file
``` shell
dovi_tool inject-rpu -i ~/Videos/demux/SPR/BL.hevc -r ~/Videos/demux/SPR/RPU.bin -o ~/Videos/demux/SPR/final.hevc
```
7. Open the MKVToolNix GUI and add all the files in the demux directory (video, audio, subtitles, and your hevc files)
8. Remove EL.hevc, BL.hevc, and RPU.bin from the list. Make sure final.hevc is still included.
9. Set a destination file and click "Start multiplexing".
![how mkvtoolnix window should look before multiplexing.](/assets/images/screenshots/mkvtoolnix-remux.png)
After a few minutes, you'll have a playable, flicker-free MKV file with Dolby Vision still intact. So far this is the only Blu-ray in my collection I've had to do this for 🤞, but I'm glad to have a solution.
