Hi, I am Stefan Römer.

Welcome to my website, which is accessible via senvang.org, senvang.it and sroemer.org and serves as:

  1. the web presence of my business providing software development services on a freelance basis (via my Vietnam-based company SENVANG IT Solutions Co. Ltd.)
  2. a personal web presence and blog (see posts or check below)

Via the menu on top you can find more information about me and my business, as well as about my previous work experience (including references from my previous employers).

You can contact me via one of the ways shown on the contact page.

Radicale installation on Devuan Daedalus 5.0

For years I have been running my own CalDAV / CardDAV server to synchronize my calendars and address books. I use Radicale for this; on my Uberspace it was installed via Python’s pip, but now it was time to move it to my own server too, so I chose to install the package directly from the Devuan repository via apt-get install radicale.

The apt-get command only installs the package; more steps are needed to set everything up.
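
As a rough idea of what those steps typically involve (this is only a sketch with placeholder values, not my final setup), the Debian/Devuan package is configured via /etc/radicale/config:
[server]
hosts = 127.0.0.1:5232
[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = md5
[storage]
filesystem_folder = /var/lib/radicale/collections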

So this will be updated soon …

Linux Containers (LXC) on Devuan Daedalus 5.0

I started investigating how to run Linux Containers on Devuan Linux 5.0 (Daedalus), as that is what I run on my server. For this I first set up a local VM which replicates my virtual server. I use qemu for this and created a script called vm-netcup, which is part of my dotfiles repository and sets the qemu parameters accordingly.
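
Just to give an idea of what the script does (this is a simplified sketch with a made-up disk image name, not the actual vm-netcup script), the invocation boils down to something like:
qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 2048 \
  -drive file=netcup.qcow2,if=virtio \
  -nic user,model=virtio-net-pci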

Installation via apt-get install lxc works as expected. So far so good - but things would not go that smoothly for much longer.

cgroups

LXC relies on cgroups, a Linux kernel feature for isolation, limitation and accounting of resource usage of processes. On distributions using systemd this is set up by systemd, but with Devuan I chose to not use systemd.

You guessed it: lxc-checkconfig and ls /sys/fs/cgroup show that cgroups are not set up by default (at least not with my minimal installation). LXC can work with cgroups v1 but cgroups v2 provide a cleaner, unified hierarchy and therefore are my preferred way.

To mount the cgroup v2 filesystem at boot time, I simply created an additional entry in the /etc/fstab file:
none /sys/fs/cgroup cgroup2 defaults 0 0
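
To activate the mount right away without a reboot, the same mount can be done manually (assuming nothing else is mounted at /sys/fs/cgroup yet) and lxc-checkconfig should then report the cgroup settings as enabled:
mount -t cgroup2 none /sys/fs/cgroup
lxc-checkconfig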

In case you want to use cgroups v1 you can install the cgroupfs-mount package which installs a service to perform the required mounts at boot time.

That’s all that is required to set up cgroups for LXC.

unprivileged vs. privileged containers

In short: We want unprivileged containers whenever possible. Those map user ids inside the container to a different range on the host and are therefore the safest option. For example user id 0 (root) in an unprivileged container would be mapped to user id 100000 on the host. User id 0 (root) in a privileged container, on the other hand, would also be root on the host.

/etc/subuid and /etc/subgid contain the ranges for mapping uid and gid on the host, but those values also need to be reflected in the lxc configuration as described below.
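
As an example, an entry like the following in both /etc/subuid and /etc/subgid grants the user the range used later in this post (the start value on an existing installation may differ):
<user>:100000:65536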

configuration

For configuration LXC distinguishes between system and container configuration. See the corresponding man pages lxc.container.conf and lxc.system.conf for further information.

By default LXC on my system used ~/.local/share/lxc/ for container storage. This resulted in LXC complaining that the user’s home directory lacks the x permission for the container’s root user, and starting the container failed.

Therefore I decided to move the container storage to /var/lib/lxc/<user>. This directory needs to be created as root, and ownership as well as permissions need to be set as follows:
chown <user>:<group> /var/lib/lxc/<user>
chmod 711 /var/lib/lxc/<user>

Once this is done we can continue as regular user. We create a file ~/.config/lxc/lxc.conf and configure the container storage path in it:
lxc.lxcpath = /var/lib/lxc/<user>

/etc/lxc/default.conf contains the system wide container default configuration. We copy this file to ~/.config/lxc/default.conf and modify it to create our user specific defaults.

Be aware that this file only contains the default configuration used during creation of a new container. Once created, every container has its own configuration file in its own folder within the container storage path.
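
With the storage path configured above that is:
/var/lib/lxc/<user>/<container-name>/config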

We now add the following lines for configuration of the uid and gid mapping. The values used here must match the values defined in /etc/subuid and /etc/subgid for the respective user:
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

Additionally we change the AppArmor profile as follows:
lxc.apparmor.profile = unconfined
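
Putting it together, my ~/.config/lxc/default.conf roughly looks like this (the lxc.net.0 lines are an assumption matching the lxcbr0 bridge used further below; check your own copy of the file):
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.apparmor.profile = unconfined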

creating a new container

For creation of a new container we use the lxc-create command. lxc-create -t download -n <container-name> is a good start and provides an interactive way to create a new container. In my case I want to install the 3.18 release for the amd64 architecture of the alpine distribution.

All of this can also be passed to the command directly:
lxc-create -t download -n <container-name> -- --dist alpine --release 3.18 --arch amd64
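
After creation the new container should show up in a stopped state when listing the containers:
lxc-ls --fancy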

network

With all this set up the container still fails to start because it cannot set up its network. By default unprivileged containers cannot use any networking. We need to create an additional configuration file /etc/lxc/lxc-usernet as root. This file must have an entry allowing the user to add veth devices - up to 16 in my case - to the lxcbr0 bridge:
<user> veth lxcbr0 16
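
This assumes the lxcbr0 bridge actually exists on the host (it is usually provided by the lxc-net service shipped with the lxc package); a quick check:
ip link show lxcbr0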

using the container

To start an already existing container the command lxc-start -n <container-name> is used. With everything described above in place, the container starts without issues. The tmpfs: Bad value for 'uid' error messages that appeared previously were related to the Devuan container image I used; they disappeared after I switched to the Alpine Linux image.

Once the container is running we can start a process in the container with lxc-attach. For example lxc-attach -n <container-name> bash will provide shell access to the container.

For stopping the container again we use lxc-stop -n <container-name>.
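
At any point lxc-info shows the current state of a container:
lxc-info -n <container-name>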

In case the container isn’t running we can directly run a command in it with lxc-execute. For example lxc-execute -n <container-name> bash will provide shell access as before.

conclusion

I am not done with this topic yet, but for now this is a good foundation and a playground to further investigate whether I want to run containers on the server. I might update this post at some point if I have any points worth mentioning.

Especially the network setup was barely touched here and could be a topic for a separate post at some point. For now that’s it …

I ordered a new virtual server

After running my own server for some years in the past, I moved on to using an Uberspace. Uberspace as a hoster is great and offers a lot of flexibility, but it still has its limitations (for example it is not possible to run a VPN server). I am planning to run a VPN server and also like the admin work overall. So I ordered a small virtual server (VPS 200 G10s) from Netcup again.

Why did I choose Netcup?

The answer to that is easy. It’s cheap and I already had some good experience with them. For 3.25 Euro/month I get a VPS with 2 vCPUs, 2 GB RAM and a 40 GB SSD. For the virtualization they use KVM, and via the web interface I can mount any ISO file and install whatever OS I like. The drawback of their cheap virtual servers is the rather low guaranteed availability of just 99.6%, but for my use case that’s not an issue.

Which OS did I install?

After initially even considering Gentoo Linux, I quickly dropped that idea due to the limitations of the virtual hardware. I then - after some research - did a kind of test installation of Alpine Linux. It was the first time I ever used Alpine and I must say that I am pretty impressed with it, especially due to its low footprint. Anyway, I decided to stick with something I know better and trust a bit more in the long term.

I installed Devuan Linux - Devuan Daedalus 5.0. For those who are not aware, Devuan is a fork of Debian Linux that does not use systemd, so I can count on the great selection of software available for Debian while still using the good old sysvinit.

As always I did a minimal installation and added what is required for my needs from there. My base installation with Nginx, Certbot and unattended upgrades uses about 60 MB of RAM. A nice small footprint as well, even if it’s not as small as Alpine Linux (but that’s expected).

Further plans?

With Nginx and Certbot up and running I already moved the website onto the new server. Additionally I want to install the following components:

I am not in a hurry with all of this. For now I am using the Uberspace and my virtual server in parallel, and before installing any of those components I want to do some research on Linux Containers. So far I do not have experience with containers of any kind, and I am not a huge fan of Docker and its approach of isolating single applications. Linux Containers, on the other hand, are available on any Linux system, provide OS-level virtualization and do not rely on infrastructure controlled by a single company. So my idea is to isolate some parts of my installation into Linux Containers (probably using Devuan images as well). That way I should gain some flexibility when updating the main server OS and be able to move and update some parts independently.

In any case containers are an interesting topic and digging deeper into it will be worth it …