haxala1r


Containers! Docker!

Oct 25, 2024 - 6 minute read

My Setup

In this ‘series’, I’ll be walking you through how I host everything on this server.

On top of my blog, I’m currently running a Gogs instance.

When first creating this website, I just had my blog. I generated it using Hugo, a static site generator. Hugo allowed me to focus on writing whatever I wanted in Markdown; it would take care of converting my writing into HTML and CSS.
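
In case you haven’t seen it, the Hugo workflow is roughly this (the site and post names here are just examples):

$ hugo new site myblog        # scaffold a new site
$ hugo new posts/my-post.md   # create a post, written in Markdown
$ hugo                        # build the whole site into ./public as HTML/CSS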

I had a small issue with how I wrote and deployed my code, though: whenever I made a small change to the page, I had to manually rebuild it, upload the updated version to my server, and put it in the web directory.

This is a cumbersome process. The whole point of using Hugo is to focus on the writing, so having to zip and reupload for every typo is… not great. I wanted to be able to do a simple git push and not worry about the rest.
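
For reference, the manual loop looked roughly like this (the host and path are stand-ins):

$ hugo                                          # rebuild the site locally
$ rsync -av public/ me@myserver:/var/www/html/  # reupload it, every single time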

The “manual” approach also depends on me having already installed all the necessary software. If you’re running your own dedicated server, that’s probably okay, since you only have to set it up once. But I’m running this on a VPS that I’m not sure I’ll keep forever, so the ability to reproduce this exact setup within minutes actually matters.

After reading a bit on this topic, I decided I would use Docker for this. Podman would work just as nicely (any containerization software would, really), but I decided on Docker because it’s been the standard for a while now.

Motivation

Basically, I’m already running a web server. Why shouldn’t I also host several other services for friends and family while I’m at it? Why shouldn’t I make the entire setup reproducible?

Here are some of the services I wanted to self-host:

  • Web server: obviously, who doesn’t want a website?
  • Some git server: having my own place to show off all the things I’ve done is certainly really cool. For this, something like Gitea would normally be great. I went with Gogs instead, because it is far more lightweight.
  • Wireguard: a free VPN along with the website? Sign me up.
  • CI/CD: automatic testing and releases of my software are cool, and also incredibly useful.

Of course, there are always more things I could be self-hosting, so it makes sense to automate the setup. That’s where Docker comes in.

Basics of Docker

Before we can get to the exciting stuff, we need to go over what Docker is and how to use it. Essentially, Docker is a container engine: it lets you build and run applications in a containerized environment. Containers are useful because they provide security, easy setup and, most importantly, reproducibility.

I’m not going to spend any more time explaining what containers are and why they’re good, that’s been done to death already. Right now, what matters is the actual setup, so let’s get on with it.

If you’ve used Podman before, you’ll feel right at home: most commands are identical between the two, making each a near drop-in replacement for the other. Some things, like network setup, tend to be a little different, but that won’t matter too much right now.

In case you’re unfamiliar with Docker, here are some basic commands (run these either as root, or as a user in the docker group):

# Search for container images (on docker.io unless you configure otherwise)
$ docker search <image name>

# Download (pull) an image from remote repo
$ docker pull <image name>

# list the images you have pulled.
$ docker images

# run a container.
$ docker run <image name>

# run a container, but with a LOT of flags. I just listed the most useful ones.
$ docker run
    -i # interactive, so you can e.g. run a shell in the container
    -t # allocates a tty. useful with -i so that shell completion etc. can work
    -d # opposite of -i, detach and run in the background
    -p <HOST PORT>:<CONTAINER PORT> # port forwarding (long form: --publish), for when you need a server.
    -v <HOST DIR>:<CONT DIR>:<FLAGS> # give the container access to some directory
    <image name>
    <command> # ... want a shell?

# list running containers. add -a to list ALL containers, running or stopped.
$ docker ps [-a]

# stop a running container.
$ docker stop <id>

# stopped containers don't automatically get removed. This command removes it.
$ docker rm <id>
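
To put a few of those flags together, here’s roughly what poking around in a throwaway container looks like (the images are just examples):

# get an interactive shell in a throwaway alpine container
$ docker run -it --rm alpine sh

# or run nginx detached, with host port 8080 forwarded to the container's port 80
$ docker run -d -p 8080:80 nginx
$ curl -I http://localhost:8080   # the nginx welcome page should answer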

Compose is nice.

Docker Compose is a nice way to essentially “group together” some containers and ship them in an easy way.

Usually, on a server, applications aren’t totally separate from each other:

  • For my own use case, I want my git server (e.g. Gogs) to automatically build and update my website whenever I push to its git repository. That means my git server and web server can’t be totally separate; there’s some amount of relation between them.

At the same time… I don’t really want to set up both containers, then their volumes, their ports, etc. by hand. Sure, I could stick it all in a shell script, but that’s hardly elegant.

Docker Compose helps with this: you create a compose.yaml file and define containers, ports, volumes and secrets all inside it. Then, when you run docker compose up, this configuration is read and everything is set up the way you described.

For example, you might have this for starting a web server on port 3000:

services:
    web:
        image: "nginx"
        ports:
            - 3000:80
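
Bringing it up and checking that it works is just as short:

$ docker compose up -d            # reads compose.yaml, starts everything in the background
$ curl -I http://localhost:3000   # nginx should answer on the forwarded port
$ docker compose down             # stops and removes the containers again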

Or you could even have two servers, like a Gogs instance and a web server:

services:
    web:
        image: "nginx"
        ports:
            - 80:80
    gogs-haha:
        image: "gogs/gogs"
        ports:
            - 3000:3000
            - 3022:22

See how we got multiple services running, very easily? Isn’t that just really nice? You can just keep adding stuff. And Compose even sets up DNS between these containers! That means, for example, you can have your web server act as a reverse proxy by pointing it at http://gogs-haha:3000 in the above config. It just works!
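
To make that concrete, here’s a minimal sketch of the relevant nginx fragment (the domain is hypothetical, and you’d mount a config like this into the web container):

server {
    listen 80;
    server_name git.example.com;             # hypothetical subdomain for the git server

    location / {
        proxy_pass http://gogs-haha:3000;    # the service name, resolved by Compose DNS
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}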

Of course, you can add volumes to tie it all together, and ‘secrets’ to manage sensitive files (like keys and certificates). I won’t go into those here, though.

Automation

Yeah, okay. I like automating, I know I talked big game up in the introduction. It’s obvious. I just love it when things do themselves instead of me having to do them. I remember wanting to make a robot to do my homework when I was a kid. I still want to do that. Of course, the fact that it would be easier to just do my homework is irrelevant. It was always irrelevant. It’s not about doing less work, it’s about the principle.

Anyway, the point is that I want to automate the process of deploying from a git repository directly to my blog. Thankfully, Gitea provides a nice feature for Continuous Integration (CI): Gitea Actions. It’s similar to GitHub Actions.

Essentially, you install Gitea normally, then register an act_runner to it with a registration token you get from the instance. I use Docker Compose and the gitea/act_runner image to automate this step as well, as sketched below.
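
As a rough sketch (the URL and token here are placeholders; check the act_runner docs for the exact variables), the runner service looks something like this:

services:
    runner:
        image: "gitea/act_runner"
        environment:
            GITEA_INSTANCE_URL: "https://git.example.com"       # placeholder: your instance URL
            GITEA_RUNNER_REGISTRATION_TOKEN: "<token from the UI>"
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock         # so jobs can run in containers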

Once a runner is registered, you can write a workflow file, which tells the runner what it should do once you push some changes. In my case, for example, that means building the website with Hugo, then putting the generated files into the web directory.
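
A minimal workflow for that could look something like this (the branch, label and paths are assumptions about my setup, not something Gitea mandates):

# .gitea/workflows/deploy.yaml
name: deploy-blog
on:
    push:
        branches: [main]
jobs:
    build:
        runs-on: ubuntu-latest                 # matches a label the runner registered with
        steps:
            - uses: actions/checkout@v4
            - name: build the site
              run: hugo --minify               # assumes hugo is available on the runner
            - name: copy into the web directory
              run: cp -r public/* /srv/www/    # /srv/www is a stand-in for my web root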

Conclusion

Docker and Docker Compose are a lot more useful than my younger self would have given them credit for. It was really fun learning about them, and hopefully I’ll use this knowledge in the future.

Thank you for reading.