Welcome! In this post I'll provide a detailed outline of my Docker deployment setup and how I handle it using Traefik (with automatic Let's Encrypt SSL support), Watchtower (for automatically updating your Docker containers), Backblaze B2, and a custom backup script. This guide is intended for my future reference, but I hope it will be helpful to anyone else looking for advice on a setup.

I'd also love to improve my deployment and this guide! Leave a comment with suggestions if you have any. I'm sure there is a lot of stuff that can be improved.

For the sake of this guide I'll also set up a few example services at the end, which should give you a brief template for creating your own services in the future. I'll try to be as descriptive as possible to answer any potential questions, so be prepared - you're in for the long haul.

Prerequisites

In this guide I assume:

  • you're starting from a base Ubuntu Server 20.04.1 LTS install
  • you're aware of your public-facing IP
  • you've already configured your user and SSH key(s), and are able to SSH into the machine
  • you've already set up your networking & DNS (whether it be port forwarding to your home or a VPS with a dedicated IP)

In the test environment I made for this post I created a KVM VM under Proxmox and installed using the Ubuntu Server ISO - configuring my IPv4, initial user (zikeji), and SSH key (import from Github) during the wizard. I also disabled password authentication over SSH.

Package Install

Ensure your repositories and packages are up to date by running a simple update + upgrade.

sudo apt update && sudo apt upgrade

Now we'll install packages we'll need later.

sudo apt install apache2-utils backblaze-b2 apt-transport-https ca-certificates curl software-properties-common

Firewall (UFW)

Skip this step if you already have a firewall set up, have another preference, and so forth. For my basic deployment I know 3 facts: I need SSH, I need HTTP, and I need HTTPS. With UFW we can allow these fairly easily.

Let's enable our basic rules to cover our needs.

sudo ufw allow OpenSSH
sudo ufw allow http
sudo ufw allow https

Finally, we'll enable the firewall.

sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)

Installing Docker

Here is a quick overview for installing it on Ubuntu Server 20.04. First we'll add the Docker key and repository and then update apt.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
sudo apt update

Next we'll install Docker. Their install will handle everything else.

sudo apt install docker-ce docker-compose

Now add yourself to the docker group so you are able to run docker's commands without sudo.

sudo usermod -aG docker ${USER}

Be sure to reconnect to the server so the membership is properly applied.

Create our default proxy network. This network will be used by Traefik and our other stacks primarily. Rare use cases may require using the default network or other networks, but you likely won't.

docker network create proxy

At this point you should have a fully functioning Docker install. If you ran into any issues installing Docker please troubleshoot and resolve them before continuing.

Folder Structure

Let's setup our initial folder structure.

mkdir archive backups services
ls
archive  backups  services

Both archive and backups are used by our backup script (which we will create later), and services is where we will store our containers, their configs, and their data.

Setup Core Services

The "core services", as I like to call them, are Traefik and Watchtower. They're core elements and will likely be running on any Docker server you have containers on. You can naturally tweak things as needed; for example, if you won't be exposing any HTTP services you probably have no need for Traefik. Let's start by creating a directory to store them in.

mkdir ~/services/core
cd ~/services/core

Now we'll create our docker-compose.yml in ~/services/core. The template is below; be sure to update the relevant lines before saving it on your server. It's a Docker Compose YAML file - you can consult the Compose file reference for more information on it. I expect you have knowledge of YAML and some knowledge of compose files, but I'll explain various lines below to give you a better understanding of what it's doing.

version: '3.3'

services:
  traefik:
    image: traefik:latest # https://hub.docker.com/_/traefik
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entryPoints.http.address=:80"
      - "--entryPoints.https.address=:443"
      - "--certificatesResolvers.http.acme.email=you@example.com"
      - "--certificatesResolvers.http.acme.storage=/acmestore/acme.json"
      - "--certificatesResolvers.http.acme.httpChallenge.entryPoint=http"
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./acmestore:/acmestore
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.example.com`)"
      - "traefik.http.middlewares.traefik-auth.basicauth.users=zikeji:$$apr1$$Q1Wvst0h$$CsgEyaB4oP1fHWe4OHb/O."
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.example.com`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=http"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "com.centurylinklabs.watchtower.enable=true"
  watchtower:
    image: containrrr/watchtower # https://hub.docker.com/r/containrrr/watchtower
    command: --cleanup --label-enable
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

networks:
  default:
    external:
      name: proxy

Lines 6-14) The Traefik image takes command line arguments for its configuration. We expose the API since we will be protecting it with basic auth later on in the config, and we tell it to use its Docker provider. I then disable exposing containers by default (which means you have to explicitly set the traefik.enable=true label); this is up to personal preference, however. We define our entrypoints and then configure Let's Encrypt.
Line 12) Be sure to update the email on this line to reflect the email you want to use with Let's Encrypt.
Lines 15-17) Self-explanatory: we're defining the ports we want Docker to expose on the container.
Lines 18-20) First we're mounting the Docker socket to give the Traefik container control over the daemon, then we're making a bind mount within the folder to store our acme credentials and certificate info.
Line 21) Set the restart policy so it'll always be online unless you've explicitly shutdown the container.
Lines 22-35) These are the service labels. Labels are used by Traefik and Watchtower to control how they interact with the container. They are more or less readable, so I won't go into an in-depth explanation.
Lines 25 & 30) These are the Traefik router rules for HTTP and HTTPS; you'll want to change the domain at the end to reflect the actual domain you have pointed to your Docker server and want to reach the Traefik panel on.
Line 26) This is our basic auth middleware we're creating to secure our Traefik panel. You can use htpasswd -nb zikeji example to generate the line you'll put there. Be sure to escape every $ by adding a second $. In my example, zikeji:$apr1$Q1Wvst0h$CsgEyaB4oP1fHWe4OHb/O. becomes zikeji:$$apr1$$Q1Wvst0h$$CsgEyaB4oP1fHWe4OHb/O.
Line 38) These are the command line arguments for our Watchtower container. --cleanup tells it to remove old images once it updates the container, and --label-enable tells it to look for the com.centurylinklabs.watchtower.enable=true label instead of defaulting to true.
Lines 39-40) Again we mount the Docker socket, this time to the Watchtower container. This way the container can monitor your services for image updates and update as needed.
Lines 42-45) Here we define our "default" network to point to our network we created earlier. These lines will make a common appearance in any service you need to be proxied through Traefik.
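Since every $ in the htpasswd output has to be doubled before it goes into the compose file, you can let sed do the escaping for you. A small sketch, using the example hash from the explanation of line 26 above (pipe your real htpasswd -nb output through the same sed instead of the echo):

```shell
# Escape each $ as $$ so docker-compose doesn't treat it as a variable,
# e.g.: htpasswd -nb zikeji example | sed -e 's/\$/\$\$/g'
echo 'zikeji:$apr1$Q1Wvst0h$CsgEyaB4oP1fHWe4OHb/O.' | sed -e 's/\$/\$\$/g'
# prints: zikeji:$$apr1$$Q1Wvst0h$$CsgEyaB4oP1fHWe4OHb/O.
```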

Create the docker-compose.yml

So assuming you updated the relevant lines (lines 12, 25, 26, and 30) you should be ready to create the docker-compose.yml and bring up your first services. Create ~/services/core/docker-compose.yml using your preferred terminal editor (or scp, wget, curl, etc.) and ensure it has your modified contents of the above template.

Now we'll bring up the compose file:

cd ~/services/core
docker-compose up -d

Assuming you didn't make any formatting errors or mistakes, Docker should bring up those services. You can always take a peek at the container logs using docker-compose logs. Give it a minute to think, then browse to the domain you set up (so, traefik.your-domain.com). If your DNS is set up properly, you should get a basic authentication prompt (and if you kept the default user, the username will be zikeji with the password example). Once in, you'll see the Traefik panel, which provides information about Traefik.

Additional Core Service Configurations & An Edge Case

Both Traefik and Watchtower are powerful in their own right and provide additional options you can set in the config. For example, I configure Watchtower to notify me via Slack; it has other notification options as well. These are configured with either additional command line args or environment variables.

One edge case to keep in mind is private images and private registries. If you log in to Docker Hub or a private registry, you need to pass those credentials to Watchtower in order to auto update those images. With the default credential storage in Docker you can mount it read-only to the Watchtower container. You can accomplish this by adding a new volume to watchtower in the config (under line 40):

    - ~/.docker/config.json:/config.json:ro

You can update the container by simply running docker-compose up -d from the folder again. You'll need to look up specific documentation if you're using a credential helper or other approach.
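With that mount added, the watchtower service block from the core compose file would look like this (same image and command as above; only the last volume line is new):

```yaml
  watchtower:
    image: containrrr/watchtower # https://hub.docker.com/r/containrrr/watchtower
    command: --cleanup --label-enable
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/config.json:/config.json:ro
```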

Up Next: Backups!

At this point we've got our core infrastructure running, can reach Traefik, Watchtower is churning away in the background, and we're ready to do more. Before we move on to the example services, we first want to set up our backup script and a cronjob. Below is a template of the script I use - it is set up for Backblaze B2, so you'll need to fill in the config section with your keys and bucket name for it to function properly.

#!/bin/bash
#################################################################
#                                                               #
# Takes folders in the services directory (all docker compose), #
# stops them, archives them, and then brings them back up,      #
# finally backing them up to the cloud.                         #
#                                                               #
#################################################################

# CONFIG

KEY_ID=
APPLICATION_KEY=
BUCKET=blog-example

# END CONFIG

center() {
  termwidth="80"
  padding="$(printf '%0.1s' ={1..500})"
  printf '%*.*s %s %*.*s\n' 0 "$(((termwidth-2-${#1})/2))" "$padding" "$1" 0 "$(((termwidth-1-${#1})/2))" "$padding"
}
echo "Beginning Docker backup @ `date`"
echo
BACKUPDIR="$HOME/backups/`date +"%Y-%m-%d"`"
mkdir $BACKUPDIR
echo Saving backed up archives to $BACKUPDIR

for d in $HOME/services/*/; do
    cd $d
    SERVICE=${PWD##*/}

    echo
    echo `center "$SERVICE"`
    echo

    if [ -f "_backup_sql.sh" ]; then
        echo
        echo Found extra sql backup script, running.
        ./_backup_sql.sh
    fi

    if [ -f "_disable_docker_stop" ]; then
        echo "_disable_docker_stop present, won't stop"
    else
        echo Begin processing $SERVICE, bringing down gracefully.
        docker-compose stop
    fi

    echo
    echo Archiving directory...
    echo

    cd ../
    sudo tar -czf "$BACKUPDIR/$SERVICE.tar.gz" "$SERVICE"
    sudo chown $USER:$USER "$BACKUPDIR/$SERVICE.tar.gz"

    cd $d
    if [ -f "_disable_docker_stop" ]; then
        echo
        echo "_disable_docker_stop don't need to bring back up"
    else
        echo
        echo Bring back up $SERVICE
        echo

        cd $d
        docker-compose up -d
    fi

    if [ -f "_backup_sql.sh" ]; then
            rm backup.sql
    fi
    echo
    echo `center "FINISHED"`
done

echo
echo Archiving old files, nuking older files...
find $HOME/backups -mindepth 1 -maxdepth 1 -mtime +3 -exec mv {} $HOME/archive \;
find $HOME/archive -mindepth 1 -maxdepth 1 -mtime +10 -exec rm -rf {} \;
echo
echo Finished, syncing backup directory.
echo
backblaze-b2 authorize_account $KEY_ID $APPLICATION_KEY
backblaze-b2 sync --noProgress --compareVersions none --delete --replaceNewer $HOME/backups b2://$BUCKET
echo
echo "Finished @ `date`"

The script itself is fairly readable (I hope) and easy to modify. You could easily replace the backblaze-b2 command with rsync, scp, or s3. The script loops over each folder in ~/services and archives them, moving the archives to the backups folder for that day. At the end (before the sync) we use the find command to move folders more than 3 days old out of ~/backups into ~/archive, and to delete backups more than 10 days old from the archive folder. Essentially it provides 3 days of rolling backups on the cloud and 10 days rolling locally.
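If you'd like to sanity-check that rotation logic before trusting it with real backups, here's a throwaway demonstration on temp directories (I use -mindepth 1 so find skips the parent directory itself; the filenames are made up for the example):

```shell
# Simulate the rotation: files older than 3 days move from backups/ to archive/
tmp=$(mktemp -d)
mkdir -p "$tmp/backups" "$tmp/archive"
touch -d "5 days ago" "$tmp/backups/old.tar.gz"  # stale, should be archived
touch "$tmp/backups/new.tar.gz"                  # fresh, should stay put
find "$tmp/backups" -mindepth 1 -mtime +3 -exec mv {} "$tmp/archive" \;
ls "$tmp/archive"   # old.tar.gz
ls "$tmp/backups"   # new.tar.gz
rm -rf "$tmp"
```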

Keep in mind it utilizes sudo, so you should enable passwordless sudo in your sudoers file. One way to accomplish this would be to update /etc/sudoers and add NOPASSWD: before ALL on the %sudo ALL=(ALL:ALL) ALL line.
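Assuming the default Ubuntu %sudo group entry, the edited line would look like the following (edit with sudo visudo rather than opening the file directly, so a syntax error can't lock you out of sudo):

```
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL
```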

It also uses filenames within the service folder to affect its behavior. For example, if we don't want the script to stop a service before archiving it (be sure to always stop services with databases, though - you don't want unusable backups), you would just touch _disable_docker_stop in the service directory. Or create a custom script that backs up SQL to backup.sql and name it _backup_sql.sh. These are just behaviors I implemented for personal use, but they should serve as examples of how you could customize the script.

Create the script at ~/backup_services.sh and make it executable with chmod +x ~/backup_services.sh. Test it to ensure it runs properly by simply executing it: ~/backup_services.sh. You should get output for each stage of the process and see the folder in ~/backups afterwards. Assuming it works without issue, it's time to set up our cronjob!

Cronjob Setup

I assume you know how to make cronjobs, but if not, the cronjob I use is 0 7 * * * /home/user/backup_services.sh. Place it in your user's crontab using crontab -e. This runs the script daily at 7AM. You can also set the MAILTO variable at the top of your crontab to have cron automatically email the results to the specified address, but you'll want to set up SSMTP or something similar to ensure email deliverability.
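For reference, a minimal crontab for this setup might look like the following (the MAILTO address is a placeholder, and the script path assumes your user's home directory):

```
MAILTO=you@example.com
0 7 * * * /home/user/backup_services.sh
```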

Restoring a Backup Archive or Migrating a Service

Restoring the archives is fairly simple. Thanks to the simplicity of Docker and the versatility of bind mounts, everything is contained in that archive (unless otherwise configured). If you wanted to restore an older version of the core service, you would bring it down, remove the directory, extract the archive, and bring it back up:

cd ~/services/core
docker-compose down
cd ..
rm -rf core
sudo tar --same-owner -xvf core.tar.gz
cd core
docker-compose up -d

This process works pretty fluidly across the board with many different containers and setups. You can migrate from one host to another by bringing the service down on the old host, moving the archive to the new host, updating the DNS records, decompressing the archive, and bringing it up on the new host. This entire setup has made my life significantly easier in terms of my self hosted projects.

Quick Review

At this point you should have the core services running, Traefik should be reachable on the domain you set up, your backup script should execute properly, and your cronjob should execute at the set time (and email you the results if you set that up in the crontab). All that's left, really, is to add new services. The only real nuance is the Traefik labels and the Watchtower label; the rest you can usually copy straight from the docker-compose.yml example of whichever Docker image you're setting up.

If you ran into any issues it probably isn't wise to continue. Try to work them out and resolve them, or leave a comment and I can try and help.

Service Examples

I provide 3 examples below. For the most part they're provided to cover common questions I had when first getting into Docker and Traefik.

Static Site using NGINX

In this example we'll create a compose file to run a basic NGINX web server, exposing two domains (www and non-www), and redirecting from non-www to www.

version: '3.3'

services:
  nginx:
    image: nginx:alpine # https://hub.docker.com/_/nginx
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.nginx.loadbalancer.server.port=80"
      - "traefik.http.routers.nginx.entrypoints=http"
      - "traefik.http.routers.nginx.rule=Host(`example.com`) || Host(`www.example.com`)"
      - "traefik.http.middlewares.nginx-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.nginx-redirectregex.redirectregex.regex=^https?://example.com/(.*)"
      - "traefik.http.middlewares.nginx-redirectregex.redirectregex.replacement=https://www.example.com/$${1}"
      - "traefik.http.routers.nginx.middlewares=nginx-https-redirect"
      - "traefik.http.routers.nginx-secure.entrypoints=https"
      - "traefik.http.routers.nginx-secure.rule=Host(`example.com`) || Host(`www.example.com`)"
      - "traefik.http.routers.nginx-secure.tls=true"
      - "traefik.http.routers.nginx-secure.tls.certresolver=http"
      - "traefik.http.routers.nginx-secure.middlewares=nginx-redirectregex"
      - "com.centurylinklabs.watchtower.enable=true"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: unless-stopped

networks:
  default:
    external:
      name: proxy

Lines 6-20) More labels!
Lines 10 & 16) Our host rules. Notice how we can use || to add a second host. You'll want to replace the hosts with your test domains and have those DNS records set up.
Lines 11-14 & 19) These are the two redirect middlewares we're creating. Line 11 defines "nginx-https-redirect", which redirects HTTP to HTTPS, and lines 12-13 define "nginx-redirectregex" with its regex rule and replacement, which redirects non-www to www. On line 14 we assign the HTTPS redirect to the HTTP router, and on line 19 we assign the regex redirect to the HTTPS router. Be sure to update the regex and replacement to match your domain.
Lines 21-22) We're creating a bind mount to the local ./html directory to expose it on the root of the NGINX web server.
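If you want to sanity-check the redirect regex before deploying, you can approximate it with sed (Traefik uses Go's regexp syntax, which is close enough to sed -E for this pattern; I've escaped the dot here, which the label leaves unescaped - harmless either way):

```shell
# Apply the non-www -> www rewrite to a sample URL
echo 'https://example.com/blog/post-1' | sed -E 's#^https?://example\.com/(.*)#https://www.example.com/\1#'
# prints: https://www.example.com/blog/post-1
```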

We'll make a "nginx" service using mkdir ~/services/nginx and create our docker-compose.yml in there. Then create the html directory using mkdir ~/services/nginx/html and create your own index.html in there. For the purpose of this demonstration I did echo "Hello World" > ~/services/nginx/html/index.html. Once you've done all that and have your file setup, you can bring it up by navigating to ~/services/nginx and running docker-compose up -d.

If everything was configured properly you should see your "Hello World" html on the configured domain (in this example www.example.com).

This basic configuration is a great starter for any sort of static HTML you need to host and an example of how to do a redirect. In my own use cases I name the services after my domains so the folder would be www.example.com and my docker-compose.yml would have the service as www_example_com.

Ghost Blog

This configuration will serve as an example of running two containers in one Docker compose file, as well as internal hostnames, a few environment variables, and data directories with more content (as well as the _backup_sql.sh file).

Starting off we have the example configuration:

version: '3.3'

services:
  ghost:
    image: ghost:alpine # https://hub.docker.com/_/ghost
    restart: unless-stopped
    environment: # https://ghost.org/docs/concepts/config/#running-ghost-with-config-env-variables
      url: https://blog.example.com
      server__port: 2368
      server__host: 0.0.0.0
      database__client: mysql
      database__connection__host: ghost_db
      database__connection__user: root
      database__connection__password: example
      database__connection__database: ghost
      mail__transport: SMTP
      mail__from: 'Example''s Blog '
      mail__options__host: 'smtp.gmail.com'
      mail__options__port: 587
      mail__options__auth__user: '[email protected]'
      mail__options__auth__pass: 'fake password'
    volumes:
      - ./content:/var/lib/ghost/content
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.ghost.loadbalancer.server.port=2368"
      - "traefik.http.routers.ghost.entrypoints=http"
      - "traefik.http.routers.ghost.rule=Host(`blog.example.com`)"
      - "traefik.http.middlewares.ghost-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.ghost.middlewares=ghost-https-redirect"
      - "traefik.http.routers.ghost-secure.entrypoints=https"
      - "traefik.http.routers.ghost-secure.rule=Host(`blog.example.com`)"
      - "traefik.http.routers.ghost-secure.tls=true"
      - "traefik.http.routers.ghost-secure.tls.certresolver=http"
      - "com.centurylinklabs.watchtower.enable=true"
    depends_on:
      - ghost_db
  ghost_db:
    image: mysql:5.7 # https://hub.docker.com/_/mysql
    restart: unless-stopped
    volumes:
      - ./db:/var/lib/mysql
    labels:
      - "traefik.enable=false"
      - "com.centurylinklabs.watchtower.enable=true"
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: ghost

networks:
  default:
    external:
      name: proxy

Lines 7-21) Our environment variables for the Ghost container. You can find more info about them here but they are mostly self explanatory.
Line 12) This particular line is the hostname for our database - this matches the container service name on line 38. So if you adjust either, be sure to make sure they match.
Lines 14, 15, 47, & 48) Our database name and password. You can adjust the password by changing line 47, and adjust the default database name using line 48. Make sure to update lines 14 and 15 to reflect this change. You could also use a user instead of root, you can refer to the environment variables section of the DockerHub page here.
Lines 22-23) We're creating a local bind mount for our Ghost content directory.
Lines 24-35) Our labels - you should be familiar with most of these by now so I won't reiterate.
Lines 36-37) Our dependency: since ghost depends on ghost_db, the database container will always be brought up before ghost is started.
Lines 41-42) A local bind mount to our mysql data dir to store our MySQL database data.
Lines 43-45) Our labels for our MySQL container. We're explicitly disabling Traefik and enabling Watchtower.
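As an aside, Ghost's double-underscore environment variables map onto its nested JSON config. For example, database__connection__host: ghost_db is equivalent to this fragment of a config.production.json:

```json
{
  "database": {
    "connection": {
      "host": "ghost_db"
    }
  }
}
```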

We'll make a "ghost" service using mkdir ~/services/ghost and create our docker-compose.yml in there. The folders we specified in the config (./content & ./db) will be created automatically. You can bring it up by navigating to ~/services/ghost and running docker-compose up -d.

Assuming everything was configured properly and the DNS records exist, give it a few minutes for first run and then you should be able to navigate to blog.example.com and see the Ghost blog! You can setup Ghost at blog.example.com/ghost.

Now assuming you're utilizing some variant of my backup script and would prefer to backup SQL dumps as well as the data dir, you can create a _backup_sql.sh in ~/services/ghost with the following contents:

#!/bin/bash
docker exec ghost_ghost_db_1 /usr/bin/mysqldump -uroot -pexample ghost > backup.sql

That script assumes the generated container name is ghost_ghost_db_1, so be sure to double check the output of docker-compose up -d to ensure you're using the correct container name. Additionally, adjust the username, password, and database in the script to match whatever you've configured. Make the file executable with chmod +x ./_backup_sql.sh and do a test run. It should output backup.sql into the directory with the contents of your database.

Next time your backup script runs it will recognize the _backup_sql.sh and execute it before compressing the directory. Now your ghost.tar.gz backup will contain the backup.sql file as well.

ZNC IRC Bouncer w/ Web IRC Client

In this example we'll create a compose file to run the popular ZNC IRC bouncer. This example will cover a few extra topics, like extracting the Let's Encrypt certs from Traefik, exposing container ports, and running on a path prefix. If you're using a service like Cloudflare, be sure to disable proxy mode so you are able to connect to the bouncer without issue. I won't cover configuration of ZNC past the basics; they have plentiful documentation on that topic on their wiki. We use the linuxserver/znc image as it is simpler to set up than the official image.

version: '3.3'

services:
  znc:
    image: linuxserver/znc # https://hub.docker.com/r/linuxserver/znc
    restart: unless-stopped
    ports:
      - "6667:6667"
      - "6697:6697"
    volumes:
      - ./znc:/config
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.znc.loadbalancer.server.port=6501"
      - "traefik.http.routers.znc.entrypoints=http"
      - "traefik.http.routers.znc.rule=Host(`znc.example.com`) && PathPrefix(`/znc`)"
      - "traefik.http.middlewares.znc-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.znc.middlewares=znc-https-redirect"
      - "traefik.http.routers.znc-secure.entrypoints=https"
      - "traefik.http.routers.znc-secure.rule=Host(`znc.example.com`) && PathPrefix(`/znc`)"
      - "traefik.http.routers.znc-secure.tls=true"
      - "traefik.http.routers.znc-secure.tls.certresolver=http"
      - "com.centurylinklabs.watchtower.enable=true"
  thelounge:
    image: thelounge/thelounge:latest # https://hub.docker.com/r/thelounge/thelounge
    restart: unless-stopped
    volumes:
      - ./thelounge:/var/opt/thelounge
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.thelounge.loadbalancer.server.port=9000"
      - "traefik.http.routers.thelounge.entrypoints=http"
      - "traefik.http.routers.thelounge.rule=Host(`znc.example.com`)"
      - "traefik.http.middlewares.thelounge-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.thelounge.middlewares=thelounge-https-redirect"
      - "traefik.http.routers.thelounge-secure.entrypoints=https"
      - "traefik.http.routers.thelounge-secure.rule=Host(`znc.example.com`)"
      - "traefik.http.routers.thelounge-secure.tls=true"
      - "traefik.http.routers.thelounge-secure.tls.certresolver=http"
      - "com.centurylinklabs.watchtower.enable=true"

networks:
  default:
    external:
      name: proxy

Lines 7-9) Our compose file ports configuration. We'll want to expose those ports in our firewall as well using sudo ufw allow 6667 && sudo ufw allow 6697. These will be used by your IRC client to connect to ZNC. If you only ever plan to connect using the web client you can drop these lines as you won't need to expose any ports.
Lines 10-11) Our bind mount for ZNC's data dir, which will also hold the znc.pem file used for the bouncer's SSL IRC connections. More on znc.pem later.
Lines 12-23) Our labels - these should all be self-explanatory to you at this point. Note the addition of the PathPrefix rule to the Traefik rule labels on lines 16 and 20.
Lines 27-28) Our bind mounts for TheLounge's data directory.
Lines 29-40) Our Traefik labels and Watchtower label. Nothing new here.

We could add a depends_on but realistically we won't need it.

We'll make a "znc" service using mkdir ~/services/znc and create our docker-compose.yml in there. Be sure to create the znc.pem file using mkdir ~/services/znc/znc && touch ~/services/znc/znc/znc.pem before first launch. You can bring it up by navigating to ~/services/znc and running docker-compose up -d.

We aren't ready yet though! Be sure to give it a few minutes to finish (or check the logs using docker-compose logs -f). Once ready, let's stop the services using docker-compose stop. Since we're using a path prefix for our ZNC web interface, we'll need to update the config file manually using sudo nano ~/services/znc/znc/configs/znc.conf. Near the top, in the <Listener l> section, you'll want to add URIPrefix = /znc under the SSL = false line.

Now we can bring it back up, but before we do that let's add a bash script to extract the SSL certificate into the znc.pem file. Let's start by installing jq with sudo apt install jq. Now create a ~/services/znc/update_pem.sh with the following contents:

#!/bin/bash
sudo cat ~/services/core/acmestore/acme.json | jq -r '.http.Certificates[] | select(.domain.main == "znc.example.com") | .key' | base64 --decode | sudo tee ~/services/znc/znc/znc.pem
echo | sudo tee -a ~/services/znc/znc/znc.pem
sudo cat ~/services/core/acmestore/acme.json | jq -r '.http.Certificates[] | select(.domain.main == "znc.example.com") | .certificate' | base64 --decode | sudo tee -a ~/services/znc/znc/znc.pem
docker-compose restart

Be sure to update the script with the proper locations and domain names where relevant. Make it executable using chmod +x ~/services/znc/update_pem.sh. Now execute it. You shouldn't see any error output, and cat ~/services/znc/znc/znc.pem should output your Let's Encrypt private key and certificate. The script also restarted the container for you. You'll want to run this script whenever your Let's Encrypt certificate renews - you can accomplish this by setting a monthly cronjob in your crontab.
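For example, a monthly crontab entry like the one below would do the trick (the schedule is arbitrary and the path assumes your user's home directory; note the cd, since the script's docker-compose restart relies on being run from the service directory):

```
0 4 1 * * cd /home/user/services/znc && ./update_pem.sh
```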

You should be able to access the ZNC config panel at znc.example.com/znc and TheLounge at znc.example.com. For the purpose of this example you're done, but you'll want to configure ZNC if you intend to keep it deployed.

Start by logging into the ZNC config panel - default user & password is admin / admin.

If you removed the ports from the compose file and don't intend to expose ZNC publicly, you can skip this step. Navigate to Web Config > Global Settings and add 2 ports: first 6667 with SSL and HTTP unchecked, then 6697 with SSL checked and HTTP unchecked.

While here, go to Web Config > Your Settings and update your password. Let's create an IRC network we want to stay connected to by going to Web Config > Your Settings and clicking "Add" on the Networks table. I'll name the network "freenode" and set the username and other fields to the values I want. Now I'll click "Add" under "Servers of this IRC network" and enter chat.freenode.net, port 6697, and leave SSL checked. Now I'll hit "Save and Continue" at the bottom and it's ready to go.

Keep in mind there is a lot more you can do with ZNC, its settings, and individual networks. It's extremely powerful; for the purpose of this guide I'm just doing the bare minimum.

Now restart the container using docker-compose restart znc. An external IRC client can now connect at znc.example.com on those ports (unless you chose not to allow this).

If you configured it to allow external connections, go ahead and test them. I'll be using Hexchat, so I'll create a network znc and set the server to znc.example.com/6697. Now uncheck "use global user information" and set the username to admin/freenode and the password to whatever you updated it to earlier. Close the dialog and connect. If all went well you should be able to connect externally with a valid SSL certificate.

Now for TheLounge let's configure our user. You can do so using docker exec --user node -it znc_thelounge_1 thelounge add admin (keep in mind znc_thelounge_1 may be different - review the output of docker-compose ps to figure out the proper container name). Now you should be able to navigate to znc.example.com and login with that user.

Now you can create your first network - change the server to znc and the port to 6501. Uncheck "Use Secure Connection" (it's a local connection and 6501 isn't SSL). Set the password to your password you configured earlier in ZNC and then set the username to "admin/freenode". Now click connect! If all went well you are now connected to your ZNC network "freenode" that we created earlier. If you opted to allow external connections, or want to use SSL internally, you can also connect over 6697 and leave "Use Secure Connection" checked.

While this entire process was definitely lengthy, convoluted, and probably unwieldy - it's a start. There's a lot more to go and probably random issues to troubleshoot that you may run into. But it's a start on showing you how flexible and powerful Docker and Docker Compose can be!

Summary

This is all I have for now! Hopefully this guide was insightful in getting you started with Docker and Docker Compose and gave you the resources and knowledge to run your own services. Feel free to drop a comment if you have any feedback or questions.