
Self-Host Bitwarden Lite on Debian 13

Published: 5 March 2026  •  selfhost

In this blog post, I share a production-ready approach to self-hosting Bitwarden Lite on Debian 13 using Docker. The guide covers everything from initial provisioning with cloud-init to secure TLS setup and automated backups.

I'll walk you through a cloud-init script that installs everything, so to replicate this setup you need a VPS provider that supports cloud-init (most do). The script is written for Debian but should be adaptable to other distributions with minor tweaks.

The guide follows the Bitwarden Lite installation documentation.

Prerequisites

If you want to follow along, make sure you have the following ready.

  1. A VPS provider that supports cloud-init (e.g., DigitalOcean, Linode, Hetzner, AWS EC2, etc.)
  2. A domain/subdomain (example: vault.example.com)
  3. Bitwarden installation credentials from Bitwarden Host
    • BW_INSTALLATION_ID
    • BW_INSTALLATION_KEY
  4. S3-compatible bucket and credentials (AWS S3, MinIO, Scaleway Object Storage, Backblaze B2 S3 API, etc.) for backups

Cloud-init

Here's a breakdown of the cloud-init script that sets up the entire stack. You can download the full script from this GitHub repository.
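How you pass the script to the server depends on the provider. As a hypothetical example, with Hetzner's hcloud CLI it could look like this (server name, type, and image are assumptions; any provider that accepts cloud-init user data works similarly, often via a "user data" field in the web console):

```shell
# Provision a server with cloud-init.yml as user data (sketch).
# Requires the hcloud CLI and an API token; the guard keeps the
# snippet harmless on machines without the CLI.
if command -v hcloud >/dev/null 2>&1; then
  hcloud server create \
    --name bitwarden-vault \
    --image debian-13 \
    --type cx22 \
    --user-data-from-file cloud-init.yml
else
  echo "hcloud CLI not installed - paste the user data in your provider's console instead"
fi
```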

Base OS updates and required packages

The cloud-init script starts by updating the package lists and upgrading the installed packages, then installs the dependencies needed for the Docker installation and the firewall setup.

#cloud-config
package_update: true
package_upgrade: true

packages:
  - ca-certificates
  - curl
  - gnupg
  - ufw

cloud-init.yml


Write the Docker Compose stack

Next, cloud-init uses write_files to create the docker-compose.yml file that defines the four services: db, bitwarden, backup, and caddy. Each service is configured with appropriate images, environment variables, volumes, and health checks. Bitwarden Lite supports MySQL/MariaDB, MSSQL, SQLite, and PostgreSQL. In this example, I chose PostgreSQL.

  - path: /opt/bitwarden/docker-compose.yml
    content: |
      services:
        db:
          image: postgres:18.3-alpine3.23
          container_name: postgres_db
          restart: always
          env_file: .env
          environment:
            - POSTGRES_DB=bitwarden
          volumes:
            - postgres_data:/var/lib/postgresql/data
          healthcheck:
            test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d bitwarden"]
            interval: 10s
            timeout: 5s
            retries: 5

        bitwarden:
          image: ghcr.io/bitwarden/lite:latest
          container_name: bitwarden_app
          restart: always
          depends_on:
            db:
              condition: service_healthy
          env_file: .env
          environment:
            - BW_DB_PROVIDER=postgresql
            - BW_DB_SERVER=db
            - BW_DB_DATABASE=bitwarden
            - BW_DB_USERNAME=${POSTGRES_USER}
            - BW_DB_PASSWORD=${POSTGRES_PASSWORD}
          volumes:
            - bitwarden_data:/etc/bitwarden

        backup:
          image: ghcr.io/ralscha/postgres-s3-backup
          container_name: postgres_backup
          restart: always
          depends_on:
            db:
              condition: service_healthy
          env_file: .env
          environment:
            - POSTGRES_HOST=db
            - POSTGRES_DATABASE=bitwarden
            - SCHEDULE=0 2 * * *
            - BACKUP_KEEP_DAYS=7

        caddy:
          image: caddy:2-alpine
          container_name: caddy_proxy
          restart: always
          ports:
            - "80:80"
            - "443:443"
            - "443:443/udp"
          env_file: .env
          volumes:
            - ./Caddyfile:/etc/caddy/Caddyfile
            - caddy_data:/data
            - caddy_config:/config

      volumes:
        postgres_data:
        bitwarden_data:
        caddy_data:
        caddy_config:

cloud-init.yml
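Once the server is provisioned and Docker is installed (the runcmd section below takes care of that), you can sanity-check the generated compose file before the first start:

```shell
# Validate /opt/bitwarden/docker-compose.yml without starting anything.
# Run this on the provisioned server; the guard keeps it harmless elsewhere.
if [ -f /opt/bitwarden/docker-compose.yml ] && command -v docker >/dev/null 2>&1; then
  cd /opt/bitwarden
  docker compose config --quiet && echo "docker-compose.yml is valid"
else
  echo "run this on the provisioned server"
fi
```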


Write the Caddy reverse proxy config

In this section, cloud-init creates the Caddyfile that configures Caddy: it sets the ACME email for certificate issuance, proxies requests to the Bitwarden app, which listens on port 8080 inside the Docker network, and adds a few security headers.

  - path: /opt/bitwarden/Caddyfile
    content: |
      {
          email {$ACME_EMAIL}
      }
      {$DOMAIN} {
          reverse_proxy bitwarden:8080
          header {
              Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
              X-Content-Type-Options "nosniff"
              X-XSS-Protection "1; mode=block"
          }
      }

cloud-init.yml
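While you are still experimenting with DNS and certificates, you can point Caddy at Let's Encrypt's staging CA to avoid hitting production rate limits. A sketch of the adjusted global block; remove the acme_ca line once everything works and Caddy will obtain production certificates:

```
{
    email {$ACME_EMAIL}
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}
```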

Write environment variables and secrets

This section creates a .env file with all the necessary environment variables for the stack, including database credentials, Bitwarden installation keys, ACME email, and S3 backup configuration. Make sure to replace placeholder values with your actual secrets and credentials.

The backup sidecar supports not only AWS S3 but any S3-compatible storage, so you can use this with providers like Scaleway Object Storage or Backblaze B2 by setting the appropriate S3_ENDPOINT and credentials.

  - path: /opt/bitwarden/.env
    permissions: '0600'
    content: |
      DOMAIN=vault.example.com
      ACME_EMAIL=admin@example.com

      POSTGRES_USER=bw_admin
      POSTGRES_PASSWORD=CHANGE_ME_TO_A_LONG_RANDOM_PASSWORD

      BW_INSTALLATION_ID=REPLACE_WITH_BITWARDEN_ID
      BW_INSTALLATION_KEY=REPLACE_WITH_BITWARDEN_KEY

      S3_REGION=REPLACE_ME
      S3_ACCESS_KEY_ID=REPLACE_ME
      S3_SECRET_ACCESS_KEY=REPLACE_ME
      S3_BUCKET=CHANGE_ME_TO_A_UNIQUE_BUCKET_NAME
      PASSPHRASE=CHANGE_ME_TO_A_LONG_RANDOM_PASSPHRASE

cloud-init.yml
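If your bucket is not on AWS, also add the provider's endpoint to the .env file. A hypothetical Backblaze B2 example (the exact endpoint and region depend on your account):

```
S3_ENDPOINT=https://s3.us-west-004.backblazeb2.com
S3_REGION=us-west-004
```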

For generating strong random passwords, you can use the following command:

openssl rand -base64 24 | tr '+/' '-_' | tr -d '='
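Since the .env file needs two independent secrets, a small helper can generate both at once. 24 random bytes encode to exactly 32 base64 characters, so the padding strip never removes anything and each secret is 32 URL-safe characters long:

```shell
# Generate URL-safe random secrets (32 characters each).
gen_secret() {
  openssl rand -base64 24 | tr '+/' '-_' | tr -d '='
}

printf 'POSTGRES_PASSWORD=%s\nPASSPHRASE=%s\n' "$(gen_secret)" "$(gen_secret)"
```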

Add weekly update automation

Next, cloud-init creates a maintenance script that pulls fresh images with docker compose pull, recreates the containers, and prunes old images. A systemd timer runs it every Sunday at 03:00, with a randomized delay of up to an hour.

  - path: /opt/bitwarden/update-stack.sh
    permissions: '0700'
    content: |
      #!/bin/bash
      set -e
      cd /opt/bitwarden
      /usr/bin/docker compose pull
      /usr/bin/docker compose up -d
      /usr/bin/docker image prune -f

  - path: /etc/systemd/system/bitwarden-update.service
    content: |
      [Unit]
      Description=Weekly Bitwarden Stack Update
      After=docker.service

      [Service]
      Type=oneshot
      ExecStart=/opt/bitwarden/update-stack.sh

  - path: /etc/systemd/system/bitwarden-update.timer
    content: |
      [Unit]
      Description=Run Weekly Bitwarden Stack Update

      [Timer]
      OnCalendar=Sun *-*-* 03:00:00
      RandomizedDelaySec=1h
      Persistent=true

      [Install]
      WantedBy=timers.target

cloud-init.yml
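You can verify the OnCalendar expression and see when the timer will fire next with systemd-analyze, which is available on any systemd-based host:

```shell
# Show the normalized calendar spec and the next elapse time.
if command -v systemd-analyze >/dev/null 2>&1; then
  systemd-analyze calendar 'Sun *-*-* 03:00:00'
else
  echo "systemd-analyze not available on this machine"
fi
```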


First-boot commands (runcmd)

The final part of the cloud-init script lists the commands that run on first boot, after all the files above have been written: removing conflicting container packages, installing Docker from its official repository, setting up the firewall with ufw, and enabling the Docker service and the weekly update timer. The actual docker compose up command is intentionally left out so it can be triggered manually once DNS is properly set up.

runcmd:
  - for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do apt-get remove -y $pkg || true; done
  - install -m 0755 -d /etc/apt/keyrings
  - curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
  - chmod a+r /etc/apt/keyrings/docker.asc
  - echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release; echo $VERSION_CODENAME) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
  - apt-get update
  - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  - systemctl enable --now docker
  - ufw allow 22/tcp
  - ufw allow 80/tcp
  - ufw allow 443/tcp
  - ufw allow 443/udp
  - ufw --force enable
  - systemctl daemon-reload
  - systemctl enable --now bitwarden-update.timer

cloud-init.yml

The script intentionally does not run docker compose up on first boot: you usually only get the server's IP address from your provider after provisioning, but Caddy needs the A record (and AAAA, if you support IPv6) to point to that IP before it can successfully obtain TLS certificates.


Debug cloud-init on first boot

If provisioning does not behave as expected, check cloud-init status and logs first.

sudo cloud-init status --long
sudo tail -n 200 /var/log/cloud-init.log
sudo tail -n 200 /var/log/cloud-init-output.log
sudo journalctl -u cloud-init -u cloud-config -u cloud-final --no-pager -n 200

For a full report with timing and module outcomes:

sudo cloud-init analyze show
sudo cloud-init analyze blame

Post-provision workflow

After the server is created:

  1. Get the VM public IP from your provider
  2. Add DNS A/AAAA records for your domain that point to the server's IP address
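Before starting the stack, confirm that the records have propagated, otherwise the ACME challenge will fail. A small check you can run on the server (domain and IP are placeholders):

```shell
# Compare the DNS answer for the vault domain against the server's public IP.
DOMAIN=vault.example.com        # placeholder: your vault domain
EXPECTED_IP=203.0.113.10        # placeholder: the IP your provider assigned
RESOLVED=$(getent ahostsv4 "$DOMAIN" 2>/dev/null | awk 'NR==1 {print $1}')
if [ "$RESOLVED" = "$EXPECTED_IP" ]; then
  echo "DNS OK: $DOMAIN -> $RESOLVED"
else
  echo "Not ready: $DOMAIN resolves to '${RESOLVED:-nothing}', expected $EXPECTED_IP"
fi
```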

Log in to the new server, check if all environment variables are set correctly in /opt/bitwarden/.env, then start the stack:

cd /opt/bitwarden
sudo nano .env
sudo docker compose up -d

Check the logs of the Caddy container to verify that TLS certificates are being obtained successfully. You should see lines indicating ACME challenges, certificate issuance, and TLS negotiation.

docker compose logs -f caddy | grep --line-buffered -E "tls|certificate|obtain|acme"

You can also test the TLS setup directly with curl:

curl -vI https://vault.example.com

You want to see successful TLS negotiation and a valid certificate chain.

If everything is set up correctly, you should now have a fully functional Bitwarden Lite instance running. You can access the web vault at the domain you configured (e.g., https://vault.example.com).

Backup

The backup sidecar container runs pg_dump on a daily schedule (02:00, per the SCHEDULE cron expression) and uploads the encrypted dump to the S3 bucket, keeping seven days of backups. As noted above, any S3-compatible storage works: set S3_ENDPOINT in the .env file for providers like Scaleway Object Storage or Backblaze B2.

You can update the configuration in the .env and docker-compose.yml files. Restart the backup container after making changes:

cd /opt/bitwarden
sudo docker compose up -d backup

Wrapping up

This setup gives you a practical, maintainable Bitwarden Lite deployment on Debian 13 with daily backups. Thanks to cloud-init, the entire stack is provisioned and configured automatically; the only manual steps left are setting up the DNS records and triggering the first docker compose up after provisioning. After that, the weekly update automation keeps your stack up to date.