# Hardening playbook
After `self-host.md` is working, run through this before exposing the instance to anyone but yourself. Twenty minutes of work that pays off the first time someone scans your IP.
## SSH lockdown
The single biggest attack-surface reduction. Two parts: kill password auth and move SSH off the default port.
### 1. Generate a key on your local machine (skip if you already have one)

```bash
ssh-keygen -t ed25519 -C "your-handle@your-machine"
# Press Enter at all prompts. The default location ~/.ssh/id_ed25519 is fine.
```

### 2. Copy the public key to the VPS

```bash
ssh-copy-id root@<vps-ip>
# Enter the root password one last time. After this, key auth works.
```

### 3. Disable password auth + move SSH off port 22
On the VPS, edit `/etc/ssh/sshd_config`:

```
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
Port 22022
```
Pick any high random port for `Port`. Don't pick 2222: that's konnos's git+ssh port, and the two would conflict.
```bash
# Reload sshd
sudo systemctl reload sshd

# Test from a NEW terminal (don't close your existing session yet):
ssh -p 22022 root@<vps-ip>
# If that works, you can close the original session.
```

If something breaks, your existing session stays open, so you can revert the config without losing access.
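With key auth working on the new port, an entry in `~/.ssh/config` on your local machine saves retyping the port and user every time. A small sketch (the `vps` alias is illustrative; the options are standard `ssh_config` settings):

```
Host vps
    HostName <vps-ip>
    Port 22022
    User root
    IdentityFile ~/.ssh/id_ed25519
```

After that, `ssh vps` is all you need.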
### 4. Open the new SSH port in your firewall

```bash
sudo ufw allow 22022/tcp
sudo ufw delete allow 22/tcp   # remove old rule if it existed
```

## Firewall
Default-deny everything except what you need:
```bash
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22022/tcp   # SSH (your custom port)
sudo ufw allow 80/tcp      # HTTP (ACME challenge)
sudo ufw allow 443/tcp     # HTTPS
sudo ufw allow 2222/tcp    # konnos git+ssh (skip if you don't use it)
sudo ufw --force enable
sudo ufw status verbose
```

Verify from outside the VPS that closed ports are actually closed (use any port scanner, e.g. `nmap -Pn <vps-ip>`).
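If you don't have nmap handy, a quick sketch of the same outside check using bash's `/dev/tcp` pseudo-device (run it from your laptop and pass the VPS IP; it defaults to localhost only so the loop is runnable as-is):

```shell
#!/usr/bin/env bash
# Probe the ports we expect open and the ones we expect closed.
# Usage: ./portcheck.sh <vps-ip>   (defaults to 127.0.0.1)
HOST="${1:-127.0.0.1}"
for port in 22 80 443 2222 22022; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
        echo "port $port: open"
    else
        echo "port $port: closed/filtered"
    fi
done
```

After hardening, only 80, 443, 2222, and 22022 should report open; 22 should not.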
## fail2ban (optional but cheap)
Brute-force attempts against your custom port 22022 will be rare, but fail2ban is free insurance:
```bash
sudo apt install -y fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo systemctl enable --now fail2ban
```

The defaults block IPs after 5 failed attempts for 10 minutes. Tune `/etc/fail2ban/jail.local` if you want it stricter.
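If you do tighten it, remember fail2ban watches the standard SSH port unless told otherwise. A sketch of a stricter `[sshd]` section for `jail.local` (the values are illustrative; the option names are standard fail2ban settings):

```ini
[sshd]
enabled  = true
port     = 22022
maxretry = 3
findtime = 10m
bantime  = 1h
```

The `port` line matters most: without it, bans apply to port 22 while your real SSH traffic is on 22022.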
## Postgres backups
konnos's database is the only thing irreplaceable on the VPS. Take dumps regularly, push them off-site.
### Nightly dump to local + remote
```bash
#!/bin/bash
# /usr/local/bin/konnos-backup
set -euo pipefail

TIMESTAMP=$(date -u +%Y%m%dT%H%M%SZ)
BACKUP_DIR=/var/backups/konnos
mkdir -p "$BACKUP_DIR"

# Postgres
docker exec konnos-postgres pg_dump -U konnos konnos | gzip > "$BACKUP_DIR/postgres-$TIMESTAMP.sql.gz"

# konnos data volume (repos, LFS, attachments)
docker run --rm -v forgejo_data:/data:ro -v "$BACKUP_DIR":/backup alpine \
    tar czf "/backup/data-$TIMESTAMP.tar.gz" -C /data .

# Keep the last 14 nightly backups locally
find "$BACKUP_DIR" -type f -mtime +14 -delete

# Push to off-site (rclone, restic, rsync — pick one)
# rclone copy "$BACKUP_DIR" remote:konnos-backups/$(date -u +%Y%m)/ --include "*-$TIMESTAMP.*"
```

Make it executable and put it on cron:
```bash
sudo chmod +x /usr/local/bin/konnos-backup
sudo crontab -e
# Add:
0 2 * * * /usr/local/bin/konnos-backup >> /var/log/konnos-backup.log 2>&1
```

### Test your backups
A backup you've never restored isn't a backup. Once a quarter:
- Spin up a throwaway VPS.
- Restore the most recent dump + data archive.
- Verify a few repos clone, issues are intact, the admin user can sign in.
- Burn the throwaway VPS.
## Log rotation
Docker's default `json-file` log driver fills your disk in weeks if logs run hot. Cap log size in `/etc/docker/daemon.json`:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```

Restart the Docker daemon to apply: `sudo systemctl restart docker`.
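A malformed `daemon.json` stops the Docker daemon from starting, so it's worth validating the JSON before that restart. A sketch, run here against a scratch copy in `/tmp` so it's self-contained; on the VPS, point the validator at `/etc/docker/daemon.json`:

```shell
# Write the config to a scratch file (stand-in for /etc/docker/daemon.json)
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "5" }
}
EOF

# json.tool exits non-zero on a parse error, so this guards the restart
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```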
## TLS certificate renewal
If you used the recommended Traefik+ACME path, Traefik renews certs automatically 30 days before expiry; there's no separate renewal cron. Verify that certs have actually been issued:
```bash
# List the domains with issued certs in acme.json
sudo jq -r '.letsencrypt.Certificates[].domain.main' /etc/dokploy/traefik/dynamic/acme.json
```

The list should include every domain you've configured. If a domain is missing, ACME hasn't issued for it yet; check Traefik's logs (`docker logs dokploy-traefik 2>&1 | grep -i acme`).
Manually trigger a re-check by reloading Traefik:

```bash
# via the management API
curl -H "x-api-key: ${ADMIN_TOKEN}" -X POST http://localhost:3000/api/trpc/settings.reloadTraefik -d '{"json":{}}'
```

## Forge-side security
Defaults that ship in the repo's `docker-compose.yml`:
| Setting | Default | Why |
|---|---|---|
| `DISABLE_REGISTRATION` | `true` | Public registration off; invite-only until you flip it |
| `REQUIRE_SIGNIN_VIEW` | `false` | Public repos are cloneable without auth (needed for CI) |
| `ENABLE_CAPTCHA` | install-time toggle | Mitigates bot signups when you do open registration |
| `DISABLE_GRAVATAR` | `true` | No third-party calls; avatars live on your VPS only |
When you eventually open registration (see `ROADMAP.md`: registration triggers on rebrand 100% + first headline ✅), set:
```yaml
FORGEJO__service__DISABLE_REGISTRATION: "false"
FORGEJO__service__REGISTER_EMAIL_CONFIRM: "true"
FORGEJO__service__ENABLE_CAPTCHA: "true"
FORGEJO__service__CAPTCHA_TYPE: "hcaptcha"   # or recaptcha, mcaptcha, image
FORGEJO__service__HCAPTCHA_SECRET: "…"
FORGEJO__service__HCAPTCHA_SITEKEY: "…"
```

Add rate limiting at the reverse-proxy layer as well; Traefik's rateLimit middleware can cap the registration endpoint at 60 req/min/IP.
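A sketch of what that rate limit could look like as a Traefik dynamic-config middleware (`rateLimit` is a real Traefik middleware; the middleware name and the router you attach it to depend on your setup):

```yaml
http:
  middlewares:
    registration-ratelimit:
      rateLimit:
        average: 60   # sustained requests per period, per source IP
        period: 1m
        burst: 10     # short spikes allowed above the average
```

Attach it only to the router serving the registration path: normal traffic stays untouched while signup floods get HTTP 429s.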
## Monitoring
You want to know about three things:

- **Site is reachable.** An uptime monitor pings `https://code.your-domain.com/api/v1/version` every minute; anything other than HTTP 200 should page you.
- **Disk is filling.** Plain `df` plus a cron job that emails you when `/` crosses 80%.
- **Cert is about to expire.** Your uptime monitor should also flag certs in their last 7 days. (Auto-renewal usually works, but a heads-up when it doesn't is cheap insurance.)
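The disk check needs nothing fancier than `df` in a cron job. A minimal sketch (the threshold and the alert action are assumptions; swap in whatever alerting you already have):

```shell
#!/bin/sh
# Alert when root-filesystem usage crosses the threshold
THRESHOLD=80
USAGE=$(df -P / | awk 'NR==2 {gsub(/%/, ""); print $5}')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "ALERT: / is at ${USAGE}% (threshold ${THRESHOLD}%)"
    # e.g. pipe this message to mail(1) or hit a webhook here
else
    echo "OK: / is at ${USAGE}%"
fi
```

Drop it next to the backup script and add a daily crontab line pointing at it.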
External services that handle all three at once exist; the konnos community runs web-down.com for uptime — see the partner-tip in your konnos UI.
## Update strategy
konnos consists of two layers: the upstream forge core (a versioned image) and the konnos distribution layer (theme, locale, templates, brand). They're updated separately.
- **Core image bumps:** covered in `upstream-tracking.md`. Major version bumps need a full smoke test of the distribution layer.
- **Distribution layer changes:** pulled from `code.konnos.org/konnos/konnos` via your auto-deploy chain (webhook → CD). No manual action needed.
Pin the core image to a major version (`:13`, not `:latest`) so unexpected breaking changes don't ship on a `docker compose pull`.
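In compose terms, the pin could look like this (the service name and image path are illustrative; use whatever your `docker-compose.yml` actually references):

```yaml
services:
  forgejo:
    # Major-version tag: picks up 13.x patch releases, never 14
    image: codeberg.org/forgejo/forgejo:13
```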
## Final checklist
Before you tell anyone the URL:
- SSH on a non-default port, password auth disabled, key-only
- Firewall default-deny; only 22022, 80, 443, and 2222 open
- Backups running on cron, off-site, tested by restoring once
- Log rotation configured (don't fill the disk in two months)
- TLS cert renews automatically, verified by checking `acme.json`
- Forge-side: registration disabled (or CAPTCHA-gated), Gravatar disabled
- Monitoring set up: uptime, disk, cert expiry
- Admin user has 2FA enabled (`/-/user/settings/security/2fa`)
- Postgres backup includes the schema + data of the `system_setting` table (custom slogans, meta tags, etc.)