Published: December 27, 2025

My journey into building a hybrid Kubernetes cluster has been one of the most rewarding technical projects I've undertaken. What started as a simple homelab experiment evolved into a production-grade infrastructure running real applications. Here's how I built it.

Why Kubernetes at Home?

When I decided to build a homelab, I knew I wanted something more than just running Docker containers on a single machine. I wanted:

  • A real production environment for my side projects
  • Hands-on experience with modern cloud-native patterns
  • Full control over the infrastructure without cloud costs
  • A learning playground for experimenting with new tools

But here's the challenge: running Kubernetes at home means dealing with residential ISPs, dynamic IPs, CGNAT, and security concerns about exposing your home network to the internet.

My solution? A hybrid architecture that keeps the cluster private while using a cheap cloud VM as an edge gateway.

The Architecture

Rather than exposing my home cluster directly to the internet, I built a tunnel-based architecture:

Internet → DigitalOcean Droplet (HAProxy)
         → WireGuard Tunnel
         → MicroK8s Cluster (Local)
         → Applications

The Components

Local Cluster Side:

  • MicroK8s running on my old tower in my mom's garage
  • All applications and infrastructure run here

Edge Gateway (DigitalOcean Droplet):

  • HAProxy forwards raw TCP traffic from the internet to the backend
  • WireGuard server: the hub of the hybrid-cloud wheel

The Tunnel:

  • WireGuard VPN connecting both sides

This architecture gives me:

  • Privacy: Cluster stays on my private network
  • Stability: No dynamic DNS or port forwarding hassles
  • Flexibility: Can move the cluster anywhere
  • Security: Only WireGuard tunnel exposed from home
  • Cost: $6/month Droplet vs. $100+/month managed Kubernetes

The Technology Stack

Cluster Runtime: MicroK8s

I chose MicroK8s over other distributions because:

  • Easy to install and maintain
  • Perfect for homelab use cases
  • It was the first result when I searched for Kubernetes on the Snap Store

Installation is two commands:

sudo snap install microk8s --classic
sudo microk8s enable dns storage

GitOps with Helmfile

I manage the entire cluster declaratively using Helmfile. My helmfile.yaml.gotmpl defines all infrastructure:

Applications are deployed in order:
1. cert-manager (CRDs + operator)
2. gitlab-runner
3. vault
4. vault-secrets-operator
5. shared-secrets
6. monitoring
7. wireguard-client
8. traefik
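The ordering above maps to `needs` dependencies between releases. A minimal sketch of what the helmfile might look like (repository names, versions, and release layout are illustrative, not my exact config):

```yaml
# helmfile.yaml.gotmpl (abridged, illustrative sketch)
repositories:
  - name: jetstack
    url: https://charts.jetstack.io
  - name: hashicorp
    url: https://helm.releases.hashicorp.com

releases:
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager
    set:
      - name: installCRDs
        value: true

  - name: vault
    namespace: vault
    chart: hashicorp/vault

  - name: vault-secrets-operator
    namespace: vault
    chart: hashicorp/vault-secrets-operator
    needs:
      - vault/vault   # deploy only after the Vault release exists
```

Helmfile resolves `needs` as `namespace/release`, which is how the numbered ordering above gets enforced without any manual sequencing.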

Every change goes through GitLab CI/CD. The pipeline:

  1. Detects which apps changed
  2. Syncs only those apps via Helmfile
  3. Uses GitLab Agent for secure deployment (no exposed cluster API)

No manual kubectl commands. Everything is infrastructure-as-code.
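The change-detection pipeline can be sketched as one `.gitlab-ci.yml` job per app (job names, paths, and the agent context are illustrative placeholders, not my exact pipeline):

```yaml
# .gitlab-ci.yml (illustrative sketch)
deploy:cert-manager:
  stage: deploy
  rules:
    - changes:
        - apps/cert-manager/**/*   # run only when this app's files change
  script:
    # kubectl reaches the cluster through the GitLab Agent context,
    # so the cluster API is never exposed to the internet
    - kubectl config use-context my-group/my-project:agent-name
    - helmfile --selector name=cert-manager sync
```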

Secrets Management: HashiCorp Vault

All sensitive data lives in Vault running in HA mode with Raft storage:

Vault Configuration:
- Storage: Raft consensus (10Gi PVC)
- Audit logs: Separate 10Gi PVC
- UI: Enabled (internal only)
- Auth: Kubernetes service accounts
- Policies: Path-based RBAC

The Vault Secrets Operator (VSO) automatically syncs secrets from Vault to Kubernetes:

apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: gitlab-registry-creds
spec:
  vaultAuthRef: vault-auth
  path: secret/gitlab/registry
  refreshAfter: 1h
  destination:
    create: true
    name: gitlab-registry-secret

This means:

  • No hardcoded secrets anywhere
  • Hourly re-sync, so secrets rotated in Vault propagate automatically
  • Central management in one place
  • Secrets never committed to Git
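The `vaultAuthRef: vault-auth` in the example above points at a separate VaultAuth resource that tells VSO how to log in to Vault. A minimal sketch (the role, mount, and service account names are illustrative):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth
spec:
  method: kubernetes
  mount: kubernetes          # Kubernetes auth mount path in Vault
  kubernetes:
    role: vso-role           # Vault role bound to this service account
    serviceAccount: default
```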

Ingress: Traefik + HAProxy

Layer 1 - HAProxy (on Droplet):

Listens on:
- 80/443 (HTTP/HTTPS) → Traefik
- 25/587/993 (SMTP/IMAP) → Mail server
- 2222 (SSH fallback) → Cluster SSH

Routes everything through the WireGuard tunnel to internal services.

Layer 2 - Traefik (in cluster):

Features:
- Dynamic routing via Ingress resources
- Automatic HTTPS via cert-manager

TLS Certificates: cert-manager

cert-manager handles all TLS automatically:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: postmaster@devlin.vining.club
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik

Every Ingress resource gets automatic Let's Encrypt certificates. No manual certificate management.
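In practice that means an Ingress only needs an annotation and a `tls` block, and cert-manager does the rest. A sketch (the hostname and service name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # triggers issuance
spec:
  ingressClassName: traefik
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls   # cert-manager stores the certificate here
```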

Monitoring: Prometheus + Grafana

The kube-prometheus-stack provides complete observability:

Prometheus:

  • 50Gi storage, 60-day retention
  • ServiceMonitor auto-discovery
  • Scrapes all applications automatically

Grafana:

  • Custom dashboards for .NET apps
  • Auto-loaded datasources

Coverage:

  • Node metrics (Node Exporter)
  • Kubernetes state (kube-state-metrics)
  • Application metrics (ServiceMonitor)
  • Custom business metrics

CI/CD: GitLab Runner

A GitLab Runner runs inside the cluster:

Executor: Kubernetes
Features:
- Docker-outside-of-Docker (DooD) via host socket
- Kubernetes-native job execution
- Auto-scales with cluster capacity
- Secure via GitLab Agent tunnel
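The DooD setup boils down to mounting the node's Docker socket into job pods. A rough sketch of the relevant gitlab-runner Helm values (image tag and overall layout are illustrative; token handling is omitted):

```yaml
# gitlab-runner values.yaml (abridged, illustrative sketch)
gitlabUrl: https://gitlab.com/
runners:
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        image = "docker:27"
        # Docker-outside-of-Docker: jobs talk to the node's Docker daemon
        [[runners.kubernetes.volumes.host_path]]
          name = "docker-sock"
          mount_path = "/var/run/docker.sock"
          host_path = "/var/run/docker.sock"
```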

All my applications deploy via this pipeline:

  1. Build Docker image (DooD)
  2. Push to GitLab Registry
  3. Deploy via GitLab Agent
  4. Verify with health checks

The Edge Gateway Setup

The DigitalOcean Droplet runs a simple HAProxy configuration:

# HAProxy on Droplet

frontend http_https
    bind *:80
    bind *:443
    mode tcp
    default_backend cluster_traefik

backend cluster_traefik
    mode tcp
    # No port on the server line: HAProxy reuses whichever port the
    # client connected to (80 or 443). Traffic flows through WireGuard.
    server cluster 10.8.0.2 send-proxy

frontend mail_smtp
    bind *:25
    bind *:587
    bind *:993
    mode tcp
    default_backend cluster_stalwart

backend cluster_stalwart
    mode tcp
    # Same trick: preserve the original destination port (25/587/993)
    server cluster 10.8.0.2

The WireGuard configuration establishes the tunnel:

# Droplet side: /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <from-vault>

[Peer]
PublicKey = <cluster-public-key>
AllowedIPs = 10.8.0.2/32
PersistentKeepalive = 25
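The cluster side mirrors it, with no ListenPort of its own: it dials out to the Droplet, which is exactly why CGNAT and dynamic IPs stop mattering. A sketch (the endpoint placeholder stands in for the Droplet's public IP):

```ini
# Cluster side: /etc/wireguard/wg0.conf (sketch)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <from-vault>

[Peer]
PublicKey = <droplet-public-key>
Endpoint = <droplet-ip>:51820
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25   # keeps the NAT mapping alive from behind CGNAT
```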

Deployment is automated via GitLab CI:

  1. Detects changes to gateway-config/haproxy.cfg
  2. SSH to Droplet (key from Vault)
  3. Uploads new config
  4. Reloads HAProxy
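Those four steps can be sketched as a single CI job (the host, user, and stage names are illustrative):

```yaml
# .gitlab-ci.yml (illustrative sketch)
deploy:gateway:
  stage: deploy
  rules:
    - changes:
        - gateway-config/haproxy.cfg
  script:
    # SSH key comes from Vault via a CI variable, never from the repo
    - scp gateway-config/haproxy.cfg deploy@droplet:/etc/haproxy/haproxy.cfg
    # validate before reloading so a typo can't take the edge down
    - ssh deploy@droplet 'haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy'
```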

Key Learnings

1. GitOps Everything

The entire cluster is defined in Git. To add a new application:

  1. Create Helmfile manifest
  2. Push to GitLab
  3. CI automatically syncs

No manual changes. Ever. This gives me:

  • Full audit history
  • Easy rollbacks
  • Reproducible infrastructure
  • Team collaboration (if needed)

2. Secrets Management is Critical

Before Vault, I had secrets scattered everywhere:

  • Kubernetes Secrets (base64 "encryption")
  • Environment variables in CI
  • Hardcoded values in Helm charts

Now everything flows from Vault:

  • GitLab registry credentials
  • TLS certificates
  • Database passwords
  • API keys
  • SSH keys

Honestly, I don't know if I would go with Vault again. For a single administrator, the setup is overkill and adds a fair amount of complexity to the system. Really, I just needed to be more careful about committing keys to Git.

3. The Hybrid Approach Works

This setup has been mostly functional:

  • Cluster availability: has been pretty good
  • Edge availability: not so great. I'm not sure the single-vCPU Droplet is powerful enough.
  • Network: WireGuard tunnel reconnects automatically
  • Cost: $6/month for the Droplet, plus PG&E costs for Mom

I can:

  • Move the cluster to different hardware
  • Change ISPs without reconfiguration
  • Access cluster from anywhere via VPN

4. Single-Node is Viable

MicroK8s on a single node handles everything I throw at it:

  • 15+ application pods
  • PostgreSQL database
  • Full monitoring stack
  • CI/CD runners
  • Mail server

High availability isn't always necessary. For homelab and side projects, single-node with good backups is totally fine.

5. Observability from Day 1

Having Prometheus + Grafana from the start was invaluable:

  • Identify resource bottlenecks
  • Monitor application health
  • Debug performance issues
  • Track trends over time

ServiceMonitors make it trivial to add metrics to any app:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: http
    path: /metrics

Boom. Automatic scraping.

6. Automation Compounds

Every bit of automation I added made the next automation easier:

  • Vault → easier to add new secrets
  • Helmfile → easier to add new apps
  • GitLab Agent → easier to add new pipelines
  • cert-manager → easier to add new domains

Automate the infrastructure, then automate your applications.

Resources & Cost

Hardware:

  • Local machine (existing hardware, $0/month)
  • DigitalOcean Droplet ($6/month)
  • Domain registration ($12/year)

Total monthly cost: ~$7

Compare to:

  • Managed Kubernetes (GKE, AKS): $70-150/month minimum
  • App hosting (Heroku, Render): $7-25 per app
  • Secrets management (1Password Teams): $20/month
  • Monitoring (Datadog): $15+/month

Running my own cluster saves $100+/month, but the real reward is that I just love tinkering with this stuff.