March 4, 2026 | By Commander Nyx Aldara, Station Architect of the Third Relay


A formation of starships in cold standby drifts near an orbital relay station, amber reactor glow barely visible at the stern, the void stretching above them in patient quiet

The briefing room aboard Relay Three was cold, the way all relay stations are cold — not from poor environmental systems, but from the particular stillness of a facility built for traffic that hasn't arrived yet. Commander Nyx Aldara stood at the tactical display, her hands clasped behind her back, studying the topology of a fleet that did not yet know how to sleep.

"As of oh-seven-hundred, I'm issuing Operational Plan SLEEPING FLEET," she said. The room held seventeen officers, most of them infrastructure engineers from the station's deployment corps. "This is a pre-battle briefing. The orders are written. The fleet has not yet moved."

She tapped the display. A cluster diagram resolved — nodes, namespaces, ingress routes, the whole constellation of services that kept the Third Relay operational.

"Here is our problem."

The Theater of Operations

The Third Relay ran a modest but capable fleet. A homepage vessel. A job automation carrier group — five microservices burning reactor mass around the clock. A translation service. A product intelligence corvette called Buccaneer. A quit-tracking frigate. Each ship had its own deployment cycle, its own codebase, its own crew of CI/CD pipelines pushing changes through staging and into production.

And every one of them shared a single staging lane.

"When Lieutenant Vasquez needs to test a feature branch on Buccaneer," Aldara said, pointing to the staging namespace on the display, "she deploys to staging. When Ensign Park needs to test a database migration on quit-track twenty minutes later, he deploys to staging. Vasquez's build gets overwritten. She has to redeploy when Park is finished. If a third engineer enters the lane — and on a busy day, they do — the whole queue stalls."

She let that sit for a moment.

"We have one dock, and every ship in the fleet needs to berth there for inspection. This is not a resource problem. This is a topology problem."

The second issue was quieter but no less costly. Every staging deployment ran continuously. Twenty-four hours a day, seven days a week, the pods burned memory and CPU cycles whether anyone was looking at them or not. For a relay station running on finite reactor capacity — a single node called Surfstation with its companion Thinkpad — those idle cycles were tonnage they couldn't afford.

"We are maintaining full power to ships that no one is aboard," Aldara said. "Every staging pod is a lit bridge on an empty vessel, draining fuel from the reactor. We need these ships to learn how to sleep."

The Plan: Review Environments

The first maneuver was conceptual: abandon the shared staging lane entirely for feature work. Every open merge request would receive its own environment — a fully isolated berth with its own namespace, its own deployment, its own ingress route, its own database if the application required one.

Aldara brought up the naming convention on the display:

{branch-slug}.{application}.surfshack.dev

"If Vasquez opens a merge request on branch feat/price-alerts for Buccaneer, she gets feat-price-alerts.buccaneer.surfshack.dev. Park opens fix/migration-order on quit-track, he gets fix-migration-order.quit-track.surfshack.dev. They don't touch each other. They don't know each other exists."

The infrastructure was already half-built. External-dns would handle DNS propagation automatically — it already watched for new Ingress resources and wrote the records. Cert-manager would provision TLS certificates on demand, the same way it handled every other endpoint in the fleet. Traefik would route traffic. The plumbing was there. What was missing was the CI/CD choreography to spin environments up and tear them down.
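In practice that means each review deployment only has to render an Ingress; external-dns picks up the host and cert-manager reacts to the issuer annotation. A sketch of what the chart might emit, with a hypothetical ClusterIssuer name and illustrative namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: buccaneer
  namespace: buccaneer-review-feat-price-alerts   # illustrative namespace
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # hypothetical issuer name
spec:
  ingressClassName: traefik
  rules:
    - host: feat-price-alerts.buccaneer.surfshack.dev   # external-dns writes this record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: buccaneer        # the application's own Service
                port:
                  number: 8080
  tls:
    - hosts:
        - feat-price-alerts.buccaneer.surfshack.dev
      secretName: buccaneer-review-tls   # cert-manager writes the certificate here
```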

"Two new jobs in every project's pipeline," she said. "deploy-review and stop-review."

She pulled up the skeleton:

deploy-review:
  stage: deploy
  variables:
    # Naming assumed here; align with your own chart and cluster layout
    APP_NAME: $CI_PROJECT_NAME
    RELEASE_NAME: $CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG
    REVIEW_NAMESPACE: $CI_PROJECT_NAME-review-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.$APP_NAME.surfshack.dev
    on_stop: stop-review
  script:
    - helm upgrade --install $RELEASE_NAME ./helm/chart
        --namespace $REVIEW_NAMESPACE
        --create-namespace
        --set image.tag=$CI_COMMIT_SHORT_SHA
        --set ingress.host=$CI_COMMIT_REF_SLUG.$APP_NAME.surfshack.dev
  rules:
    - if: $CI_MERGE_REQUEST_IID

stop-review:
  stage: deploy
  variables:
    # The branch may already be deleted when this job runs
    GIT_STRATEGY: none
    RELEASE_NAME: $CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG
    REVIEW_NAMESPACE: $CI_PROJECT_NAME-review-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  script:
    - helm uninstall $RELEASE_NAME --namespace $REVIEW_NAMESPACE
    - kubectl delete namespace $REVIEW_NAMESPACE --ignore-not-found
  rules:
    - if: $CI_MERGE_REQUEST_IID
      when: manual

"GitLab understands the environment block. When a merge request is opened, deploy-review fires. When the MR is merged or closed, GitLab triggers stop-review. The namespace is created, the namespace is destroyed. Clean entry, clean exit."

Ensign Okoro raised a hand. "What about the database-backed applications, Commander? Homepage is simple — static assets and a web server. But quit-track, Buccaneer, translation-service, job-automation — they all need PostgreSQL."

"Correct," Aldara said. She had expected the question. "That's the harder maneuver."

Database Provisioning: The Docking Procedure

For applications that required a database, the deploy-review job would need to do more than deploy a Helm chart. It would need to provision a dedicated PostgreSQL database and user on the shared instance at postgres.databases.svc.cluster.local, create a Kubernetes secret holding the credentials, and run EF Core migrations against the fresh schema.

She walked through the sequence:

script:
  # Generate credentials
  - REVIEW_DB=$(echo "review_${CI_PROJECT_NAME}_${CI_COMMIT_REF_SLUG}" | tr '-' '_')
  - REVIEW_USER=$(echo "review_${CI_PROJECT_NAME}_${CI_COMMIT_REF_SLUG}" | tr '-' '_')
  - REVIEW_PASSWORD=$(head -c 32 /dev/urandom | base64 | tr -d '/+=' | head -c 24)

  # Create database and role
  - |
    psql "$ADMIN_DATABASE_URL" <<SQL
      CREATE ROLE $REVIEW_USER WITH LOGIN PASSWORD '$REVIEW_PASSWORD';
      CREATE DATABASE $REVIEW_DB OWNER $REVIEW_USER;
    SQL

  # Store as Kubernetes secret
  - kubectl create secret generic db-credentials
      --namespace $REVIEW_NAMESPACE
      --from-literal=connection-string="Host=postgres.databases.svc.cluster.local;Database=$REVIEW_DB;Username=$REVIEW_USER;Password=$REVIEW_PASSWORD"

"Password generated from /dev/urandom," she said. "Twenty-four characters, base64-encoded with the ambiguous characters stripped. Stored as a Kubernetes secret in the review namespace. The Helm chart references the secret as an environment variable. EF Core migrations run as a Helm pre-install hook — same pattern we already use in production."

The cleanup was equally precise:

# In stop-review
- psql "$ADMIN_DATABASE_URL" -c "
    SELECT pg_terminate_backend(pid) FROM pg_stat_activity
    WHERE datname = '$REVIEW_DB';
  "
- psql "$ADMIN_DATABASE_URL" -c "DROP DATABASE IF EXISTS $REVIEW_DB;"
- psql "$ADMIN_DATABASE_URL" -c "DROP ROLE IF EXISTS $REVIEW_USER;"
- kubectl delete namespace $REVIEW_NAMESPACE --ignore-not-found

"Terminate active connections first," Aldara said, tapping the first command. "Then drop the database. Then drop the role. Then delete the namespace. In that order. If you drop the database before terminating connections, PostgreSQL will refuse the operation. If you delete the namespace before dropping the database, you've orphaned a database on the shared instance and nobody will find it until the disk fills up."

She paused. "I have seen both of these happen. On other relay stations. Under other architects."

The room was quiet.

The Second Maneuver: Scale-to-Zero

The review environments solved the contention problem. But they introduced a new one: resource consumption. If five engineers each had two open merge requests, that was ten isolated environments running simultaneously, each with its own pod, each burning memory and CPU. Add staging environments for each application, and the reactor budget became untenable.

"This is where the fleet learns to sleep," Aldara said.

She brought up the KEDA architecture diagram. KEDA — the Kubernetes Event-Driven Autoscaler — was a well-proven system, already deployed across hundreds of relay stations in the wider fleet. Its HTTP Add-on extended the core autoscaler with a specific capability: scaling deployments based on HTTP traffic rather than queue depth or metric thresholds.

The principle was simple. After ten minutes with no inbound requests, KEDA would scale the deployment to zero replicas. The pod would be terminated. Memory freed. CPU cycles returned to the reactor pool. The ship would go dark.

When a request arrived — an engineer navigating to their review environment, a webhook firing, a health check — the KEDA HTTP interceptor would catch it. The interceptor, which ran permanently in the keda namespace as a lightweight proxy, consumed negligible resources. It would hold the inbound request, signal KEDA to scale the deployment back to one replica, and serve a branded loading page while the pod started.

"Cold start takes between eight and fifteen seconds for our .NET applications," Aldara said. "The readiness probe passes, the interceptor releases the held request, and subsequent requests flow through normally. The engineer sees a brief loading indicator, then the application. For review environments that might be accessed twice a day, this is an acceptable trade."

She drew the request flow on the display:

Request → Traefik → KEDA HTTP Interceptor → App Pod (scales 0↔1)

Lieutenant Commander Huang leaned forward. "The interceptor runs in the keda namespace. Our applications run in their own namespaces. Kubernetes Ingress resources can only reference Services in the same namespace. How do we bridge that gap?"

Aldara permitted herself a small smile. Huang was one of her best infrastructure officers, and the question was exactly right.

The Interceptor Problem

A compact relay station floats alone in the void, its amber reactor core glowing steadily while teal signal arcs reach toward the silhouettes of dormant ships

"This is the most interesting piece of the architecture," she said. "And the piece that took the longest to design."

The KEDA HTTP interceptor deployed as a Service in the keda namespace. Traefik, the ingress controller, needed to route traffic to this interceptor. But an Ingress resource in the buccaneer-review namespace could only reference a Service in buccaneer-review. Cross-namespace service references were not permitted by the Kubernetes API.

The solution was an ExternalName Service — a Kubernetes Service type that acted as a DNS alias rather than a load balancer. In each application namespace, the Helm chart would create an ExternalName Service pointing to the interceptor's fully qualified DNS name across the namespace boundary:

apiVersion: v1
kind: Service
metadata:
  name: keda-interceptor
  namespace: {{ .Release.Namespace }}
spec:
  type: ExternalName
  externalName: keda-add-ons-http-interceptor-proxy.keda.svc.cluster.local

"The Ingress in the review namespace points to this ExternalName Service," Aldara said. "The ExternalName Service resolves to the interceptor in the keda namespace. The interceptor knows which deployment to scale based on the HTTPScaledObject custom resource. Three hops instead of one. But it works, and it enables zero-replica deployments without violating namespace boundaries."

She brought up the HTTPScaledObject manifest:

apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
spec:
  hosts:
    - {{ .Values.ingress.host }}
  scaleTargetRef:
    name: {{ .Release.Name }}
    service: {{ .Release.Name }}
    port: 8080
  replicas:
    min: 0
    max: 1
  scalingMetric:
    requestRate:
      targetValue: 1
  scaledownPeriod: 600  # 10 minutes

"Minimum replicas zero. Maximum replicas one. Scale-down period six hundred seconds. The hosts field tells the interceptor which inbound hostname maps to which deployment. When a request arrives for feat-price-alerts.buccaneer.surfshack.dev, the interceptor matches it to this scaled object, finds the target deployment, and scales it up."

The Rollout: Five Phases

Commander Aldara stands before a holographic tactical display showing fleet topology in amber-gold light, engineers at consoles behind her, pale viewport light cutting through dust in the air

Aldara stepped back from the display and addressed the room directly. This was the part that mattered — not the architecture, which was sound, but the sequencing, which would determine whether the operation succeeded or stalled.

"Phase One. We install KEDA and its HTTP Add-on through the cluster-management helmfile. This is infrastructure — it touches no application workloads. A new entry in the helmfile, a helmfile sync through CI, and the autoscaler is operational."

She held up two fingers. "Phase Two. We add the PostgreSQL client tooling to our CI images. The deploy-review and stop-review jobs need psql to provision and tear down databases. Our CI images are pre-built in the ci-images repository — we never install packages at runtime. A new image version, pushed through its own pipeline."

Three fingers. "Phase Three. The pilot. We take homepage — the simplest vessel in the fleet. No database. No background workers. A web server and some static assets. We add the review environment CI jobs, the KEDA scaled object, the ExternalName Service. We open a test merge request and verify: does the environment provision? Does the URL resolve? Does the pod scale to zero after ten minutes? Does it wake when we send a request? Does the environment clean up when we close the MR?"

Four fingers. "Phase Four. Database-backed applications. Quit-track first — it's our reference implementation, the pattern every other application follows. Then Buccaneer. Then translation-service. Then job-automation, which is the most complex — five microservices, multiple database dependencies, network policies that restrict egress. Each rollout is a separate merge request, tested in isolation."

She lowered her hand. "Phase Five. End-to-end verification across all projects. Multiple engineers, multiple open MRs, simultaneous review environments. We stress the shared PostgreSQL instance, we watch the reactor budget, we confirm that the interceptor handles concurrent scale-up events without dropping requests."

Five phases. Fourteen tasks. Five projects. Approximately twenty new files and thirty modified files across six repositories.

The Weight of Idle Ships

Ensign Okoro spoke again. "Commander, what about production? Does production scale to zero?"

"No." Aldara's answer was immediate. "Production stays always-on. The latency of a cold start — even eight seconds — is unacceptable for production traffic. Staging environments will scale to zero. Review environments will scale to zero. Production does not sleep."

She turned back to the display, where the cluster topology glowed in the dim light of the briefing room. Nodes and namespaces and service meshes, the whole silent machinery of a fleet at rest.

"I want to be clear about what we are doing and what we are not doing," she said. "We are not deploying this today. The orders are written. The manifests are drafted. The architecture has been reviewed. But the fleet has not yet translated out of foldspace. No helm chart has been modified. No CI pipeline has been updated. No KEDA pod is running in the cluster."

She looked at the officers assembled before her — the engineers who would execute this plan over the coming weeks, one phase at a time, one application at a time, with the careful precision that infrastructure work demanded.

"What we have is a plan. A good plan, I believe. Review environments that give every engineer their own dock. Scale-to-zero that lets idle ships power down and return their reactor mass to the pool. Automated provisioning that creates databases from nothing and returns them to nothing when the work is done. Clean entry, clean exit."

She clasped her hands behind her back again.

"The fleet will learn to sleep. But today, we study the orders. Dismissed."

The officers filed out of the briefing room in silence, datapads glowing with architecture diagrams and YAML manifests, already thinking through the implementation details that would turn a briefing into a deployment. Aldara remained at the display, studying the topology one last time.

Seventeen namespaces. Forty-six pods. A shared PostgreSQL instance holding six databases. One reactor node doing the work of ten.

Soon, some of those pods would go dark when no one needed them. Soon, every merge request would get its own corner of the cluster, spun up in seconds, torn down without a trace. Soon, the fleet would breathe — expanding under load, contracting in stillness, a living system instead of a static one.

But not today. Today, the orders were written.

That was enough.


Commander Nyx Aldara serves as Station Architect of the Third Relay, where she designs fleet infrastructure for the Surfshack Deep Space Network. This briefing covers planned integration of KEDA 2.x, KEDA HTTP Add-on 0.10.x, GitLab CI/CD review environments, Helm 3 lifecycle management, PostgreSQL provisioning, Traefik ingress routing, cert-manager, external-dns, and K3s cluster orchestration. The views expressed are her own and do not represent the strategic priorities of Fleet Command.