From Compose to Cloud‑Native Labs: Exporting Docker Environments for Classroom Use

Tags: software engineering, dev tools, CI/CD, developer productivity, cloud-native, automation, code quality

To move a Docker Compose file onto a production-ready Docker Swarm or Kubernetes cluster for classroom labs, you deploy (or convert) the Compose file with a single command, then optionally layer on replication, placement, secrets, and scaling across nodes.

Cloud-Native: Exporting Compose to Docker Swarm or Kubernetes for Class Projects

When I worked with a university in Austin in 2022, the instructors asked how to give students a hands-on experience that mimics a real cloud-native workflow. The solution was to take a simple Compose file and turn it into a Swarm stack, then translate that stack to Kubernetes manifests for comparison. Docker’s docker stack deploy understands Compose v3 syntax and produces the equivalent services, networks, and volumes for a Swarm cluster.

The Compose definition typically includes services, volumes, and a network. For example, a three-tier web application might look like this:

version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    image: node:14
    environment:
      - NODE_ENV=production
    # node:14 with no command starts a REPL and exits immediately;
    # this inline server is a stand-in so the service stays running in a lab
    command: ["node", "-e", "require('http').createServer((q, s) => s.end('ok')).listen(3000)"]
  db:
    image: postgres:13
    environment:
      # the postgres image refuses to start without a password (or explicit trust auth)
      - POSTGRES_PASSWORD=change-me-in-class
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: {}

Copying this file to a Swarm manager and running docker stack deploy -c docker-compose.yml myapp creates the stack. Docker translates each Compose service into a Swarm service and sets up an overlay network automatically. For the Kubernetes side, tools like kompose can convert the same Compose file into YAML manifests, offering a side-by-side learning path for students who later move to managed clusters.
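For the Kubernetes half of the comparison, a minimal sketch, assuming kompose is installed and the Compose file sits in the current directory (the k8s/ output directory name is arbitrary):

```shell
# Generate one Kubernetes manifest per Compose object into k8s/
kompose convert -f docker-compose.yml -o k8s/

# Apply the generated manifests to the class cluster
kubectl apply -f k8s/
```

Keeping the generated manifests in a separate directory lets students diff them against the original Compose file and see how each service, volume, and port mapping was translated.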

The educational benefit is twofold. First, students see the declarative nature of Compose and how that declarative syntax maps to a distributed orchestrator. Second, the conversion process teaches them that infrastructure as code is a continuous chain: from local docker-compose to Swarm stack to Kubernetes manifests, each layer preserves intent but adds scaling and resilience features.

Key Takeaways

  • Compose files translate directly to Swarm stacks with docker stack deploy.
  • Tools like kompose bridge Compose to Kubernetes.
  • Students learn declarative configuration across orchestrators.
  • Swarm overlays simplify networking for beginners.

Deploying with docker stack deploy

The docker stack deploy command is the single line that turns a Compose file into a running Swarm stack. It reads the YAML, resolves image references, and generates the required service objects. When I taught a workshop in Seattle in 2023, I showed how repeating the --compose-file (-c) flag lets instructors layer several Compose files, overlaying configuration for staging and production labs.
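A sketch of that multi-file overlay (docker-compose.staging.yml is a hypothetical override file; when files conflict, later ones win):

```shell
# Deploy the base file plus a staging override as one stack
docker stack deploy \
  -c docker-compose.yml \
  -c docker-compose.staging.yml \
  devstack
```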

Here’s a practical snippet used in a lab:

docker stack deploy -c docker-compose.yml devstack

In this command, devstack is the stack name that becomes the namespace for services and networks. After deployment, you can inspect the stack with docker stack services devstack and see each service’s current state, replicas, and configuration.
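Two inspection commands students typically run right after deploying (stack name as in the example above):

```shell
# Desired vs. running replica counts per service
docker stack services devstack

# Individual tasks and the nodes they were scheduled onto
docker stack ps devstack
```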

Because docker stack deploy reads the same Compose v3 schema, deploy: options in the file (replicas, resources, placement) carry over automatically. The same file can therefore drive both a single-node test and a multi-node production-like environment without modification. Swarm also rolls service updates out gradually when the file changes, and --with-registry-auth forwards your registry credentials to the Swarm agents so private images can be pulled on every node.
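As an illustrative sketch of those deploy: options (the numbers are placeholders, not recommendations), a resource limit and rolling-update policy for the web service might look like:

```yaml
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "0.50"
          memory: 128M
      update_config:
        parallelism: 1   # replace one task at a time
        delay: 10s       # wait between batches
```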

From an educational standpoint, students learn that deployment is not a separate step but an extension of the declarative file. They practice version-controlling the Compose file, pushing changes, and seeing them reflected immediately in the Swarm.


Configuring replicas and constraints

Scaling a service from a single container to a cluster requires configuring replicas and placement constraints in the Compose file. In my experience teaching at a campus in Boston, I had students modify the web service to run three replicas across three worker nodes.

Here’s the relevant portion of the Compose file:

services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker

When deployed, Docker Swarm distributes the three containers across available worker nodes, ensuring that if one node goes down the remaining replicas stay online. The node.role == worker constraint keeps services away from manager nodes, preserving quorum stability.

Constraints can be more granular. Node labels are assigned from a manager with docker node update; a placement constraint can then match on them (node-1 below stands in for one of your node hostnames):

docker node update --label-add hardware=ssd node-1

placement:
  constraints:
    - node.labels.hardware == ssd

Using labels, instructors can create a lab where some nodes have GPU resources and others do not. Students learn how hardware heterogeneity affects placement and can experiment with balancing load manually versus letting Swarm schedule automatically.

Because each change to the deploy section triggers a rolling update, students see the rollout live and can watch replica counts converge with docker service ls --format "{{.Name}} {{.Replicas}}", getting a tangible sense of scaling overhead.
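To trigger a scale event by hand rather than editing the file, a quick sketch (service name follows the devstack example from earlier):

```shell
# Scale the web service to five replicas, then watch the counts converge
docker service scale devstack_web=5
docker service ls --format "{{.Name}} {{.Replicas}}"
```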


Securing secrets and configs

One of the biggest concerns in a classroom setting is keeping sensitive data out of the repository. Docker Swarm offers secrets and configs that can be injected at runtime. Last year, while assisting a hackathon in Denver, I guided participants to store a database password as a secret, then reference it from the api service.

Creating a secret is straightforward:

echo "mydbpassword" | docker secret create db_password -

In the Compose file, you reference it like so:

services:
  api:
    secrets:
      - db_password
secrets:
  db_password:
    external: true

The secret is mounted as a file inside the container, typically at /run/secrets/db_password. That file is read-only and sits on an in-memory tmpfs, so the password never lands in the image, the repository, or a layer on disk.
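Applications then read the secret from that path. One convenient pattern, sketched below for the db service from the earlier example: the official postgres image accepts a POSTGRES_PASSWORD_FILE variable pointing at the mounted secret, so the password never appears as a plain environment value.

```yaml
services:
  db:
    image: postgres:13
    environment:
      # the image's entrypoint reads the password from the mounted secret file
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    external: true   # created earlier with docker secret create
```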


About the author — Riya Desai

Tech journalist covering dev tools, CI/CD, and cloud-native engineering
