Dagger.io: Write Portable CI/CD Pipelines as Code
PROGRAMMING LANGUAGES March 12, 2026, 5:30 a.m.


Dagger.io is reshaping the way developers think about CI/CD by turning pipelines into portable, reusable code. Instead of juggling YAML files that lock you into a specific CI provider, Dagger lets you write pipelines in a familiar programming language—most commonly Go, Python, or TypeScript. This approach brings the full power of version control, testing frameworks, and IDE tooling to your build and deployment logic, making pipelines easier to review, refactor, and share across teams.

In this article we’ll explore the core concepts behind Dagger, walk through a couple of hands‑on examples, and discuss real‑world scenarios where “pipeline as code” shines. By the end you’ll have a working Dagger pipeline, a sense of how to integrate it with GitHub Actions or GitLab CI, and a few pro tips to keep your pipelines clean and fast.

Why Dagger Is Different from Traditional CI/CD

Most CI/CD platforms rely on declarative configuration files (think .github/workflows/*.yml or .gitlab-ci.yml). While these files are easy to write, they lack the expressive power of a full programming language. Conditional logic, loops, and complex data structures become cumbersome, leading teams to duplicate scripts or resort to external tooling.

Dagger flips the script by providing a language‑agnostic SDK that models your pipeline as a directed acyclic graph (DAG) of containerized steps. Each step is a lightweight container image that runs a specific command, and the SDK handles caching, parallelism, and artifact sharing automatically.

  • Portability: The same Dagger pipeline can run on GitHub Actions, GitLab CI, Jenkins, or even locally on a developer’s laptop.
  • Versionability: Because pipelines are code, they live in the same Git repository as the application they build.
  • Extensibility: Use any language feature—functions, classes, generics—to factor out common logic.
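To make the extensibility point concrete, here is a minimal sketch of factoring shared setup into a plain function. The helper name and the /app layout are assumptions, not SDK API; the function works on any chainable container object:

```python
# Hypothetical shared helper: because Dagger steps are ordinary chainable
# values, common setup can live in a plain function that any pipeline reuses.
def with_python_base(container):
    """Apply a team-standard Python build setup to any Dagger container."""
    return (
        container
        .with_workdir("/app")
        .with_exec(["pip", "install", "-r", "requirements.txt"])
    )
```

Any pipeline can then write with_python_base(client.container().from_("python:3.11-slim")) instead of repeating the boilerplate in every file.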

Getting Started: Installing the Dagger CLI

First, install the Dagger CLI. It’s a single binary that works on macOS, Linux, and Windows. The following command downloads the latest release and installs it into /usr/local/bin — the install script honors the BIN_DIR environment variable (on Windows, use the PowerShell installer from the Dagger docs):

curl -fsSL https://dl.dagger.io/dagger/install.sh | BIN_DIR=/usr/local/bin sudo -E sh

Verify the installation:

dagger version

If you see a version string, you’re ready to write your first pipeline.

Your First Dagger Pipeline in Python

Dagger ships with SDKs for Go, Python, and TypeScript. Below is a minimal Python example that builds a Docker image, runs unit tests inside the image, and pushes the image to a registry if the tests pass.

Project Structure

my-app/
├── src/
│   └── main.py
├── tests/
│   └── test_main.py
├── Dockerfile
└── ci.py       # Dagger pipeline entry point (don't name it dagger.py — that would shadow the SDK import)

ci.py

import sys
import time

import anyio
import dagger


async def main() -> None:
    # Connect to the Dagger Engine; every step below runs in a container
    async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
        # 1️⃣ Build the image from the current directory
        app = (
            client.container()
            .from_("python:3.11-slim")
            .with_directory("/app", client.host().directory("."))  # copy source code
            .with_workdir("/app")
            .with_exec(["pip", "install", "-r", "requirements.txt"])
            .with_exec(["python", "-m", "compileall", "."])
        )

        # 2️⃣ Run unit tests inside the built container; a failing
        # test suite exits non-zero, which raises dagger.ExecError
        try:
            test_output = await app.with_exec(["pytest", "tests"]).stdout()
        except dagger.ExecError as err:
            raise RuntimeError("Tests failed – aborting push") from err
        print("Test output:", test_output)

        # 3️⃣ Tests passed – publish the image with a unique tag
        # (swap in ttl.sh/my-app for a throwaway demo registry)
        ref = await app.publish(f"docker.io/youruser/my-app:{int(time.time())}")
        print("Published:", ref)


anyio.run(main)

This script does three things: builds an image, runs tests inside that image, and pushes the image only when tests succeed. Notice how the entire flow is expressed in plain Python, with full access to loops, conditionals, and error handling.

Pro tip: Use ttl.sh (a public temporary registry) for quick experiments. It automatically expires images after a short TTL, keeping your namespace clean.

Running the Pipeline Locally

To execute the pipeline, install the Python SDK (pip install dagger-io) and run:

dagger run python ci.py

Dagger will start its engine in a container, download any required base images, and orchestrate the steps defined in main. Because everything runs inside containers, you get reproducible builds without polluting your host system.

Integrating Dagger with GitHub Actions

One of Dagger’s biggest selling points is that you can embed the same pipeline code inside any CI system. Below is a minimal GitHub Actions workflow that invokes the same ci.py script.

.github/workflows/dagger.yml

name: Dagger CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Dagger CLI
        run: |
          curl -L https://dl.dagger.io/dagger/install.sh | sudo sh
          dagger version

      - name: Install Dagger Python SDK
        run: pip install dagger-io

      - name: Run Dagger pipeline
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
          DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
        run: dagger run python ci.py

The workflow checks out the code, installs the Dagger CLI, and runs the same pipeline you executed locally. Secrets for Docker Hub are passed as environment variables, keeping credentials out of the repo.

Pro tip: Cache the Dagger binary between runs using the actions/cache action. It removes the download step from every workflow run.

Advanced Example: Multi‑Stage Build with Go SDK

For larger projects you often need multiple stages: compile, test, lint, and finally package. The Go SDK makes this pattern concise because Go’s type system naturally models the pipeline DAG.

Project Layout

go-service/
├── cmd/
│   └── server/
│       └── main.go
├── internal/
│   └── ...
├── go.mod
├── go.sum
└── dagger.go

dagger.go

package main

import (
    "context"
    "fmt"
    "os"

    "dagger.io/dagger"
)

func main() {
    ctx := context.Background()
    client, err := dagger.Connect(ctx, dagger.WithLogOutput(os.Stderr))
    if err != nil {
        panic(err)
    }
    defer client.Close()

    if err := runPipeline(ctx, client); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}

func runPipeline(ctx context.Context, c *dagger.Client) error {
    // 1️⃣ Build the binary in a container with Go toolchain
    src := c.Host().Directory(".")
    builder := c.Container().
        From("golang:1.22-alpine").
        WithMountedDirectory("/src", src).
        WithWorkdir("/src").
        WithExec([]string{"go", "mod", "download"}).
        // Build into /bin, which already exists in the image
        // (go build does not create missing parent directories)
        WithExec([]string{"go", "build", "-o", "/bin/server", "./cmd/server"})

    // 2️⃣ Lint using golangci-lint (pin the image tag for reproducible runs)
    linter := c.Container().
        From("golangci/golangci-lint:v1.59-alpine").
        WithMountedDirectory("/src", src).
        WithWorkdir("/src").
        WithExec([]string{"golangci-lint", "run"})

    // 3️⃣ Run unit tests
    tester := c.Container().
        From("golang:1.22-alpine").
        WithMountedDirectory("/src", src).
        WithWorkdir("/src").
        WithExec([]string{"go", "test", "./..."})

    // 4️⃣ Package the binary into a minimal runtime image
    runtime := c.Container().
        From("alpine:3.19").
        WithFile("/app/server", builder.File("/bin/server")).
        WithEntrypoint([]string{"/app/server"})

    // Execute steps sequentially (Dagger resolves dependencies automatically)
    if _, err := linter.Sync(ctx); err != nil {
        return fmt.Errorf("lint failed: %w", err)
    }
    if _, err := tester.Sync(ctx); err != nil {
        return fmt.Errorf("tests failed: %w", err)
    }

    // Publish the final image
    _, err := runtime.Publish(ctx, "docker.io/youruser/go-service:latest")
    return err
}

This Go pipeline demonstrates several Dagger strengths:

  1. Each stage reuses the same source directory without copying files multiple times.
  2. Intermediate containers (builder, linter, tester) are cached automatically; rerunning the pipeline only rebuilds what changed.
  3. The final image is stripped down to alpine, resulting in a tiny production artifact.

Real‑World Use Cases

Monorepos with multiple languages. Companies that host front‑end, back‑end, and infra code in a single repository often struggle with divergent CI configurations. Dagger lets you write a single pipeline that branches based on file patterns, invoking Node, Python, or Go steps as needed.
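A minimal sketch of that branching logic, with hypothetical directory names and pipeline labels:

```python
def pipelines_for(changed_files):
    """Map changed file paths to the language pipelines that must run."""
    rules = {"frontend/": "node", "backend/": "python", "infra/": "go"}
    targets = set()
    for path in changed_files:
        for prefix, pipeline in rules.items():
            if path.startswith(prefix):
                targets.add(pipeline)
    return sorted(targets)
```

Each label would then dispatch to a dedicated Dagger function; a docs-only change returns an empty list and skips the build entirely.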

Secure build environments. Because every step runs in an isolated container, you can enforce least‑privilege policies. For example, a step that pushes to a private registry can be granted a short‑lived token that never leaves the container.

Self‑hosted runners. If you need to run pipelines on custom hardware (e.g., ARM boards, GPUs), Dagger’s SDK can target any Docker‑compatible runtime. The same pipeline you test on a laptop will work on a high‑performance build farm without modification.

Pro Tips for Scaling Dagger Pipelines

1️⃣ Leverage Dagger’s built‑in caching. Dagger content‑addresses the inputs of every operation. When you rerun a pipeline, unchanged steps are served from the cache, cutting down build time dramatically.

2️⃣ Keep containers small. Use distroless or Alpine base images for runtime stages. Smaller images mean faster pulls, less storage, and reduced attack surface.

3️⃣ Modularize with functions. Extract reusable pieces (e.g., “run linter”, “publish image”) into separate functions or packages. This mirrors normal software engineering practices and makes pipelines easier to test.

4️⃣ Use secret management wisely. Dagger can inject secrets directly into containers without writing them to disk. In Python, create one with client.set_secret("dockerhub-token", token) and attach it via .with_secret_variable() or .with_mounted_secret() so it never ends up in an image layer or build log.

5️⃣ Monitor pipeline health. Dagger emits rich telemetry (step duration, cache hits, resource usage). Hook this into Grafana or Prometheus to spot bottlenecks early.
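Tips 1️⃣ and 4️⃣ can be sketched as small chainable helpers. The helper names are hypothetical, but cache_volume, with_mounted_cache, set_secret, and with_registry_auth are real Dagger Python SDK calls; the sketch only assumes a client and container exposing those methods:

```python
import os


def with_pip_cache(client, container):
    """Mount a named cache volume at pip's cache directory so repeated
    runs reuse downloaded wheels instead of fetching them again."""
    return container.with_mounted_cache(
        "/root/.cache/pip", client.cache_volume("pip-cache")
    )


def with_dockerhub_auth(client, container):
    """Register Docker Hub credentials as a Dagger secret; the token is
    injected at publish time and never written into an image layer."""
    token = client.set_secret("dockerhub-token", os.environ["DOCKERHUB_TOKEN"])
    return container.with_registry_auth(
        "docker.io", os.environ["DOCKERHUB_USERNAME"], token
    )
```

A pipeline would then publish with await with_dockerhub_auth(client, image).publish("docker.io/youruser/my-app:1.2.3").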

Testing Dagger Pipelines Locally

Before committing a pipeline, run it in a sandboxed environment. Because a Dagger pipeline is an ordinary program, you can execute it against your local Docker daemon before any CI system sees it; every step runs in throwaway containers, so nothing leaks onto your host. This is especially useful for debugging steps that would otherwise only run in CI. For the Go example above:

dagger run go run dagger.go

If the local run passes, you can be confident that the same code will succeed in GitHub Actions, GitLab CI, or any other runner.

Deploying with Dagger: From Image to Kubernetes

Beyond building images, Dagger can also apply Kubernetes manifests, run Helm upgrades, or even trigger Argo CD syncs. Below is a concise snippet that pushes a Docker image and updates a Helm release in a single pipeline.

import os

async def deploy(client: dagger.Client) -> None:
    # Build and push the image, tagged with the short Git commit SHA
    # (GITHUB_SHA is set by GitHub Actions; fall back to "dev" locally)
    tag = os.environ.get("GITHUB_SHA", "dev")[:7]
    image_ref = await (
        client.container()
        .from_("python:3.11-slim")
        .with_directory("/app", client.host().directory("."))
        .with_workdir("/app")
        .with_exec(["pip", "install", "-r", "requirements.txt"])
        .publish(f"docker.io/youruser/my-app:{tag}")
    )

    # Run helm upgrade in a container that has Helm installed, pointing it
    # at the freshly pushed tag. The kubeconfig is mounted so Helm can reach
    # the cluster; in CI, inject it as a Dagger secret instead.
    await (
        client.container()
        .from_("alpine/helm:3.12.0")
        .with_mounted_directory("/charts", client.host().directory("helm/charts"))
        .with_file("/values.yaml", client.host().file("helm/values.yaml"))
        .with_file("/root/.kube/config", client.host().file(os.path.expanduser("~/.kube/config")))
        .with_exec([
            "helm", "upgrade", "--install", "my-app", "/charts/my-app",
            "-f", "/values.yaml", "--set", f"image.tag={tag}",
        ])
        .sync()
    )

This example demonstrates Dagger’s ability to orchestrate cross‑tool workflows without leaving the container world. The same function could be called from a GitHub Actions step, a Jenkins job, or a local developer script.

Common Pitfalls and How to Avoid Them

Overly broad build contexts. If you copy the entire source tree before installing dependencies, any file change invalidates the dependency layer and Dagger rebuilds it on every run. Copy dependency manifests (e.g., requirements.txt) first, or exclude files that don’t affect the build, so cached layers survive ordinary code edits.

Hard‑coded image tags. Using static tags (like latest) defeats the purpose of reproducibility. Prefer dynamic tags based on Git SHA, timestamps, or semantic versions.
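A small helper along those lines; GITHUB_SHA and CI_COMMIT_SHA are the variables GitHub Actions and GitLab CI export, while the local fallback format is an assumption:

```python
import os
import time


def dynamic_tag() -> str:
    """Derive an image tag from the CI commit SHA, falling back to a
    timestamp for local runs so no two builds share a tag."""
    sha = os.environ.get("GITHUB_SHA") or os.environ.get("CI_COMMIT_SHA")
    if sha:
        return sha[:7]
    return f"dev-{int(time.time())}"
```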

Excessive container size. Pulling large base images (e.g., ubuntu:22.04) for trivial steps adds latency. Split heavy steps into dedicated containers and keep lightweight steps (like linting) on minimal images.

Future Directions: Dagger’s Roadmap

The Dagger team is actively expanding support for more runtimes (e.g., WASM), richer secret backends (Vault, AWS KMS), and tighter integration with cloud-native tools like Tekton. Watching the roadmap will help you plan migrations and adopt new features before they become mainstream.

In particular, the upcoming “pipeline composition” feature will let you import one Dagger pipeline as a library into another, enabling truly modular CI/CD architectures across multiple repositories.

Conclusion

Dagger.io brings the power of code to CI/CD, turning brittle YAML files into maintainable, testable, and portable pipelines. By modeling each step as a container and leveraging the full expressiveness of a programming language, you gain version control, caching, and cross‑platform compatibility out of the box. Whether you’re a solo developer looking for a reproducible build script or a large organization standardizing pipelines across dozens of services, Dagger offers a compelling alternative to traditional CI tooling.

Start small—convert a single build step into Dagger, run it locally, and then integrate it into your existing CI system. As you grow comfortable, expand to multi‑stage pipelines, secret management, and Kubernetes deployments. The result is a CI/CD workflow that feels like any other piece of your codebase: testable, refactorable, and truly portable.
