Woodpecker CI: Lightweight Self-Hosted CI/CD Pipeline
Woodpecker CI has quietly become a favorite among teams that crave the flexibility of self‑hosted pipelines without the overhead of heavyweight solutions. It started as a fork of Drone CI, shedding unnecessary bloat while preserving a clean, declarative YAML syntax. In this article we’ll explore why Woodpecker feels light as a feather, how to spin it up on your own hardware, and how to craft pipelines that scale from a single‑container test suite to a full production release workflow.
What is Woodpecker CI?
At its core, Woodpecker CI is an open‑source continuous integration and delivery platform that runs your pipeline definitions inside isolated containers. It supports GitHub, GitLab, Gitea, and Bitbucket, pulling code changes via webhooks and executing steps defined in a .woodpecker.yml file. Because the engine itself is just a Go binary, you can host it on a modest VM, a Raspberry Pi, or a Kubernetes cluster, depending on your scale.
Core Principles
- Lightweight Runtime: No heavyweight build machines to maintain; each step runs in a Docker container launched by the agent.
- Declarative Pipelines: A simple YAML format keeps configurations readable and versionable.
- Extensible Plugins: Use any Docker image as a step, making language‑agnostic builds trivial.
- Security‑First: Secrets are stored encrypted, and each build runs in an isolated namespace.
Getting Started
Before you dive into pipelines, you need a running Woodpecker server and an agent that will execute jobs. The official Docker images make the setup painless: one container for the server, another for the agent, and a shared Docker socket to launch build containers.
Installation
# Pull the official images
docker pull woodpeckerci/woodpecker-server:latest
docker pull woodpeckerci/woodpecker-agent:latest
# Create a network for internal communication
docker network create woodpecker-net
# Run the server
docker run -d \
  --name woodpecker-server \
  --network woodpecker-net \
  -e WOODPECKER_HOST=http://localhost:8000 \
  -e WOODPECKER_GITEA=true \
  -e WOODPECKER_GITEA_URL=https://gitea.example.com \
  -e WOODPECKER_GITEA_CLIENT=your-client-id \
  -e WOODPECKER_GITEA_SECRET=your-client-secret \
  -e WOODPECKER_AGENT_SECRET=your-shared-agent-secret \
  -p 8000:8000 \
  woodpeckerci/woodpecker-server:latest
# Run the agent (connects to the server's gRPC port, 9000 by default,
# and must present the same WOODPECKER_AGENT_SECRET as the server)
docker run -d \
  --name woodpecker-agent \
  --network woodpecker-net \
  -e WOODPECKER_SERVER=woodpecker-server:9000 \
  -e WOODPECKER_AGENT_SECRET=your-shared-agent-secret \
  -v /var/run/docker.sock:/var/run/docker.sock \
  woodpeckerci/woodpecker-agent:latest
Replace the Gitea variables with the credentials of your chosen VCS provider. If you prefer GitHub, swap the WOODPECKER_GITEA_* variables for WOODPECKER_GITHUB_* equivalents. Once both containers are up, navigate to http://localhost:8000 to finish the web UI setup.
First Pipeline
# .woodpecker.yml
pipeline:
  test:
    image: python:3.11-slim
    commands:
      - pip install -r requirements.txt
      - pytest -q
This minimal pipeline pulls a Python image, installs dependencies, and runs your test suite. Commit the file to your repository, push, and watch Woodpecker trigger a build automatically.
Key Features
Woodpecker’s feature set is intentionally focused. It doesn’t try to be a full-fledged CD platform with blue‑green deployments out of the box, but it provides enough hooks for you to build those patterns yourself.
Lightweight Architecture
The server maintains a tiny SQLite database (or PostgreSQL if you need clustering). All heavy lifting—building, testing, packaging—happens in Docker containers launched by the agent. Because containers are short‑lived, resource consumption stays low, and you can spin up many parallel jobs on a single host without exhausting memory.
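If you outgrow SQLite, switching the server to PostgreSQL is a matter of two environment variables. A minimal sketch of the server's environment, assuming an external Postgres instance reachable as db (hostname and credentials are placeholders):

```yaml
# Server environment for an external PostgreSQL backend (placeholder credentials)
environment:
  - WOODPECKER_DATABASE_DRIVER=postgres
  - WOODPECKER_DATABASE_DATASOURCE=postgres://woodpecker:secret@db:5432/woodpecker?sslmode=disable
```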
Self‑Hosting Options
- Docker Compose: Ideal for development or small teams; a single docker-compose.yml can bring up the whole stack.
- Kubernetes: Deploy the server as a Deployment and the agent as a DaemonSet for auto‑scaling across nodes.
- Bare‑Metal: Run the binary directly on a VM and mount the Docker socket for container execution.
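For the Docker Compose route, a minimal sketch of the whole stack might look like the following. It assumes a Gitea forge; ci.example.com, the client ID/secret, and the agent secret are placeholders you must replace:

```yaml
# docker-compose.yml — minimal Woodpecker stack (placeholder values)
version: "3"
services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest
    ports:
      - "8000:8000"
    environment:
      - WOODPECKER_HOST=https://ci.example.com
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_URL=https://gitea.example.com
      - WOODPECKER_GITEA_CLIENT=your-client-id
      - WOODPECKER_GITEA_SECRET=your-client-secret
      - WOODPECKER_AGENT_SECRET=change-me
    volumes:
      - woodpecker-server-data:/var/lib/woodpecker
  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    depends_on:
      - woodpecker-server
    environment:
      # Agents talk to the server's gRPC port (9000 by default)
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=change-me
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  woodpecker-server-data:
```

A single `docker compose up -d` then brings up both containers on a shared network.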
Practical Example: Node.js Application
Let’s walk through a real‑world scenario: a simple Express API that needs linting, testing, and Docker image publishing. The repository already contains a Dockerfile that builds the app.
.woodpecker.yml for Node.js
pipeline:
  lint:
    image: node:20-alpine
    commands:
      - npm ci
      - npm run lint
  test:
    image: node:20-alpine
    commands:
      - npm ci
      - npm test
  build_image:
    image: plugins/docker
    settings:
      repo: your-registry.com/your-org/express-api
      tags: ${CI_COMMIT_SHA}
      username:
        from_secret: docker_user
      password:
        from_secret: docker_pass
    when:
      branch: main
      event: push
The pipeline consists of three stages. The first two use the official Node image to run npm ci for reproducible installs, followed by linting and testing. The final stage leverages the plugins/docker plugin to push a tagged image to your private registry, but only when code lands on the main branch.
Running the Pipeline
After committing the .woodpecker.yml, push to your remote. Woodpecker will fetch the changes, spin up three containers in sequence, and report the status back to your VCS. If any step fails, subsequent stages are skipped, keeping feedback fast and clear.
Practical Example: Python Package Release
Now let’s see how Woodpecker can automate the release of a Python library to PyPI. The repository follows the typical src/ layout and includes a setup.cfg for packaging.
Workflow File
pipeline:
  test:
    image: python:3.12-slim
    commands:
      - pip install -r requirements-dev.txt
      - pytest -q
  build:
    image: python:3.12-slim
    commands:
      - pip install -U pip build
      - python -m build
  publish:
    image: plugins/pypi
    settings:
      username:
        from_secret: pypi_user
      password:
        from_secret: pypi_token
    when:
      event: tag
Note that each step starts from a fresh container, so Python packages installed in one step are not available in the next; the test and build steps therefore install their own dependencies. The workspace itself, however, is shared across steps, so the dist folder produced by python -m build is visible to the publish step without any explicit artifact hand-off. The publish stage only runs on tag events, ensuring that only versioned releases hit PyPI.
Real‑World Use Cases
Many teams adopt Woodpecker for scenarios where a cloud CI service would be overkill or too costly. Below are three common patterns.
- Embedded Device Firmware: Build toolchains are large and need direct access to hardware. A self‑hosted Woodpecker runner can mount USB devices, flash firmware, and report results without exposing the hardware to the public internet.
- Monorepo with Heterogeneous Stacks: A single repository may contain Go microservices, a React front‑end, and a Python data pipeline. Woodpecker’s per‑step Docker images let you tailor the environment per language without juggling multiple CI providers.
- Compliance‑Heavy Environments: Regulations sometimes require that source code never leave the corporate network. Hosting Woodpecker behind a firewall satisfies that requirement while still providing modern CI feedback loops.
Pro Tips for Optimizing Woodpecker CI
Cache wisely. Use the volumes key to mount a persistent cache directory (e.g., /root/.cache/pip for Python) between builds. This can shave minutes off each run.
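As a sketch, the first pipeline from earlier could mount a host directory as a pip cache like this. Note that in Woodpecker, host-path volumes are only honored for repositories marked as trusted; the cache path is an arbitrary example:

```yaml
# Step with a host-mounted pip cache (repository must be marked "trusted")
pipeline:
  test:
    image: python:3.11-slim
    volumes:
      - /var/cache/woodpecker/pip:/root/.cache/pip
    commands:
      - pip install -r requirements.txt
      - pytest -q
```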
Parallelize when possible. Split independent test suites into separate steps and use the depends_on attribute to run them concurrently.
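For example, two test suites that each depend only on a shared build step can be scheduled concurrently by Woodpecker's DAG mode. A sketch, assuming a recent Woodpecker release with step-level depends_on support; the npm script names are placeholders:

```yaml
# unit_tests and integration_tests both depend only on "build",
# so the scheduler may run them in parallel
pipeline:
  build:
    image: node:20-alpine
    commands:
      - npm ci
  unit_tests:
    image: node:20-alpine
    depends_on: [build]
    commands:
      - npm run test:unit
  integration_tests:
    image: node:20-alpine
    depends_on: [build]
    commands:
      - npm run test:integration
```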
Secure secrets. Store tokens in Woodpecker’s built‑in secret store; never hard‑code them in .woodpecker.yml.
Tolerate non‑critical failures. Mark optional steps with failure: ignore so the pipeline stays green while still surfacing warnings.
Monitor resource usage. The server exposes Prometheus metrics at /metrics. Hook these into Grafana to spot spikes before they cause queue backlogs.
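A minimal Prometheus scrape job for the server might look like the following; the target hostname matches the container name used earlier and is an assumption about your deployment. Depending on your Woodpecker version, the /metrics endpoint may also require a bearer token, which Prometheus can supply via its authorization settings:

```yaml
# prometheus.yml fragment scraping the Woodpecker server
scrape_configs:
  - job_name: woodpecker
    metrics_path: /metrics
    static_configs:
      - targets: ["woodpecker-server:8000"]
```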
Conclusion
Woodpecker CI strikes a sweet spot between simplicity and power. Its lightweight footprint, flexible Docker‑based steps, and self‑hosting capabilities make it an excellent choice for teams that value control, cost efficiency, and the ability to tailor pipelines to unique workloads. Whether you’re building a tiny Node.js microservice or orchestrating a complex multi‑language monorepo, Woodpecker gives you the tools to automate, test, and ship code with confidence—without the bloat of larger enterprise solutions.