Testcontainers: Integration Testing with Real Databases
April 22, 2026, 5:30 a.m.

Integration testing with real databases used to feel like a luxury—most teams settled for in‑memory mocks or static fixtures. Those shortcuts often miss subtle bugs that only appear when the full stack talks to an actual DB engine. Testcontainers changes the game by spinning up lightweight Docker containers on‑the‑fly, giving you a fresh, production‑grade database for every test run. In this article we’ll walk through why you should care, how to set it up, and three practical examples that you can copy straight into your own projects.

Why Real‑Database Integration Tests Matter

Mocking a relational database is tempting because it’s fast and requires no external services. However, mocks can’t enforce schema constraints, transaction semantics, or vendor‑specific quirks such as PostgreSQL’s ON CONFLICT clause. When your code relies on those features, a mock will silently pass while the real DB throws an error.
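To make this concrete, here is an illustrative sketch of the kind of vendor-specific statement a mock happily accepts but only a real engine validates (the table and helper function are hypothetical, not from any particular codebase):

```python
# Illustrative upsert using PostgreSQL's ON CONFLICT clause. An in-memory
# mock will accept this string no matter what; only a real PostgreSQL
# engine checks that the conflict target matches an actual unique index.
UPSERT_SQL = """
INSERT INTO users (id, name) VALUES (%s, %s)
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
"""

def upsert_user(cursor, user_id, name):
    # Runs the upsert through any DB-API cursor (psycopg2, for instance)
    cursor.execute(UPSERT_SQL, (user_id, name))
```

Run against a mock, this test proves nothing; run against a Testcontainers-backed PostgreSQL, it proves the conflict target really exists.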

Beyond correctness, real‑database tests give you confidence in migration scripts. Running a migration against a clean container mirrors the production upgrade path, catching missing defaults or incompatible data types early. This also helps you verify that your ORM mappings stay in sync with the underlying schema.
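As a sketch of that upgrade-path check, assuming Alembic as the migration tool (the helper and the DATABASE_URL variable name are illustrative, and the runner is injectable so the helper can be exercised without Alembic installed):

```python
import os
import subprocess

def run_migrations(database_url, runner=subprocess.run):
    # Apply all pending migrations against the throwaway container's URL,
    # mirroring the production upgrade path on a clean database.
    env = dict(os.environ, DATABASE_URL=database_url)
    return runner(["alembic", "upgrade", "head"], env=env, check=True)
```

Calling this with the container's connection URL inside a fixture catches missing defaults or incompatible types before they reach production.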

Finally, using containers isolates test environments. Each test gets its own disposable DB instance, eliminating flaky failures caused by leftover state from previous runs. The result is a reproducible CI pipeline that behaves the same on a developer’s laptop as it does on a shared build server.

What Is Testcontainers?

Testcontainers is a library that abstracts Docker container lifecycle management for integration tests. It supports many languages—Java, Python, Go, .NET—and provides ready‑made modules for popular databases, message brokers, and even full‑stack services. Under the hood it uses the Docker Engine API to pull images, start containers, expose ports, and clean up after the test finishes.

From a developer’s perspective the API is intentionally simple: you instantiate a container class, call .start(), retrieve the connection details, and then interact with the service just like you would with a locally installed instance. When the test ends, the container is stopped and removed, leaving no trace.

Because Testcontainers spins up real containers, you get the exact same version of PostgreSQL, MySQL, or any other service that you would run in production. This eliminates the “works on my machine” gap and lets you test edge cases such as SSL connections or specific configuration flags.

Getting Started with Testcontainers in Python

The Python ecosystem offers the testcontainers package, which you can install via pip. It ships with modules for PostgreSQL, MySQL, MongoDB, and more. The first step is to add the dependency and ensure Docker is running on the host machine.

# Install the library
pip install testcontainers[postgresql] pytest

With the package installed, you can write a simple test that starts a PostgreSQL container, creates a table, and runs a query. Below is a minimal example that demonstrates the core workflow.

Example 1: Basic PostgreSQL Container

import psycopg2
from testcontainers.postgres import PostgresContainer

def test_postgres_basic():
    # Spin up a PostgreSQL container from a pinned image tag
    with PostgresContainer("postgres:15-alpine") as pg:
        # The container is now running. get_connection_url() returns a
        # SQLAlchemy-style URL (postgresql+psycopg2://...); strip the driver
        # suffix so psycopg2 can parse it directly.
        conn_url = pg.get_connection_url().replace("+psycopg2", "")
        conn = psycopg2.connect(conn_url)
        cur = conn.cursor()
        cur.execute("CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL);")
        cur.execute("INSERT INTO users (name) VALUES ('Alice'), ('Bob');")
        conn.commit()
        cur.execute("SELECT COUNT(*) FROM users;")
        count = cur.fetchone()[0]
        assert count == 2
        cur.close()
        conn.close()

Notice the with block: Testcontainers automatically stops and removes the container when the block exits, even if an exception is raised. This pattern works seamlessly with pytest, ensuring clean teardown after each test function.
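The context manager is sugar over explicit start/stop calls. When a container has to outlive a single block (say, in a custom harness), you can manage the lifecycle by hand; below is a sketch with an injectable factory so the pattern itself can be exercised without Docker (the helper names are illustrative):

```python
def default_factory():
    # Imported lazily so this sketch can be exercised with a fake factory
    # on machines without testcontainers or Docker installed.
    from testcontainers.postgres import PostgresContainer
    return PostgresContainer("postgres:15-alpine")

def run_with_container(fn, factory=default_factory):
    # Start a container, pass its connection URL to `fn`, and guarantee
    # teardown even if `fn` raises -- the same contract as the with block.
    container = factory()
    container.start()
    try:
        return fn(container.get_connection_url())
    finally:
        container.stop()
```

In practice the with block is almost always preferable; explicit calls are mainly useful when the container's lifetime is tied to something other than a single lexical scope.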

Integrating Testcontainers with a Flask + SQLAlchemy App

Most real‑world projects use an ORM, and the integration points differ slightly from raw SQL. Let’s see how to wire a Flask application to a temporary PostgreSQL container using a pytest fixture. This approach lets you write high‑level tests that hit the actual request/response cycle while still using a real database.

Setup: Flask App Skeleton

# app.py
import os

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Pick up the database URL from the environment; tests inject the
# container's URL here. In-memory SQLite is only a local fallback.
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get(
    "SQLALCHEMY_DATABASE_URI", "sqlite://"
)
db = SQLAlchemy(app)

class Todo(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    task = db.Column(db.String(120), nullable=False)

@app.route("/todos", methods=["POST"])
def create_todo():
    data = request.get_json()
    todo = Todo(task=data["task"])
    db.session.add(todo)
    db.session.commit()
    return jsonify({"id": todo.id, "task": todo.task}), 201

@app.route("/todos", methods=["GET"])
def list_todos():
    todos = Todo.query.all()
    return jsonify([{"id": t.id, "task": t.task} for t in todos])

The app expects a SQLALCHEMY_DATABASE_URI environment variable. In production you’d point it at your managed PostgreSQL instance; in tests we’ll inject the container’s URL.

pytest Fixture for the Database Container

# conftest.py
import os
import pytest
from testcontainers.postgres import PostgresContainer
from app import app, db

@pytest.fixture(scope="session")
def postgres_container():
    with PostgresContainer("postgres:15-alpine") as pg:
        # Point the Flask app at the container: the env var covers fresh
        # imports, the config assignment covers the already-created app
        url = pg.get_connection_url()
        os.environ["SQLALCHEMY_DATABASE_URI"] = url
        app.config["SQLALCHEMY_DATABASE_URI"] = url
        # Create tables once per session
        with app.app_context():
            db.create_all()
        yield pg
        # Teardown happens automatically when the context exits

@pytest.fixture
def client(postgres_container):
    # Flask's test client works with the container-backed DB
    with app.test_client() as client:
        yield client

The postgres_container fixture runs once per test session, creating the schema a single time. Individual tests receive a fresh client that talks to the same temporary database; because that database is shared across the session, each test should clean up the rows it creates (or run inside a rolled-back transaction) to stay isolated from its neighbors.
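A lightweight way to handle that cleanup is a small helper that wipes the relevant tables; called from an autouse fixture in conftest.py after each test, it restores a blank slate without dropping the schema (the helper is illustrative, not part of the app above):

```python
def clear_tables(session, *models):
    # Delete every row from each model's table, then commit once.
    # Faster than drop_all/create_all between tests, and it preserves
    # the schema created by the session-scoped fixture.
    for model in models:
        session.query(model).delete()
    session.commit()
```

In conftest.py you might wrap this in a fixture marked `@pytest.fixture(autouse=True)` and call `clear_tables(db.session, Todo)` after the `yield`.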

Example 2: End‑to‑End Test of the Flask API

def test_create_and_list_todos(client):
    # Create a new todo
    response = client.post("/todos", json={"task": "Write blog post"})
    assert response.status_code == 201
    data = response.get_json()
    assert data["task"] == "Write blog post"
    todo_id = data["id"]

    # List all todos and verify the new entry appears
    response = client.get("/todos")
    assert response.status_code == 200
    todos = response.get_json()
    assert any(t["id"] == todo_id and t["task"] == "Write blog post" for t in todos)

This test exercises the full request stack, the ORM layer, and the underlying PostgreSQL engine—all without any external services. When you run pytest, Testcontainers pulls the Docker image (if not cached), starts the container, runs the test, and tears everything down.

Pro tip: Pin the exact Docker image tag (e.g., postgres:15.4-alpine) in your fixtures. This guarantees deterministic builds and protects you from accidental breaking changes in newer minor releases.

Testing with MySQL and Transaction Rollbacks

While PostgreSQL is a popular default, many legacy systems still run MySQL or MariaDB. Testcontainers supports those databases as well, and the workflow is almost identical. One nuance worth planning for is cleanup: rather than dropping and recreating tables between tests, you can wrap each test in a transaction and roll it back afterwards, keeping the container's state clean with minimal overhead.

Example 3: MySQL Container with Pytest Transaction Fixture

import pytest
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, text
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import sessionmaker
from testcontainers.mysql import MySqlContainer

@pytest.fixture(scope="function")
def mysql_container():
    # A fresh MySQL instance for each test function
    with MySqlContainer("mysql:8.0") as mysql:
        yield mysql

@pytest.fixture(scope="function")
def db_session(mysql_container):
    engine = create_engine(mysql_container.get_connection_url())
    metadata = MetaData()
    Table(
        "users",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("username", String(50), nullable=False, unique=True),
    )
    metadata.create_all(engine)
    # Bind the session to a connection whose outer transaction is rolled
    # back after the test, discarding everything the test wrote
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    yield session
    session.close()
    transaction.rollback()
    connection.close()
    engine.dispose()

def test_user_unique_constraint(db_session):
    # Insert the first user
    db_session.execute(text("INSERT INTO users (username) VALUES ('alice')"))
    # A duplicate insert must violate the UNIQUE constraint on username;
    # SQLAlchemy surfaces the driver error as sqlalchemy.exc.IntegrityError
    with pytest.raises(IntegrityError):
        db_session.execute(text("INSERT INTO users (username) VALUES ('alice')"))

This snippet shows two important patterns: (1) using a function‑scoped container fixture to spin up a fresh MySQL instance for each test, and (2) leveraging SQLAlchemy’s transaction control to roll back changes automatically. The container itself stays alive for the duration of the test, but the database state is cleared without needing to drop and recreate tables.

Note: If you run tests in parallel (e.g., with pytest-xdist), Testcontainers maps each container's ports to random free host ports, so workers generally won't collide. Function-scoped fixtures like the ones above then give each worker its own container and keep state fully isolated.

Advanced Scenarios: Custom Configuration and Network Isolation

Sometimes you need to tweak the database configuration to mirror production settings—think custom max_connections, SSL mode, or specific extensions. Testcontainers lets you pass arbitrary environment variables or command‑line arguments when constructing the container.
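For example, a small helper can render such settings as server flags to pass via with_command on a stock PostgresContainer; the official postgres image forwards leading-dash arguments to the server binary (the helper name and flag values below are illustrative):

```python
def pg_server_flags(**settings):
    # Render keyword settings as "-c name=value" PostgreSQL server flags,
    # suitable for the postgres image's entrypoint.
    return " ".join(f"-c {name}={value}" for name, value in settings.items())

# Usage (requires testcontainers and Docker):
#   container = PostgresContainer("postgres:15-alpine").with_command(
#       pg_server_flags(max_connections=200, shared_buffers="256MB")
#   )
```

Keeping the flags in one helper makes it easy to reuse the same production-mirroring configuration across fixtures.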

For PostgreSQL you can enable the postgis extension by using a pre‑built image, or you can mount a custom postgresql.conf file via a bind mount. Below is a quick example that starts a container with SSL enabled, which is useful when your application enforces encrypted connections.

SSL‑Enabled PostgreSQL Container

from testcontainers.postgres import PostgresContainer

class PostgresSSLContainer(PostgresContainer):
    """PostgresContainer variant that starts the server with SSL enabled.

    Assumes server.crt/server.key exist in the mounted directory; the
    host path below is a placeholder.
    """

    def __init__(self, image="postgres:15-alpine"):
        super().__init__(image)
        # Mount a local directory containing the certificates
        self.with_volume_mapping("/path/to/certs", "/var/lib/postgresql/certs")
        # Pass server flags through the image's entrypoint to turn SSL on
        self.with_command(
            "-c ssl=on "
            "-c ssl_cert_file=/var/lib/postgresql/certs/server.crt "
            "-c ssl_key_file=/var/lib/postgresql/certs/server.key"
        )

# Usage in a test
def test_ssl_connection():
    with PostgresSSLContainer() as pg:
        # Ask the driver to require an encrypted connection
        conn_url = pg.get_connection_url() + "?sslmode=require"
        # Proceed with normal connection logic...

The custom class builds on the library's container classes, giving you full control over the underlying docker run invocation. This flexibility means you can emulate almost any production-grade configuration without leaving your test suite.

Running Testcontainers in CI/CD Pipelines

CI environments like GitHub Actions, GitLab CI, or Jenkins already have Docker available, so integrating Testcontainers is straightforward. The key is to ensure the Docker daemon has enough resources and that the job caches the pulled images to speed up subsequent runs.

A typical GitHub Actions workflow might look like this:

name: Python CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      # Optional: start a Docker-in-Docker service if you need privileged mode
      docker:
        image: docker:20.10-dind
        privileged: true
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests
        env:
          # Ensure Docker can be reached from the runner
          DOCKER_HOST: unix:///var/run/docker.sock
        run: |
          pytest -v

The services.docker block isn't required on GitHub's hosted Ubuntu runners, which already provide a Docker daemon; it is shown for self-hosted runners where Docker is disabled by default. Caching pulled images (for example with a registry mirror or a docker save/load step) can reduce pull times dramatically.

Pro tip: Add a step that runs docker system prune -f after your test suite. This prevents leftover containers from filling up the CI runner’s disk space over time.

Best Practices and Common Pitfalls

1. Keep containers lightweight. Use Alpine‑based images (e.g., postgres:15-alpine) to reduce startup time. If you need extensions, choose a slim variant that already includes them rather than installing packages at runtime.

2. Pin image versions. Avoid the :latest tag in CI; a new minor release could introduce breaking changes that cause your tests to fail unexpectedly.

3. Reuse containers when safe. For large test suites, starting a fresh container for every single test can be slow. Consider a session‑scoped fixture that spins up the container once per test run, then use transactions or truncation to clean state between tests.

4. Beware of time‑zones and locale. Databases inherit the host’s locale settings unless you explicitly set them. If your application depends on UTC timestamps, pass TZ=UTC as an environment variable to the container.

5. Monitor resource usage. Containers consume RAM and CPU; on CI runners with limited resources you may need to cap a container's memory, for example by passing Docker's mem_limit option through the library's keyword pass-through (with_kwargs in recent versions of testcontainers-python).

Conclusion

Testcontainers bridges the gap between fast unit tests and fragile integration tests by giving you real, disposable databases that behave exactly like the ones in production. With just a few lines of code you can spin up PostgreSQL, MySQL, or any other service, run your ORM‑backed logic against it, and tear everything down automatically. The approach scales from a single developer's laptop to a full CI pipeline, ensuring that schema migrations, vendor‑specific SQL, and transactional behavior are all exercised against the real thing.
