Trigger.dev: Background Jobs for Modern Node.js Apps
April 14, 2026, 5:30 a.m.

When you build a modern Node.js application, you quickly discover that not everything can—or should—run in the request‑response cycle. Email notifications, image processing, data aggregation, and third‑party API calls are classic examples of work that belongs in the background. That’s where Trigger.dev shines: it provides a developer‑friendly, serverless‑first platform for defining, scheduling, and monitoring background jobs directly from your code.

Why Trigger.dev Over Traditional Queues?

Traditional message brokers such as RabbitMQ or BullMQ give you raw power, but they also demand infrastructure, scaling logic, and a lot of boilerplate. Trigger.dev abstracts those concerns while keeping the developer experience familiar—everything is written as plain JavaScript/TypeScript functions, and the platform handles retries, concurrency limits, and observability for you.

Another advantage is the tight integration with modern CI/CD pipelines. Push a change that adds or updates a job, deploy, and Trigger.dev picks up the new version without any manual queue re‑configuration. This makes iterating on background processes as smooth as updating an API route.

Core Concepts You Need to Know

Jobs and Tasks

A job in Trigger.dev is a top‑level unit of work that you define with a unique name and a handler function. Inside a job you can create multiple tasks: individual steps that the platform tracks separately. This granularity gives you fine‑grained retry and logging control.

Triggers

Triggers are the events that start a job. They can be HTTP requests, cron schedules, or even webhook payloads from third‑party services. Trigger.dev supports declarative trigger definitions, meaning you describe the event once and the platform wires everything up automatically.

Run Context

Every job runs inside a Run object that provides utilities such as run.logger, run.retry, and run.waitFor. These helpers let you add custom logs, schedule delayed retries, or pause execution until a dependent task finishes.
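
To make a handler testable outside the platform, it helps to code against a small interface that mirrors the run context described above. The following sketch is illustrative only (these are not the SDK's actual types; Logger, RunContext, and FakeLogger are names invented here):

```typescript
// Hypothetical shape of the run context described above, for illustration only.
interface Logger {
  info(message: string): void;
}

interface RunContext {
  logger: Logger;
}

// A fake logger that records messages, useful for unit-testing job handlers.
class FakeLogger implements Logger {
  messages: string[] = [];
  info(message: string): void {
    this.messages.push(message);
  }
}

// A handler written against the interface can be exercised without the platform.
async function greetJob(run: RunContext, name: string): Promise<string> {
  run.logger.info(`Greeting ${name}`);
  return `Hello, ${name}!`;
}
```

In a unit test you pass `{ logger: new FakeLogger() }` and inspect the recorded messages, no platform connection required.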

Getting Started: A Minimal Setup

First, install the Trigger.dev SDK in your Node.js project:

npm install @trigger.dev/sdk

Next, create a trigger.dev.ts file that exports a configured client. The client holds your API key (available from the Trigger.dev dashboard) and the environment name.

import { TriggerClient } from "@trigger.dev/sdk";

export const client = new TriggerClient({
  apiKey: process.env.TRIGGER_API_KEY!,
  endpoint: "https://api.trigger.dev",
  // Choose "development" or "production" based on your stage
  environment: "development",
});

Now you can define your first job. Below is a simple “Send welcome email” job that runs whenever a new user registers.

import { client } from "./trigger.dev";
import nodemailer from "nodemailer";

client.defineJob({
  // Unique identifier for the job
  id: "send-welcome-email",
  // Triggered by an HTTP POST to /api/users
  trigger: {
    type: "http",
    method: "POST",
    path: "/api/users",
  },
  async run({ event, run }) {
    const { email, name } = event.body;

    // Log the start of the job
    run.logger.info(`Preparing welcome email for ${email}`);

    // Create a transporter (replace the host and credentials with your SMTP provider's)
    const transporter = nodemailer.createTransport({
      host: "smtp.example.com",
      port: 587,
      auth: {
        user: process.env.SMTP_USER,
        pass: process.env.SMTP_PASS,
      },
    });

    // Send the email
    await transporter.sendMail({
      from: '"Acme Corp" <no-reply@example.com>', // placeholder sender address
      to: email,
      subject: "Welcome to Acme!",
      text: `Hey ${name},\n\nThanks for joining us!`,
    });

    run.logger.info(`Welcome email sent to ${email}`);
  },
});

Deploy the file, and Trigger.dev automatically creates an HTTP endpoint (e.g., https://api.trigger.dev/v1/webhooks/…) that you can call from your front‑end or another service. The job runs entirely outside your main server process, ensuring that a slow SMTP server never blocks user registration.

Real‑World Use Case #1: Image Processing Pipeline

Imagine an e‑commerce platform where merchants upload product photos. You need to generate thumbnails, watermarks, and WebP versions without slowing down the upload API. Trigger.dev lets you spin up a multi‑step pipeline that processes each image in parallel, retries on failure, and stores results in an S3 bucket.
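
Independent of the Trigger.dev APIs, the core pattern here is "run the variants in parallel and let one failure leave the others intact." In plain TypeScript that idea can be sketched with Promise.allSettled (the step names are placeholders):

```typescript
// Each variant is an independent async step; a failure in one
// must not prevent the others from completing.
type Step = () => Promise<string>;

async function processInParallel(steps: Record<string, Step>) {
  const names = Object.keys(steps);
  // allSettled never rejects: every step runs to completion or failure.
  const results = await Promise.allSettled(names.map((n) => steps[n]()));
  const succeeded: string[] = [];
  const failed: string[] = [];
  results.forEach((r, i) =>
    (r.status === "fulfilled" ? succeeded : failed).push(names[i])
  );
  return { succeeded, failed };
}
```

Trigger.dev's per-task tracking adds retries and observability on top of this basic isolation.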

Step‑by‑Step Implementation

  1. Define a job that triggers on a new S3 object creation event.
  2. Inside the job, create three tasks: generateThumbnail, applyWatermark, and convertToWebP.
  3. Use run.waitFor to ensure all tasks finish before marking the job complete.

Here’s the full code example:

import { client } from "./trigger.dev";
import sharp from "sharp";
import { S3 } from "aws-sdk";

const s3 = new S3();

client.defineJob({
  id: "process-product-image",
  trigger: {
    type: "webhook",
    // Triggered by an S3 event forwarded to Trigger.dev
    source: "aws:s3:ObjectCreated:*",
    bucket: "acme-product-images",
  },
  async run({ event, run }) {
    const key = event.object.key;
    const bucket = event.object.bucket;

    // Fetch the original image from S3
    const original = await s3
      .getObject({ Bucket: bucket, Key: key })
      .promise();

    // Task 1: Thumbnail (200x200)
    const thumbnailTask = run.task({
      id: "generateThumbnail",
      async handler() {
        const buffer = await sharp(original.Body as Buffer)
          .resize(200, 200)
          .jpeg({ quality: 80 })
          .toBuffer();

        await s3
          .putObject({
            Bucket: bucket,
            Key: `thumbnails/${key}`,
            Body: buffer,
            ContentType: "image/jpeg",
          })
          .promise();

        run.logger.info(`Thumbnail saved for ${key}`);
      },
    });

    // Task 2: Watermark
    const watermarkTask = run.task({
      id: "applyWatermark",
      async handler() {
        const watermark = await sharp("assets/watermark.png")
          .resize(100)
          .toBuffer();

        const buffer = await sharp(original.Body as Buffer)
          .composite([{ input: watermark, gravity: "southeast" }])
          .jpeg({ quality: 90 })
          .toBuffer();

        await s3
          .putObject({
            Bucket: bucket,
            Key: `watermarked/${key}`,
            Body: buffer,
            ContentType: "image/jpeg",
          })
          .promise();

        run.logger.info(`Watermarked image saved for ${key}`);
      },
    });

    // Task 3: WebP conversion
    const webpTask = run.task({
      id: "convertToWebP",
      async handler() {
        const buffer = await sharp(original.Body as Buffer)
          .webp({ quality: 85 })
          .toBuffer();

        await s3
          .putObject({
            Bucket: bucket,
            Key: `webp/${key}.webp`,
            Body: buffer,
            ContentType: "image/webp",
          })
          .promise();

        run.logger.info(`WebP version saved for ${key}`);
      },
    });

    // Wait for all tasks to finish
    await run.waitFor([thumbnailTask, watermarkTask, webpTask]);

    run.logger.info(`Image processing pipeline completed for ${key}`);
  },
});

Each task runs in its own isolated environment, so a failure in the watermark step won’t stop the thumbnail from being generated. Trigger.dev automatically retries failed tasks with exponential back‑off, and you can view the status of each task in the dashboard.
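
Exponential back-off itself is simple to reason about: double a base delay per attempt and cap it so retries do not grow without bound. A sketch of a typical schedule (the base and cap values are chosen for illustration, not taken from Trigger.dev's defaults):

```typescript
// Delay (ms) before retry number `attempt` (1-based): base delay doubled
// per attempt, capped so late retries don't wait arbitrarily long.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 60_000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}
```

So attempt 1 waits 1s, attempt 2 waits 2s, attempt 3 waits 4s, and from attempt 7 onward the 60s cap applies.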

Pro tip: Use the run.task API to set concurrency limits per bucket. This prevents a sudden surge of uploads from exhausting your S3 request quota.
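
Conceptually, a concurrency limit is a semaphore: at most N tasks run at once and the rest wait their turn. The following is an illustration of that concept in plain TypeScript, not Trigger.dev's implementation:

```typescript
// A minimal semaphore: at most `limit` tasks execute concurrently.
class Semaphore {
  private active = 0;
  private queue: Array<() => void> = [];
  constructor(private limit: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    if (this.active >= this.limit) {
      // At capacity: park this caller until a slot frees up.
      await new Promise<void>((resolve) => this.queue.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.queue.shift()?.(); // wake exactly one waiting caller
    }
  }
}
```

Wrapping each S3 `putObject` in `sem.run(...)` with, say, a limit of 5 would smooth out a burst of uploads into at most 5 concurrent requests.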

Real‑World Use Case #2: Periodic Data Sync with External APIs

Many SaaS products need to keep local caches up‑to‑date with third‑party services like Stripe, HubSpot, or Salesforce. Instead of writing a cron job that runs on a VM, you can define a Trigger.dev cron trigger that executes a sync job every hour, with built‑in rate‑limit handling.

Example: Sync Stripe Customers to a Postgres Table

The following job fetches all customers from Stripe, upserts them into a PostgreSQL table, and logs any discrepancies. It demonstrates how to combine HTTP calls, database access, and pagination inside a single job.

import { client } from "./trigger.dev";
import Stripe from "stripe";
import { Pool } from "pg";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
  apiVersion: "2023-10-16",
});

const pgPool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

client.defineJob({
  id: "sync-stripe-customers",
  trigger: {
    type: "cron",
    // Runs at the start of every hour
    cron: "0 * * * *",
  },
  async run({ run }) {
    run.logger.info("Starting Stripe customer sync");

    let hasMore = true;
    let startingAfter: string | undefined = undefined;
    const batchSize = 100; // Stripe max page size

    while (hasMore) {
      const customers = await stripe.customers.list({
        limit: batchSize,
        starting_after: startingAfter,
      });

      // Upsert each customer into Postgres
      for (const cust of customers.data) {
        await pgPool.query(
          `INSERT INTO stripe_customers (id, email, name, created_at)
           VALUES ($1, $2, $3, to_timestamp($4))
           ON CONFLICT (id) DO UPDATE
           SET email = EXCLUDED.email,
               name = EXCLUDED.name,
               created_at = EXCLUDED.created_at`,
          [cust.id, cust.email, cust.name, cust.created]
        );
      }

      run.logger.info(
        `Synced ${customers.data.length} customers (${customers.has_more ? "more pages remaining" : "last page"})`
      );

      hasMore = customers.has_more;
      if (hasMore) {
        startingAfter = customers.data[customers.data.length - 1].id;
      }
    }

    run.logger.info("Stripe customer sync completed");
  },
});

The cron trigger guarantees that the sync runs even if your main server restarts. Because the job runs on Trigger.dev’s managed workers, you get automatic scaling when Stripe’s data grows, and you can monitor the exact time each batch took in the UI.

Pro tip: Wrap the Stripe pagination loop in a separate run.task if you expect the sync to exceed the default 15‑minute job timeout. Trigger.dev lets you chain tasks with their own timeouts.
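
The pagination loop above generalizes to any cursor-based API that returns a `has_more` flag. Here is a standalone sketch (the page shape mirrors Stripe's list responses, but the fetcher is a stand-in you supply):

```typescript
// A page as returned by cursor-based list APIs such as Stripe's.
interface Page<T extends { id: string }> {
  data: T[];
  has_more: boolean;
}

// Walk every page, passing the last item's id as the cursor for the next call.
async function fetchAll<T extends { id: string }>(
  fetchPage: (startingAfter?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let cursor: string | undefined;
  let hasMore = true;
  while (hasMore) {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    hasMore = page.has_more;
    if (hasMore) cursor = page.data[page.data.length - 1].id;
  }
  return all;
}
```

Extracting the loop like this also makes it easy to unit-test the cursor handling against a mock fetcher before pointing it at a live API.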

Advanced Features You’ll Love

Dynamic Retries and Back‑Off Strategies

Trigger.dev’s run.retry method lets you programmatically decide whether to retry a failed step, and how long to wait. For example, when calling an external API that returns a 429 (Too Many Requests), you can read the Retry-After header and schedule a retry accordingly.

await run.retry(async () => {
  const response = await fetch("https://api.example.com/data");
  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get("Retry-After") ?? "30", 10);
    throw new run.RetryableError("Rate limited", { delay: retryAfter * 1000 });
  }
  return response.json();
});
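
One subtlety the snippet glosses over: per RFC 9110, Retry-After may be either a number of seconds or an HTTP date. A small helper covering both forms (an illustration, with a fallback delay chosen to match the snippet above):

```typescript
// Parse a Retry-After header value into a delay in milliseconds.
// The value may be delta-seconds ("30") or an HTTP-date (RFC 9110).
function retryAfterMs(
  header: string | null,
  fallbackMs = 30_000,
  now = Date.now()
): number {
  if (!header) return fallbackMs;
  const seconds = Number(header);
  if (!Number.isNaN(seconds)) return seconds * 1000;
  const date = Date.parse(header);
  if (!Number.isNaN(date)) return Math.max(0, date - now);
  return fallbackMs; // unparseable header: fall back to a fixed delay
}
```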

Secrets Management

Never hard‑code API keys. Trigger.dev integrates with popular secret stores (AWS Secrets Manager, GCP Secret Manager, or its own vault). Reference a secret in your job definition with run.secrets.get("MY_API_KEY"), and the value is injected at runtime only.
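
For local development you can mimic the runtime-injection idea with a thin lookup of your own. This is an illustrative helper (not part of the SDK) that reads from an injected vault first and falls back to the environment:

```typescript
// Illustrative secret lookup: check an injected vault first, then the
// process environment, and fail loudly if the secret is missing in both.
function getSecret(
  name: string,
  vault: Record<string, string> = {},
  env: Record<string, string | undefined> = process.env
): string {
  const value = vault[name] ?? env[name];
  if (value === undefined) {
    throw new Error(`Secret ${name} is not configured`);
  }
  return value;
}
```

Failing fast on a missing secret at job start-up is usually preferable to a confusing authentication error halfway through a run.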

Observability & Dashboard

Every job run appears as a timeline in the Trigger.dev dashboard, with logs, task durations, and retry counts. You can set up Slack or email alerts for failed jobs, and even export metrics to Prometheus for custom dashboards.

Testing Jobs Locally

During development you might not want to hit the live Trigger.dev API. The SDK ships with a dev mode that runs jobs in a local Node process, preserving the same run API. Start the dev server with:

npx trigger.dev dev

Then trigger a job using curl or Postman against the local endpoint (e.g., http://localhost:3000/api/users). All logs appear in your console, and you can step through the code with a debugger just like any other function.

Pro tip: Use process.env.NODE_ENV === "test" inside your job to switch to mock services (e.g., a fake email transporter) while keeping production code untouched.
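
That switch can look like the following sketch. The transporter interface is simplified to just what the welcome-email job uses, and FakeTransporter is a name invented here:

```typescript
// Minimal transporter interface covering what the job actually uses.
interface Transporter {
  sendMail(opts: { to: string; subject: string; text: string }): Promise<void>;
}

// A fake transporter that records messages instead of sending them.
class FakeTransporter implements Transporter {
  sent: Array<{ to: string; subject: string; text: string }> = [];
  async sendMail(opts: { to: string; subject: string; text: string }) {
    this.sent.push(opts);
  }
}

// Pick the transporter based on NODE_ENV, keeping production code untouched.
function makeTransporter(
  real: () => Transporter,
  env = process.env.NODE_ENV
): Transporter {
  return env === "test" ? new FakeTransporter() : real();
}
```

In tests you can then assert on `sent` instead of hitting a real SMTP server.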

Best Practices for Scaling with Trigger.dev

  • Keep jobs idempotent. Since retries are automatic, design your handlers so that running twice produces the same result.
  • Limit payload size. Pass only identifiers (e.g., user ID, S3 key) to the job; fetch heavy data inside the job itself.
  • Separate concerns. Use distinct jobs for unrelated side effects (e.g., email vs. analytics) to avoid cascading failures.
  • Monitor latency. If a task consistently hits the timeout, break it into smaller subtasks or increase the timeout per task.
  • Version your jobs. Include a version number in the job ID (e.g., send-welcome-email-v2) when you need breaking changes, so older runs continue with the original code.
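
Idempotency (the first bullet above) is often easiest to achieve with a stable key plus an upsert, so a retried run overwrites rather than duplicates. A sketch with an in-memory Map standing in for the database:

```typescript
// Upsert keyed by a stable identifier: replaying the same event leaves the
// store in the same state, which is what makes automatic retries safe.
type Customer = { id: string; email: string };

function syncCustomer(store: Map<string, Customer>, customer: Customer): void {
  store.set(customer.id, customer); // overwrite on replay, never insert a duplicate
}
```

The Stripe sync job earlier applies the same principle with `ON CONFLICT (id) DO UPDATE` in SQL.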

Deploying to Production

Deploying a Trigger.dev‑enabled app is no different from deploying any other Node.js service. Push your code to your repository and make sure your CI pipeline provides the TRIGGER_API_KEY secret in the production environment; Trigger.dev picks up the new job definitions on the next deployment.

If you use Vercel, Netlify, or Railway, you can add a trigger.dev build step that runs npx trigger.dev sync to ensure the dashboard stays in sync with your codebase. This step also validates that all job definitions are syntactically correct before the deployment succeeds.

Conclusion

Trigger.dev bridges the gap between simple serverless functions and full‑blown message‑queue architectures. By letting you write background jobs as ordinary JavaScript functions, it removes the operational overhead of managing workers, scaling queues, and retry logic. Whether you need to send emails, process images, or keep external data in sync, Trigger.dev provides a clear, observable, and developer‑centric workflow.

Start by integrating the SDK, define a few jobs that match your most time‑consuming tasks, and let the platform handle the rest. As your application grows, you’ll appreciate the built‑in observability, automatic retries, and seamless CI/CD integration—making background processing feel like a natural extension of your codebase rather than an afterthought.
