Building AI Workflows with n8n and OpenAI
Dec. 23, 2025, 5:30 p.m.


Imagine being able to stitch together powerful AI capabilities without writing a single line of boilerplate code. With n8n, an open‑source workflow automation tool, and OpenAI’s API, you can create end‑to‑end AI pipelines that handle everything from data ingestion to content generation. In this guide we’ll walk through the core concepts, build a couple of real‑world workflows, and share pro tips to keep your automations robust and scalable.

Why Combine n8n and OpenAI?

n8n shines as a visual orchestrator that can connect APIs, databases, and custom scripts in a drag‑and‑drop canvas. OpenAI, on the other hand, offers state‑of‑the‑art language models that can understand and generate text, images, and even code. By pairing the two, you get the best of both worlds: a low‑code environment to manage data flow and a powerful AI engine to add intelligence.

Key benefits include:

  • Modularity: Each node in n8n can be swapped out, letting you experiment with different prompts or models without touching the rest of the workflow.
  • Scalability: n8n can run on your own server, giving you full control over concurrency limits and cost management for OpenAI calls.
  • Observability: Built‑in execution logs and error handling make debugging AI‑driven pipelines much easier than a monolithic script.

Getting Started: Setting Up n8n and OpenAI

First, spin up an n8n instance. The quickest way is using Docker:

docker run -d \
  --name n8n \
  -p 5678:5678 \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=secret \
  n8nio/n8n

Next, obtain an API key from the OpenAI dashboard and store it securely. In n8n, create a new credential of type “OpenAI” and paste the key in. This credential will be referenced by every OpenAI node you add.

Installing the OpenAI Node

n8n ships with a generic HTTP Request node, but the community also offers a dedicated OpenAI node that simplifies parameter handling. To install:

  1. Open n8n’s Settings → Community Nodes.
  2. Search for “OpenAI” and click Install.
  3. Refresh the editor – the node will appear under the “AI” category.
Pro tip: Keep your n8n version up to date. Community nodes often rely on the latest core features, and newer releases include performance improvements for HTTP calls.

Use Case 1: Automated Customer Support Summaries

Support teams receive dozens of tickets daily. Manually summarizing each conversation is time‑consuming, yet concise summaries are essential for knowledge bases and hand‑offs. Let’s build a workflow that pulls the latest tickets from a Zendesk view, sends the conversation history to OpenAI’s gpt-4o-mini model, and stores the generated summary back in a Google Sheet.

Step‑by‑Step Workflow

  • Trigger: Schedule Trigger – runs every hour.
  • Zendesk Node: “Get Tickets” – filters by status = “open”.
  • Function Node: Formats ticket data into a single string prompt.
  • OpenAI Node: Calls the Chat Completions endpoint with a custom system prompt.
  • Google Sheets Node: Appends a row containing ticket ID, summary, and timestamp.

Below is the JavaScript code used in the Function node to craft the prompt. n8n’s Function node runs on Node.js, so we can use template literals for readability.

// Map each incoming Zendesk item to just the fields we need
const tickets = items.map(item => ({
  id: item.json.id,
  subject: item.json.subject,
  conversation: item.json.conversation,
}));

// Emit one item per ticket, each carrying a ready-to-send prompt
return tickets.map(ticket => ({
  json: {
    prompt: `Summarize the following support ticket in 3 bullet points.
Ticket ID: ${ticket.id}
Subject: ${ticket.subject}
Conversation:
${ticket.conversation}
`,
    ticketId: ticket.id,
  },
}));

In the OpenAI node, set the following parameters:

  • Model: gpt-4o-mini
  • Temperature: 0.3 (keeps output focused and consistent)
  • Max Tokens: 150
  • Prompt: Use the prompt field from the previous node ({{ $json.prompt }})
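For reference, the parameters above correspond to a Chat Completions request body roughly like the one built below. This is a sketch: the field names follow the public OpenAI API, but the system message wording is an assumption, not what the node hard-codes.

```javascript
// Sketch of the request body the OpenAI node assembles from the
// parameters above. The system message text is an example, not
// the node's built-in default.
function buildSummaryRequest(prompt) {
  return {
    model: 'gpt-4o-mini',
    temperature: 0.3, // low temperature for consistent summaries
    max_tokens: 150,
    messages: [
      { role: 'system', content: 'You summarize support tickets concisely.' },
      { role: 'user', content: prompt },
    ],
  };
}
```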

Finally, map the response to the Google Sheets node:

{
  "values": [
    [
      "{{ $json.ticketId }}",
      "{{ $json.choices[0].message.content }}",
      "{{ $now }}"
    ]
  ]
}
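If a call fails or returns an unexpected shape, the expression above can leave the summary cell empty or break the mapping. A small Function node placed before the Sheets node can extract the summary defensively; this is a sketch of that guard:

```javascript
// Defensive extraction of the summary text from a Chat Completions
// response; returns an empty string instead of throwing on bad shapes.
function extractSummary(response) {
  const content = response?.choices?.[0]?.message?.content;
  return typeof content === 'string' ? content.trim() : '';
}
```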
Pro tip: Enable “Continue on Fail” for the Zendesk node. This prevents a single problematic ticket from halting the entire hourly run, and you can later inspect failures in the execution log.

Use Case 2: Content Generation for Marketing Campaigns

Marketers often need fresh copy for emails, ads, and social posts. Instead of brainstorming from scratch, you can automate the generation of multiple variants with a single workflow. This example pulls product data from a PostgreSQL table, feeds it to OpenAI to create three tagline options, and publishes them to a Contentful CMS entry for review.

Workflow Overview

  • PostgreSQL Node: Executes a SELECT query for newly added products.
  • SplitInBatches Node: Processes each product individually.
  • OpenAI Node: Generates three distinct taglines using a role‑based prompt.
  • Function Node: Parses the response into a JSON array.
  • Contentful Node: Creates or updates a content entry with the generated copy.

The key to getting diverse outputs lies in the prompt design. We’ll use a system message that instructs the model to act as a “senior copywriter” and a user message that provides product details.

{
  "model": "gpt-4o-mini",
  "temperature": 0.8,
  "max_tokens": 200,
  "messages": [
    {
      "role": "system",
      "content": "You are a senior copywriter specializing in tech products. Write three catchy, 8‑word taglines for the given product. Each tagline should be on a separate line."
    },
    {
      "role": "user",
      "content": "Product: {{ $json.name }}\nFeatures: {{ $json.features }}\nTarget audience: {{ $json.audience }}"
    }
  ]
}

After the OpenAI node returns the raw text, we split it into an array:

const raw = $json.choices[0].message.content;
const lines = raw.trim().split('\n').filter(l => l.length > 0);
return [{ json: { taglines: lines } }];

Now the Contentful node can map each tagline to a field in the CMS entry. This keeps the content pipeline fully automated while still allowing editors to approve the copy before publishing.
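Models sometimes number or bullet their output even when told not to. A slightly more defensive version of the parser strips those markers and validates the count before the entry is written; this is a sketch you could drop into the Function node:

```javascript
// Strip leading list markers ("1.", "-", "*") from each line and
// verify we received the expected number of taglines.
function parseTaglines(raw, expected = 3) {
  const lines = raw
    .trim()
    .split('\n')
    .map(l => l.replace(/^\s*[\d\.\)\-\*]+\s*/, '').trim())
    .filter(Boolean);
  if (lines.length < expected) {
    throw new Error(`Expected ${expected} taglines, got ${lines.length}`);
  }
  return lines.slice(0, expected);
}
```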

Pro tip: Have the PostgreSQL node fetch only rows added since the last successful run, e.g. by filtering on a created_at column and persisting the last‑seen timestamp in workflow static data. This keeps database load low and avoids regenerating copy for products you have already processed.

Advanced Pattern: Chaining Multiple OpenAI Calls

Some scenarios require more than a single model call. For example, you might first extract entities from a document, then ask the model to write a summary that emphasizes those entities. n8n makes this pattern straightforward by passing the output of one OpenAI node into the next.

Entity Extraction → Focused Summary

  1. OpenAI Node (Chat Completion): Prompt the model with Extract the top 5 keywords from the following text.
  2. Function Node: Parse the list of keywords into a comma‑separated string.
  3. Second OpenAI Node: Use a system prompt like “Write a 150‑word summary that highlights the keywords: {keywords}.”

Sample code for the first Function node:

// The model returns one keyword per line, often prefixed with
// "-", "*" or "1." – strip those markers before joining.
const keywords = $json.choices[0].message.content
  .split('\n')
  .map(k => k.replace(/^\s*[\-\*\d\.\)]+\s*/, '').trim())
  .filter(Boolean);
return [{ json: { keywords: keywords.join(', ') } }];

The second OpenAI node then receives {{ $json.keywords }} in its prompt, ensuring the summary stays on‑topic.
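The whole chain can be expressed as a single function, with the model call injected as a parameter. Here callModel is a placeholder for whatever your OpenAI node or HTTP call actually does; it is assumed to return the raw text reply.

```javascript
// Sketch of the two-step chain: extract keywords, then summarize
// with those keywords emphasized. `callModel` stands in for the
// actual OpenAI call and returns the raw text reply.
async function keywordFocusedSummary(text, callModel) {
  const rawKeywords = await callModel(
    `Extract the top 5 keywords from the following text.\n\n${text}`
  );
  // Strip list markers and join into "kw1, kw2, ..."
  const keywords = rawKeywords
    .split('\n')
    .map(k => k.replace(/^\s*[\-\*\d\.]+\s*/, '').trim())
    .filter(Boolean)
    .join(', ');
  return callModel(
    `Write a 150-word summary that highlights the keywords: ${keywords}.`
  );
}
```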

Pro tip: Enable “Retry On Fail” on each OpenAI node and set “Max Tries” to 3 with a wait between tries. This mitigates transient rate‑limit errors without manual intervention.
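If you want exponential back‑off rather than a fixed wait, you can implement the retry loop yourself in a Code node. This is a sketch of the pattern, with the OpenAI call injected as fn so the logic is testable in isolation; delays double on each attempt (500ms, 1000ms, ...).

```javascript
// Retry with exponential back-off. `fn` stands in for the actual
// OpenAI call; delays grow as baseDelayMs * 2^attempt.
async function withRetries(fn, maxRetries = 2, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```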

Best Practices for Production‑Ready AI Workflows

Building a proof‑of‑concept is only half the battle. When you move to production, consider these guidelines to keep costs, latency, and errors under control.

  • Token Management: Use the max_tokens parameter wisely. Over‑allocating can inflate your OpenAI bill without improving output quality.
  • Prompt Versioning: Store prompts in a separate JSON file or database table. This lets non‑technical team members edit copy without touching the workflow.
  • Observability: Leverage n8n’s built‑in “Execution Data” view and export logs to a centralized monitoring system (e.g., Loki or ELK).
  • Security: Never hard‑code API keys. Use environment variables or n8n’s credential store, and enable IP whitelisting on the OpenAI dashboard.
  • Testing: Create a “sandbox” workflow that uses the gpt-4o-mini model with a low temperature. Validate outputs before switching to higher‑cost models.
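As a concrete example of token management, a small guard can sum the usage object each Chat Completions response reports and flag runs that blow past a budget. The 500‑token ceiling below is an arbitrary example value, not a recommendation.

```javascript
// Sum total_tokens across a batch of Chat Completions responses and
// flag the run if it exceeds a budget. The default of 500 tokens is
// an arbitrary example value.
function totalTokens(responses) {
  return responses.reduce(
    (sum, r) => sum + (r?.usage?.total_tokens ?? 0),
    0
  );
}

function overBudget(responses, budget = 500) {
  return totalTokens(responses) > budget;
}
```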

Scaling Strategies

As your automation volume grows, you’ll need to think about concurrency, rate limits, and fault tolerance.

  1. Horizontal Scaling: Deploy n8n behind a load balancer and enable the “Execute Workflow” API endpoint. Multiple instances can share a Redis queue for job distribution.
  2. Rate‑Limit Guard: Add a Wait node (or process items with the Split In Batches node plus a short pause) before each OpenAI call, sized so you stay within the requests‑per‑minute quota of your OpenAI plan.
  3. Retry Logic: Use the “Error Workflow” feature to capture failed executions, add context, and re‑queue them after a back‑off period.
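The rate‑limit guard in step 2 boils down to spacing calls by a minimum interval. A minimal, testable sketch of that arithmetic:

```javascript
// Returns a function that, given the current time in ms, reports how
// long the next call must wait to stay under `requestsPerMinute`.
function makeRateLimiter(requestsPerMinute) {
  const intervalMs = 60_000 / requestsPerMinute;
  let nextFreeAt = 0;
  return function delayFor(nowMs) {
    const wait = Math.max(0, nextFreeAt - nowMs);
    nextFreeAt = Math.max(nowMs, nextFreeAt) + intervalMs;
    return wait;
  };
}
```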

Combining these techniques ensures that a sudden spike—say, a marketing campaign launching dozens of new products—won’t overwhelm the API or cause data loss.

Monitoring and Alerting

n8n provides webhook endpoints for external monitoring tools. You can emit a custom event after each successful OpenAI response, then pipe that into Grafana or Datadog dashboards.

{
  "event": "openai_success",
  "workflowId": "{{ $workflow.id }}",
  "durationMs": {{ $executionTime }},
  "model": "{{ $json.model }}",
  "tokensUsed": {{ $json.usage.total_tokens }}
}

Set up an alert on the tokensUsed metric to catch unexpected surges that could indicate a prompt loop or misuse.

Pro tip: Tag each workflow execution with a UUID (e.g., using the uuid node). This makes tracing a specific request across logs and monitoring tools trivial.

Conclusion

By marrying n8n’s flexible orchestration with OpenAI’s generative power, you can build AI‑driven pipelines that are both maintainable and scalable. Whether you’re summarizing support tickets, generating marketing copy, or chaining multiple model calls, the visual workflow approach reduces development overhead and speeds up iteration. Keep an eye on token usage, employ solid error handling, and leverage n8n’s built‑in monitoring to turn experimental automations into reliable production services.
