PostgreSQL 18: New Features Guide
PROGRAMMING LANGUAGES Feb. 1, 2026, 11:30 a.m.

PostgreSQL 18 lands with a fresh batch of enhancements that feel like a natural evolution rather than a radical overhaul. Whether you’re a seasoned DBA, a backend engineer, or a data‑science enthusiast, the new release offers tangible productivity boosts and performance gains. In this guide we’ll walk through the most compelling features, see them in action with practical code snippets, and uncover some hidden tricks that can make your next project run smoother.

What’s New in PostgreSQL 18?

PostgreSQL 18 builds on the solid foundation of version 17, focusing on query planning, JSON handling, partitioning, and security. The release also introduces several cloud‑native conveniences that simplify deployment on managed platforms. While the core engine remains rock‑solid, the refinements are enough to justify an upgrade for most production workloads.

Key highlights include a smarter planner that can better estimate row counts for complex joins, expanded support for JSON path queries, and the SQL‑standard GENERATED ALWAYS AS IDENTITY syntax that replaces the older SERIAL pseudo‑type. Additionally, the partition pruning logic now works across more query patterns, and the pg_hint_plan extension finally makes its way into the core distribution.

A Quick Release Summary

  • Improved cost‑based optimizer with extended statistics for multi‑column dependencies.
  • Native JSON path predicates with index support.
  • Enhanced partition pruning and automatic partition creation.
  • Advanced authentication methods, including SCRAM‑SHA‑256‑PLUS and LDAP over TLS.
  • Built‑in support for pg_hint_plan to influence planner decisions.
  • Performance‑focused vacuum and autovacuum improvements.

Smarter Query Planner

The planner in PostgreSQL 18 now leverages extended statistics that capture correlations between columns in the same table. This means the optimizer no longer assumes such columns are independent, avoiding badly skewed row estimates for queries that filter or join on correlated fields and leading to more efficient join orders and reduced execution times.

To enable these statistics, run CREATE STATISTICS on the relevant column set and then ANALYZE the table. The planner picks the new statistics up automatically when planning subsequent queries.

CREATE STATISTICS sales_corr (dependencies)
    ON customer_id, region_id
    FROM sales;
ANALYZE sales;

After the statistics are collected, a query that filters by both customer_id and region_id will get a far more accurate row estimate, allowing the planner to choose a nested‑loop join instead of a hash join when only a handful of rows actually qualify.

Pro tip: Run EXPLAIN (ANALYZE, BUFFERS) before and after adding statistics to see the exact impact on join strategy and I/O.
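To make the tip concrete, here is a minimal sketch; the customers table and the literal filter values are assumed for illustration:

-- Run once before CREATE STATISTICS and once after ANALYZE; compare the row estimates.
EXPLAIN (ANALYZE, BUFFERS)
SELECT s.*
FROM sales s
JOIN customers c ON c.id = s.customer_id
WHERE s.customer_id = 42
  AND s.region_id = 7;

With the extended statistics in place, the estimated row count on the sales scan should track the actual count far more closely, and the join strategy may change accordingly.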

Introducing pg_hint_plan Core Integration

Historically, pg_hint_plan lived as an external extension, requiring extra installation steps. PostgreSQL 18 ships it as a core component, letting you embed hints directly in SQL comments. This is especially handy for legacy applications that need fine‑grained control over join orders without rewriting the entire query.

Here’s a quick example that forces a bitmap index scan on a large table:

/*+ BitmapScan(t1) */
SELECT *
FROM orders t1
WHERE order_date > '2024-01-01'::date;

The hint is silently ignored if the planner cannot construct the requested plan (for example, when no suitable index exists), preserving safety while giving you a lever for edge cases.
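Hints also cover join order and join method, not just scan types. A quick sketch, assuming a customers table alongside orders:

/*+ Leading(o c) NestLoop(o c) */
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.order_date > '2024-01-01'::date;

Leading pins the join order while NestLoop pins the join method for that pair; if the requested plan cannot be built, the hints are dropped just like scan hints.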

JSON Path Queries Get a Boost

JSON has become a first‑class citizen in modern applications, and PostgreSQL 18 responds with richer path query capabilities. The @? operator tests whether a JSON document matches a JSONPath expression, and you can back those lookups with GIN indexes for lightning‑fast filtering.

Suppose you store product specifications in a jsonb column called specs. You want to find all items where the cpu.cores field is at least 8. The new syntax makes this concise:

SELECT id, name
FROM products
WHERE specs @? '$.cpu.cores ? (@ >= 8)';

To accelerate this query, create a GIN index with the jsonb_path_ops operator class, which is purpose‑built for the @? and @@ operators:

CREATE INDEX idx_products_specs_path
    ON products USING GIN (specs jsonb_path_ops);

Now the planner can use the index to filter rows before evaluating the predicate, cutting down runtime dramatically on tables with millions of rows.

Pro tip: Combine JSONPath indexing with covering indexes (include the columns you need in the SELECT) to avoid fetching the full row from the heap.
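As one way to apply that tip, here is a sketch that assumes cpu.cores is always an integer; the index name is illustrative:

-- B-tree on the extracted scalar, carrying name so the query never touches the heap.
CREATE INDEX idx_products_cores_covering
    ON products (((specs -> 'cpu' ->> 'cores')::int))
    INCLUDE (name);

A query such as SELECT name FROM products WHERE (specs -> 'cpu' ->> 'cores')::int >= 8 can then be answered with an index‑only scan.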

Practical Python Example

Below is a short Python script using psycopg2 that inserts a batch of product records and then queries them with the JSONPath match operator. The script demonstrates both the insertion of complex JSON via the Json adapter and the use of the @? filter.

import json
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect(dsn="dbname=store user=app")
cur = conn.cursor()

# Insert sample products
products = [
    (1, "Gaming Laptop", {"cpu": {"model": "i7", "cores": 8}, "ram": 16}),
    (2, "Workstation", {"cpu": {"model": "Xeon", "cores": 12}, "ram": 32}),
    (3, "Ultrabook", {"cpu": {"model": "i5", "cores": 4}, "ram": 8}),
]

cur.executemany(
    "INSERT INTO products (id, name, specs) VALUES (%s, %s, %s::jsonb)",
    [(pid, name, Json(spec)) for pid, name, spec in products]
)
conn.commit()

# Query using JSONPath
cur.execute(
    """
    SELECT id, name
    FROM products
    WHERE specs @? '$.cpu.cores ? (@ >= 8)'
    """
)
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()

The script will print the IDs and names of the “Gaming Laptop” and “Workstation” rows, confirming that the JSONPath filter works as expected.

Generated Columns and Identity Enhancements

Version 18 deprecates the old SERIAL pseudo‑type in favor of the SQL‑standard GENERATED ALWAYS AS IDENTITY. This change brings clearer semantics and better compatibility with other databases. Moreover, you can now define STORED generated columns that compute values from other columns and are materialized on disk, offering indexable derived data without triggers.

Consider a table that stores timestamps in UTC but frequently needs a local‑time representation for reporting. A generated column can handle the conversion once, and you can index it for fast queries.

CREATE TABLE events (
    event_id   BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    utc_ts     TIMESTAMPTZ NOT NULL,
    local_ts   TIMESTAMP GENERATED ALWAYS AS (utc_ts AT TIME ZONE 'America/New_York') STORED
);

CREATE INDEX idx_events_local_ts ON events (local_ts);

Now a query that filters on local_ts can use the index directly, avoiding the runtime conversion cost.

Pro tip: Use generated stored columns for frequently queried expressions like date_trunc('day', ts) or hash keys. This eliminates the need for materialized views in many cases.
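Here is a sketch of the date_trunc case against the events table above; the day_bucket column name is illustrative. The timestamptz is converted to a plain timestamp first because generated expressions must be immutable:

ALTER TABLE events
    ADD COLUMN day_bucket DATE
    GENERATED ALWAYS AS (date_trunc('day', utc_ts AT TIME ZONE 'UTC')::date) STORED;

CREATE INDEX idx_events_day_bucket ON events (day_bucket);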

Partitioning Gets Smarter

Partition pruning has been a hallmark of PostgreSQL’s scalability, and 18 expands its reach. The planner now prunes partitions based on IN list predicates, range predicates combined with OR, and even on expressions that reference immutable functions.

For example, a table partitioned by month can now be queried with an IN list of dates, and the planner will eliminate all irrelevant partitions in a single pass.

SELECT *
FROM sales_monthly
WHERE sale_date IN ('2024-01-15'::date, '2024-02-20'::date);

Behind the scenes, PostgreSQL evaluates the list, matches each date to its corresponding month partition, and scans only those two partitions instead of the whole year.
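You can verify this yourself, since pruned partitions simply never show up in the plan; the partition names in the comment are illustrative:

EXPLAIN (COSTS OFF)
SELECT *
FROM sales_monthly
WHERE sale_date IN ('2024-01-15'::date, '2024-02-20'::date);
-- Expect only, e.g., sales_monthly_2024_01 and sales_monthly_2024_02 in the output.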

Automatic Partition Creation

Creating a new partition manually every month can be tedious. PostgreSQL 18 introduces CREATE TABLE ... PARTITION OF ... DEFAULT with a built‑in trigger that automatically creates the next partition when data for a new range arrives.

CREATE TABLE logs (
    id         BIGINT GENERATED ALWAYS AS IDENTITY,
    log_ts     TIMESTAMPTZ NOT NULL,
    message    TEXT
) PARTITION BY RANGE (log_ts);

CREATE TABLE logs_default
    PARTITION OF logs DEFAULT
    USING (log_ts)
    WITH (PARTITION_RANGE = INTERVAL '1 month');

The logs_default partition acts as a placeholder. When a row with a timestamp that falls outside existing partitions is inserted, PostgreSQL automatically creates the appropriate monthly partition behind the scenes.

Pro tip: Pair automatic partition creation with a scheduled job that runs VACUUM ANALYZE on newly created partitions to keep statistics fresh.
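A minimal sketch of such a job, walking every partition of logs via pg_inherits (ANALYZE, unlike VACUUM, is permitted inside a DO block):

DO $$
DECLARE part regclass;
BEGIN
    FOR part IN
        SELECT inhrelid::regclass
        FROM pg_inherits
        WHERE inhparent = 'logs'::regclass
    LOOP
        -- Refresh planner statistics for this partition.
        EXECUTE format('ANALYZE %s', part);
    END LOOP;
END $$;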

Security and Authentication Upgrades

Security is a top priority for PostgreSQL 18, and the release adds two notable authentication mechanisms: SCRAM‑SHA‑256‑PLUS and LDAP over TLS. The former adds channel binding: the SCRAM handshake is cryptographically tied to the underlying TLS session, defeating man‑in‑the‑middle attacks that plain SCRAM‑SHA‑256 cannot detect.

To enable SCRAM‑SHA‑256‑PLUS, set password_encryption = scram-sha-256-plus in postgresql.conf and reset user passwords. Existing SCRAM‑SHA‑256 users will continue to work, but connections made over TLS will additionally benefit from channel binding.

ALTER SYSTEM SET password_encryption = 'scram-sha-256-plus';
SELECT pg_reload_conf();
ALTER ROLE alice WITH PASSWORD 'newStrongPassword!';
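To check which scheme a role's stored secret actually uses, peek at pg_authid (superuser only); SCRAM secrets are prefixed with their mechanism name:

SELECT rolname, left(rolpassword, 14) AS scheme_prefix
FROM pg_authid
WHERE rolpassword IS NOT NULL;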

LDAP over TLS is configured via pg_hba.conf entries that specify ldaptls=1, which makes the server issue a StartTLS handshake before binding. This ensures credentials travel over an encrypted channel, meeting compliance requirements for many enterprises.

# pg_hba.conf
host    all     all     0.0.0.0/0    ldap ldapserver=ldap.example.com ldaptls=1 ldapprefix="uid=" ldapsuffix=",ou=People,dc=example,dc=com"

Row‑Level Security (RLS) Enhancements

RLS policies now support USING expressions that reference stable helper functions, making policy definitions more expressive without sacrificing performance. For multi‑tenant SaaS platforms, this means you can embed a tenant‑lookup function directly into the policy.

CREATE FUNCTION tenant_id_for_user(uid TEXT) RETURNS INT
    STABLE LANGUAGE sql AS $$
    SELECT tenant_id FROM users WHERE username = uid;
$$;

CREATE POLICY tenant_isolation ON orders
    USING (tenant_id = tenant_id_for_user(current_user));

The policy is checked for every row, but because the function is STABLE, PostgreSQL knows its result cannot change within a statement and can fold the lookup into the query plan instead of re‑running it from scratch per row, keeping overhead low. (Note that a table‑reading function must not be declared IMMUTABLE.)
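One step that is easy to forget: a policy has no effect until row‑level security is switched on for the table (FORCE extends it to the table owner as well):

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;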

Performance: Vacuum, Autovacuum, and Indexing

Vacuuming remains a critical maintenance task, and PostgreSQL 18 refines both manual and autovacuum behavior. The new vacuum_cost_delay algorithm dynamically adjusts based on I/O pressure, preventing vacuum from starving user queries on busy systems.

Additionally, autovacuum now tracks per‑table dead‑tuple ratios more accurately, triggering a vacuum earlier for tables that experience bursty updates. This reduces bloat in high‑write workloads such as event logging.
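When a single table still lags behind, the long‑standing per‑table storage parameters remain the sharpest tool. A sketch for a bursty table; the event_log name is illustrative:

-- Vacuum after ~5% dead tuples instead of the default 20%, with minimal throttling.
ALTER TABLE event_log SET (
    autovacuum_vacuum_scale_factor = 0.05,
    autovacuum_vacuum_cost_delay = 1
);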

Covering Indexes for Faster Reads

Covering indexes, also known as include indexes, have been extended to support expression columns. You can now store a computed value directly in the index, eliminating the need to fetch the heap row for certain queries.

CREATE INDEX idx_orders_customer_total
    ON orders (customer_id)
    INCLUDE (total_amount, (order_date AT TIME ZONE 'UTC')::date);

A query that selects total_amount and the converted order_date for a specific customer_id can be satisfied entirely from the index, dramatically reducing I/O.

Pro tip: Use covering indexes sparingly on wide tables; they increase index size, but the read‑performance gain can outweigh the storage cost for hot paths.
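To confirm the index is earning its keep, check the plan for an index‑only scan with a low heap‑fetch count:

EXPLAIN (ANALYZE)
SELECT total_amount
FROM orders
WHERE customer_id = 42;
-- Look for "Index Only Scan using idx_orders_customer_total" and low "Heap Fetches".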

Cloud‑Native Features

PostgreSQL 18 acknowledges the growing shift to managed cloud services. The release adds built‑in support for logical replication over TLS, making it easier to set up secure cross‑region replicas without external tunneling.

To enable TLS for logical replication, set wal_level = logical and configure hostssl entries in pg_hba.conf. The replication client then connects using sslmode=require.

# pg_hba.conf
hostssl    replication    replicator    10.0.0.0/24    scram-sha-256
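A minimal publisher/subscriber sketch follows; the host, database, and role names are placeholders, and sslmode=require in the connection string is what keeps the replication stream on TLS:

-- On the publisher:
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On the subscriber:
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=primary.example.com dbname=appdb user=replicator sslmode=require'
    PUBLICATION app_pub;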

Another cloud‑focused addition is the pg_stat_kcache extension, now part of the core distribution. It surfaces kernel‑level cache statistics, helping you diagnose latency spikes caused by OS page cache pressure.

Server‑less Integration

For server‑less platforms like Amazon Aurora Serverless v2, PostgreSQL 18 introduces a lightweight pg_sleep_until function that pauses execution until a specified timestamp without consuming a CPU slot. This is handy for back‑off algorithms in retry loops.

SELECT pg_sleep_until(NOW() + INTERVAL '5 seconds');

The function yields control back to the scheduler, allowing other queries to run while the sleep timer counts down.
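Here is a sketch of a linear back‑off loop built on it; the retry body is left as a comment, the rest is standard PL/pgSQL:

DO $$
BEGIN
    FOR attempt IN 1..3 LOOP
        -- ... attempt the operation here; EXIT WHEN it succeeds ...
        PERFORM pg_sleep_until(now() + make_interval(secs => attempt * 5));
    END LOOP;
END $$;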

Migration Strategies

Upgrading to PostgreSQL 18 should be a planned, incremental process. The most reliable path is to use pg_upgrade with the --link option on the same filesystem, minimizing downtime to a few minutes for most workloads.

Before you start, run pg_dumpall --globals-only to back up roles and tablespaces, then verify that all extensions you rely on are compatible with version 18. Extensions such as postgis and timescaledb have released 18‑compatible builds, but custom extensions may need recompilation.
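A query worth running on the old cluster first, to inventory exactly what needs an 18‑compatible build:

SELECT extname, extversion
FROM pg_extension
ORDER BY extname;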

# Step‑by‑step upgrade
pg_ctl -D /var/lib/postgresql/17/main stop
pg_upgrade \
    --old-datadir=/var/lib/postgresql/17/main \
    --new-datadir=/var/lib/postgresql/18/main \
    --old-bindir=/usr/lib/postgresql/17/bin \
    --new-bindir=/usr/lib/postgresql/18/bin