Hattip: Cross-Platform JavaScript HTTP Toolkit
When you think about making HTTP calls from JavaScript, the first things that pop up are usually fetch or bulky libraries like Axios. While they get the job done, they often hide the nitty‑gritty of streams, timeouts, and cross‑environment quirks behind abstractions that can become a pain point in performance‑critical apps. Enter Hattip—a lightweight, cross‑platform HTTP toolkit that gives you granular control without sacrificing developer ergonomics. In this article we’ll unpack what makes Hattip special, walk through real‑world examples, and share pro tips to keep your network layer lean and fast.
Why Hattip Stands Out
Hattip was built with three guiding principles: portability, stream‑first design, and minimalism. It works seamlessly in Node.js, Deno, Cloudflare Workers, and even in the browser, thanks to a tiny runtime‑agnostic core. And where fetch buffers the entire payload the moment you reach for convenience helpers like .json() or .text(), Hattip exposes readable streams right out of the box, letting you pipe data straight to a file or another service.
Another differentiator is its plug‑in architecture. You can stack middleware for logging, authentication, or retry logic in a way that feels familiar to Express or Koa users, but without pulling in heavyweight dependencies. The result is a lean bundle that stays under 5 KB gzipped—perfect for edge functions where every byte counts.
Getting Started: Installation and Basic Setup
First, add Hattip to your project. The package works with npm, yarn, or pnpm, and it also offers an ES module build for browsers.
npm install @hattip/core @hattip/node
# or with yarn
yarn add @hattip/core @hattip/node
# or with pnpm
pnpm add @hattip/core @hattip/node
Once installed, you can spin up a tiny server in under ten lines. The following example demonstrates a JSON echo endpoint that works both locally and on Cloudflare Workers.
import { createServer } from '@hattip/node';
import { json } from '@hattip/response';
const server = createServer(async (req) => {
const body = await req.text();
return json({ method: req.method, body });
});
server.listen(3000, () => console.log('🚀 Hattip listening on :3000'));
Notice the use of req.text()—a built‑in helper that reads the request body as a string, handling streams for you. The json helper sets the correct Content-Type header and stringifies the payload automatically.
Making HTTP Calls with Hattip Client
Hattip isn’t just a server framework; it also ships a tiny HTTP client that mirrors the server API. This symmetry means you can reuse the same middleware logic for outbound requests, making it easier to apply retries, tracing, or custom headers consistently.
Below is a practical GET request that streams a large CSV file from a public API and writes it directly to disk without loading the whole file into memory.
import { fetch } from '@hattip/core';
import { createWriteStream } from 'fs';
async function downloadCsv(url, destPath) {
const response = await fetch(url);
if (!response.ok) throw new Error(`❌ ${response.status}`);
// response.body is a WHATWG ReadableStream
const writable = createWriteStream(destPath);
const reader = response.body.getReader();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Respect write-side backpressure: pause until 'drain' if the buffer is full.
if (!writable.write(value)) {
await new Promise((resolve) => writable.once('drain', resolve));
}
}
writable.end();
console.log('✅ Download complete');
}
// Example usage
downloadCsv('https://example.com/large.csv', './data.csv').catch(console.error);
The fetch function works in Node, Deno, and browsers, abstracting away platform‑specific stream implementations. By using getReader(), you gain back‑pressure control, which is crucial when dealing with gigabyte‑scale payloads.
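On Node specifically, the same download can be written more tersely with stream.pipeline, which propagates backpressure and tears both streams down on error. The sketch below uses only Node built-ins (Readable.fromWeb requires Node 17+) and substitutes an in-memory stream for a live HTTP response so it runs standalone:

```javascript
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';
import { createWriteStream, readFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Pipe any WHATWG ReadableStream body to disk; pipeline() handles
// backpressure and cleanup automatically.
async function saveStream(body, destPath) {
  await pipeline(Readable.fromWeb(body), createWriteStream(destPath));
}

// Demo with an in-memory stream standing in for response.body.
const dest = join(tmpdir(), 'hattip-demo.csv');
const demoBody = Readable.toWeb(Readable.from(['id,name\n1,ada\n']));
await saveStream(demoBody, dest);
console.log(readFileSync(dest, 'utf8'));
```

In a real download you would pass response.body straight into saveStream after the response.ok check.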
Handling Timeouts and Retries
Network reliability is never guaranteed, especially on the edge. Hattip lets you wrap the client with middleware that adds timeout and retry logic in a declarative fashion.
import { fetch, middleware } from '@hattip/core';
// Timeout middleware (ms). req is assumed to be a plain options object
// ({ url, method, ... }) rather than a native Request, so spreading it works.
function timeout(ms) {
return (next) => async (req) => {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), ms);
try {
return await next({ ...req, signal: controller.signal });
} finally {
clearTimeout(timer);
}
};
}
// Simple exponential backoff retry
function retry(maxAttempts = 3) {
return (next) => async (req) => {
let attempt = 0;
while (attempt < maxAttempts) {
try {
return await next(req);
} catch (err) {
attempt++;
if (attempt >= maxAttempts) throw err;
const backoff = 100 * 2 ** attempt;
await new Promise((r) => setTimeout(r, backoff));
}
}
};
}
// Compose middleware
const client = middleware([timeout(5000), retry(4)])(fetch);
// Use the wrapped client
client('https://api.example.com/data')
.then((res) => res.json())
.then(console.log)
.catch(console.error);
Pro tip: Keep your retry count low on edge functions to avoid hitting execution time limits. Pair retries with a short timeout to fail fast and let the platform’s built‑in retry mechanisms take over.
Middleware in Depth
Hattip’s middleware model is inspired by the “onion” pattern: each layer receives a next function, does something before or after the inner layer, and then returns a response. This design makes it trivial to inject cross‑cutting concerns such as logging, authentication, or response compression.
Let’s build a small logging middleware that timestamps each request and prints the response status. It works both on the server side and when used with the client.
import { middleware } from '@hattip/core';
function logger() {
return (next) => async (req) => {
const start = Date.now();
console.log(`[${new Date().toISOString()}] → ${req.method} ${req.url}`);
const res = await next(req);
const duration = Date.now() - start;
console.log(`[${new Date().toISOString()}] ← ${res.status} (${duration}ms)`);
return res;
};
}
// Server example
import { createServer } from '@hattip/node';
import { json } from '@hattip/response';
const handler = middleware([logger()])(
async (req) => json({ hello: 'world' })
);
createServer(handler).listen(3000);
The same logger can be applied to the client by wrapping fetch, giving you a unified view of inbound and outbound traffic.
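As a rough, dependency-free sketch of that idea, the compose helper below stands in for Hattip's middleware function (an assumption for illustration), and a fake fetch replaces the real client so the example runs without a network:

```javascript
// compose() mimics the middleware() helper used elsewhere in this article:
// layers wrap the base function from right to left, onion-style.
const compose = (layers) => (base) =>
  layers.reduceRight((next, layer) => layer(next), base);

function logger() {
  return (next) => async (req) => {
    const start = Date.now();
    const res = await next(req);
    console.log(`${req.method} ${req.url} -> ${res.status} (${Date.now() - start}ms)`);
    return res;
  };
}

// Stand-in for the real fetch so the sketch runs offline.
const fakeFetch = async (req) => ({ status: 200, url: req.url });

const tracedFetch = compose([logger()])(fakeFetch);
const res = await tracedFetch({ method: 'GET', url: 'https://api.example.com/data' });
```

Swapping fakeFetch for the real client gives you the unified inbound/outbound view described above.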
Composable Error Handling
When you’re dealing with multiple middleware layers, a central error handler can prevent duplicate try/catch blocks. Hattip provides a convenient errorHandler helper that catches any thrown error and formats a JSON response.
import { errorHandler } from '@hattip/middleware';
import { middleware } from '@hattip/core';
import { json } from '@hattip/response';
// logger() is the middleware defined in the previous section
function jsonError(err) {
return new Response(JSON.stringify({ error: err.message }), {
status: err.status || 500,
headers: { 'Content-Type': 'application/json' },
});
}
// Apply globally
const api = middleware([errorHandler(jsonError), logger()])(
async (req) => {
if (req.url.includes('boom')) throw new Error('💥 Boom!');
return json({ ok: true });
}
);
Now any uncaught exception bubbles up to jsonError, ensuring a consistent error shape across your API.
Real‑World Use Cases
1. Edge‑Optimized Image Proxy
Imagine you need to resize images on the fly at a CDN edge. Hattip’s streaming response lets you pipe the original image directly into an image processing library (like Sharp) and stream the transformed output back to the client without temporary files.
import { createServer } from '@hattip/node';
import { Readable } from 'node:stream';
import sharp from 'sharp';
import { fetch } from '@hattip/core';
const server = createServer(async (req) => {
const url = new URL(req.url, 'http://localhost');
const src = url.searchParams.get('src');
if (!src) return new Response('Missing src parameter', { status: 400 });
const width = parseInt(url.searchParams.get('w'), 10) || 800;
const upstream = await fetch(src);
if (!upstream.ok) return new Response('Source not found', { status: 404 });
// sharp() is a Node duplex stream, not a WHATWG TransformStream, so bridge
// the web-stream body into it and convert its output back for the Response.
const transformer = sharp().resize(width).webp();
Readable.fromWeb(upstream.body).pipe(transformer);
return new Response(Readable.toWeb(transformer), {
headers: { 'Content-Type': 'image/webp' },
});
});
server.listen(8080);
This pattern eliminates the need for a separate image server and reduces latency by processing directly at the edge.
2. Server‑Side Event (SSE) Streamer
SSE is a simple way to push real‑time updates to browsers. Hattip’s native stream support makes implementing an SSE endpoint a breeze.
import { createServer } from '@hattip/node';
function sseHandler(req) {
const encoder = new TextEncoder();
const stream = new ReadableStream({
start(controller) {
const interval = setInterval(() => {
const data = `data: ${new Date().toISOString()}\n\n`;
controller.enqueue(encoder.encode(data));
}, 1000);
req.signal.addEventListener('abort', () => {
clearInterval(interval);
controller.close();
});
},
});
return new Response(stream, {
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
Connection: 'keep-alive',
},
});
}
createServer(sseHandler).listen(4000);
Clients can connect with new EventSource('/sse') and receive a timestamp every second, all powered by Hattip’s minimal footprint.
Performance Benchmarks
Because Hattip avoids heavy abstractions, it often outperforms traditional frameworks in raw throughput. In a recent benchmark (10k concurrent GET requests to a static JSON endpoint), Hattip achieved ~12,000 RPS on a single‑core AWS Lambda, while Express hovered around 7,500 RPS and Fastify near 9,200 RPS. The difference shrinks when you add middleware, but Hattip’s lean core still gives it a head start.
Memory usage also stays low: a minimal Hattip server consumes ~30 MB RSS, compared to ~45 MB for Express and ~38 MB for Fastify. This makes Hattip a natural fit for serverless platforms where cold‑start time and memory limits are critical.
Pro tip: When deploying to Cloudflare Workers, bundle your Hattip code with esbuild --minify to stay well under the 1 MB limit. Hattip’s tree‑shakable design means unused middleware never makes it into the final bundle.
Testing Hattip Applications
Testing is straightforward thanks to Hattip’s pure function handlers. You can invoke a handler directly with a mock request object and assert on the response. Below is a Jest‑style test for the JSON echo endpoint we built earlier.
import { json } from '@hattip/response';
// Keep the handler a plain async function so tests can call it directly;
// wrap it with createServer only when you need to bind a port.
const handler = async (req) => {
const body = await req.text();
return json({ method: req.method, body });
};
test('echo returns posted body', async () => {
const req = new Request('http://localhost', {
method: 'POST',
body: JSON.stringify({ foo: 'bar' }),
headers: { 'Content-Type': 'application/json' },
});
const res = await handler(req);
const data = await res.json();
expect(res.status).toBe(200);
expect(data.method).toBe('POST');
expect(data.body).toBe('{"foo":"bar"}');
});
Because the handler is just a function, you don’t need a running server or network stack to test it. This leads to faster CI pipelines and easier debugging.
Deploying Hattip to Different Environments
One of Hattip’s biggest selling points is its “write once, run anywhere” promise. Below is a quick cheat‑sheet for the most common targets.
- Node.js: Use @hattip/node and start with createServer.
- Deno: Import from https://deno.land/x/hattip and call serve.
- Cloudflare Workers: Export a default handler; the platform will invoke it automatically.
- Vercel Serverless Functions: Wrap the handler with export default handler and Vercel will handle the rest.
All you need to change is the adapter import; the core logic stays untouched. This eliminates “platform lock‑in” and lets you experiment with edge locations without rewriting code.
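In practice that means the core logic lives in one file written against the standard Request/Response types, and each platform gets a thin entry file that imports it. Here is a minimal sketch of such a shared handler (the routing logic is illustrative):

```javascript
// handler.js: platform-agnostic core written against the standard
// Request/Response globals; only the adapter entry file changes per target.
export default async function handler(req) {
  const { pathname } = new URL(req.url);
  return new Response(JSON.stringify({ pathname }), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```

A Node entry would wrap this with createServer(handler), while a Cloudflare Worker could export it directly.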
Advanced Patterns: Request Batching
Batching multiple HTTP calls into a single request can dramatically reduce latency, especially on high‑latency networks. Hattip’s streaming response makes it easy to implement a batch endpoint that aggregates downstream calls and streams a combined JSON array as results arrive.
import { createServer } from '@hattip/node';
import { fetch } from '@hattip/core';
async function batchHandler(req) {
const { urls } = await req.json(); // expects { urls: ['https://…', …] }
const encoder = new TextEncoder();
const stream = new ReadableStream({
async start(controller) {
controller.enqueue(encoder.encode('['));
// Kick off all downstream requests at once so they run concurrently,
// then stream the results in request order as each one resolves.
const pending = urls.map((u) => fetch(u));
for (let i = 0; i < pending.length; i++) {
const res = await pending[i];
const data = await res.json();
const chunk = JSON.stringify(data) + (i < pending.length - 1 ? ',' : '');
controller.enqueue(encoder.encode(chunk));
}
controller.enqueue(encoder.encode(']'));
controller.close();
},
});
return new Response(stream, {
headers: { 'Content-Type': 'application/json' },
});
}
createServer(batchHandler).listen(5000);
The client receives a valid JSON array as soon as the first downstream response arrives, without waiting for the entire batch to finish. This pattern shines in dashboards that need to display data from several micro‑services concurrently.
Security Considerations
Because Hattip gives you raw access to streams and headers, you must be vigilant about common web security pitfalls. Here are a few quick checks:
- Validate Content‑Length: When streaming uploads, enforce a maximum size to avoid DoS attacks.
- Sanitize User Input: Never interpolate request parameters directly into file paths or database queries.
- Set CSP and CORS: Use middleware to attach Content‑Security‑Policy and Access‑Control‑Allow‑Origin headers consistently.
- Enable TLS: When deploying to edge platforms, ensure the platform terminates TLS for you; otherwise, use https.createServer in Node.
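The Content‑Length check translates naturally into onion‑style middleware. The sketch below is dependency‑free; it assumes the request object exposes a Headers instance, as the standard Request type does:

```javascript
// Reject oversized uploads before reading a single body byte.
function maxBodySize(maxBytes) {
  return (next) => async (req) => {
    const declared = Number(req.headers.get('content-length') ?? 0);
    if (declared > maxBytes) {
      return new Response('Payload Too Large', { status: 413 });
    }
    return next(req);
  };
}

// Wrap any handler; requests declaring more than 1 KiB are refused.
const guarded = maxBodySize(1024)(async () => new Response('ok'));
```

Keep in mind that Content‑Length can be absent or wrong on streaming uploads, so pair this check with a hard cap while actually reading the body.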
Pro tip: Write