Building a Real-Time Data Dashboard with FastAPI, WebSockets, and Plotly (February 17, 2026)
Welcome to this hands‑on tutorial where we’ll build a real‑time data dashboard from scratch. By the end of the guide, you’ll have a fully functional web app that streams live metrics via WebSockets, visualizes them with Plotly, and runs on a lightweight FastAPI server. The approach balances simplicity with production‑ready patterns, so you can adapt it to anything from IoT telemetry to stock‑price tickers.
Why Real‑Time Dashboards Matter
In today’s data‑driven world, waiting for a page refresh feels archaic. Real‑time dashboards let stakeholders react instantly—whether it’s a DevOps engineer spotting a spike in CPU usage or a product manager monitoring user engagement during a launch. By leveraging WebSockets, we push updates the moment they happen, eliminating the latency inherent in traditional polling.
FastAPI shines in this space because it’s built on ASGI, giving us native async support without extra glue code. Pair it with Plotly’s interactive charts, and you get a sleek UI that feels responsive on both desktop and mobile browsers. The stack we’ll use is deliberately minimal: Python 3.12, FastAPI, Uvicorn, and Plotly.js on the client side.
Setting Up the Development Environment
First, ensure you have Python 3.12 or newer installed. Create a virtual environment to keep dependencies isolated:
python -m venv venv
source venv/bin/activate # On Windows use `venv\Scripts\activate`
pip install fastapi uvicorn python-dotenv plotly
We’ll also need the websockets package for a clean client implementation; pydantic, which handles data validation, is already bundled with FastAPI. Install the extra dependency with:
pip install websockets
Once the packages are in place, spin up a fresh project folder structure:
- app/
  - main.py
  - schemas.py
- static/
  - index.html
  - dashboard.js
- utils.py (the simulated data source; kept at the project root so it can import the app package)
- .env
This layout separates the API logic from static assets, making it easier to serve the front‑end directly from FastAPI’s static files middleware.
Designing the Data Model
Our dashboard will display three metrics: temperature, humidity, and pressure. Each data point arrives as a JSON payload with a timestamp. Let’s define a Pydantic model to enforce schema consistency:
# app/schemas.py
from datetime import datetime, timezone

from pydantic import BaseModel, Field


class Metric(BaseModel):
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    temperature: float
    humidity: float
    pressure: float
Using a default_factory ensures every incoming metric is stamped server-side, preventing clock drift between devices. Note that datetime.utcnow is deprecated as of Python 3.12, so we use the timezone-aware datetime.now(timezone.utc) instead.
Pro Tip: If you expect high‑frequency data (e.g., >100 Hz), consider using msgpack instead of JSON to reduce payload size and parsing overhead.
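To make that size difference concrete, here is a stdlib-only sketch comparing JSON text against a fixed-layout binary encoding built with struct; msgpack (a separate package, not used elsewhere in this tutorial) typically lands between the two in size while remaining schema-less.

```python
import json
import struct
import time

point = {
    "timestamp": time.time(),
    "temperature": 24.7,
    "humidity": 51.3,
    "pressure": 1012.4,
}

# JSON: human-readable, but every key name and digit costs bytes
as_json = json.dumps(point).encode()

# Fixed layout: four little-endian doubles, 8 bytes each
as_binary = struct.pack(
    "<4d",
    point["timestamp"], point["temperature"],
    point["humidity"], point["pressure"],
)

print(len(as_json), len(as_binary))  # binary is exactly 32 bytes; JSON is roughly 3x larger
```

The trade-off is that the binary layout is positional: both sides must agree on field order up front, which is exactly what schemas like Protocol Buffers formalize.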
Building the FastAPI Backend
Creating the FastAPI Instance
Open app/main.py and instantiate the FastAPI app. We’ll also mount the static directory so the browser can fetch HTML, CSS, and JavaScript files directly.
# app/main.py
import asyncio
from pathlib import Path

from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.staticfiles import StaticFiles

from .schemas import Metric

app = FastAPI()

# Resolve static/ relative to the project root so the mount works
# no matter which directory the server is started from
STATIC_DIR = Path(__file__).resolve().parent.parent / "static"
app.mount("/static", StaticFiles(directory=STATIC_DIR), name="static")
Managing WebSocket Connections
To broadcast metrics to all connected clients, we’ll maintain an in‑memory set of active WebSocket connections. This simple approach works well for demo purposes; for production, you’d replace it with a Redis pub/sub layer.
# app/main.py (continued)
class ConnectionManager:
    def __init__(self):
        self.active_connections: set[WebSocket] = set()

    async def connect(self, websocket: WebSocket):
        await websocket.accept()
        self.active_connections.add(websocket)

    def disconnect(self, websocket: WebSocket):
        self.active_connections.discard(websocket)

    async def broadcast(self, message: str):
        # Send to all clients concurrently
        await asyncio.gather(
            *[ws.send_text(message) for ws in self.active_connections],
            return_exceptions=True,
        )


manager = ConnectionManager()
Ingest and WebSocket Endpoints
Ingestion happens over plain HTTP: a POST /ingest route receives metric data (simulating an IoT device) and pushes the same payload to every connected WebSocket client. This decouples data ingestion from consumption.
# app/main.py (continued)
@app.post("/ingest")
async def ingest(metric: Metric):
    # Serialize to JSON for broadcasting; in Pydantic v2,
    # model_dump_json() replaces the deprecated .json()
    payload = metric.model_dump_json()
    await manager.broadcast(payload)
    return {"status": "queued"}
@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await manager.connect(websocket)
    try:
        while True:
            # Keep connection alive; we don't expect messages from the client
            await websocket.receive_text()
    except WebSocketDisconnect:
        manager.disconnect(websocket)
Notice the dummy receive_text loop—its sole purpose is to keep the connection alive and detect disconnects cleanly.
Simulating a Data Source
To see the dashboard in action, we’ll write a small script that generates random metrics every second. In a real scenario, this would be replaced by sensor firmware or a message broker.
# utils.py
import asyncio
import random

import httpx

from app.schemas import Metric

API_URL = "http://127.0.0.1:8000/ingest"


async def send_random_metric():
    async with httpx.AsyncClient() as client:
        while True:
            metric = Metric(
                temperature=round(random.uniform(20, 30), 2),
                humidity=round(random.uniform(30, 70), 2),
                pressure=round(random.uniform(990, 1025), 2),
            )
            # mode="json" serializes the datetime to an ISO string;
            # plain .dict() is deprecated in Pydantic v2 and would leave
            # a non-JSON-serializable datetime in the payload
            await client.post(API_URL, json=metric.model_dump(mode="json"))
            await asyncio.sleep(1)


if __name__ == "__main__":
    asyncio.run(send_random_metric())
Run the FastAPI server from the project root with uvicorn app.main:app --reload, then run python utils.py in a separate terminal. You’ll see a steady stream of JSON objects being pushed to all connected browsers.
Pro Tip: When scaling, replace the in‑memory ConnectionManager with a message broker (e.g., RabbitMQ or Redis Streams) to guarantee delivery across multiple server instances.
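The fan-out pattern such a broker provides can be sketched broker-agnostically with asyncio queues. This in-process Broker class is a stand-in for Redis pub/sub used purely for illustration; its name and API are not part of any library.

```python
import asyncio


class Broker:
    """In-process stand-in for Redis pub/sub: one queue per subscriber."""

    def __init__(self) -> None:
        self.subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        queue: asyncio.Queue = asyncio.Queue()
        self.subscribers.add(queue)
        return queue

    async def publish(self, message: str) -> None:
        # Every subscriber gets its own copy, just like a pub/sub channel
        for queue in self.subscribers:
            await queue.put(message)


async def demo() -> list[str]:
    broker = Broker()
    first, second = broker.subscribe(), broker.subscribe()
    await broker.publish('{"temperature": 22.1}')
    return [await first.get(), await second.get()]


received = asyncio.run(demo())
print(received)  # both subscribers see the same payload
```

With Redis, each server process would subscribe to one channel and forward messages to its local WebSocket clients, so a metric ingested on any worker reaches every browser.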
Crafting the Frontend Dashboard
HTML Boilerplate
The static/index.html file loads Plotly.js from a CDN, establishes a WebSocket connection, and sets up three line charts—one for each metric. Keep the markup minimal to focus on the JavaScript logic.
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Real-Time Dashboard</title>
  <script src="https://cdn.plot.ly/plotly-2.27.0.min.js"></script>
  <style>
    body { font-family: Arial, sans-serif; margin: 2rem; }
    .chart { width: 100%; height: 300px; margin-bottom: 2rem; }
  </style>
</head>
<body>
  <h1>Live Sensor Metrics</h1>
  <div id="tempChart" class="chart"></div>
  <div id="humChart" class="chart"></div>
  <div id="presChart" class="chart"></div>
  <script src="/static/dashboard.js"></script>
</body>
</html>
JavaScript Logic
The dashboard.js script creates Plotly traces, updates them in place, and handles reconnection logic. By mutating the existing traces instead of redrawing the whole chart, we achieve smooth animations even at high update rates.
// static/dashboard.js
const WS_URL = `${location.protocol === 'https:' ? 'wss' : 'ws'}://${location.host}/ws`;
let socket = null;
let reconnectAttempts = 0;
const MAX_POINTS = 100; // Keep chart lightweight

function initChart(divId, yTitle) {
  const layout = {
    title: yTitle,
    xaxis: { title: 'Time' },
    yaxis: { autorange: true },
    margin: { t: 30, b: 40 }
  };
  Plotly.newPlot(divId, [{ x: [], y: [], mode: 'lines', line: { color: '#1f77b4' } }], layout);
}

initChart('tempChart', 'Temperature (°C)');
initChart('humChart', 'Humidity (%)');
initChart('presChart', 'Pressure (hPa)');

function addPoint(chartId, timestamp, value) {
  Plotly.extendTraces(chartId, { x: [[timestamp]], y: [[value]] }, [0]);
  const chartDiv = document.getElementById(chartId);
  const xs = chartDiv.data[0].x;
  if (xs.length > MAX_POINTS) {
    // Slide the visible window; dotted 'xaxis.range' keys update just the
    // range without clobbering the rest of the axis config (e.g., its title)
    Plotly.relayout(chartDiv, {
      'xaxis.range': [xs[xs.length - MAX_POINTS], xs[xs.length - 1]]
    });
  }
}

function connect() {
  socket = new WebSocket(WS_URL);
  socket.onopen = () => {
    console.log('WebSocket connected');
    reconnectAttempts = 0;
  };
  socket.onmessage = (event) => {
    const data = JSON.parse(event.data);
    const ts = new Date(data.timestamp);
    addPoint('tempChart', ts, data.temperature);
    addPoint('humChart', ts, data.humidity);
    addPoint('presChart', ts, data.pressure);
  };
  socket.onclose = () => {
    console.warn('WebSocket closed, attempting reconnection...');
    if (reconnectAttempts < 5) {
      setTimeout(connect, 2000 * ++reconnectAttempts); // linear backoff
    }
  };
  socket.onerror = (err) => {
    console.error('WebSocket error:', err);
    socket.close();
  };
}

// Kick off the connection when the page loads
window.addEventListener('load', connect);
The script uses Plotly.extendTraces to append new points efficiently. We also enforce a sliding window of MAX_POINTS to keep memory usage predictable.
Real‑World Use Cases
IoT Device Monitoring: Deploy the FastAPI server on an edge gateway, and have hundreds of sensors POST metrics. The dashboard gives operators a single pane of glass to spot anomalies instantly.
Financial Tick Data: Replace the random generator with a market data feed (e.g., WebSocket from a broker). The same front‑end can render price movements, volume spikes, and order‑book depth in real time.
Application Health Checks: Feed internal health metrics—CPU, memory, request latency—into the /ingest endpoint. Teams can set up alerts when thresholds are crossed, leveraging Plotly’s built‑in hover tooltips for quick diagnostics.
Performance Optimizations
While the demo runs comfortably on a single‑core laptop, production workloads demand careful tuning. Here are three quick wins:
- Batch Broadcasts: Instead of sending every metric individually, accumulate a short buffer (e.g., 100 ms) and broadcast as a JSON array. This reduces network chatter.
- Binary Serialization: Switch from JSON to msgpack or Protocol Buffers for lower latency and smaller payloads.
- Horizontal Scaling: Deploy multiple FastAPI workers behind a load balancer and use a shared Redis pub/sub channel for WebSocket messages. This ensures all clients receive the same stream regardless of which worker they connect to.
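The batching idea can be sketched with an asyncio task that drains a queue on a fixed tick. The fake_broadcast callable below is a test stand-in; in the real app, manager.broadcast from earlier would take its place.

```python
import asyncio
import json


async def batching_broadcaster(queue, broadcast, interval=0.1):
    """Every `interval` seconds, drain the queue and broadcast one JSON array."""
    while True:
        await asyncio.sleep(interval)
        batch = []
        while not queue.empty():
            batch.append(queue.get_nowait())
        if batch:
            # One message per tick instead of one per metric
            await broadcast(json.dumps(batch))


async def demo():
    queue: asyncio.Queue = asyncio.Queue()
    sent = []

    async def fake_broadcast(message: str):
        sent.append(message)

    task = asyncio.create_task(batching_broadcaster(queue, fake_broadcast))
    for temp in (21.5, 21.7, 21.6):
        queue.put_nowait({"temperature": temp})
    await asyncio.sleep(0.25)  # let at least one tick fire
    task.cancel()
    return sent


sent = asyncio.run(demo())
print(len(sent))  # one broadcast carrying all three readings
```

The client-side change is symmetric: onmessage parses an array and loops over it, which also amortizes Plotly's redraw cost.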
Pro Tip: Uvicorn’s --ws flag (e.g., --ws websockets) only selects which WebSocket protocol implementation each worker uses; it does not coordinate anything across processes. With multiple workers, clients on different workers only see the same stream if broadcasts flow through a shared channel such as Redis pub/sub.
Deploying to Production
Containerization is the most straightforward path. Create a Dockerfile that installs dependencies, copies the source, and runs Uvicorn with --host 0.0.0.0:
# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Build and run the image:
docker build -t realtime-dashboard .
docker run -d -p 8000:8000 realtime-dashboard
For high availability, orchestrate the containers with Kubernetes. Define a Deployment with at least two replicas, expose it via a Service, and attach an Ingress that terminates TLS. Remember to mount a Redis sidecar or external service for shared pub/sub.
Testing and Validation
Automated tests help catch regressions early. Use pytest together with FastAPI’s TestClient (built on httpx), which can open a WebSocket session and issue POST requests against the app in-process, to verify that an ingested metric is broadcast to connected clients. (Note that httpx’s AsyncClient has no WebSocket support of its own, which is why we reach for TestClient here.)
# tests/test_ingest.py
from fastapi.testclient import TestClient

from app.main import app

client = TestClient(app)


def test_ingest_broadcast():
    # The WebSocket session and the POST hit the same in-process
    # ConnectionManager, so the broadcast is observable here
    with client.websocket_connect("/ws") as websocket:
        payload = {
            "temperature": 25.5,
            "humidity": 55.2,
            "pressure": 1005.3,
        }
        response = client.post("/ingest", json=payload)
        assert response.status_code == 200
        data = websocket.receive_json()
        assert data["temperature"] == payload["temperature"]