Teleport: Zero Trust Access for Servers and Databases
Imagine a world where every SSH session, every database query, and every admin action is verified, logged, and encrypted—no matter where the client or server lives. That’s the promise of Zero Trust, and Teleport is one of the most pragmatic implementations of this model for modern infrastructure. In this article we’ll unpack how Teleport works, walk through real‑world setups, and share pro tips that help you lock down servers and databases without breaking developer velocity.
Zero Trust in a Nutshell
Zero Trust flips the traditional security mindset on its head. Instead of assuming everything inside the corporate network is safe, it assumes every request is hostile until proven otherwise. The core tenets are “never trust, always verify” and “least‑privilege access.”
For developers, this means:
- Identity‑driven access rather than IP‑based firewalls.
- Short‑lived credentials that expire automatically.
- End‑to‑end encryption for every session.
When you apply these principles to SSH and database connections, you eliminate a huge attack surface: stolen private keys, lingering root accounts, and unchecked lateral movement.
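The "short-lived credentials" tenet is easy to model: instead of a permanent key, access is granted via a certificate with a built-in expiry. A minimal sketch in plain Python (a toy model, not Teleport's actual certificate format):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ShortLivedCert:
    """Toy stand-in for a short-lived SSH certificate."""
    principal: str
    issued_at: datetime
    ttl: timedelta

    def is_valid(self, now: datetime) -> bool:
        # Valid only inside the [issued_at, issued_at + ttl) window.
        return self.issued_at <= now < self.issued_at + self.ttl

issued = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
cert = ShortLivedCert("jane", issued, timedelta(hours=2))
print(cert.is_valid(issued + timedelta(hours=1)))  # True: inside the 2h window
print(cert.is_valid(issued + timedelta(hours=3)))  # False: expired, must re-authenticate
```

A stolen certificate of this kind is only useful until its TTL elapses, which is the property that makes short-lived credentials safer than static keys.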
What Is Teleport?
Teleport is an open‑source access plane that provides secure, auditable access to SSH servers, Kubernetes clusters, web applications, and databases. It sits between the client and the target, handling authentication, authorization, and session recording.
Key features include:
- Identity federation – integrate with Okta, Azure AD, GitHub, or any OIDC provider.
- One‑click access – users run `tsh` to spin up a temporary session without ever handling private keys.
- Full audit trail – every command is recorded and searchable.
- Database proxy – Teleport acts as a TLS‑terminating gateway for PostgreSQL, MySQL, MongoDB, and more.
Because Teleport is built on a single, stateless proxy, you can deploy it in any environment—on‑prem, cloud VMs, or even edge devices.
Core Architecture
Auth Server
The Auth Server is the brain of Teleport. It stores user identities, role definitions, and public keys. All authentication requests flow through this component, which also issues short‑lived certificates for SSH and database access.
Proxy Service
The Proxy Service is the single entry point for all client traffic. It terminates TLS, validates the user’s certificate with the Auth Server, and forwards the request to the appropriate node or database.
Node (SSH) and Database Services
Each managed server runs a lightweight Teleport “node” daemon that registers itself with the Auth Server. For databases, a separate Teleport “database” daemon runs on the same host, exposing a local port that the proxy forwards to.
Optional Components
- Teleport Cloud – SaaS offering for centralized management.
- Telemetry & Metrics – Prometheus exporters for monitoring.
- Audit Log Backends – Elasticsearch, DynamoDB, or local file storage.
Understanding these pieces helps you decide where to place each service for high availability and minimal latency.
Getting Started: Installing Teleport
Teleport provides binary packages for Linux, macOS, and Windows. Below is a quick installation script for Ubuntu 22.04 that pulls the latest stable version, verifies the signature, and starts the service as a systemd unit.
```python
#!/usr/bin/env python3
"""Download, verify, and install the Teleport .deb package."""
import subprocess
import urllib.request

VERSION = "15.0.0"
BASE_URL = f"https://dl.teleport.dev/teleport/v{VERSION}"
DEB = f"teleport_{VERSION}_amd64.deb"
SIG = f"{DEB}.asc"

def run(cmd: str) -> None:
    subprocess.check_call(cmd, shell=True)

def download(file: str) -> None:
    urllib.request.urlretrieve(f"{BASE_URL}/{file}", file)

def verify() -> None:
    # Import Teleport public key (placeholder URL)
    run("curl -sSL https://get.gravitational.com/teleport-pubkey.asc | gpg --import")
    run(f"gpg --verify {SIG} {DEB}")

def install() -> None:
    run(f"sudo dpkg -i {DEB}")

def enable_service() -> None:
    run("sudo systemctl enable teleport")
    run("sudo systemctl start teleport")

def main() -> None:
    for f in (DEB, SIG):
        download(f)
    verify()
    install()
    enable_service()
    print("✅ Teleport installed and running")

if __name__ == "__main__":
    main()
```
The script demonstrates two best practices: cryptographic verification of the package and automated service enablement. After the install, you can check the status with `systemctl status teleport`.
Bootstrapping the Auth Server
In a single‑node deployment, the same binary runs both Auth and Proxy services. Initialize the cluster with a one‑time token that will be used by nodes to join.
```python
# Generate a join token (run on the Auth/Proxy host)
import json
import subprocess

def create_token() -> None:
    # Assumes tctl emits JSON here; older versions print plain text,
    # in which case the output must be parsed differently.
    cmd = ["tctl", "tokens", "add", "--type=node", "--ttl=24h"]
    out = subprocess.check_output(cmd).decode()
    token = json.loads(out)["token"]
    print(f"Node join token: {token}")

if __name__ == "__main__":
    create_token()
```
Copy the printed token and use it on each server you want to manage. The node daemon will automatically register itself with the Auth Server, and you’ll see the host appear in the Teleport UI.
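To confirm registration from a script rather than the UI, you can list the cluster's nodes and look for the new host. The JSON shape below is an illustrative assumption (the exact output of `tctl nodes ls --format=json` varies by Teleport version), so the parsing is kept deliberately minimal:

```python
import json

def registered_hostnames(tctl_json: str) -> list:
    """Extract hostnames from tctl-style JSON node listings.

    The field layout here is an assumption for illustration;
    adjust it to the JSON your Teleport version actually emits.
    """
    nodes = json.loads(tctl_json)
    return [n["spec"]["hostname"] for n in nodes]

# In production you would feed this the output of:
#   subprocess.check_output(["tctl", "nodes", "ls", "--format=json"])
sample = '[{"spec": {"hostname": "web-1"}}, {"spec": {"hostname": "db-1"}}]'
print(registered_hostnames(sample))  # ['web-1', 'db-1']
```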
Configuring SSH Access
Teleport replaces traditional SSH keys with short‑lived X.509 certificates. To grant a developer read‑only access to a set of production servers, define a role in the Auth Server’s YAML configuration.
```yaml
# roles/readonly.yaml
kind: role
metadata:
  name: prod-readonly
spec:
  allow:
    logins: ["ubuntu"]
    node_labels:
      env: "prod"
    rules:
      - resources: ["node"]
        verbs: ["read"]
  deny: {}
  options:
    max_session_ttl: "2h"
    forward_agent: false
```
Apply the role with `tctl create -f roles/readonly.yaml`. Then bind it to a user (or group) from your identity provider:
```yaml
# users/jane.yaml
kind: user
metadata:
  name: jane.doe@example.com
spec:
  roles: ["prod-readonly"]
  oidc_identities:
    - connector_name: "github"
      username: "jane-doe"
```
After syncing, Jane can log in with `tsh login --proxy=teleport.example.com` and then run `tsh ssh ubuntu@my-prod-server` without ever touching a private key.
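Conceptually, the role above is just a predicate over a login name and a node's labels. A simplified model of that check (not Teleport's actual policy engine, which also evaluates deny rules and richer matchers):

```python
def access_allowed(login: str, node_labels: dict, role: dict) -> bool:
    """Simplified allow-rule evaluation: the login must be in the
    role's allowed logins, and every label the role requires must
    match the node's labels exactly."""
    allow = role["allow"]
    if login not in allow["logins"]:
        return False
    return all(node_labels.get(k) == v for k, v in allow["node_labels"].items())

prod_readonly = {"allow": {"logins": ["ubuntu"], "node_labels": {"env": "prod"}}}
print(access_allowed("ubuntu", {"env": "prod"}, prod_readonly))     # True
print(access_allowed("root", {"env": "prod"}, prod_readonly))       # False: login not allowed
print(access_allowed("ubuntu", {"env": "staging"}, prod_readonly))  # False: label mismatch
```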
Database Access Made Simple
Teleport’s database proxy removes the need for per‑user credentials inside your DB. Instead, Teleport issues a temporary TLS certificate that the client presents to the database engine.
First, enable the database service in teleport.yaml:
```yaml
# teleport.yaml snippet
db_service:
  enabled: true
  databases:
    - name: prod-postgres
      protocol: postgres
      uri: "10.0.1.12:5432"
      static_labels:
        env: "prod"
        aws_region: "us-east-1"  # optional, for IAM auth
```
Restart Teleport, then create a role that permits read access to the database:
```yaml
# roles/db-read.yaml
kind: role
metadata:
  name: db-read
spec:
  allow:
    db_labels:
      env: "prod"
    db_roles: ["readonly"]
    db_names: ["prod-postgres"]
    rules:
      - resources: ["db"]
        verbs: ["read"]
  options:
    max_session_ttl: "30m"
```
Bind the role to a user, and they can connect using the Teleport client:
```shell
# One-liner for developers
tsh db login prod-postgres
tsh db connect prod-postgres --db-user=alice --db-name=app
```
Behind the scenes, Teleport forwards the request to 10.0.1.12:5432 over a mutually authenticated TLS channel, and every query is recorded.
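If an application needs to talk to the proxied database directly rather than through `psql`, the same client-certificate material that `tsh` writes to disk can be supplied as standard libpq parameters. A sketch of building such a connection string; the `~/.tsh` paths are illustrative assumptions, not guaranteed locations:

```python
def libpq_dsn(host: str, port: int, user: str, dbname: str,
              sslcert: str, sslkey: str, sslrootcert: str) -> str:
    """Build a libpq-style connection string that presents a client
    certificate over mutually authenticated TLS."""
    params = {
        "host": host, "port": port, "user": user, "dbname": dbname,
        "sslmode": "verify-full",
        "sslcert": sslcert, "sslkey": sslkey, "sslrootcert": sslrootcert,
    }
    return " ".join(f"{k}={v}" for k, v in params.items())

# Hypothetical paths: inspect your own ~/.tsh directory for the real layout.
dsn = libpq_dsn("localhost", 5432, "alice", "app",
                sslcert="~/.tsh/keys/example/alice-db-x509.pem",
                sslkey="~/.tsh/keys/example/alice",
                sslrootcert="~/.tsh/keys/example/certs.pem")
print(dsn)
```

The resulting string can be handed to any libpq-based driver; `sslmode=verify-full` ensures the client also verifies the proxy's certificate.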
Real‑World Use Cases
DevOps Engineer On‑Call
When an incident occurs, the on‑call engineer needs immediate, audited access to a fleet of EC2 instances across multiple AWS accounts. By federating Teleport with Okta and using the “incident‑response” role, the engineer receives a one‑time certificate that expires after the incident window, and every command is automatically stored in Elasticsearch for post‑mortem analysis.
Compliance‑Driven Auditing
Financial institutions often must retain SSH session logs for seven years. Teleport’s built‑in session recording can stream directly to an immutable S3 bucket, while role‑based policies enforce “read‑only” or “write” permissions per regulatory requirement.
Multi‑Cloud Database Consolidation
Enterprises with PostgreSQL instances in GCP, Azure, and on‑prem can expose each through Teleport, presenting a single psql endpoint. Developers never see the underlying passwords; access is governed by central RBAC, and audit logs are unified across clouds.
Pro Tips for Production Deployments
Tip 1: Run the Auth Server in HA mode using a backend like etcd or DynamoDB. This eliminates a single point of failure and enables seamless scaling.
Tip 2: Enable `audit_events_uri` to ship JSON audit events to a SIEM (e.g., Splunk) in real time. Correlate SSH commands with IAM changes for richer threat detection.
Tip 3: Use `tsh login --request-otp` to enforce MFA on every new session, even if the user is already authenticated via SSO.
Scaling Teleport: High Availability & Clustering
For large organizations, a single Auth/Proxy node becomes a bottleneck. Teleport supports clustering by running multiple Auth servers behind a load balancer and sharing a common backend (etcd, DynamoDB, or Consul). The Proxy service can also be horizontally scaled; each instance reads the same certificate authority data from the backend.
Typical HA topology:
- 3 Auth nodes (quorum) with etcd as the KV store.
- 2+ Proxy nodes behind an L7 load balancer (e.g., AWS ALB).
- Node daemons on every managed host, registering with the nearest Auth endpoint.
Remember to configure `proxy_public_addr` and `auth_servers` consistently across all instances; otherwise clients may receive mismatched certificates.
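A quick sanity check for that consistency requirement: given the parsed teleport.yaml from each instance, flag any keys that disagree. A sketch, assuming the configs have already been loaded into dicts (e.g., with a YAML parser):

```python
def find_mismatches(configs: list, keys: list) -> dict:
    """For each key that must be identical across instances, return
    the set of distinct values observed (only when more than one)."""
    mismatches = {}
    for key in keys:
        values = {str(cfg.get(key)) for cfg in configs}
        if len(values) > 1:
            mismatches[key] = values
    return mismatches

configs = [
    {"proxy_public_addr": "teleport.example.com:443", "auth_servers": "10.0.0.1:3025"},
    {"proxy_public_addr": "teleport.example.com:443", "auth_servers": "10.0.0.2:3025"},
]
print(find_mismatches(configs, ["proxy_public_addr", "auth_servers"]))
```

Here the two instances agree on `proxy_public_addr` but point at different `auth_servers`, so only the latter is reported.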
Monitoring & Alerting
Teleport exports Prometheus metrics at `/metrics` on the Proxy service. Key metrics to watch include `teleport_proxy_sessions_active`, `teleport_auth_failed_login_total`, and `teleport_node_heartbeat_seconds`. Set up alerts for spikes in failed logins or sudden drops in node heartbeats.
Example Prometheus rule to catch brute‑force attempts:
```yaml
# alerts.yml
groups:
  - name: teleport
    rules:
      - alert: HighFailedLogins
        expr: sum(rate(teleport_auth_failed_login_total[5m])) > 10
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Failed login rate above 10 per second for 2 minutes"
          description: "Potential credential stuffing on {{ $labels.instance }}."
```
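To see what the expression actually computes, here is the `rate()` arithmetic done by hand: Prometheus takes the per-second increase of the counter over the window, and the rule fires when the summed rate exceeds the threshold. A simplified sketch that ignores counter resets and Prometheus's extrapolation details:

```python
def per_second_rate(samples: list) -> float:
    """Approximate Prometheus rate(): counter increase over the
    window divided by elapsed seconds. Ignores counter resets."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# (timestamp_seconds, counter value) at the edges of a 5-minute window:
window = [(0.0, 100.0), (300.0, 3400.0)]
rate = per_second_rate(window)   # (3400 - 100) / 300 = 11.0 per second
print(rate, rate > 10)           # 11.0 True -> HighFailedLogins would fire
```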
Integrating with Identity Providers
Teleport’s OIDC connector abstracts away the complexities of each IdP. Below is a minimal GitHub OIDC connector definition:
```yaml
# connectors/github.yaml
kind: oidc
metadata:
  name: github
spec:
  client_id: "YOUR_GITHUB_CLIENT_ID"
  client_secret: "YOUR_GITHUB_CLIENT_SECRET"
  issuer_url: "https://github.com/login/oauth"
  claim_mappings:
    username: "login"
    groups: "team"
  redirect_url: "https://teleport.example.com/v1/webapi/oidc/callback"
```
After loading the connector with `tctl create -f connectors/github.yaml`, users can log in using their GitHub accounts, and Teleport will map GitHub teams to Teleport roles.
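That team-to-role mapping can be modeled as a simple lookup. A sketch of the idea; the team and role names below are hypothetical, and the real mapping lives in the connector configuration rather than application code:

```python
def roles_for_teams(teams: list, mapping: dict) -> list:
    """Resolve a user's Teleport roles from their GitHub team
    memberships, de-duplicated and sorted for stable output."""
    roles = set()
    for team in teams:
        roles.update(mapping.get(team, []))
    return sorted(roles)

mapping = {
    "platform": ["prod-readonly", "db-read"],  # hypothetical team/role names
    "security": ["auditor"],
}
print(roles_for_teams(["platform", "security"], mapping))
# ['auditor', 'db-read', 'prod-readonly']
```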
Security Hardening Checklist
- Enforce MFA at the IdP level and enable `require_mfa` in role definitions.
- Use short‑lived certificates (max 30 min) for highly privileged actions.
- Disable password authentication on all managed SSH servers; rely solely on Teleport certificates.
- Restrict node registration by whitelisting allowed IP ranges during token generation.
- Enable audit log encryption when storing logs in S3 or GCS.
Frequently Asked Questions
Do I still need SSH keys on my servers?
No. Teleport’s node daemon runs as root and manages a local `authorized_principals` file. All user access is mediated through certificates, eliminating the need for per‑user keys.
Can Teleport protect containers?
Yes. Teleport can run as a sidecar in Kubernetes, exposing a kubectl proxy that enforces the same RBAC and audit model for pod exec and port‑forward operations.
What’s the performance impact?
Because Teleport terminates TLS only at the proxy and then streams traffic over a single TCP connection, latency overhead is typically under 5 ms per hop.