SmartVPN: A VPN That Hides in Plain Sight Over WebSockets

@push.rocks/smartvpn tunnels all VPN traffic inside a standard WebSocket connection over HTTPS. To any firewall, DPI box, or network observer, it looks indistinguishable from ordinary web traffic (the same kind of connection a chat app or real-time dashboard would open on port 443). It passes through Cloudflare, corporate proxies, and restrictive networks that block traditional VPN protocols on sight. Under the hood, smartvpn splits the work between a TypeScript control plane for configuration, client management, and telemetry, and a Rust data plane that handles encryption, tunneling, QoS, and packet classification at native speed. The two halves talk over typed JSON-lines IPC. Neither side compromises.

The Architecture in 30 Seconds

Your TypeScript code instantiates a VpnClient or VpnServer. Behind the scenes, smartvpn uses @push.rocks/smartrust to spawn a Rust binary (smartvpn_daemon) and opens a bidirectional JSON channel to it, either over stdio for development or a Unix socket for production. Every method call on the TypeScript side becomes a JSON command to the Rust daemon. Every response is typed end to end.
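To make the IPC concrete, here is a minimal sketch of a newline-delimited JSON ("JSON-lines") codec of the kind such a channel uses. The command names and field shapes below are illustrative, not smartvpn's actual wire schema:

```typescript
// One JSON object per line; the newline is the frame delimiter.
// Field names here are an assumption for illustration only.
interface IpcCommand {
  id: number;          // correlates a response with its request
  method: string;      // e.g. "connect", "getConnectionQuality"
  params?: unknown;
}

function encodeCommand(cmd: IpcCommand): string {
  return JSON.stringify(cmd) + '\n';
}

function* decodeLines(chunk: string): Generator<IpcCommand> {
  // Split a buffered chunk into complete frames (skips empty lines).
  for (const line of chunk.split('\n')) {
    if (line.trim().length > 0) yield JSON.parse(line) as IpcCommand;
  }
}
```

The appeal of JSON-lines for a control plane is that each frame is independently parseable, so either side can resynchronize after a partial read by waiting for the next newline.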

The Rust daemon handles everything that needs to be fast: Noise NK handshakes, XChaCha20-Poly1305 encryption with random 24-byte nonces, TUN device I/O, a three-tier priority queue (ICMP/DNS/SSH at the top, bulk flows at the bottom), per-client token-bucket rate limiting, adaptive keepalives, and automatic path MTU calculation. The TypeScript side never touches a raw packet. The Rust side never parses a config file.

What the TypeScript Looks Like

Connecting as a client is about what you would expect from a well-designed Node library:

import { VpnClient } from '@push.rocks/smartvpn';

const client = new VpnClient({
  transport: { transport: 'stdio' },
});
await client.start();

const { assignedIp } = await client.connect({
  serverUrl: 'wss://vpn.example.com/tunnel',
  serverPublicKey: 'BASE64_SERVER_PUBLIC_KEY',
  dns: ['1.1.1.1', '8.8.8.8'],
  mtu: 1420,
});
console.log(`Connected: ${assignedIp}`);

That's it. Behind that await, the TypeScript layer spawned a Rust daemon, negotiated a Noise NK handshake with the server, stood up a TUN interface, and started routing encrypted traffic through a WebSocket tunnel. The transport choice (WebSocket over HTTPS) is deliberate. It passes through Cloudflare, corporate proxies, and anything else that terminates TLS. Most WireGuard setups choke in these environments. smartvpn doesn't.

Once connected, you get real-time telemetry for free. Call client.getConnectionQuality() and you get back smoothed RTT, jitter, loss ratio, and a link health classification (healthy / degraded / critical) — all measured by the daemon's own adaptive keepalive system, not by WebSocket pings that Cloudflare would swallow.
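As a sketch of how you might act on that telemetry, here is a toy classifier in the same spirit. The thresholds below are invented for illustration; the daemon's real classifier and its exact cutoffs are internal to the Rust side:

```typescript
type LinkHealth = 'healthy' | 'degraded' | 'critical';

// Illustrative thresholds only -- not smartvpn's actual cutoffs.
function classifyLink(rttMs: number, lossRatio: number): LinkHealth {
  if (lossRatio > 0.15 || rttMs > 800) return 'critical';
  if (lossRatio > 0.03 || rttMs > 250) return 'degraded';
  return 'healthy';
}
```

In practice you would read these values from client.getConnectionQuality() on an interval and, say, alert when the link leaves the healthy state.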

Easy Server-Side Control Over Clients

The server API (via the VpnServer class) gives you a typed control surface over every connected client, and you can change anything at runtime without dropping connections.

You can list clients, pull per-client telemetry (bytes in/out, packets dropped, keepalive history), set individual rate limits with server.setClientRateLimit(), and kick anyone with server.disconnectClient(). Rate limits use a token-bucket algorithm at byte granularity in the Rust daemon. When you call setClientRateLimit('client-id', 5_000_000, 10_000_000), the bucket parameters change in the Rust process instantly — no reconnection, no config reload, no restart. Most VPN stacks need custom C extensions for this kind of runtime control.
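The token-bucket semantics are easy to mirror in a few lines. This is a sketch of the idea, not the daemon's Rust implementation; the mapping of setClientRateLimit's two numbers onto sustained rate and burst capacity is an assumption:

```typescript
// A byte-granularity token bucket: tokens refill continuously at ratePerSec,
// capped at burst; a send succeeds only if enough tokens are available.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private ratePerSec: number, // sustained bytes per second
    private burst: number,      // bucket capacity in bytes
    now: number = Date.now(),
  ) {
    this.tokens = burst;
    this.lastRefill = now;
  }

  /** Returns true if `bytes` may be sent now; consumes tokens on success. */
  tryConsume(bytes: number, now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = now;
    if (bytes > this.tokens) return false;
    this.tokens -= bytes;
    return true;
  }
}
```

Changing a client's limit at runtime then amounts to swapping the rate and burst parameters on a live bucket, which is why no reconnection is needed.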

Keepalives and Adaptive Health Checks

If you tunnel over WebSocket through a reverse proxy like Cloudflare, your WebSocket-level pings get swallowed. The proxy responds on behalf of the server, so your client thinks the link is healthy when the backend might be dead. smartvpn solves this with application-level keepalives — custom ping/pong frames inside the encrypted tunnel carrying 8-byte timestamps for precise RTT measurement.
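The timestamp trick is simple: the ping carries the sender's clock, the peer echoes it back, and RTT is a single subtraction. A sketch of that encoding (the exact frame layout here is illustrative, not smartvpn's wire format):

```typescript
// Ping payload: an 8-byte big-endian millisecond timestamp.
// Layout and endianness are assumptions for illustration.
function encodePing(nowMs: number): Buffer {
  const buf = Buffer.alloc(8);
  buf.writeBigUInt64BE(BigInt(nowMs));
  return buf;
}

// The peer echoes the payload unchanged; RTT is now minus the echoed value.
function rttFromPong(pong: Buffer, nowMs: number): number {
  return nowMs - Number(pong.readBigUInt64BE());
}
```

Because the frames travel inside the encrypted tunnel, a proxy that terminates the WebSocket cannot answer them on the backend's behalf, which is the whole point.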

The interval adapts. The daemon runs a three-state finite state machine: healthy (60s interval), degraded (30s), and critical (10s). Transitions have hysteresis — you need three consecutive good checks to upgrade from degraded to healthy, two bad ones to downgrade, which prevents the flapping that wastes bandwidth on unstable mobile connections. Dead peer detection fires after three consecutive timeouts in critical state. The whole thing runs inside the Rust process with zero allocations in steady state.

Quality of Service

The Rust data plane classifies every decrypted IP packet into three priority tiers by inspecting headers — no deep packet inspection, just port numbers and packet size. ICMP, DNS (port 53), SSH (port 22), and small packets under 128 bytes get high priority. Bulk flows that exceed 1 MB within a 60-second window get demoted to low. Everything else is normal. The priority channels drain with biased scheduling: high-priority packets always go first.
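The classification rules above fit in one function. This is a sketch of the logic as described, with rule ordering (high-priority matches win over bulk demotion) as an assumption:

```typescript
type Priority = 'high' | 'normal' | 'low';

// Header-only classification: protocol, destination port, and packet size.
// The 60-second flow byte count is assumed to be tracked by the caller.
function classifyPacket(opts: {
  protocol: 'icmp' | 'tcp' | 'udp';
  dstPort?: number;
  sizeBytes: number;
  flowBytesLast60s: number;
}): Priority {
  const { protocol, dstPort, sizeBytes, flowBytesLast60s } = opts;
  if (protocol === 'icmp' || dstPort === 53 || dstPort === 22 || sizeBytes < 128) {
    return 'high';   // ICMP, DNS, SSH, and small packets
  }
  if (flowBytesLast60s > 1_000_000) {
    return 'low';    // bulk flow demotion past 1 MB in the window
  }
  return 'normal';
}
```

Because only headers are inspected, classification stays a constant-time operation per packet.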

Under backpressure, the drop policy is equally deliberate: low-priority packets get dropped silently, normal packets get dropped next, and high-priority packets wait up to 5ms before being dropped as a last resort. Drop statistics are tracked per priority level and exposed through the telemetry API. This means your SSH session stays snappy even when someone on the same VPN is saturating the link with a large download.

Crypto Choices

The Noise NK handshake pattern means the client knows the server's static public key upfront but doesn't authenticate itself — the same trust model as connecting to a website via TLS. Two DH operations (ephemeral-static, then ephemeral-ephemeral) produce forward-secret transport keys. Post-handshake, everything goes through XChaCha20-Poly1305 with random 24-byte nonces. The large nonce space means you never need counter synchronization between peers; just generate a random nonce per packet. The wire format is minimal: [nonce:24B][ciphertext][tag:16B], inside a binary frame of [type:1B][length:4B][payload].
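For orientation, here is what encoding and decoding the outer [type:1B][length:4B][payload] frame looks like. The field widths come from the text; big-endian byte order is an assumption:

```typescript
// Outer binary frame: 1-byte type, 4-byte payload length, then the payload.
// For data frames the payload would itself be [nonce:24B][ciphertext][tag:16B].
function encodeFrame(type: number, payload: Buffer): Buffer {
  const header = Buffer.alloc(5);
  header.writeUInt8(type, 0);
  header.writeUInt32BE(payload.length, 1); // endianness assumed
  return Buffer.concat([header, payload]);
}

function decodeFrame(frame: Buffer): { type: number; payload: Buffer } {
  const length = frame.readUInt32BE(1);
  return { type: frame.readUInt8(0), payload: frame.subarray(5, 5 + length) };
}
```

Note the per-packet crypto overhead implied by the inner layout: 24 bytes of nonce plus a 16-byte tag, or 40 bytes on top of each plaintext packet.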

The MTU Math

The daemon also calculates tunnel overhead precisely. IP header (20 bytes) + TCP with timestamps (32 bytes) + WebSocket framing (6 bytes) + VPN frame header (5 bytes) + Noise AEAD tag (16 bytes) = 79 bytes of overhead. On a standard 1500-byte Ethernet link, that gives you an effective TUN MTU of 1421 bytes. The default of 1420 is conservative and correct. Oversized packets get an ICMP Fragmentation Needed written back into the TUN so the source TCP adjusts its MSS automatically. You can inspect all of this at runtime through client.getMtuInfo().
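The arithmetic, spelled out:

```typescript
// Per-layer overhead in bytes, as given in the text.
const OVERHEAD = {
  ip: 20,        // IPv4 header
  tcp: 32,       // TCP header with timestamps option
  websocket: 6,  // WebSocket framing
  vpnFrame: 5,   // [type:1B][length:4B]
  aeadTag: 16,   // Poly1305 authentication tag
};

function tunnelOverheadBytes(): number {
  return Object.values(OVERHEAD).reduce((a, b) => a + b, 0); // 79
}

function effectiveTunMtu(linkMtu: number): number {
  return linkMtu - tunnelOverheadBytes(); // 1500 -> 1421
}
```

Setting the TUN MTU one byte below the computed 1421 buys a small safety margin against links whose real path MTU is slightly smaller than the Ethernet default.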

What You Get From the Process Split

Plenty of projects use napi-rs or wasm-bindgen to call Rust from Node. smartvpn does something different: it draws the boundary at the process level instead of FFI. The Rust daemon is a standalone binary that communicates over IPC. The VPN tunnel survives TypeScript process crashes. You can upgrade your control plane without dropping connections. And the Rust code stays a normal Cargo project with normal Rust tests (71 of them), no Node.js required.

Both VpnClient and VpnServer extend EventEmitter, so you can hook into lifecycle events like exit, reconnected, client-connected, and client-disconnected to build monitoring and alerting in the same language your team already knows. The entire TypeScript API is fully typed with exported interfaces for every config, status, and telemetry object.

The source is MIT-licensed and available at code.foss.global/push.rocks/smartvpn. Install it with pnpm install @push.rocks/smartvpn; a single pnpm build compiles the TypeScript and cross-compiles the Rust daemon for amd64 and arm64.

Going to Production

In development, the TypeScript process spawns the Rust daemon as a child process via stdio. In production, you flip the transport to socket and point it at a Unix socket where the daemon already runs as a systemd or launchd service:

const client = new VpnClient({
  transport: {
    transport: 'socket',
    socketPath: '/var/run/smartvpn.sock',
    autoReconnect: true,
    maxReconnectAttempts: 10,
  },
});

When using socket transport, client.stop() closes the socket but doesn't kill the daemon. That's exactly what you want when your TypeScript service restarts but the VPN tunnel should stay up. The VpnInstaller utility even generates the systemd unit files and launchd plists for you, with platform detection built in.
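VpnInstaller generates the service files for you; purely for orientation, a hand-written unit might look roughly like this (the binary path and flags below are illustrative assumptions, so prefer the generated output in practice):

```ini
# /etc/systemd/system/smartvpn.service (illustrative sketch)
[Unit]
Description=smartvpn Rust data-plane daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/smartvpn_daemon --socket /var/run/smartvpn.sock
Restart=on-failure
# Creating and configuring a TUN device requires CAP_NET_ADMIN.
AmbientCapabilities=CAP_NET_ADMIN

[Install]
WantedBy=multi-user.target
```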
