SmartProxy: A Rust-Powered Proxy Toolkit You Configure in TypeScript

@push.rocks/smartproxy is a production proxy that handles TCP, TLS, HTTP reverse proxying, WebSockets, UDP, QUIC/HTTP3, load balancing, and kernel-level NFTables forwarding — all from a single route-based TypeScript API. Under the hood, a Rust binary does every byte of networking. Your TypeScript code just describes what should happen. The Rust engine figures out how — fast.

The Problem with Proxy Configuration

If you've ever set up nginx, HAProxy, or Envoy, you know the drill: write YAML or a bespoke config language, restart the process, hope the syntax is right, grep through docs for the directive name you forgot. There's no type checking, no autocomplete, no way to express "route this domain to backend A unless the path starts with /api, in which case terminate TLS and forward to backend B with rate limiting."

SmartProxy replaces all of that with typed TypeScript objects. A route is a match/action pair. The match says what traffic to capture. The action says what to do with it. TypeScript catches your mistakes before the proxy even starts.

30 Seconds to HTTPS

import { SmartProxy, SocketHandlers } from '@push.rocks/smartproxy';

const proxy = new SmartProxy({
  acme: {
    email: 'ssl@yourdomain.com',
    useProduction: true
  },
  routes: [
    // Redirect HTTP → HTTPS
    {
      name: 'redirect-to-https',
      match: { ports: 80, domains: 'app.example.com' },
      action: {
        type: 'socket-handler',
        socketHandler: SocketHandlers.httpRedirect('https://{domain}{path}', 301)
      }
    },
    // Terminate TLS, forward to backend
    {
      name: 'app-https',
      match: { ports: 443, domains: 'app.example.com' },
      action: {
        type: 'forward',
        targets: [{ host: 'localhost', port: 3000 }],
        tls: { mode: 'terminate', certificate: 'auto' }
      }
    }
  ]
});

await proxy.start();

Two routes. The first catches HTTP on port 80 and redirects to HTTPS using a built-in socket handler with template variables ({domain}, {path}). The second terminates TLS on port 443 with an automatic Let's Encrypt certificate and forwards plain HTTP to your backend. No certificate files to manage. No cron jobs for renewal. The Rust engine handles ACME challenges, writes nothing to disk, and calls back into your TypeScript code if you want custom persistence.

The Route Model

Every route in SmartProxy follows the same shape:

{
  name: 'api-route',
  match: {
    ports: 443,
    domains: 'api.example.com',
    path: '/v1/*'
  },
  action: {
    type: 'forward',
    targets: [{ host: 'backend', port: 8080 }],
    tls: { mode: 'terminate', certificate: 'auto' }
  }
}

The match block supports ports (single, array, or ranges), domains (exact or wildcard), paths, transport protocol (tcp, udp, or all), application-layer protocol (http, tcp, quic), client IP ranges, TLS versions, and HTTP headers. The action block is either forward (proxy to backends) or socket-handler (hand the raw socket to your TypeScript function). Every field is typed. The match criteria compose — you can have a route that only fires for QUIC traffic from a specific IP range on a specific port with a specific SNI hostname.
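To make the composition concrete, here is a sketch of the route described in the last sentence: it fires only when every criterion holds. The field names follow the route model shown above; `clientIp` is my assumption for how the client IP range criterion is spelled, since the text names the capability but not the field.

```typescript
// A route whose match combines several criteria — it matches only
// when ALL of them hold at once. clientIp is an assumed field name.
const lockedDownRoute = {
  name: 'quic-internal-only',
  match: {
    ports: 443,
    transport: 'udp',                // QUIC rides on UDP
    protocol: 'quic',                // application-layer protocol
    domains: 'internal.example.com', // SNI hostname
    clientIp: ['10.0.0.0/8'],        // only this IP range
  },
  action: {
    type: 'forward',
    targets: [{ host: 'quic-backend', port: 8443 }],
    tls: { mode: 'terminate', certificate: 'auto' },
  },
};
```

Any connection that fails even one criterion falls through to the next route in priority order.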

Routes are evaluated in priority order. First match wins. You can update them at runtime with proxy.updateRoutes() — the operation is atomic and mutex-locked, so in-flight connections aren't affected.

Three TLS Modes

SmartProxy supports three ways to handle TLS, and the choice matters:

Passthrough routes encrypted traffic based on the SNI hostname without decrypting it. The backend handles TLS. The proxy never sees plaintext. This is what you want when you can't or don't want to terminate TLS at the proxy layer.

{
  name: 'tls-passthrough',
  match: { ports: 443, domains: 'secure.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'backend-that-handles-tls', port: 8443 }],
    tls: { mode: 'passthrough' }
  }
}

Terminate decrypts at the proxy and forwards plain HTTP to the backend. This is the standard reverse proxy model. The proxy can inspect HTTP headers, match paths, add headers, and do request-level routing.

Terminate-and-reencrypt decrypts at the proxy, then re-encrypts to the backend. HTTP traffic gets full per-request routing (Host header, path matching) via the HTTP proxy; non-HTTP traffic uses a raw TLS-to-TLS tunnel. This is for zero-trust environments where traffic must be encrypted on every hop.
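A minimal sketch of the third mode, following the same route shape as the passthrough example — the `'terminate-and-reencrypt'` mode string is taken from the section name above, so treat the exact spelling as an assumption:

```typescript
// Zero-trust style: decrypt at the proxy, then open a fresh TLS
// session to the backend so traffic is encrypted on every hop.
const reencryptRoute = {
  name: 'zero-trust-api',
  match: { ports: 443, domains: 'api.internal.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'tls-backend', port: 8443 }],
    tls: { mode: 'terminate-and-reencrypt', certificate: 'auto' },
  },
};
```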

The Rust Engine

All networking — TCP listeners, TLS handshakes, HTTP parsing, connection pooling, security enforcement, metrics collection, UDP sockets, QUIC — runs inside a Rust binary. The TypeScript process communicates with it over JSON IPC on stdin/stdout. This isn't an FFI binding or a WASM module. It's a separate process.

The architecture diagram tells the story:

┌─────────────────────────────────────────────────┐
│                Your Application                 │
│     (TypeScript — routes, config, handlers)     │
└────────────────────────┬────────────────────────┘
                         │  IPC (JSON over stdin/stdout)
┌────────────────────────▼────────────────────────┐
│                Rust Proxy Engine                │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐            │
│  │ TCP/TLS │ │  HTTP   │ │  Route  │            │
│  │ Listener│ │  Proxy  │ │ Matcher │            │
│  └─────────┘ └─────────┘ └─────────┘            │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐            │
│  │   UDP   │ │Security │ │ Metrics │            │
│  │  QUIC   │ │ Enforce │ │ Collect │            │
│  └─────────┘ └─────────┘ └─────────┘            │
└─────────────────────────────────────────────────┘

The process split is deliberate. The Rust engine doesn't parse config files. The TypeScript side never touches a raw packet. When a route uses a JavaScript socket handler or a dynamic host function (a callback that can't be serialized to Rust), the Rust engine relays that connection back to a TypeScript-side Unix socket server. Everything else stays in Rust.

Load Balancing with Health Checks

{
  name: 'load-balanced-app',
  match: { ports: 443, domains: 'app.example.com' },
  action: {
    type: 'forward',
    targets: [
      { host: 'server1.internal', port: 8080 },
      { host: 'server2.internal', port: 8080 },
      { host: 'server3.internal', port: 8080 }
    ],
    tls: { mode: 'terminate', certificate: 'auto' },
    loadBalancing: {
      algorithm: 'round-robin',
      healthCheck: {
        path: '/health',
        interval: 30000,
        timeout: 5000,
        unhealthyThreshold: 3,
        healthyThreshold: 2
      }
    }
  }
}

Three algorithms: round-robin, least-connections, and ip-hash. Health checks run at the configured interval and remove backends that fail the unhealthy threshold. They come back when they pass the healthy threshold. No external health checker needed.

UDP, QUIC, and HTTP/3

SmartProxy isn't TCP-only. Set transport: 'udp' in a route match and you're listening for datagrams. Pair it with a datagramHandler and you can implement any UDP protocol in TypeScript:

const udpHandler: TDatagramHandler = (datagram, info, reply) => {
  console.log(`UDP from ${info.sourceIp}:${info.sourcePort}`);
  reply(datagram); // Echo it back
};

const proxy = new SmartProxy({
  routes: [{
    name: 'udp-echo',
    match: { ports: 5353, transport: 'udp' },
    action: {
      type: 'socket-handler',
      datagramHandler: udpHandler,
      udp: {
        sessionTimeout: 60000,
        maxSessionsPerIP: 100,
        maxDatagramSize: 65535
      }
    }
  }]
});

For QUIC and HTTP/3, set protocol: 'quic' and configure TLS (QUIC requires TLS 1.3). SmartProxy can receive QUIC on the frontend and translate to TCP for backends that don't speak QUIC yet:

const quicRoute: IRouteConfig = {
  name: 'quic-to-backend',
  match: {
    ports: 443,
    transport: 'udp',
    protocol: 'quic'
  },
  action: {
    type: 'forward',
    targets: [{
      host: 'backend-server',
      port: 8443,
      backendTransport: 'tcp'  // QUIC → TCP translation
    }],
    tls: { mode: 'terminate', certificate: 'auto' },
    udp: {
      quic: {
        enableHttp3: true,
        maxIdleTimeout: 30000,
        altSvcPort: 443,
        altSvcMaxAge: 86400
      }
    }
  }
};

You can also listen on both TCP and UDP with transport: 'all' and provide separate socketHandler and datagramHandler callbacks — useful for protocols like DNS that serve on both transports.
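A dual-transport route might look like this sketch, reusing the handler signatures shown in the UDP echo example above. The empty TCP handler body and the port choice are illustrative:

```typescript
// DNS-style service: one route, both transports. The TCP handler gets
// a stream socket; the datagram handler gets individual UDP packets.
const dnsStyleRoute = {
  name: 'dns-both-transports',
  match: { ports: 53, transport: 'all' },
  action: {
    type: 'socket-handler',
    socketHandler: async (socket: any) => {
      // TCP path: long-lived connection, length-prefixed messages
    },
    datagramHandler: (datagram: any, info: any, reply: any) => {
      // UDP path: one datagram in, one datagram out
      reply(datagram);
    },
  },
};
```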

Best-Effort Backend Protocol

By default, SmartProxy uses the highest protocol your backend supports. It follows the same discovery model as browsers:

  1. First request probes via TLS ALPN for HTTP/2 or HTTP/1.1
  2. If the backend responds with an Alt-Svc: h3=":port" header, SmartProxy caches it
  3. Subsequent requests use HTTP/3 over QUIC to the backend
  4. If H3 fails, it falls back to H2, then H1, and invalidates the cache

A client connecting over HTTP/1.1 can be forwarded over HTTP/3 to the backend. The protocols are independent. You can also pin a specific backend protocol with backendProtocol: 'http1' | 'http2' | 'http3'.
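Pinning looks like this sketch — placing `backendProtocol` on the action is my reading of the option quoted above, so verify the exact location against the typings:

```typescript
// Skip Alt-Svc discovery and always speak HTTP/2 to the backend —
// useful for backends like gRPC servers that require H2.
const pinnedRoute = {
  name: 'h2-only-backend',
  match: { ports: 443, domains: 'grpc.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'grpc-backend', port: 50051 }],
    tls: { mode: 'terminate', certificate: 'auto' },
    backendProtocol: 'http2',
  },
};
```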

Custom Protocol Handlers

When forward isn't enough, hand the raw socket to TypeScript. Set the action type to socket-handler and provide a socketHandler callback:

import { SmartProxy, SocketHandlers } from '@push.rocks/smartproxy';

const proxy = new SmartProxy({
  routes: [
    // Pre-built echo handler
    {
      name: 'echo-server',
      match: { ports: 7777, domains: 'echo.example.com' },
      action: {
        type: 'socket-handler',
        socketHandler: SocketHandlers.echo
      }
    },
    // Custom protocol
    {
      name: 'custom-protocol',
      match: { ports: 9999, domains: 'custom.example.com' },
      action: {
        type: 'socket-handler',
        socketHandler: async (socket) => {
          socket.write('Welcome to my custom protocol!\n');

          socket.on('data', (data) => {
            const command = data.toString().trim();
            switch (command) {
              case 'PING': socket.write('PONG\n'); break;
              case 'TIME': socket.write(`${new Date().toISOString()}\n`); break;
              case 'QUIT': socket.end('Goodbye!\n'); break;
              default: socket.write(`Unknown: ${command}\n`);
            }
          });
        }
      }
    }
  ]
});

SmartProxy ships pre-built SocketHandlers for echo servers, TCP proxies, line-based protocols, HTTP responses, redirects (with template variables like {domain}, {path}, {clientIp}), and blockers. These are composable building blocks, not frameworks.

Security Per Route

Security isn't global — it's per route. Each route can have its own IP allow/block lists, connection limits, rate limiting, and authentication:

{
  name: 'secure-api',
  match: { ports: 443, domains: 'api.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'api-backend', port: 8080 }],
    tls: { mode: 'terminate', certificate: 'auto' }
  },
  security: {
    ipAllowList: ['10.0.0.0/8', '192.168.*'],
    ipBlockList: ['192.168.1.100'],
    maxConnections: 1000,
    rateLimit: { enabled: true, maxRequests: 100, window: 60 },
    jwtAuth: { secret: 'your-jwt-secret', algorithm: 'HS256' }
  }
}

Since security is just a property on the route object, you can compose it however you like — spread operators, utility functions, whatever pattern fits your codebase. The Rust engine enforces all of it — IP filtering happens before a TLS handshake is even attempted.
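One way that composition can look in practice — the shared policy object and route names here are illustrative, not part of the library:

```typescript
// A shared security policy, spread into each route that needs it.
const internalOnly = {
  ipAllowList: ['10.0.0.0/8'],
  maxConnections: 500,
};

const adminRoute = {
  name: 'admin',
  match: { ports: 443, domains: 'admin.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'admin-backend', port: 9000 }],
    tls: { mode: 'terminate', certificate: 'auto' },
  },
  // Base policy plus a route-specific rate limit.
  security: {
    ...internalOnly,
    rateLimit: { enabled: true, maxRequests: 50, window: 60 },
  },
};
```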

Certificates Without Disk

SmartProxy never writes certificates to disk. You control persistence through a certStore interface:

const proxy = new SmartProxy({
  routes: [...],
  certStore: {
    loadAll: async () => {
      const certs = await myDb.getAllCerts();
      return certs.map(c => ({
        domain: c.domain,
        publicKey: c.certPem,
        privateKey: c.keyPem,
      }));
    },
    save: async (domain, publicKey, privateKey, ca) => {
      await myDb.upsertCert({ domain, certPem: publicKey, keyPem: privateKey });
    },
    remove: async (domain) => {
      await myDb.deleteCert(domain);
    },
  },
});

On first startup, SmartProxy provisions certificates via ACME and calls certStore.save(). On subsequent startups, certStore.loadAll() returns them instantly — no re-provisioning. Store them in Postgres, Redis, Vault, S3, whatever fits your infrastructure. The proxy doesn't care.

NFTables: Kernel-Level Forwarding

For maximum throughput on Linux, SmartProxy can bypass user space entirely. Routes configured with NFTables forwarding use kernel-level DNAT/SNAT via @push.rocks/smartnftables. The proxy sets up the nft rules; the kernel forwards packets directly. Your application never sees the bytes.

{
  name: 'nftables-forward',
  match: { ports: 443, domains: 'fast.example.com' },
  action: {
    type: 'forward',
    targets: [{ host: 'backend', port: 8080 }],
    tls: { mode: 'terminate', certificate: 'auto' },
    forwardingEngine: 'nftables',
    nftables: { preserveSourceIP: true }
  }
}

This is the option for when you need raw throughput and are willing to run as root. The Rust engine handles everything else; NFTables handles forwarding at the kernel level.

Runtime Control

Nothing requires a restart. Ports, routes, and certificates can all be managed while the proxy is running:

// Add or remove listening ports
await proxy.addListeningPort(8443);
await proxy.removeListeningPort(8080);

// Swap routes atomically
await proxy.updateRoutes([...newRoutes]);

// Provision or renew certificates on demand
await proxy.provisionCertificate('my-route-name');
await proxy.renewCertificate('my-route-name');

// Real-time metrics
const metrics = proxy.getMetrics();
console.log(`Active connections: ${metrics.connections.active()}`);
console.log(`Throughput in: ${metrics.throughput.instant().in} bytes/sec`);
console.log(`UDP sessions: ${metrics.udp.activeSessions()}`);

The metrics system samples per second into a circular buffer with one-hour retention. Rate calculations are accurate for any window you request. Per-route and per-IP throughput tracking comes free — no external monitoring agent needed.

What You Get from the Architecture

The TypeScript/Rust split via IPC isn't a compromise. It's the entire point. The Rust binary is a standalone process. If your TypeScript application crashes, in-flight TCP connections in the Rust engine don't drop. You can upgrade your control plane without interrupting traffic. The Rust code stays a normal Cargo project with normal Rust tests. The TypeScript code stays a normal npm package with full type safety.

Every route is a plain typed object — no magic, no hidden abstractions. You compose them with standard TypeScript: spread operators, arrays, functions, whatever fits. The type system catches misconfigurations at compile time, not at 3 AM.
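Because routes are plain objects, ordinary functions work as route factories. A hypothetical helper that stamps out one HTTPS route per service:

```typescript
// Illustrative factory — the helper name and services are made up.
const httpsRoute = (domain: string, port: number) => ({
  name: `https-${domain}`,
  match: { ports: 443, domains: domain },
  action: {
    type: 'forward',
    targets: [{ host: 'localhost', port }],
    tls: { mode: 'terminate', certificate: 'auto' },
  },
});

const routes = [
  httpsRoute('app.example.com', 3000),
  httpsRoute('api.example.com', 4000),
];
```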

The source is MIT-licensed and available at code.foss.global/push.rocks/smartproxy. Install it with pnpm install @push.rocks/smartproxy. A single pnpm build compiles the TypeScript and cross-compiles the Rust engine.
