At SeatGeek, security and trust are as critical as speed and scale. When integrating with external systems, especially ones as widely used as AI Agents, we need to ensure that every incoming request is authentic and untampered. That means validating cryptographic signatures at high throughput, without sacrificing latency or reliability.
Why This Matters: Security First
When our API receives requests from the ChatGPT Agent, we are not merely managing application traffic; we are creating a secure trust boundary.
Without signature verification, anyone could impersonate the ChatGPT Agent by crafting requests that look correct but are forged. This opens the door to:
- Spoofing — malicious actors pretending to be ChatGPT Agent.
- Replay attacks — reusing valid requests to trigger actions again.
- Tampering — altering request data in transit.
Signature validation ensures that bad actors can’t impersonate trusted agents, while legitimate requests from ChatGPT pass through reliably. This distinction is what lets us protect our systems from abuse without blocking real usage.
ChatGPT signs its requests with Ed25519, a signature scheme that is both fast to verify and cryptographically strong. HTTP Message Signatures built on Ed25519 provide:
- Proof of origin — the request can only be signed by a holder of the private key.
- Integrity — any modification breaks the signature.
- Replay protection — timestamps in signatures let you reject stale requests.
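The replay check can be sketched as follows. This is an illustrative Python snippet (the production plugin is written in Lua); the parameter names follow RFC 9421, and the 300-second freshness window is an assumed policy, not a value from our deployment:

```python
import re
import time

MAX_AGE_SECONDS = 300  # assumed freshness window; tune for acceptable clock skew


def is_fresh(signature_input, now=None):
    """Reject stale requests using the created/expires parameters
    that RFC 9421 allows inside Signature-Input."""
    now = time.time() if now is None else now
    created = re.search(r"created=(\d+)", signature_input)
    expires = re.search(r"expires=(\d+)", signature_input)
    if not created:
        return False  # fail closed: no timestamp, no service
    if now - int(created.group(1)) > MAX_AGE_SECONDS:
        return False  # too old, likely a replay
    if expires and now > int(expires.group(1)):
        return False  # explicitly expired by the signer
    return True


header = 'sig1=("@authority" "@path");created=1700000000;expires=1700000300;alg="ed25519"'
print(is_fresh(header, now=1700000100))  # within the window and before expiry: True
```

Rejecting on a missing `created` parameter keeps the check fail-closed, matching the posture described later in this post.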
By validating these signatures at the gateway:
- You block fake requests before they hit your backend.
- You reduce the attack surface by enforcing cryptographic trust.
- You shift security left — stopping bad traffic earlier in the stack.
This is a critical security measure, not a formality: it directly protects system and data integrity and closes off a significant class of spoofing and tampering vulnerabilities before they can become breaches.
First Stop: Trying at the Edge (Fastly)
Our first instinct was to verify signatures as early as possible — right at the edge with Fastly — so that invalid requests never even reached our infrastructure. Fastly’s VCL provides cryptographic functions for hashing and HMAC, but it currently doesn’t support Ed25519, the signing algorithm used by ChatGPT Agent.
Supporting Ed25519 at the edge would require moving to Compute@Edge with a custom WebAssembly crypto library. While possible, this path would add operational complexity. Fastly’s three-restart limit per request is already partially utilized by existing features, leaving less capacity for new implementations.
Given these constraints, we shifted the verification to Kubernetes Gateway API (Kong), where Ed25519 is already supported through the bundled OpenSSL. This lets us avoid extra moving parts while keeping verification close to the origin. To make those cryptographic calls directly from Lua, we used FFI (Foreign Function Interface), a LuaJIT feature that lets us call C functions like OpenSSL directly from Lua without writing a native module.
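The FFI idea can be illustrated outside Lua as well. The sketch below uses Python's `ctypes` purely as an analogy (the plugin itself uses LuaJIT's FFI against Kong's bundled OpenSSL): bind a C function already loaded in the process and call it directly, with no native module to compile. Here `strlen` stands in for the OpenSSL verification calls; loading symbols from the current process assumes a Unix-like platform:

```python
import ctypes

# Load symbols already linked into the current process (libc on Linux),
# analogous to how LuaJIT's FFI binds to Kong's bundled OpenSSL at runtime.
libc = ctypes.CDLL(None)
libc.strlen.restype = ctypes.c_size_t
libc.strlen.argtypes = [ctypes.c_char_p]

# A C function called directly from a scripting language, no wrapper module needed.
print(libc.strlen(b"ed25519"))  # 7
```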
Understanding HTTP Message Signatures (RFC 9421)
HTTP Message Signatures (RFC 9421) defines a standard way to sign requests by building a canonical string from specific components, then verifying it with a public key.
The process works by:
- Selecting specific HTTP components (e.g., method, path, and certain headers).
- Building a canonical string from those components in a precise format.
- Signing that string with a private key on the sender side.
- Verifying the signature with the corresponding public key on the receiver side.
Example headers from a ChatGPT request (the values shown here are illustrative placeholders, not a captured request):

```
Signature-Agent: "https://chatgpt.com"
Signature-Input: sig1=("@authority" "@method" "@path" "signature-agent");created=1700000000;keyid="<key-id>";alg="ed25519"
Signature: sig1=:<base64url-encoded Ed25519 signature>:
```
The corresponding canonical string would be:

```
"@authority": api.seatgeek.com
"@method": GET
"@path": /events
"signature-agent": "https://chatgpt.com"
"@signature-params": ("@authority" "@method" "@path" "signature-agent");created=1700000000;keyid="<key-id>";alg="ed25519"
```
Verification succeeds only if this canonical string matches the Signature-Input definition exactly: every newline, every quote, and the order of components must be identical.
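The construction rules can be sketched as follows. This is an illustrative Python version (the production plugin is written in Lua), and the request fields and component set are assumptions made for the example:

```python
def build_canonical_string(components, params, request):
    """Assemble the RFC 9421 signature base: one `"name": value` line per
    covered component, then the @signature-params line, with no trailing newline."""
    lines = []
    for name in components:
        if name == "@method":
            value = request["method"]
        elif name == "@path":
            # Derived component: the target path WITHOUT the query string.
            value = request["path"].split("?", 1)[0]
        elif name == "@authority":
            value = request["authority"]
        else:
            # Regular header: use its value verbatim; it may itself contain
            # quotes (as Signature-Agent does), so never quote it again.
            value = request["headers"][name]
        lines.append(f'"{name}": {value}')
    quoted = " ".join(f'"{c}"' for c in components)
    lines.append(f'"@signature-params": ({quoted}){params}')
    return "\n".join(lines)


req = {
    "method": "GET",
    "path": "/events?utm_source=chatgpt",  # query string must be stripped
    "authority": "api.seatgeek.com",
    "headers": {"signature-agent": '"https://chatgpt.com"'},
}
base = build_canonical_string(
    ["@authority", "@method", "@path", "signature-agent"],
    ';created=1700000000;alg="ed25519"',
    req,
)
print(base)
```

Note how the sketch encodes two of the pitfalls discussed below: `@path` drops the query string, and the `signature-agent` header value is emitted verbatim rather than re-quoted.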
Once the RFC was understood, the next step was to build a Kong plugin that could parse headers, construct this canonical string, and verify the Ed25519 signature.
Implementation Challenges and Debugging Process
During development, subtle but critical issues emerged:
- Incorrect quoting of component names — all names must be wrapped in double quotes.
- Path handling errors — `@path` must exclude query parameters.
- Double-quoting signature-agent — the header value already contains quotes.
- Static component assumptions — the component order must be parsed from Signature-Input, not hard-coded.
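The last pitfall, assuming a fixed component order, is avoided by parsing the order out of Signature-Input itself. A minimal Python sketch of that parsing (illustrative; the plugin does this in Lua):

```python
import re


def parse_signature_input(value):
    """Extract the signature label, the covered-component order, and the raw
    parameter string from a Signature-Input header (RFC 9421)."""
    m = re.match(r'\s*([\w-]+)=\(([^)]*)\)(.*)', value)
    if not m:
        raise ValueError("malformed Signature-Input")
    label, inner, params = m.groups()
    components = re.findall(r'"([^"]+)"', inner)  # order exactly as sent
    return label, components, params


label, comps, params = parse_signature_input(
    'sig1=("@authority" "@method" "@path" "signature-agent");created=1700000000;alg="ed25519"'
)
print(comps)  # component order taken from the wire, never assumed
```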
To validate correctness, we used Cloudflare’s web-bot-auth as a reference implementation. By feeding real ChatGPT Agent request data into web-bot-auth and comparing the generated canonical string to the Kong implementation, mismatches could be quickly identified and resolved.
Once the canonical string matched exactly, the only remaining step was cryptographic verification.
Choosing the Crypto Backend
Three options were considered for Ed25519 verification inside Kong:
| Method | No FFI | Native Libs | Works in Kong | Safe for Production? | Notes |
|---|---|---|---|---|---|
| FFI + libsodium | ❌ | ❌ | ✅ | ✅ | Fast, portable, and uses libsodium’s own Ed25519 implementation — but requires shipping extra native libs with Kong. |
| FFI + OpenSSL (direct to Kong’s bundled OpenSSL) | ❌ | ✅ | ✅ | ✅ | Leverages Kong’s bundled, pinned OpenSSL with guaranteed Ed25519 support — no extra dependencies and consistent behavior in all official builds. |
| resty.openssl.pkey (system libcrypto) | ✅ | ✅ | ❌ | ❌ | Handles the digest parameter inconsistently for Ed25519, causing unpredictable failures. |
Decision: FFI + OpenSSL, which guarantees consistent availability across Kong builds. Benchmarks indicate that Ed25519 can sustain approximately 70,000 verifications per second per core, which translates to about 20,000 verifications per second per Kong Gateway instance through LuaJIT FFI.
Code Implementation: Message Construction + Verification
The final implementation has two key steps:
- Build the canonical message string from Signature-Input following RFC 9421 rules.
- Verify the signature using OpenSSL’s Ed25519 API via FFI.
Rather than embedding full snippets here, we’ve published an open-source version of the Kong plugin: seatgeek/kong-chatgpt-validator.
The repository contains:
- Canonical Message Builder: RFC 9421-compliant parsing and message construction.
- Ed25519 Verification (OpenSSL via FFI): minimal wrapper around OpenSSL with caching, base64url handling, and error reporting.
The GitHub repository contains the complete, production-ready implementation along with examples. The plugin exposes configuration options for key IDs and public keys, and is a good starting point for anyone who wants to experiment with, validate, or adapt this approach in their own gateway setup.
Production Considerations
Signature verification in production should be considered one component within a comprehensive defense-in-depth strategy, rather than a standalone solution. It’s crucial to maintain your existing security measures, including Web Application Firewall (WAF) rules, rate limits, schema validation, IP/ASN reputation checks, and anomaly detection. Even with valid signatures, a compromised or poorly implemented agent could still issue malicious or fraudulent requests. Therefore, signature verification should enhance, not replace, these established controls.
From a security standpoint, the production implementation adopts a “fail-closed” posture: requests with invalid signatures (missing, expired, unverifiable, or with mismatched fields like origin, method, or path) are rejected. Performance is another key consideration. To keep latency predictable, we cache public keys (JWKs), respect cache headers, pin keys by kid, and warm the cache during deployment and key rotation. Crypto operations add precious time in the hot path of a request, so it is best to run them only when necessary: quick checks such as enforcing public key and signature lengths (a 32-byte Ed25519 public key, a 64-byte signature), validating the encoding, and capping header sizes filter out obviously spurious requests first. Finally, slice verification metrics by kid, partner, and origin, and configure alerts for spikes or sustained failure rates, so that abuse and integration issues (key rotations, clock skew, header regressions) surface before they become outages.
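The cheap pre-checks mentioned above can be sketched as follows (Python, illustrative; the 8 KB header cap is an assumed limit, not our production value):

```python
import base64


def b64url_decode(s):
    """Decode base64url, restoring any stripped padding."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def precheck(signature_b64, public_key_b64, header_bytes, max_header_bytes=8192):
    """Cheap structural checks before paying for an Ed25519 verification:
    a valid Ed25519 signature is 64 bytes and a public key is 32 bytes."""
    if header_bytes > max_header_bytes:
        return False  # oversized headers rejected outright (assumed cap)
    try:
        sig = b64url_decode(signature_b64)
        key = b64url_decode(public_key_b64)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
    return len(sig) == 64 and len(key) == 32


sig_ok = base64.urlsafe_b64encode(b"\x01" * 64).decode()
key_ok = base64.urlsafe_b64encode(b"\x02" * 32).decode()
print(precheck(sig_ok, key_ok, 512))  # well-formed inputs pass: True
```

Only requests that clear these checks reach the actual OpenSSL verification call.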
Impact
This work made it possible to reliably identify and verify traffic from the ChatGPT agent without blocking or discarding requests prematurely. In production, this gave our stakeholders the ability to analyze how fans are actually using the agent to interact with our platform, creating new visibility into adoption and engagement patterns. Just as importantly, signature verification did not replace our existing defenses; it complemented them. By keeping our shield stack in place, we ensured that even correctly signed requests could still be stopped if they looked malicious or abusive.
From a performance standpoint, signature verification added about 600 - 900 μs per ChatGPT request, which is roughly 3% of average gateway latency. Regular fan traffic was untouched — verification only runs when a request carries both Signature and Signature-Agent headers. In production, the extra step stayed well below our service-level budgets. This allowed us to strengthen trust in ChatGPT agent traffic without any measurable impact on the fan experience.
Final Thoughts
Validating ChatGPT’s HTTP Message Signatures wasn’t just an exercise in cryptography; it was about reinforcing SeatGeek’s commitment to trust, security, and reliability. AI agents are becoming first-class citizens in our platform, with behaviors, traffic patterns, and security requirements that differ significantly from fans.
To support this securely, we built an implementation of RFC 9421 verification in Gateway API, ensuring:
- Authenticity — every request claiming to be from ChatGPT truly is.
- Integrity — data hasn’t been altered in transit.
- Abuse prevention — spoofed or replayed traffic is blocked before hitting core services.
Because this challenge is not unique to SeatGeek, we’ve open-sourced the core implementation as seatgeek/kong-chatgpt-validator. This allows other teams experimenting with ChatGPT agent to reuse, adapt, and improve the validator rather than reinventing it.
This capability isn’t a one-off patch; it’s part of a long-term security posture to adapt our infrastructure for new ways fans — from traditional browser navigation to AI-driven tools — use SeatGeek, without compromising on performance or user experience.