Code, Design, and Growth at SeatGeek

The Transactional Outbox Pattern: Transforming Real-Time Data Distribution at SeatGeek

On the Data Platform team at SeatGeek, our goal is to make producing and consuming data products a delightful experience. To achieve this, we transformed part of our data stack to be more real-time, performant, and developer-centric.

This post explores a modern approach to the transactional outbox pattern, highlighting key design decisions, challenges we addressed, and how Apache Kafka®, Postgres, Debezium, and schemas enabled seamless real-time data distribution.

Why real-time data matters to SeatGeek

Leveraging real-time data is crucial in the live events industry. From schedule announcements to last-minute ticket on-sales, things move quickly, and agility is key. Companies that react swiftly can deliver a superior fan experience. Real-time data enables SeatGeek to personalize the fan journey, provide live, actionable insights to stakeholders, and optimize operations by dynamically responding to demand signals.

A new approach to data distribution

Our vision for data distribution is to establish a continuous loop – from production to consumption, enrichment, and feedback to the source. This article focuses on one part of that loop: data production.

data distribution as a continuous loop

Goals

To guide our efforts and stay aligned with the overall vision, we set the following goals.

  1. Guarantee data integrity and consistency.
  2. Achieve scalable, predictable performance.
  3. Encourage domain-driven, reusable data that supports diverse uses.
  4. Foster shared ownership through domain-driven design and data contracts.
  5. Prioritize developer experience.

Existing approaches for producing data

Before introducing a new approach, we evaluated the existing approaches to producing data to ensure we weren’t duplicating work.

1. Polling Consumers

In this approach, consumers periodically check for updates in the data, through an API or direct database access. This pattern is common in ETL (Extract, Transform, Load) workflows. A notable example is Druzhba, our open-source tool for data warehouse ETL.

Key Limitations

  • Introduces inherent latency, especially when updates are frequent.
  • Places unpredictable load on source systems.
  • Direct database access violates service boundaries.
  • Integration work must be repeated for each new consumer.

2. Direct Publish

The next strategy involves publishing data directly to consumers. A classic example of this is publishing user activity data, also known as clickstreams, to Kafka. In that case, the publish step does not need to be transactional; in other cases, however, it must be. For example, when an order is created and the inventory needs to be updated, ensuring that both actions occur together is crucial, so you want the updates to be all-or-nothing. In such cases, partial failures can lead to inconsistencies between systems, undermining trust in the data.

the dual-write problem
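
To make the dual-write hazard concrete, here is a minimal sketch (the table, topic, and connection settings are illustrative): the database commit and the Kafka publish are two separate operations, and a crash between them leaves the two systems out of sync.

# Minimal sketch of the dual-write problem (all names are illustrative).
import json
from confluent_kafka import Producer
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://app@localhost/app")      # assumed DSN
producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker

def create_order(order: dict) -> None:
    # Write #1: the database transaction commits here...
    with engine.begin() as conn:
        conn.execute(
            text("INSERT INTO orders (id, payload) VALUES (:id, :payload)"),
            {"id": order["id"], "payload": json.dumps(order)},
        )

    # ...but if the process crashes here, or the broker is unreachable,
    # Write #2 never happens and downstream systems never see the order.
    producer.produce("orders.created", key=str(order["id"]), value=json.dumps(order))
    producer.flush()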

Key Limitations

  • Dual-write problem: Lack of transactional guarantees introduces data inconsistencies.
  • Ensuring consistency across systems is hard, and techniques like distributed transactions/event sourcing add unnecessary complexity.
  • Increased operational complexity to address consistency between systems.

3. Change Data Capture

Lastly, Change Data Capture allows us to subscribe to all database updates and propagate those changes downstream. Because changes are automatically captured at the database level, we avoid inconsistencies that can arise from dual writes.

Key limitations

  • Shifts complexity to consumers.
  • Varying implementations across consumers lead to inconsistencies.
  • Data format changes are not detected, increasing the likelihood of downstream breakages.
  • This is a variation of direct database access and hence violates service boundaries.

In the end, we concluded that none of the existing approaches would help us achieve our goals, so we sought a new solution.


Picking a solution

We began by drafting an RFC (Request for Comments) document to outline the proposed approach and evaluate alternatives – a standard practice at SeatGeek that enables us to gather stakeholder feedback and make informed decisions. For our real-time data distribution needs, we focused on four key areas as shown in the image below.

four key areas

After reviewing the alternatives for each area, we settled on the transactional outbox pattern for its simplicity and effectiveness in addressing the dual-write problem while ensuring data integrity. We opted to have applications drive the event publishing to take advantage of domain events, which are best defined at the source. For relaying messages, we chose transaction log-tailing with Debezium, an established tool that efficiently captures changes from the database. Finally, we selected Kafka as our message broker primarily for its reliability as a log store, which enables us to reuse data effectively. Additionally, since we were already using Kafka in our infrastructure, it made sense to leverage it. We also decided to enforce the usage of schemas to promote shared ownership of the data.

A modern twist on the transactional outbox pattern

Before diving into the twist, let’s briefly recap the traditional transactional outbox pattern with an example. Imagine an application needs to publish an event related to a customer order. This process might involve updates to the sales, inventory, and customers tables. We’ll also assume the use of Postgres, Debezium, and Kafka. The following steps occur, as illustrated in the diagram below.

  1. Construct a domain event: The application creates a domain event that includes relevant information about the order, such as the number of items purchased, and customer details.
  2. Insert the event into the outbox table: The domain event is written to a dedicated “outbox” table, a temporary storage location for events. This step occurs within the same database transaction as the updates to the sales, inventory, and customers tables.
  3. Commit the transaction: Once the transaction is committed successfully, the changes are recorded in the Postgres write-ahead log (WAL).
  4. Relay and publish: Debezium captures all changes to the outbox table; when it detects a new entry, it relays that event to Kafka for downstream consumers to process. (A rough code sketch of the write side follows the diagram below.)

outbox flow
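
As a rough sketch of steps 1 through 3 (tables, columns, and the topic name are illustrative; shown with SQLAlchemy), the key property is that the outbox insert shares a transaction with the business-table updates:

# Sketch of the traditional outbox write (hypothetical tables and columns).
import json
import uuid
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://app@localhost/app")  # assumed DSN

def place_order(order: dict) -> None:
    event = {"type": "OrderPlaced", "order_id": order["id"], "items": order["items"]}
    with engine.begin() as conn:  # one transaction covers business data + outbox
        conn.execute(
            text("UPDATE inventory SET qty = qty - :n WHERE sku = :sku"),
            {"n": len(order["items"]), "sku": order["sku"]},
        )
        conn.execute(
            text("INSERT INTO outbox (id, aggregate_id, topic, payload) "
                 "VALUES (:id, :agg, :topic, :payload)"),
            {"id": str(uuid.uuid4()), "agg": order["id"],
             "topic": "orders.placed", "payload": json.dumps(event)},
        )
    # On commit, both changes land in the WAL together; Debezium then picks up
    # the outbox row and publishes it to Kafka (step 4).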

Challenges with the single outbox table

Early during the RFC process, we identified potential challenges with using a single outbox table, prompting us to explore alternatives to improve performance and scalability.

Performance impact

  • A single outbox table for an entire database can become the bottleneck, especially with high write throughput.
  • Lock contention is also a significant risk, as multiple concurrent writes compete for access to the table.

Complexity with table clean-up

  • Managing the size of the outbox table over time is crucial and requires a separate, external cleanup process.
  • Having an aggressive cleanup process risks lock contention between inserts and deletes.
  • Conservative cleanup could lead to an ever-growing outbox table, which increases the likelihood of performance degradation.

Bypassing the outbox table

Fortunately, we discovered that Postgres has a really neat feature: you can skip the outbox table and write to the write-ahead log (WAL) directly! The WAL is central to Postgres’s durability and performance. It logs all changes before they are applied to data files, ensuring that transactions are committed reliably. WAL entries are sequential and optimized for high write throughput.

Writing directly to the WAL is made possible through logical decoding, a mechanism that allows Postgres to stream changes to external consumers; it is also what powers logical replication from a primary to its replicas.

Postgres provides a built-in function, pg_logical_emit_message() (documented here), for writing custom data to the WAL as part of a transaction. We will later see how we leverage this functionality to emit domain events, along with metadata, from applications to Kafka.
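
The function takes a transactional flag, a text prefix, and the message content, and returns the LSN of the emitted WAL record. A minimal call from Python (via psycopg2, with an illustrative prefix and payload) looks roughly like this:

# Emit a custom logical decoding message inside a transaction (illustrative values).
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT pg_logical_emit_message(true, %s, %s)",
        ("outbox", b'{"event": "OrderPlaced", "order_id": 123}'),
    )
    # arg 1: true means the message is only emitted if the transaction commits
    # arg 2: a prefix the relay (Debezium) can use to identify and route the message
    # arg 3: the payload bytes written to the WAL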


Implementation: From Skateboard to Car

We adopted an incremental approach, starting with a simple, low-risk solution and gradually building toward a full-fledged, production-ready system. Following this “Skateboard to Car” philosophy allowed us to experiment, validate our assumptions early, and move with higher velocity, while incorporating feedback from users and minimizing risk.

Proof of concept (aka the “Skateboard”)

Before rolling out a full-scale implementation, we wanted to uncover the unknowns and build confidence in our approach. To do this, we:

  • Enabled logical replication on one of our most active databases.
  • Set up Debezium Server.
  • Integrated pg_logical_emit_message() in a high-traffic, critical request path.
  • Simulated an on-sale scenario to load-test this setup at a much higher scale than normal.

Results

The initial results were promising:

  • Excellent performance: Debezium delivered exceptional throughput under heavy load.
  • Minimal database impact: Writing to the WAL directly introduced negligible overhead, even during high write throughput.

Challenges

However, we also uncovered some key challenges:

  • Debezium’s Outbox Event Router

    • Debezium has an Outbox Event Router that routes events from the database to specific Kafka topics.
    • It assumes the existence of an outbox table, which we were bypassing with our direct write-to-WAL approach.
    • It does not integrate with the schema registry, necessitating a custom solution to handle message routing.
  • Debezium Server JSONSchema support

    • Debezium Server did not support JSON format with schemas enabled.
    • In its absence, the default behavior was base64 encoding entire records, which exposed internal structures and added unnecessary complexity for consumers.
    • This limitation made Debezium Server unsuitable for our use case, leading us to use Kafka Connect® instead.

Rolling out a full-scale implementation (aka the “Bicycle”)

The implementation involves 4 core components:

  1. The Kafka Connect cluster(s) for Debezium
  2. Data contracts and schemas
  3. A library for applications to produce data
  4. A custom Single Message Transformation (SMT) to route messages to their appropriate destination

1. Kafka Connect and Debezium

We run Debezium as a source connector on Kafka Connect rather than using Debezium Server, because of the previously noted limitations with schemas. The Kafka Connect clusters run distributed workers on Kubernetes, to ensure high availability and scalability. Additionally, the clusters for transactional outbox are completely isolated from the rest of our Kafka Connect clusters.

2. Data contracts and schemas

Schemas play a foundational role in our data infrastructure, serving as contracts between producers and consumers. They define the structure of the data being exchanged, ensuring data integrity, compatibility, and decoupling.

We use Confluent Schema Registry for managing Kafka schemas across the company. It validates data at the source, ensuring only well-structured data is published. Schema management tools in our CI/CD workflows automate the detection, validation, and migration of schema changes. This eliminates the risk of breaking downstream consumers while maintaining seamless integration across teams.
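
As an illustration of the kind of check that can run in CI (the registry URL, subject name, and schema file are hypothetical), the Schema Registry REST API can verify that a proposed schema is compatible with the latest registered version before a change is merged:

# Sketch: compatibility check against the Confluent Schema Registry REST API.
import json
import sys
import requests

REGISTRY_URL = "https://schema-registry.internal.example.com"  # assumed
subject = "orders.placed-value"                                # hypothetical subject

with open("schemas/orders_placed.json") as f:                  # hypothetical schema file
    candidate = f.read()

resp = requests.post(
    f"{REGISTRY_URL}/compatibility/subjects/{subject}/versions/latest",
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"},
    data=json.dumps({"schema": candidate, "schemaType": "JSON"}),
)
resp.raise_for_status()
if not resp.json().get("is_compatible", False):
    sys.exit(f"Schema change for {subject} is not compatible with the latest version")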

Schema ownership has cultivated a forward-thinking culture at SeatGeek, where developers consider the evolution of data contracts when designing applications. This shared responsibility enables us to scale our systems confidently while maintaining data quality. While developers maintain responsibility for schema compatibility, the Data Platform team supports this with tools and guidance to simplify the process.

Schemas are a requirement for using the transactional outbox workflow, and are tightly integrated into the library. Messages are validated using the schema registry before writing to the WAL. Downstream consumers use the same schema for deserialization, ensuring consistency throughout the pipeline.

3. Producing data from applications

Our approach here was to create a library that applications could integrate. We started with a library for Python – the most popular language for services at SeatGeek. Since then, we’ve added support for C#, and plan to support Golang as well. The key features of the library are:

  • Schema validation and serialization using the Confluent Schema Registry.
  • Writing directly to the WAL using pg_logical_emit_message().
  • Custom metadata injection, such as tracing context and message headers, as part of the message prefix.

The centralized nature of the library ensures that schema validation and WAL writes are standardized across the organization. Developers cannot bypass the schema registry, ensuring consistency and reliability.

Examples and code snippets

Breakdown of pg_logical_emit_message() components

Example: Using the outbox library in an application

      
from transactional_outbox import Outbox

outbox = Outbox()

@app.post("/some/api/route")
async def handler(req):
  # -- Handle the request --
  #   - open a database connection
  #   - execute business logic

  # -- Publish data --
  #   - write to the WAL
  await outbox.write_to_outbox(
    session=session,              # the database connection
    kafka_topic="...",            # the Kafka topic to write to
    kafka_key="...",              # an optional key for the message
    message={ ... },              # the message to write to the WAL
    kafka_headers={ ... },        # custom Kafka headers
  )

  # -- Flush the data --
  session.commit()

  # -- End --
      
    

Snippet: Write to outbox function

      
async def write_to_outbox(
  self,
  session: Union[Session, AsyncSession],
  message: dict[str, Any],
  kafka_topic: str,
  kafka_key: Optional[str] = None,
  kafka_headers: Optional[dict[str, str]] = None,
  span_context: Optional[dict[str, str]] = None,
  commit: bool = False,
) -> None:
  # Validate that the DB session is active...
  # Serialize the message using the schema registry
  serialized_message: bytes = serialize_msg_schema_registry(
    topic_name=kafka_topic,
    message=message,
    sr_client=self.registry_client,
  )

  # Build the prefix metadata (Kafka topic, key, headers, tracing context, etc.)
  prefix_obj = MessagePrefix(
    kafka_topic=kafka_topic,
    kafka_key=kafka_key,
    kafka_headers=kafka_headers or {},
    span_context={}  # Add tracing or other metadata here if needed
  )
  serialized_prefix: str = prefix_obj.serialize()

  # Use pg_logical_emit_message() to write to the WAL
  statement = select(
    func.pg_logical_emit_message(
      True,
      serialized_prefix,
      serialized_message,
    )
  )
  await self.execute_session(
    session=session,
    prepared_statement=statement,
  )
      
    

4. Routing messages using Single Message Transforms (SMTs)

SMTs are a feature of Kafka Connect that enables users to modify or transform messages as they pass through the pipeline. We built an SMT to replace Debezium’s built-in outbox event router, with support for the schema registry.

How it works

  • The SMT distinguishes between heartbeat records and outbox records based on the Connect schema name (io.debezium.connector.common.Heartbeat and io.debezium.connector.postgresql.MessageValue, respectively).
  • Heartbeat records are passed along with no modification.
  • For outbox records, the prefix field embedded in each message contains metadata like which topic the message should be routed to.
  • The span context and headers from the metadata are moved into the output record’s headers.
  • Additional telemetry data such as end-to-end latency is emitted by the SMT based on source metadata that Debezium includes by default.
  • The content of the outbox record is preserved and emitted as raw bytes (Note: This requires the use of Kafka Connect’s ByteArrayConverter).

flowchart of the single message transform


Building a robust system (aka the “Car”)

The next step in our journey was to make the whole system fault-tolerant and easy to manage.

Adding heartbeat events

As Debezium processes logical decoding messages, it reports the last successfully consumed position back to Postgres, which tracks it in the pg_replication_slots view. WAL segments containing unprocessed messages are retained on disk, so Debezium must remain active to prevent disk bloat.

Additionally, the number of retained WAL segments depends on overall database activity, not just the volume of outbox messages. If database activity is significantly higher, PostgreSQL retains more WAL segments, causing unnecessary disk usage.

To address this, we set up Debezium to periodically emit heartbeat events by updating a dedicated heartbeat table and consuming the resulting changes. These events are also published to a separate Kafka topic, allowing us to monitor Debezium’s progress and connectivity.

heartbeat events
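
As a rough sketch of that setup (the connector name, heartbeat table, and interval are illustrative; see the Debezium Postgres connector docs for the exact property semantics), the heartbeat settings can be applied through the Kafka Connect REST API:

# Sketch: enabling Debezium heartbeats via the Kafka Connect REST API.
import requests

CONNECT_URL = "http://kafka-connect.internal.example.com:8083"  # assumed
heartbeat_settings = {
    # Emit a heartbeat at most every 60s so the replication slot keeps advancing
    # even when applications produce no outbox messages.
    "heartbeat.interval.ms": "60000",
    # Touch a dedicated table so each heartbeat also generates a WAL entry
    # on the source database, not just a Kafka-side event.
    "heartbeat.action.query": "UPDATE debezium_heartbeat SET ts = now()",
}

# Fetch the existing connector config, merge in the heartbeat settings, and re-apply.
config = requests.get(f"{CONNECT_URL}/connectors/outbox-connector/config").json()
config.update(heartbeat_settings)
requests.put(
    f"{CONNECT_URL}/connectors/outbox-connector/config", json=config
).raise_for_status()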

Heartbeat events serve two purposes:

1. Advancing the replication slot position for WAL cleanup:

  • Heartbeats are periodically committed to the WAL alongside regular database updates.
  • When Debezium processes these heartbeats, it advances its replication slot position to reflect the latest WAL segment it has consumed.
  • This update signals Postgres that all WAL segments before this point are no longer needed, marking it for safe removal from the disk.

2. Monitoring Debezium connectivity:

  • Debezium is configured to periodically execute a query on the source database, ensuring it remains active.
  • Debezium’s health can be monitored by checking the recency of heartbeat messages in the dedicated Kafka heartbeats topic.
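
Alongside the heartbeat topic, the amount of WAL each slot is holding back can be watched directly on the database. A minimal monitoring sketch (via psycopg2; the DSN and 5 GiB threshold are arbitrary):

# Sketch: measure how far each replication slot lags behind the current WAL position.
import psycopg2

conn = psycopg2.connect("dbname=app user=monitoring")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT slot_name,
               active,
               pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
        FROM pg_replication_slots
    """)
    for slot_name, active, lag_bytes in cur.fetchall():
        # Flag slots that are inactive or retaining more than ~5 GiB of WAL.
        if not active or (lag_bytes or 0) > 5 * 1024**3:
            print(f"WARNING: slot {slot_name} is retaining {lag_bytes} bytes of WAL")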

Automatic connector restarts

One thing we noticed was that connectors occasionally failed, either due to a network partition or expired database credentials, and had to be restarted.

To reduce the need for human intervention, we wrote a script that uses the Connect API to check the status of connectors on the pod. The script is executed as a livenessProbe on Kubernetes, and anytime the probe fails, Kubernetes restarts the container, which also restarts the connector.

Note: This had to be a livenessProbe and not a readinessProbe because the Connect API doesn’t become available until the readinessProbe succeeds.

automatic restarts using Kubernetes probes
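
A minimal version of such a probe script might look like the following (the port is Kafka Connect’s default; the exact checks in our script may differ):

#!/usr/bin/env python3
# Sketch: exit non-zero if any connector or task on this worker is FAILED,
# so the Kubernetes livenessProbe restarts the container.
import sys
import requests

CONNECT_URL = "http://localhost:8083"  # Kafka Connect REST API on the local worker

try:
    connectors = requests.get(f"{CONNECT_URL}/connectors", timeout=5).json()
    for name in connectors:
        status = requests.get(f"{CONNECT_URL}/connectors/{name}/status", timeout=5).json()
        states = [status["connector"]["state"]] + [t["state"] for t in status.get("tasks", [])]
        if any(state == "FAILED" for state in states):
            sys.exit(f"connector {name} has a FAILED component: {states}")
except requests.RequestException as exc:
    sys.exit(f"could not reach the Connect API: {exc}")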

Observability

Adding observability through distributed tracing was a key part of empowering users to visualize and inspect the flow of data throughout the system. Each stage emits telemetry data that is all tied together within Datadog, the observability platform we use at SeatGeek.

At the application level, the library adds a new span to the current trace context. This context is injected as metadata into the prefix object of the WAL message. The SMT then relocates that data to the Kafka message headers. Any consumers of the topic also inherit the trace context. When this is all put together in Datadog, we’re able to visualize the flow of data from its origin (for example, an HTTP request) all the way down to its final destination (for example, a Flink job).

distributed tracing
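
As a sketch of that context injection (our library uses Datadog’s tracer; this stand-in uses OpenTelemetry’s propagation API purely to illustrate the shape, and the helper name is hypothetical):

# Sketch: inject the current trace context into the outbox prefix metadata.
from opentelemetry.propagate import inject

def build_span_context() -> dict[str, str]:
    carrier: dict[str, str] = {}
    inject(carrier)  # e.g. adds a "traceparent" entry for the active span
    return carrier

# The resulting dict is passed as span_context when building the MessagePrefix;
# the SMT later copies it into the Kafka record headers so consumers of the
# topic continue the same trace.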


Learnings and Looking Ahead

As we reflect on this journey, here are some key insights we’ve gathered and areas for future improvement.

Key Learnings

  • Development Effort: This new approach requires time and effort to develop, set up, and maintain.
  • Dependence on Observability: We’ve also learned that we heavily rely on observability tools to help us troubleshoot and ensure everything works correctly.
  • Ease of Adoption: This approach is easier to use in new projects. Retrofitting it into existing systems is much more difficult, especially when working with snapshots.
  • Cross-Team Collaboration: Close collaboration between application teams is crucial to success.

Future Areas of Focus

  • Snapshot Handling: Enhancing support for snapshots (or backfills) will be a priority, as they are essential for data recovery and completeness.
  • Schema registry developer experience: We aim to simplify the creation and maintenance of schemas.
  • Consumption Experience: Improving the data consumption side is critical. We’re focusing on stream processing with Flink to unlock advanced use cases and make working with real-time data more seamless.
  • Automation and Tooling: Investing in automation and developer tools to simplify setup and maintenance will help reduce friction and increase adoption.

Optimizing GitLab CI Runners on Kubernetes: Caching, Autoscaling, and Beyond

Introduction: Getting the Most Out of CI

Moving CI runners to Kubernetes was a game changer for our engineering teams. By adopting the GitLab Kubernetes Executor and Buildkit on Kubernetes, we resolved many painful issues we had with our old system, such as resource waste, state pollution, and job queue bottlenecks. But we weren’t content to stop there. Running on Kubernetes opened the door to deeper optimizations.

In this final installment of our CI series, we explore five critical areas of improvement: caching, autoscaling, bin packing, NVMe disk usage, and capacity reservation. These changes not only enhanced the developer experience but also kept operational costs in check.

Let’s dive into the details!

Caching: Reducing Time to Pipeline Success

Caching (when it works well) is a significant performance and productivity booster. In a Kubernetes setup, where CI jobs run in ephemeral pods, effective caching is essential to avoid repetitive, time-consuming tasks like downloading dependencies or rebuilding assets. This keeps pipelines fast and feedback loops tight.

We use S3 and ECR to provide distributed caching for our runners. We use S3 to store artifacts across all jobs, with a lifecycle policy that enforces a 30-day expiration. We use ECR to store container image build caches, with a lifecycle policy that auto-prunes old caches to keep us within the images-per-repo limits.

These are both used by default to significantly reduce job times while maintaining high overall reliability.

Caching
Artifact and Image Caching
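
For illustration, lifecycle policies along these lines could be expressed with boto3 (bucket, repository, and retention values are illustrative, not our exact settings):

# Sketch: cache lifecycle policies for the S3 artifact bucket and the ECR cache repo.
import json
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="ci-artifact-cache",                       # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-ci-cache-after-30-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},                     # apply to all cached artifacts
        "Expiration": {"Days": 30},
    }]},
)

ecr = boto3.client("ecr")
ecr.put_lifecycle_policy(
    repositoryName="ci-build-cache",                  # hypothetical repository
    lifecyclePolicyText=json.dumps({"rules": [{
        "rulePriority": 1,
        "description": "keep only the most recent build caches",
        "selection": {"tagStatus": "any", "countType": "imageCountMoreThan",
                      "countNumber": 500},
        "action": {"type": "expire"},
    }]}),
)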

Why are Builds so Slow?

One interesting issue we ran into with build caching is that, when performing multi-architecture builds, our caches would alternate between architectures. For example:

  • Pipeline 1 = amd64 (cached) arm64 (no cache)
  • Pipeline 2 = amd64 (no cache) arm64 (cached)
  • Pipeline 3 = amd64 (cached) arm64 (no cache)
  • … and so on

Sometimes builds would benefit from local layer caching if they landed on the right pod, in which case both architectures would build quickly, which made this a tricky problem to track down.

This behavior is likely due to how we build each architecture natively on separate nodes for performance reasons (avoiding emulation entirely). There’s an open issue for buildx that explains how, for multi-platform builds, buildx only uploads the cache for one platform. The --cache-to target isn’t architecture-specific, so each run overwrites the previous architecture’s cache.

Our current workaround is to perform two separate docker buildx build calls, so the cache gets pushed from each, then use docker manifest create && docker manifest push to stitch them together.
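
Sketched as a small CI helper script (image name, tag, and cache refs are illustrative):

# Sketch: build and push each architecture (and its cache) separately,
# then stitch the per-arch images together into one multi-arch manifest.
import subprocess

IMAGE = "registry.example.com/my-service"  # hypothetical image
TAG = "abc123"                             # hypothetical tag

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

for arch in ("amd64", "arm64"):
    run([
        "docker", "buildx", "build", ".",
        "--platform", f"linux/{arch}",
        "--tag", f"{IMAGE}:{TAG}-{arch}",
        "--cache-from", f"type=registry,ref={IMAGE}:cache-{arch}",
        "--cache-to", f"type=registry,ref={IMAGE}:cache-{arch},mode=max",
        "--push",
    ])

run(["docker", "manifest", "create", f"{IMAGE}:{TAG}",
     f"{IMAGE}:{TAG}-amd64", f"{IMAGE}:{TAG}-arm64"])
run(["docker", "manifest", "push", f"{IMAGE}:{TAG}"])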

With this in place we’re seeing up to 10x faster builds now!

Autoscaling: Working Smarter, Not Harder

One of Kubernetes’ standout features is its ability to scale dynamically. However, getting autoscaling right is a nuanced challenge. Scale too slowly, and jobs queue up. Scale too aggressively, and you burn through resources unnecessarily.

Scaling CI Runners

We used the Kubernetes Horizontal Pod Autoscaler (HPA) to scale our runners based on saturation: the ratio of pending and running jobs to the total number of available job slots. As the saturation ratio changes, we scale the number of runners up or down to meet demand:

Runner Autoscaling
The total capacity autoscales based on the number of concurrent CI jobs

But this wasn’t as simple as turning it on and walking away - we had to fine-tune the scaling behavior to avoid common pitfalls:

  • Scale-Up Latency: If many jobs come in around the same time, it can take a bit for the runners to scale up enough to meet that demand. We’re currently targeting a saturation ratio of 70%. When exceeded, the system is allowed to double its capacity every 30 seconds if needed, with a stabilization window of 15 seconds.
  • Over-Aggressive Scale-Downs: To avoid thrashing from scaling down too much (and/or too fast), we scale down cautiously - removing up to 30% of available slots every 60 seconds, and waiting for a 5-minute stabilization window before taking action.

The result? Our CI runners now scale seamlessly to handle peak workloads while staying cost-efficient during quieter times.

Job queue times
Can you guess when we migrated CI to Kubernetes?

Scaling Buildkit

In our previous post, we shared how we run Buildkit deployments on Kubernetes to build container images. We also leverage an HPA to scale Buildkit deployments to try and match real-time demand.

Unfortunately, Buildkit doesn’t expose any metrics for the number of active builds, and our cluster doesn’t yet support auto-scaling on custom metrics, so we had to get creative. We ended up autoscaling based on CPU usage as a rough proxy for demand. This hasn’t been perfect, and we’ve had to tune the scaling to over-provision more than we’d like to ensure we can handle spikes in demand.

Buildkit Autoscaling
Ideally we’d have a perfect 1:1 ratio of jobs to pods, but scaling based on CPU gets us close enough

We’d eventually like to shift our strategy to use ephemeral Buildkit pods that are created on-demand for each build and then discarded when the build is complete. This would allow us to scale more accurately and avoid over-provisioning, but at the cost of some additional latency. This would also help solve some issues we’ve been having with flaky builds and dropped connections that may be due to resource contention or state pollution.

Bin Packing: Maximizing Node Utilization

Kubernetes’ scheduling capabilities gave us the tools we needed to improve how jobs were placed on nodes, making our cluster more efficient. This is where bin packing came into play.

We defined dedicated node pools for CI workloads with sufficient resources to handle multiple concurrent jobs with ease. With this, we gave CI jobs dedicated access to fast hardware with NVMe disks, and opted out of spot instances to guarantee high reliability for the pipelines:

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: linux-nvme-amd64-ci
spec:
  disruption:
    consolidationPolicy: "WhenEmpty"
    consolidateAfter: "10m"
    expireAfter: "Never"
    budgets:
      - nodes: "1"
  template:
    metadata: {}
    spec:
      nodeClassRef:
        name: "linux-nvme"
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values:
            - on-demand
        - key: karpenter.k8s.aws/instance-local-nvme
          operator: Gt
          values:
            - "1000"
      taints:
        - key: dedicated
          value: "ci"
          effect: NoSchedule

To ensure that Karpenter wouldn’t disrupt CI jobs before they finished, we configured pod-level disruption controls so that the jobs (and their underlying nodes) wouldn’t get rug-pulled.

spec:
  template:
    metadata:
      annotations:
        karpenter.sh/do-not-disrupt: "true"

We then set informed defaults for resource requests, along with reasonable limits, to efficiently pack CI jobs onto nodes without them becoming “noisy neighbors”. We also allowed developers to set their own elevated requests and limits for resource-intensive jobs, ensuring fast execution. This fine-tuning reduced fragmentation and avoided over-provisioning resources.

The payoff was significant. We saw higher utilization across our CI node pools, using fewer hosts, and without the instability that can come from overloading nodes.

EC2 Hosts
This compares the host count during a typical week. We’re able to run more workloads on Kubernetes with fewer hosts for significant cost savings!

NVMe Disk Usage: Turbocharging I/O

Disk I/O often becomes a bottleneck for CI workloads. Leveraging NVMe storage improved our build times by reducing disk read/write latency.

Unfortunately, Bottlerocket doesn’t support using the local NVMe drive for ephemeral storage out-of-the-box, so we adapted this solution to use a bootstrap container to configure that ephemeral storage on node startup.

Capacity Reservation: Ensuring Spare Nodes for CI Workloads

Autoscaling is powerful, but waiting for nodes to spin up during usage spikes can cause frustrating delays. That’s why we implemented capacity reservation to keep spare nodes ready for CI jobs, even during off-peak hours.

We did this by over-provisioning the cluster with a few idle pods with high resource requests in the CI namespace. If Kubernetes needs to schedule a CI job but lacks available nodes, the higher-priority CI job preempts (evicts) a lower-priority “idle” pod to make room, allowing the job to start immediately. Kubernetes then spins up a new node for that idle pod, ensuring the cluster has spare capacity for any additional jobs.

These pods also have init containers that simply pre-pull frequently used container images. This ensures that new nodes can start running CI jobs immediately without waiting for those images to download.

The result? CI jobs start immediately, with no waiting around for new nodes to spin up. Developers are happy, and our cluster stays responsive, even during peak hours.

Conclusion: Fine-Tuning CI for Developer Happiness

By leveraging caching, autoscaling, bin packing, NVMe disks, and capacity reservation, we’ve significantly improved both developer experience and operational efficiency. The outcome of this, along with the overall migration of runners to Kubernetes, can be summarized with the following metrics:

  • Job Queue Time: The average time for a pending job to be picked up by a runner has dropped from 16 seconds to just 2 seconds. Even more impressive, the p98 queue time has gone from over 3 minutes to under 4 seconds. Developers get faster feedback loops so they can focus on getting shit done.

  • Cost Per Job: By optimizing resource utilization and scaling intelligently, we’re now spending 40% less per job compared to our previous setup. That’s a huge win for keeping our CI pipelines cost-effective as we continue to scale.

The journey to perfecting CI is iterative, and every improvement brings us closer to a system that’s faster, more reliable, and more cost-efficient. These improvements showcase how a well-architected and finely tuned CI system can deliver substantial value, not just in raw performance metrics but also in real, measurable developer productivity.

Building Containers on Kubernetes with Buildkit

In our last post, we discussed why we decided to migrate our continuous integration (CI) infrastructure to Kubernetes. Now, we’re going to tackle the next major challenge: enabling engineers to build and push Docker container images with a solution that is 100% Kubernetes-native. Our solution needed to work seamlessly on Kubernetes, be reliable, and integrate easily with our existing CI/CD workflows. This shift brought unique challenges, unexpected obstacles, and opportunities to explore new ways to improve the developer experience.

Before diving into the technical details, let’s cover how we used to facilitate Docker builds on the previous infrastructure and why we chose to change our approach.

The Past: Building Images on EC2

Before Kubernetes, each CI job was scheduled onto a dedicated EC2 instance and could fully utilize any/all of that node’s resources for its needs. Besides the usual CPU, memory, and disk resources, these nodes also had a local Docker daemon that could be used for building images with simple docker build or docker buildx commands.

Visualization of the Nomad host running CI jobs alongside an exposed Docker socket
On Nomad, each host exposed its own Docker daemon to CI jobs

While this setup worked fine most of the time, it had some major limitations:

  • State Pollution: One of the biggest issues was that Docker’s settings could be affected by previous builds. For instance, if a build changed certain global settings (perhaps by running docker login commands), it could impact subsequent builds. The absence of true isolation meant that any state left behind by one build could influence the next, which created unpredictability and made troubleshooting difficult.

  • Difficult to Maintain: Every aspect of Docker image building was tightly coupled to CI, Nomad, and these EC2 instances. This made it difficult to maintain, troubleshoot, and modify as our needs evolved. Changes were risky which led to a reluctance to make improvements, and so the system became increasingly fragile over time.

  • Lack of Multi-Architecture Support: As we pivoted production workloads to Kubernetes on spot instances, we had a desire to build multi-arch images for both x86 and ARM CPU architectures. Unfortunately, our old setup only supported this via emulation, which caused build times to explode. (We did have a handful of remote builders to support native builds, but these ended up being even more fragile and difficult to manage.)

These challenges led us to ask: how can we modernize our CI/CD pipeline to leverage Kubernetes’ strengths and address these persistent issues? We needed a solution that would provide isolation between builds, multi-architecture support, and reduced operational overhead - all within a more dynamic and scalable environment.

Evaluating Options

Knowing we wanted to leverage Kubernetes, we evaluated several options for building container images in our CI environment. We avoided options like Docker-in-Docker (DinD) and exposing the host’s containerd socket due to concerns around security, reliability, and performance. It was important to have build resources managed by Kubernetes in an isolated fashion. We therefore narrowed our choices down to three tools: Buildkit, Podman, and Kaniko.

All three offered the functionality we needed, but after extensive testing, Buildkit emerged as the clear winner for two key reasons:

  • Performance: Buildkit seemed to be significantly faster. In our benchmarks, it built images approximately two or three times faster. This speed improvement was crucial - time saved during CI builds translates directly into increased developer productivity. The faster builds enabled our developers to receive feedback more quickly, which improved the overall development workflow. Buildkit’s ability to parallelize tasks and effectively use caching made a substantial difference in our CI times.

  • Compatibility: Our CI jobs were already using Docker’s buildx command, and remote Buildkit worked seamlessly as a drop-in replacement. This made the migration easier, as we didn’t need to rewrite CI jobs and build definitions. The familiar interface also reduced the learning curve for our engineers, making the transition smoother.

Buildkit Architecture: Kubernetes Driver vs. Remote Driver

After selecting Buildkit, the next decision was how to run it in Kubernetes. There were two main options - a Kubernetes driver that creates builders on-the-fly, and a remote driver that connects to an already-running Buildkit instance.

We ultimately opted to manage Buildkit Deployments ourselves and connect to them with the remote driver. Here’s why:

  • Direct Control: Using the remote driver allowed us to have more direct control over the Buildkit instances. We could fine-tune resource allocations, manage scaling, and monitor performance more effectively.

  • Security: The Kubernetes driver needs the ability to create and manage arbitrary pods in the cluster, a privilege we wanted to avoid granting to the whole CI system. Using the remote driver avoids this because the Buildkit instances are not managed by the docker CLI.

  • Prior Art: We knew some other organizations were leveraging the remote driver in Kubernetes, which gave us confidence that it was a viable approach. We were able to learn from their experiences and best practices which helped us avoid some pitfalls.

The Pre-Stop Script: Handling Graceful Shutdowns

One unexpected problem we encountered relates to how Buildkit handles shutdowns. By default, Buildkit terminates immediately upon receiving a SIGTERM signal from Kubernetes instead of waiting for ongoing builds to finish. This behavior caused issues when Kubernetes scaled down pods during deployments or when autoscaling - sometimes terminating builds in the middle of execution and leaving them incomplete! This was not acceptable for our CI/CD pipelines, as incomplete builds lead to wasted time and developer frustration.

Visualization of pod termination in Kubernetes without a pre-stop script
Kubernetes sends SIGTERM immediately, causing active builds to get killed

To address this, we implemented this Kubernetes pre-stop hook. The pre-stop script waits for active network connections to drain before allowing Kubernetes to send the SIGTERM signal. This change significantly reduced the number of failed builds caused by premature termination, making our system more reliable.

Visualization of pod termination in Kubernetes
Kubernetes waits until the preStop script completes before sending the SIGTERM

Implementing the pre-stop hook involved some trial and error to determine the appropriate waiting period, and it’s still not perfect, but it ultimately provided a significant boost to build stability. This solution allows Buildkit to complete its work gracefully, ensuring that we maintained the integrity of our build process even during pod terminations.

Reflecting on the New System: Wins, Challenges, and Lessons Learned

Reflecting on our journey, there are several clear wins and other important lessons we learned along the way.

What Went Well

Moving to Buildkit has been a major success in terms of performance! Builds are as fast as ever, and using Kubernetes has allowed us to simplify our infrastructure by eliminating the need for dedicated EC2 hosts. Kubernetes provided us with the scalability we needed, enabling us to add or remove capacity as demand fluctuated. And Buildkit’s support for remote registry-based caching further optimizes our CI build times.

Challenges with Autoscaling

One area where we’re still refining our approach is autoscaling. We’d ideally like to autoscale in real-time based on connection count so that each Buildkit instance is handling exactly one build. Unfortunately the cluster we’re using doesn’t support custom in-cluster metrics just yet, so we’re using CPU usage as a rough proxy. We’re currently erring on the side of having too many (unused) instances to prevent bottlenecks but this is not very cost-efficient. Even if we get autoscaling perfect, there’s still a risk that two builds might schedule to the same Buildkit instance - see the next section for more on this.

Furthermore, we’ve noticed that putting Buildkit behind a ClusterIP Service causes kube-proxy to sometimes prematurely reset TCP connections to pods that are being scaled down - even when the preStop hook hasn’t run yet. We haven’t yet figured out why this happens, but switching to a “headless” Service has allowed us to avoid this problem for now.

What We’d Do Differently

Our decision to run Buildkit as a Kubernetes Deployment + Service has been a mixed bag. Management has been easy, but high reliability has proven elusive. If we could start over, we’d start with a solution that guarantees (by design) that each build gets its own dedicated, ephemeral Buildkit instance that reliably tears down at the end of the build.

The Kubernetes driver for Buildkit partially satisfies this requirement, but it’s not a perfect fit for our needs. We’ll likely need some kind of proxy that intercepts the gRPC connection, spins up an ephemeral Buildkit pod, proxies the request through to that new pod, and then terminates the pod when the build is complete. (There are some other approaches we’ve been considering, but so far this seems like the most promising).

Regardless of how we get there, pivoting to ephemeral instances will finally give us true isolation and even better reliability, which will be a huge win for the engineers who rely on our CI/CD system.

Conclusion

Migrating our Docker image builds from EC2 to Kubernetes has been both challenging and rewarding. We’ve gained speed, flexibility, and a more maintainable CI/CD infrastructure.

Metrics showing image build duration and job counts over time
Builds are faster on Kubernetes even as the number of build jobs increases

However, it has also been a valuable learning experience - autoscaling, graceful shutdowns, and resource management all required more thought and iteration than we initially anticipated. We found that Kubernetes offered new possibilities for optimizing our builds, but these benefits required a deep understanding of both Kubernetes and our workloads.

We hope that by sharing our experience, we can help others who are on a similar path. If you’re considering moving your CI builds to Kubernetes, our advice is to go for it - but be prepared for some unexpected challenges along the way. The benefits are real, but they come with complexities that require careful planning and an ongoing commitment to refinement.

What’s Next?

Stay tuned for the next post in this series, where we’ll explore how we tackled artifact storage and caching in our new Kubernetes-based CI/CD system. We’ll dive into the strategies we used to optimize artifact retrieval and share some insights into how we managed to further improve the efficiency and reliability of our CI/CD workflows.

Introducing Mailroom: An Open-Source Internal Notification Framework

Mailroom logo

We’re excited to introduce our latest open-source project: Mailroom! It’s a flexible and extensible framework designed to simplify the creation, routing, and delivery of developer notifications based on events from platform systems (like Argo CD, GitHub/GitLab, etc.). In this post, we’ll share how Mailroom helps streamline developer workflows by delivering concise, timely, and actionable notifications.

Crafting Delightful Notifications

On the Developer Experience team we believe strongly in the importance of timely, actionable, and concise notifications. Poorly crafted or overly spammy notifications can easily lead to frustration - nobody wants to have their focus disrupted by useless noise. Instead, we believe that the right notification at the right time can be incredibly powerful and even delightful, but only if done thoughtfully.

Notifications must be immediately useful, providing users with just the right amount of information they need to understand what is happening without overwhelming them. At SeatGeek, we carefully considered how to make notifications effective and meaningful, rather than intrusive or overwhelming.

For example, when we adopted ArgoCD, we could have easily implemented basic Slack notifications to a channel via Argo’s notification system - perhaps something like this:

Example of a basic ArgoCD Slack notification
What was deployed? Was it successful? There’s not enough context here.

Sure, we could have gotten more clever with the templated JSON to include more information, but the basic template-based approach with a static recipient list would only take us so far. We wanted more control, like the ability to use complex logic for formatting, or sending different notifications to dynamic recipients custom-tailored to their role in the deployment (merger vs committer). This would enable us to provide a better experience like this:

Example of a custom ArgoCD Slack notification
The perfect amount of information! :chefkiss:

This requires creating custom notifications from scratch with code, being deliberate about what information we include, exclude, and how it gets presented. Our goal was to make 99% of notifications immediately useful without requiring further action from the user - if a notification disrupts or confuses rather than informs, it isn’t doing its job.

Why We Built Mailroom

The idea for Mailroom arose from a common pattern we observed. To build these delightful notifications, our Platform teams had a repeated need for specialized Slack bots to handle notifications from different external systems. Each bot basically did the same thing: transforming incoming webhooks into user-targeted notifications, looking up Slack IDs, and sending the messages. But building and maintaining separate bots meant setting up new repositories, implementing telemetry, managing CI/CD pipelines, and more. Creating new bots each time meant repeating a lot of boilerplate work.

Mailroom emerged from the desire to solve this once and for all. Instead of building standalone bots, we created a reusable framework that handles all the internal plumbing - allowing us to focus directly on delivering the value that our users craved.

With Mailroom, creating new notifications is straightforward. Developers simply define a Handler to transform incoming webhooks into Notification objects, and Mailroom takes care of the rest - from dispatching notifications to users based on their preferences, to managing retries, logging, and delivery failures.

How Mailroom’s Architecture Enables Platform Teams

Mailroom provides all the scaffolding and plumbing needed - simply plug in your Handlers and Transports, and you’re ready to go.

Diagram showing the architecture of Mailroom
Mailroom’s architecture

Handlers process incoming events, such as “PR created” or “deploy finished,” transforming them into actionable notifications.

Transports send the notifications to users over their preferred channels - whether that’s Slack, email, or carrier pigeon.

The core concepts are simple, yet powerful, enabling flexibility for whatever your notification needs are. Want to send a GitHub PR notification via email and a failed deployment via Slack? Just write your Handler and let Mailroom’s Notifier do the rest!

By making it easy for developers to craft custom notifications, Mailroom helps our Platform team iterate quickly, ensuring that notifications remain targeted, relevant, and useful. By removing the boilerplate work, developers can focus on delivering real value without worrying about the underlying infrastructure.

Open Sourcing Mailroom

Today, we are announcing the availability of Mailroom as an open-source project to help other teams who face similar challenges with internal notifications. Whether you’re looking to build a quick Slack bot or need a scalable notification system across multiple services, Mailroom has you covered.

Mailroom allows Platform teams to focus on what really matters: delivering valuable information to users at the right time and in the right format - without needing to build out the underlying plumbing. We’ve provided some built-in integrations to help you get started faster, including Slack as a transport and both in-memory and PostgreSQL options for user stores. And we’re looking forward to expanding Mailroom’s capabilities with new features like native CloudEvents support in upcoming versions.

Get Started

Getting started with Mailroom is easy! You can find all the information in our GitHub repository. There’s a Getting Started guide that helps you set up a basic project using Mailroom, as well as more in-depth documentation for core concepts and advanced topics.

We welcome contributions from the community! Feel free to open issues, suggest features, or submit pull requests to help make Mailroom even better.

Check out Mailroom today on GitHub and let us know what you think! We can’t wait to see how you use it.

GitLab CI Runners Rearchitecture From Nomad to Kubernetes

Introduction: Chasing Better CI at Scale

Continuous Integration (CI) pipelines are one of the foundations of a good developer experience. At SeatGeek, we’ve leaned heavily on CI to keep developers productive and shipping code safely.

Earlier this year, we made a big push to modernize our stack by moving all of our workloads across SeatGeek from Nomad to Kubernetes for orchestration. When we started migrating our workloads to Kubernetes, we saw the perfect opportunity to reimagine how our CI runners worked. This post dives into our journey of modernizing CI at SeatGeek: the problems we faced, the architecture we landed on, and how we navigated the migration of 600+ repositories without slowing down development.

A Bit of History: From Nomad to Kubernetes

For years, we used Nomad to orchestrate runners at SeatGeek. We used fixed scaling to run ~80 hosts on weekdays and ~10 hosts on weekends. Each host would run one job at a time, so hosts were either running a job or waiting on a job.

Nomad architecture

This architecture got the job done but not without its quirks. Over time, we started to really feel some pain points in our CI setup:

  • Wasted Resources (and Money): Idle runners sat around, waiting for work, not adjusting to demand in real-time.
  • Long Queue Times: During peak working hours, developers waited… and sometimes waited a bit more. Productivity suffered.
  • State Pollution: Jobs stepped on each other’s toes, corrupting shared resources, and generally causing instability.

We decided to address these pain points head-on. What we wanted was simple: a CI architecture that was fast, efficient, and resilient. We needed a platform that could keep up with our engineering needs as we scaled.

New Architecture: Kubernetes-Powered Runners

After evaluating a few options, we landed on using the GitLab Kubernetes Executor to dynamically spin up ephemeral pods for each CI job. Here’s how it works at a high level:

GitLab Kubernetes Executor

Each CI job gets its own pod, with containers for:

  • Build: Runs the job script.
  • Helper: Handles git operations, caching, and artifacts.
  • Svc-x: Any service containers defined in the CI config (e.g., databases, queues).

Using ephemeral pods eliminated resource waste and state pollution issues in one fell swoop. When a job is done, the pod is gone, taking any misconfigurations or leftover junk with it.

To set this up, we leaned on Terraform and the gitlab-runner Helm chart. These tools made it straightforward to configure our runners, permissions, cache buckets, and everything else needed to manage the system.

Breaking Up with the Docker Daemon

Historically, our CI relied heavily on the host’s Docker daemon via docker compose to spin up services for tests. While convenient for users, jobs sometimes poisoned hosts by failing to clean up after themselves, or they modified shared Docker configs in ways that broke subsequent pipelines running on that host.

Another big problem here was wasted resources and a lack of control over those resources. If a docker compose file spun up multiple containers, they would all share the same resource pool (the host).

Kubernetes gave us the perfect opportunity to cut ties with the Docker daemon. Instead, we fully embraced GitLab Services, which allowed us to define service containers in CI jobs, all managed by Kubernetes, with resources defined according to each job’s individual needs. Another upside was that GitLab Services worked seamlessly across both Nomad and Kubernetes, letting us migrate this piece in parallel with the larger Kubernetes runner migration.

Docker daemon

The Migration Playbook: Moving 600+ Repos Without Chaos

Migrating CI runners is a high-stakes operation. Do it wrong, and you risk breaking pipelines across the company. Here’s the phased approach we took to keep the risks manageable:

  1. Start Small We began with a few repositories owned by our team, tagging pipelines to use the new Kubernetes runners while continuing to operate the old Nomad runners. This let us iron out issues in a controlled environment.

  2. Expand to Platform-Owned Repos Next, we migrated all repos owned by the Platform team. This phase surfaced edge cases and gave us confidence in the runner architecture and performance.

  3. Shared Jobs Migration Then we updated shared jobs (like linting and deployment steps) to use the new runners. This phase alone shifted a significant portion of CI workloads to Kubernetes (saving a ton of money in the process).

  4. Mass Migration with Automation Finally, using multi-gitter, we generated migration MRs to update CI tags across hundreds of repositories. Successful pipelines were merged after a few basic safety checks, while failing pipelines flagged teams for manual intervention.

Was it easy? No. Migrating 600+ repositories across dozens of teams was a bit like rebuilding a plane while it’s in the air. We automated where we could but still had to dig in and fix edge cases manually.

Some of the issues we encountered were:

  • The usual hardcoded items that needed to be updated, such as ingress URLs and other CI-dependent services
  • We ran into a ton of false positives grokking for docker compose usage in CI, since it was often nested in underlying scripts
  • Images built on the fly using docker compose had to instead be pushed/tagged in a new step, while also being rewritten to work as a CI job entrypoint
    • We also set up shorter lifecycle policies for these CI-specific images to avoid ballooning costs
  • Some pods were OOMing now that we had more granular requests/limits; these had to be tuned for each job’s needs above our default resourcing
  • We also took this as a chance to refactor how we auth to AWS in order to use more granular permissions; the downside was that we had to manually update each job that relies on IAM roles

What About Building Container Images?

One of the thorniest challenges was handling Docker image builds in Kubernetes. Our old approach relied heavily on the Docker daemon, which obviously doesn’t translate well to a Kubernetes-native model. Solving this was a significant project on its own, so much so that we’re dedicating an entire blog post to it.

Check out our next post, where we dive deep into our builder architecture and the lessons we learned there.

Closing Thoughts: A Better CI for a Better Dev Experience

This migration has measurably increased velocity while bringing costs down significantly (we’ll talk more about cost optimization in the third post in this series).

Through this we’ve doubled the number of concurrent jobs we’re running

Concurrent Jobs

and reduced the average job queue time from 16 seconds down to 2 seconds and the p98 queue time from over 3 minutes to less than 4 seconds!

Job Queue Time

Modernizing CI runners wasn’t just about cutting costs or improving queue times (though those were nice bonuses). It was about building a system that scaled with our engineering needs, reduced toil, increased velocity, and made CI pipelines something developers could trust and rely on.

If your team is looking to overhaul its CI system, we hope this series of posts can provide ideas and learnings to guide you on your journey. Have fun!

Scaling Our Security Culture with Security Awareness Month

Security @ SeatGeek Presents: Hacktober

At SeatGeek, we believe security isn’t just the responsibility of a dedicated team – it’s something we all own. As part of our ongoing efforts to build a strong security culture, we’ve dedicated the month of October to educating, engaging, and empowering our entire company around key security concepts. This effort, which we’ve dubbed “Hacktober,” is all about scaling security awareness in a way that reaches everyone, from engineers to business teams, no matter their location.

Why Security Awareness Matters

We say this constantly, but it’s always worth repeating: security is everyone’s problem. The challenge is that while security covers a huge range of topics, we have limited time and relatively small teams to handle it all.

Security threats, specifically cybersecurity threats, continue to evolve and attackers often target the weakest links. Educating everyone about the fundamentals of security not only reduces risks but also empowers individuals to make smarter decisions. Whether it’s spotting phishing attempts or securing personal devices, awareness is the first line of defense. Security Awareness Month is our chance to drive this message home and foster a proactive security mindset across all teams.

Hacktober Events

SeatGeek is a globally-diverse and remote-friendly company, so when coming up with live events to host throughout the month, we had to keep that in mind. Overall, the categories of events and materials we decided to invest in during Hacktober were Presentations, Published Media, and Hands-on Activities.

By taking a Show & Tell approach to most of our presentations, we were able to make some informative and entertaining feature dives and demonstrations of our tooling and detection & response processes. Some of the exciting topics this month included a deep dive into Malware Alert Response, How We Do DevSecOps at SeatGeek, and a closer look at Kubernetes and Cloud Security.

A big focus of the published media we’re sending out is Phishing, Smishing, Vishing, and whatever other kinds of -ishing are out there now (Gen-AI-ishing?). While we do regular training and testing for phishing, it’s a topic that often benefits from different approaches. For Hacktober, we created an engaging live-action video on phishing and, spoiler alert, we’ll be running a phishing fire drill later this month 🤫.

And not to say we saved the best for last, but this was the one a lot of us were most excited about: we held a Security Capture the Flag event that saw a better-than-expected turnout from across the company! 🙌🎌

Welcome to the SeatGeek CTF ASCII Banner

We tend to notice that more folks engage with Security Capture the Flag events – where they can learn about security topics through puzzles and challenges – than with other event types. Our challenges spanned many different categories, including Web and API Security, Cloud Security, Network, Forensics, Cryptography, Reverse Engineering, and more! By including trivia, basic forensics, and OSINT challenges, we were also able to make the event fully inclusive for the entire company, not just engineers!

A Focus on Shifting Left

One reason we want to increase security awareness across the company is to drive the organization to Shift Left. At a high level, this has been the primary focus of all of Hacktober!

The idea, or strategy, of “Shift Left” is to integrate security practices as early as possible into the software development lifecycle (SDLC). The goal is to catch and prevent vulnerabilities early in the process so that they don’t become bigger issues later.

While shifting left is part of our larger goal, we don’t see it as just a box to check or a series of milestones. Instead, we’re embracing it as part of SeatGeek’s normal engineering culture. Automation, self-service, ease of use, and ultimately, meeting scalability requirements are the driving forces that not only enable us to work towards this idea for the company, but also align our entire engineering department with it!

Conclusion

Security is an ever-evolving challenge, but it’s one we face together as a company. By embedding security into our daily processes, embracing automation, and tapping into community-driven efforts like bug bounty programs, we’re able to scale both our knowledge and defenses. Hacktober is just one chapter of many in our journey at SeatGeek, but it’s an essential step in building a security-first mindset. We encourage you to prioritize security in everything you do too!

Happy Hacking

Hacktober Pumpkin

Introducing Our Open-Source HCL Actions for Backstage: Making Terraform Iteration Easier

Introduction

At SeatGeek, we manage our infrastructure with Terraform, and as part of the Developer Experience team, we’re always looking for ways to help our product teams iterate on infrastructure within their own repositories—whether it’s setting up AWS resources, configuring alerts, or spinning up a new repository. Our goal is to provide golden paths that simplify these tasks.

We know not every engineer is familiar with Terraform, and that’s why we focus on creating these golden paths: to standardize the process and lower the barrier to entry. This way, teams can get started with Terraform quickly and confidently.

To further support this effort, we’re excited to introduce two open-source packages: @seatgeek/node-hcl (Github, npm) and a Backstage plugin for HCL scaffolder actions (Github, npm), designed to make working with Terraform even easier.

🙅 The Problem: Terraform + Node.js + Backstage

The challenge we faced was finding a way to simplify the process of bootstrapping new Terraform code across our repositories, many of which already had Terraform set up. We already rely on Backstage software templates for Terraform, but those only work well for greenfield repositories. For repositories with existing Terraform configurations, things become a bit more complicated.

Out of the box, Backstage’s built-in fetch:plain and fetch:template actions either create new files or fully overwrite existing ones: an essentially all-or-nothing approach. This presents a challenge for shared files in the root Terraform directory like provider.tf, state.tf, main.tf, variables.tf, and output.tf, which may not always be present. And when they are, overwriting them adds extra work for engineers, who then have to manually reconcile the changes, adding more engineering toil in a solution that aims to reduce it.

Since Backstage is built on Node.js, we are limited to the plugins and libraries available within that ecosystem. Initially, it appeared that existing Node.js tools could help us handle HCL. Libraries such as @cdktf/hcl2json (based on tmccombs/hcl2json) allowed us to convert HCL to JSON, which seemed promising as a first step towards manipulating the data. Then, by leveraging open-source Backstage plugins such as Roadie’s scaffolder actions for working with JSON directly, we could have addressed part of the issue. However, without a way to convert the JSON back into HCL, this would be an incomplete solution at best.

And unfortunately, converting JSON back to HCL isn’t as straightforward as it sounds. As noted in this post by Martin Atkins, while HCL can be parsed into JSON for certain use cases, it’s not without its limitations, particularly with Terraform HCL:

A tool to convert from JSON syntax to native [HCL] syntax is possible in principle, but in order to produce a valid result it is not sufficient to map directly from the JSON infoset to the native syntax infoset. Instead, the tool must map from JSON infoset to the application-specific infoset, and then back from there to the native syntax infoset. In other words, such a tool would need to be application-specific.

In other words, while it’s technically possible to convert JSON back into the native HCL format, the conversion would need to understand the specific structure and nuances of Terraform’s flavor of HCL. Otherwise, you’d end up with valid HCL syntax that isn’t valid Terraform.

This meant the only viable path was to work directly with HCL for both reading and writing, but there was no Node.js-based solution for doing exactly that, as HashiCorp’s HCL implementation is only available in Go.

That’s where we found a solution in Go’s WebAssembly port.

🔥 The Solution: Go + WebAssembly + Backstage

After exploring different options and observing how tools like HashiCorp’s terraform-cdk use Go-based solutions to convert HCL to JSON and format HCL in Node.js, we decided to build on top of Go’s WebAssembly (Wasm) support as well. Go’s WebAssembly port allows Go code to be compiled and run by JavaScript engines like Node.js, giving them access to the Go ecosystem. By leveraging this technology, we were able to write the functions we needed for merging HCL files directly in our Backstage plugins.

Here’s a quick breakdown of how we integrated Go’s WebAssembly with Node.js in our Backstage plugin to handle HCL file operations like merging:

Building @seatgeek/node-hcl

We developed @seatgeek/node-hcl to leverage Go’s WebAssembly port as the first piece that bridges the gap between Backstage’s Node.js-based ecosystem and the HCL world of Terraform. This package allows us to write the functions we need in Go for working directly with HCL and then expose those functions to JavaScript.

1. Defining the implementation: (source) We first define the Go function we need to expose, such as a Merge function that combines two HCL strings using the hclwrite package from HashiCorp’s HCL library.

2. Registering the JavaScript bindings: (source) Once the Go function is defined, it is registered and made callable from JavaScript through WebAssembly. This process involves mapping the Merge function to a JavaScript-compatible format.

registerFn("merge", func(this js.Value, args []js.Value) (interface{}, error) {
  if len(args) < 2 {
    return nil, fmt.Errorf("Not enough arguments, expected (2)")
  }

  aHclString := args[0].String()
  bHclString := args[1].String()
  return hcl.Merge(aHclString, bHclString)
})

3. Compiling Go to WebAssembly: (source, source) The final code is compiled to WebAssembly by targeting GOOS=js GOARCH=wasm. This produces a main.wasm file that is then gzipped and packaged. Since we also want this to be runnable in a browser, the JavaScript support file is packaged alongside the compiled wasm.

GOOS=js GOARCH=wasm go build -o main.wasm
gzip -9 -v -c main.wasm > dist/wasm/main.wasm.gz
# ...
cp -rf "$(go env GOROOT)/misc/wasm/wasm_exec.js" dist/wasm/

4. Exporting the bindings as JavaScript: (source, source) Both the compiled wasm and the JavaScript support file are loaded, allowing us to wrap the WebAssembly bindings in a JavaScript function that can be imported and used within any Node.js project.
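
For a sense of what that wrapper looks like, here is a minimal sketch of the general loading pattern (not the actual package source; the exported binding name and file paths are assumptions):

import { readFileSync } from 'fs';
import { gunzipSync } from 'zlib';
import path from 'path';

// wasm_exec.js (shipped with the Go toolchain) defines a global `Go` class
// that supplies the import object and runtime glue for Go-compiled wasm.
require('./wasm/wasm_exec.js');

export async function merge(a: string, b: string): Promise<string> {
  const go = new (globalThis as any).Go();
  const wasmBytes = gunzipSync(
    readFileSync(path.join(__dirname, 'wasm', 'main.wasm.gz'))
  );
  const { instance } = await WebAssembly.instantiate(wasmBytes, go.importObject);
  // Start the Go program; its main() registers the bindings (here assumed to be
  // a global `merge`) before blocking.
  go.run(instance);
  return (globalThis as any).merge(a, b);
}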

Integrating into Backstage

The last piece for putting it all together was to create a Backstage plugin that could make use of our new package. We developed @seatgeek/backstage-plugin-scaffolder-backend-module-hcl for this purpose. It provides custom scaffolder actions to read, merge, and write HCL files through Backstage software templates.

From this point, all we had to do was:

1. Import the library: Add the published @seatgeek/node-hcl package as a dependency of our new Backstage plugin.

2. Register the actions: (source, source) Develop the custom scaffolder actions using Backstage’s plugin system. These actions will use the merge function we exported earlier.
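
As a rough sketch of what one of these actions might look like (not the plugin’s actual source; the merge export from @seatgeek/node-hcl and the input names are assumptions based on the template example further below):

import { createTemplateAction } from '@backstage/plugin-scaffolder-node';
import { resolveSafeChildPath } from '@backstage/backend-common';
import fs from 'fs-extra';
import { merge } from '@seatgeek/node-hcl'; // assumed export name

export const createHclMergeFilesWriteAction = () =>
  createTemplateAction<{ aSourcePath: string; bSourcePath: string; outputPath: string }>({
    id: 'hcl:merge:files:write',
    async handler(ctx) {
      const { aSourcePath, bSourcePath, outputPath } = ctx.input;
      // Read both HCL files from the scaffolder workspace...
      const a = await fs.readFile(resolveSafeChildPath(ctx.workspacePath, aSourcePath), 'utf8');
      const b = await fs.readFile(resolveSafeChildPath(ctx.workspacePath, bSourcePath), 'utf8');
      // ...merge them via the wasm-backed node-hcl package...
      const merged = await merge(a, b);
      // ...and write the result back into the workspace.
      await fs.outputFile(resolveSafeChildPath(ctx.workspacePath, outputPath), merged);
    },
  });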

3. Import the plugin: Add the published @seatgeek/backstage-plugin-scaffolder-backend-module-hcl plugin to our internal Backstage instance at packages/backend/src/index.ts.

const backend = createBackend();

// ...

backend.add(import('@seatgeek/backstage-plugin-scaffolder-backend-module-hcl'));

4. Reference the actions in a software template: Then with all the pieces together, anyone can just include the actions in their software templates like so:

spec:
  steps:
    - id: merge-hcl
      name: Merge HCL Files
      action: hcl:merge:files:write
      input:
        aSourcePath: ./a/versions.tf
        bSourcePath: ./b/versions.tf
        outputPath: ./new/versions.tf

💡 Final Thoughts

At SeatGeek, we strongly believe in the value of open source. Solving this problem wasn’t just a win for our team—it’s something we knew could help other teams in similar situations. By open-sourcing @seatgeek/node-hcl and our Backstage plugin for HCL scaffolder actions, we’re contributing back to the community that has helped us build amazing solutions. Whether you’re using Terraform, managing Backstage software templates, or simply want to work with HCL in Node.js, we hope these tools will make your life a little easier.

Both libraries are available on our GitHub. Contributions, feedback, and issues are welcome. We hope these tools can serve as a starting point or a valuable addition to your developer workflows.

Happy HCL-ing!

Upgrading an Enterprise Scale React Application

Introduction

At SeatGeek, the Fan Experience Foundation team recently upgraded from React 17 to 18, and we’d like to share some of the processes we followed to make this upgrade possible on a large-scale React codebase.

Before diving deeper, some context on the codebase we are upgrading is necessary. SeatGeek has been moving its React application over to the NextJS web framework for some time now. This means that we have two React applications in our codebase, both working together to serve seatgeek.com. Thus, throughout this article we will refer to the two applications as sg-root and sg-next to differentiate between the various problems we will cover. Further, we write our frontends in TypeScript and our unit tests in Jest, so we expect zero type errors and a passing test suite when we are finished. With this knowledge in hand, let’s begin the upgrade.

Getting Started

Most reading this are likely already aware that React has published an upgrade guide for version 18. However, when working on a large scale React application, you cannot simply upgrade your react and type dependencies, and call it a day. There will be many workarounds, compromises, and fixes that will need to be applied to make your application functional. While React 18 itself has few breaking changes, our complex configuration of dependencies, both old and new, can lead to some difficult hurdles. Before following the upgrade guide, we recommend thinking hard about what your upgrade plan is. Write down any known unknowns before you begin. Below are some of the questions we wrote down, before beginning on this journey.

How does React recommend upgrading to version 18?

We were fortunate that the React documentation was written so clearly. After reading the upgrade guide, it became obvious that we didn’t have to subscribe to every new feature, and that much of an existing application would be backwards compatible. Perhaps the new React beta docs could take a cue from how NextJS gathers community feedback in their documentation. For example, we would love to see compatibility tables for widely used react packages, such as react-redux, in a future upgrade guide.

How does the open source community handle upgrades to 18?

While some react packages communicated their compatibility with React 18 well, we found others such as @testing-library/react to be lacking in comparison. The more digging we had to do into obscure Github issues and StackOverflow posts, the worse the experience became. With that said, the majority of communities made this process easy for us, and that says a lot for how far the JavaScript community has come.

Are any of our dependencies incompatible with 18?

When this project began, we didn’t come across any dependencies that did not have support for React 18. Unfortunately, we later discovered that we still had tests that relied on enzyme in legacy corners of the application. It turned out that enzyme did not even officially support React 17, so this would become one of the more inconvenient problems to deal with.

Which new features will we immediately be taking advantage of?

While we did not have any immediate plans to use some of the new React hooks, we did intend on integrating the new server rendering APIs, as well as the updates to client-side rendering, such as replacing render with createRoot.

Answering these questions helped us understand the high-level problems we would face. With some initial research out of the way, we began following the official upgrade guide.

Dependency Upgrades and Hurdles

Throughout the upgrade, many dependencies were updated for compatibility; in other cases, updates were needed to fix a bug. We started by upgrading what you’d expect: react itself in sg-root. In sg-next, react is only a peerDependency of the project, so only the types had to be added.

After react 18 was finished installing, we took a look at some of the react dependencies we used with webpack and NextJS, to see if any of them could be removed or upgraded. Since @hot-loader/react-dom had no plans of supporting React 18, we removed it from the sg-root project. Developers could continue to use fast refresh through NextJS in the sg-next project.

Next we attempted to compile the application. An error was thrown that complained about multiple versions of React. We ran yarn why react, and noticed that we had an older version of @typeform/embed that relied on React 17.

Fortunately, we use yarn at SeatGeek, so we were able to take advantage of yarn’s selective dependency resolutions feature to work around scenarios where our dependencies did not yet support React 18. While this could be problematic in special cases, it worked perfectly for our needs. If you are struggling with an older dependency that relies on React 17 or lower during the upgrade, we highly recommend this approach if you happen to be using yarn already. It’s worth keeping in mind, however, that forcing a version of react to resolve for all dependencies is not the correct approach when those dependencies are incompatible with the new version of React. After adding react to the resolutions and running yarn, our yarn.lock was recreated, and the error went away.

At this point the application was compiling successfully, and we saw runtime errors as soon as any page loaded. The first of these came from the react package 😲.

The stack trace for this error didn’t provide any obvious culprits, and searching for similar issues online returned many results, each tending to arrive at a completely different solution. After reading plenty of Github discussions, we came across a merge request in a public repository. While the error was not identical, the developer was upgrading styled-components in hopes of achieving compatibility with React 18. Sure enough, upgrading styled-components to v5.3.6 resolved the issue. While we don’t know which commit was responsible, we know it must be one of the ones shown here.

With that fixed, yet another error was thrown.

Researching this error quickly landed us on the following StackOverflow post, which made sense, since we were still running react-redux v7 and v8 added support for React 18. After removing the @types/react-redux package (since types are now built-in) and upgrading react-redux to v8, the error disappeared.

Here is a final list of dependencies that were modified to complete this upgrade.

Application Changes

After reading over the Upgrade Guide, it became clear that we would have to update both our usage of the Server Rendering APIs in sg-root and the Client Rendering APIs in sg-root and sg-next. While this is a fairly straightforward find and replace, it’s important to do it methodically and make sure you don’t miss any files in such a large project. This is especially important since many of the older APIs were deprecated, not removed: your application will continue to compile, but you will receive runtime errors and warnings when they are used in the wrong way.

Something worth calling out in the guide about the new render method:

We’ve removed the callback from render, since it usually does not have the expected result when using Suspense

While the AppWithCallbackAfterRender example provided by the documentation may work for some, we found this example from Sebastian Markbåge to work for our needs. Below is a more complete version of it, for convenience. But as they also mention in the guide, the right solution depends on your use case.
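
A minimal sketch along those lines, using a callback ref that fires once the tree has been committed (the component and callback here are illustrative, not our production code):

import { createRoot } from 'react-dom/client';

const App = () => <h1>Hello</h1>; // placeholder for your real root component

const root = createRoot(document.getElementById('root')!);

// The callback ref on the wrapper element runs after React commits the tree,
// which is roughly where the old render() callback used to run.
root.render(
  <div
    ref={(node) => {
      if (node) {
        console.log('rendered');
      }
    }}
  >
    <App />
  </div>
);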

If you aren’t already using StrictMode, now is probably a good time to start. While it wasn’t widely used in our repository, it would have made upgrading much easier. NextJS strongly recommends it, and offers a simple configuration flag for enabling it app-wide. We will likely adopt an incremental strategy in the future, to introduce it to our codebase.

At SeatGeek, most of our applicable react code lives in .ts and .tsx files, while some of the older legacy code is in .php files. It’s possible we would have missed the legacy code had we been moving too fast, so it’s a good idea to take it slow and be thorough when working on a large project. Lastly, make sure to update any forms of documentation throughout your codebase, and leave comments where possible to provide context. You will find this makes subsequent upgrades significantly easier.

Deprecations and Breaking Changes

One of the bigger breaking changes in React 18 is the removal of Internet Explorer support. While this didn’t directly impact our upgrade process at SeatGeek, it is worth calling out as something to consider, as we were supporting Internet Explorer on our site not long ago. If your clients need to support IE, sticking with 17 or lower is your only choice. Although, with Microsoft ending IE 11 support last year in 2022, you should probably look to change which browsers your clients are using.

An issue we did experience, however, can be found in the Other Breaking Changes section of the upgrade guide, called “Consistent useEffect timing”. This change caused certain functionality throughout our application to stop working as expected, mostly limited to DOM events such as keypresses and clicks. While each case was unique, the broader issue here has to do with timing, as React says. You’ll find, after identifying and fixing a few of these issues, that your application was really just relying on the unpredictable behavior of React 16 and 17.

Type Checking

At SeatGeek our frontend applications are written in TypeScript, a superset of JavaScript that adds optional static typing. To upgrade to React 18 successfully, the TypeScript compiler would need to report that we had no type errors. By upgrading our @types/react and @types/react-dom dependencies we would begin to see what type errors we would have to resolve.

Children Prop

One of the biggest TypeScript definition changes in 18 is that React now requires the children prop to be explicitly listed when defining props. Let’s say we have a dropdown component with an open prop, to which we pass children to render as menu items. The following example shows how the type definitions would need to change to support React 18.
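
A simplified sketch of that change (the component here is illustrative, not our production dropdown):

import React from 'react';

interface DropdownProps {
  open: boolean;
  // React 17's types included children implicitly; with React 18's types it
  // must be declared explicitly, or TypeScript errors wherever children are passed.
  children?: React.ReactNode;
}

const Dropdown = ({ open, children }: DropdownProps) =>
  open ? <ul>{children}</ul> : null;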

Code Mod

TypeScript will not compile a React 18 application until these errors are resolved. Fortunately, the React documentation recommends using a popular open source code mod to resolve them. The mod covers the conversion of implicit children props to explicit ones, as well as a handful of other scenarios. This process was not without issues, however, and required us to go back in and manually fix areas of the codebase where the code mod created either invalid code or incorrect types. For this reason, each transform was run individually, and only valid changes were committed.

Circular JSON

After all code mod transforms had run, and some minor manual fixes were made, we ran the TypeScript compiler. The terminal displayed the following unhelpful type error, with no additional context.

After some debugging and research into this error, we were led to the following chain of Github issues and pull requests.

The fix for diagnostic serialization sounded promising, so we began upgrading TypeScript one minor version at a time. We found that TypeScript v4.6 got rid of the type error and produced useful compiler output. We chose not to upgrade TS further, since we wanted to keep the amount of unnecessary change to a minimum. Now when we ran tsc, we received the following output.

TS Migrate

While some of the errors were unique to the TypeScript v4.6 upgrade, others were type errors the code mod was unable to resolve for us. We focused on fixing the React-specific errors and chose to suppress the rest using airbnb’s ts-migrate. This tool is extremely helpful for fixing type errors over time, especially if you are working on a large codebase that was not originally written in TypeScript. We chose to prioritize the suppressed type errors at a later date and move on. With TypeScript reporting zero type errors when we ran tsc, we were able to proceed to the next challenge in upgrading React.

Testing

For any large enterprise product with a lot of tests, you’ll likely find this to be where you spend most of your time when upgrading a core dependency of your project. Seatgeek.com has 4,601 tests in sg-root and 3,076 tests in sg-next (not including end-to-end tests) at the time of writing. Some of these tests rely on newer practices, such as those in the popular @testing-library/react package. Others are legacy tests that rely on the now-defunct enzyme package. As you can imagine, getting all of these tests passing would require tackling some unique problems.

React Testing Library

Unsurprisingly, after upgrading React, you also need to make some changes to your core testing utility belt. If you were writing tests for React Hooks, you’ll have to remove @testing-library/react-hooks as renderHook is now part of the testing library.

You can start with replacing the import statements.
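
In practice that looks something like this:

// Before: renderHook came from the standalone hooks-testing package
// import { renderHook, act } from '@testing-library/react-hooks';

// After: renderHook ships with @testing-library/react (v13.1+)
import { renderHook, act } from '@testing-library/react';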

Then you will have to rework some of your tests. Several of the changes you’ll need to make to support the new method of react hook testing are mentioned here. Even after these changes, you will likely need to make many more tweaks to your tests, so we recommend taking your time with this transition, and committing often.

Warning: An update to Component inside a test was not wrapped in act(…)

You will likely see this log often; sometimes it is even a helpful warning. The majority of the time during this upgrade, however, it was a red herring. No matter how many articles and Github issues we read on the topic, we tended to find our own solution to the problem. Sometimes the issue had to do with our Jest configuration, how mocks were set up in the test file, or even how Jest Timer functions were used. In a few cases we even ended up having to wrap fireEvent in testing library’s act API, which should not have been necessary.

We found it to be much more difficult than we would have liked to get the test suite that used @testing-library/react up to date. It was like pulling teeth, due to the complex changes in how React 18 behaves with the addition of the IS_REACT_ACT_ENVIRONMENT global (more on this below) and the many underlying API changes to react testing library. If your test code isn’t up to the highest standards of quality, you may find this challenge daunting at first, and it may be best to distribute the effort involved. My last bit of advice here would be to read both the documentation and this blog post carefully. Worst case scenario, you’ll find yourself reading dozens of 100+ comment Github issues, but hopefully it doesn’t come to that.

Enzyme

Working around issues with Enzyme on React 18 was not easy, especially because Enzyme is dead. Not only is it missing official support for React 17, but according to the developer who dragged it through the mud to run on 17, we were not going to get support of any kind for 18.

While you’ll have to deal with many of these problems on a case-by-case basis, you will still need some partial support for React 18 to avoid having to remove Enzyme from hundreds or thousands of test files. My recommendation would be to fork enzyme-adapter-react-17 as was done here, and you’ll find many of your tests passing as a result.

Ultimately, though, you’ll need to replace a few of the more problematic tests with @testing-library/react and reconfigure the “React Act Environment” as described below. Then begin the process of paying down your technical debt with enzyme before React 19 is released.

React Act Environment

In React 18, we saw the proposal and introduction of global.IS_REACT_ACT_ENVIRONMENT. This allows us to communicate to React that it is running in a unit test-like environment. As React says in their documentation, testing libraries will configure this for you. Yet you should be aware that, depending on the test, this still leaves a fair amount to manage yourself.

For that reason, we added the following utility methods, which make it easier to manage whether this new flag is set.
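
They look roughly like this (a minimal sketch; the names are illustrative rather than our exact utilities):

declare global {
  // eslint-disable-next-line no-var
  var IS_REACT_ACT_ENVIRONMENT: boolean | undefined;
}

// Toggle React 18's "act environment" flag for the current test file.
export function setReactActEnvironment(isActEnvironment: boolean): void {
  globalThis.IS_REACT_ACT_ENVIRONMENT = isActEnvironment;
}

// Run a block of code (e.g. an enzyme-driven test) with the flag disabled,
// restoring whatever value was set beforehand.
export async function withoutReactActEnvironment<T>(fn: () => Promise<T> | T): Promise<T> {
  const previous = globalThis.IS_REACT_ACT_ENVIRONMENT;
  setReactActEnvironment(false);
  try {
    return await fn();
  } finally {
    globalThis.IS_REACT_ACT_ENVIRONMENT = previous;
  }
}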

In an ideal world, this would be unnecessary, and we imagine that over time tools such as this will move out of our code and into the testing library itself, until it is inevitably made obsolete.

Window Location

Prior to upgrading, we relied on Jest 23, which used an older version of JSDOM. This meant we could use the Proxy API to intercept, and occasionally redefine, properties of an object. This was especially useful for globals such as window.location that were sprinkled all across the application. Over the years, you’d find developers changing the location of the current page in many different ways.

For example

  • window.location = url;
  • window.location.href = url;
  • Object.assign(window.location, { href: url });
  • window.location.assign(url);
  • etc

Proxies could easily intercept all of these attempts to change the object and allow our test framework to assert on properties that were changing on the global location object. Yet in newer versions of Jest, proxying global objects was no longer possible. Instead, you get all sorts of runtime TypeErrors that can be difficult to debug or trace to their source.

Since this was a fairly common issue that had been around since JSDOM 11, we needed a general solution that could be applied to any of our tests with minimal changes. In order to continue to support our test suite, we introduced a small test helper and got rid of the old proxying logic. This helper can be used on a case-by-case basis, and mocks the window location rather than intercepting its mutations.
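
A minimal sketch of the kind of helper we mean, assuming Jest with the jsdom environment (the name and defaults are illustrative):

// Replaces the jsdom-provided window.location with a mutable mock so tests
// can assert on navigation without Proxy-based interception.
export function mockWindowLocation(href = 'https://seatgeek.com/'): Location {
  const location = new URL(href) as unknown as Location;
  location.assign = jest.fn();
  location.replace = jest.fn();
  location.reload = jest.fn();

  // jsdom marks window.location as non-writable, so swap out the whole object.
  delete (window as { location?: Location }).location;
  (window as { location: Location }).location = location;
  return location;
}

Depending on your jsdom version, you may need Object.defineProperty(window, 'location', …) instead of deleting and reassigning.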

Then we can call this helper function before our tests run, and properly test window.location changes once again.

Silence Warnings

In some of the older legacy tests that relied on enzyme, we found React calling console.error. This resulted in several failing test suites. Below is an example of the reported error message.

To work around this, we added a small utility that can be used on a case-by-case basis to avoid React’s complaints, allowing the tests to continue uninterrupted. Generally, you wouldn’t want to silence calls to console.error; however, in the case of these tests, it was safe to ignore them. You can find the utility to suppress these error messages below.
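
A sketch of such a utility (illustrative naming; scope it to the suites that need it rather than applying it globally):

export function suppressConsoleError(pattern: RegExp): void {
  const originalError = console.error;
  let spy: jest.SpyInstance;

  beforeAll(() => {
    spy = jest.spyOn(console, 'error').mockImplementation((...args: unknown[]) => {
      // Swallow only the warnings we expect from these legacy enzyme tests;
      // let everything else through untouched.
      if (!pattern.test(String(args[0]))) {
        originalError(...args);
      }
    });
  });

  afterAll(() => {
    spy.mockRestore();
  });
}

A test file can then call, for example, suppressConsoleError(/ReactDOM.render is no longer supported/) near the top of its describe block.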

Error Boundary

In some circumstances, you might find that asserting an error occurred is more difficult than it should be in Jest. In these cases, consider leveraging React’s error boundary components. If you have a component that you want to assert is showing a fallback when failing, you can use @testing-library/react to wrap your component in an <ErrorBoundary fallback={jest.fn()}> and expect your mock function to be called. See the example below for more on how this works in practice.
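
A sketch of what that looks like in practice (with a minimal boundary component here; yours may differ):

import React from 'react';
import { render } from '@testing-library/react';

// Minimal error boundary for tests: renders the fallback when a child throws.
class ErrorBoundary extends React.Component<
  { fallback: () => React.ReactNode; children?: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  render() {
    return this.state.hasError ? this.props.fallback() : this.props.children;
  }
}

it('renders the fallback when the component throws', () => {
  const fallback = jest.fn(() => null);
  const Boom = () => {
    throw new Error('boom');
  };

  render(
    <ErrorBoundary fallback={fallback}>
      <Boom />
    </ErrorBoundary>
  );

  expect(fallback).toHaveBeenCalled();
});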

Bugfixes

After thorough review and months of QA, several bugs were identified throughout our application. The bugs were logged, researched, and fixed. Then new rounds of QA occurred, until we felt confident that this upgrade wasn’t going to break any critical functionality. Below is an overview of a few of the bugs we ran into while upgrading to React 18.

Click Events

Throughout our website we have many different components that rely on click events that occur outside of an HTML element, such as dropdowns, modals, popovers, and more. After upgrading to React 18, most of these components appeared not to render at all. This was because the trigger that opened these components often fired a click event that would cause them to immediately close. This is the “Consistent useEffect timing” problem we described earlier, which React wrote about in their documentation on Breaking Changes. After a little digging into the React repository issues, we came across this issue that described the problem well.

in React 18, the effects resulted from onClick are flushed synchronously.

One possible workaround is to delay the subscription with a zero timeout.

Given the advice, we went ahead and tried wrapping our window.addEventListener calls in setTimeout, and our components began behaving as we might expect them to once more. It’s worth mentioning that this issue can also be fixed by calling stopPropagation on the triggering elements, but this might be untenable depending on how large your codebase is.
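
As a simplified sketch of that workaround (an illustrative dropdown, not our production component):

import { ReactNode, useEffect, useRef, useState } from 'react';

function Dropdown({ children }: { children: ReactNode }) {
  const [open, setOpen] = useState(false);
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    if (!open) return;
    const onClickOutside = (event: MouseEvent) => {
      if (!ref.current?.contains(event.target as Node)) setOpen(false);
    };
    // Defer the subscription by a zero timeout so the click that opened the
    // dropdown finishes before we start listening for outside clicks.
    const timer = window.setTimeout(() => window.addEventListener('click', onClickOutside), 0);
    return () => {
      window.clearTimeout(timer);
      window.removeEventListener('click', onClickOutside);
    };
  }, [open]);

  return (
    <div ref={ref}>
      <button onClick={() => setOpen(true)}>Open</button>
      {open && <ul>{children}</ul>}
    </div>
  );
}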

You can see this issue in more detail in the sandbox below.

Search Rendering

The seatgeek.com website includes a lot of search functionality to make it easy for our users to find tickets to the latest events. While testing React 18, we noticed an almost imperceptible flash of our search results dropdown on screen as the page was loading. This was after months of QA, and no one had reported any issue like this, so we were not sure whether we were just seeing things. In order to verify this, we opened the developer tools and clicked on the Performance tab. We enabled the screenshots feature, and then clicked record and refresh. The results proved that we were not, in fact, going mad, but that the dropdown was appearing on the page, albeit only for about 100ms.

To make this easier to experience, below you can find a reproduction sandbox of the issue. These sandboxes are best viewed in their own window, since you’ll need to use the developer tools as we described above, or refresh the page a few times at the very least.

React 18 Flashing Dropdown Bug Sandbox

React 17 without Flashing Dropdown Bug Sandbox

After reviewing the example code, we are sure a few readers will be wondering why we aren’t just using the autoFocus attribute and calling it a day. In our application, our search component is used in dozens of places. This means we have cases where we not only need to manage the focus, but also need to modify the DOM based on whether or not the input is focused. In other words, we often need to implement controlled components over uncontrolled ones.

In the past, Safari appears to have had issues with this, and so someone decided to fix that bug. The solution to the Safari issue, at the time, was to use a Promise.resolve and defer focus. However, this problematic code led to the issues we now see in React 18. If you remove the Promise, you’ll find the issue disappears. Because timing was important, we knew the problem was likely related to the addition of automatic batching in React 18.

With automatic batching in mind, if we wrap the call to focus and setFocused in a flushSync instead of removing the promise, we will also find this fixes the issue. However, this feature can hurt the performance of your application, so you should use it sparingly if you must use it at all.
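
For reference, that alternative looks roughly like this (inputRef and setFocused are illustrative names):

import { flushSync } from 'react-dom';
import type { RefObject } from 'react';

// Force the focus-related update to commit synchronously instead of being
// automatically batched with other state updates.
function focusSearchInput(
  inputRef: RefObject<HTMLInputElement>,
  setFocused: (focused: boolean) => void
): void {
  flushSync(() => {
    inputRef.current?.focus();
    setFocused(true);
  });
}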

Ultimately, the fix for this issue was fairly uninteresting: we were able to avoid both adding flushSync and removing the Safari patch by just adding more state and skipping unnecessary state updates at the proper time. This will depend on your use case, however, so we felt it was important to review some of the alternatives that may work in this circumstance.

Post-release issue

While we devoted plenty of time to testing these changes in real environments, in the weeks that followed one issue was reported: shallow routing via NextJS appeared to be 100x slower than normal.

5s Idle Frame while routing

After further investigation, we came across the following stack trace.

React warning stack trace

React was telling us that something in the application was calling setState when it shouldn’t be. Reviewing the stack trace revealed that a redux dispatch was called while we were in the middle of routing. Moreover, it showed that this dispatch call was coming from the root _app file, used to initialize pages in NextJS. With this information we were able to identify two problematic dispatch calls in our root application that are fired every time we re-render our root. Reverting to React 17 made things work as expected again. So the question was: why was an upgrade to React 18 causing the entire app to re-render when we were performing a shallow route navigation via NextJS?

We still don’t know for certain what caused this specific issue, but we do believe it is related to using React 18 with NextJS. It appears that if you have a component that calls the useRouter hook, then even shallow routing will trigger that component to re-render, even if it happens to be the root of your application. We aren’t the only ones who have discovered this behavior either. We imagine a future version of NextJS will fix useRouter and avoid unnecessary re-rendering. For the time being, we were able to get rid of the inappropriate dispatch calls and prevent the 100x slowdown we experienced. Hopefully a future update will make shallow routing as reliable as it was prior to upgrading.

Conclusion

While we are now running React 18, we are far from finished. We have a lot of work to do to get our legacy code up to date so future upgrades will be less burdensome; enzyme is low-hanging fruit in that regard. React 18 included many new APIs, many of which our own libraries have yet to take advantage of. Additionally, now that we are running on version 18, we can begin our upgrade to NextJS 13, unlocking even more potential for our developers and our customers alike.

We have covered most of what was required to upgrade seatgeek.com to React 18; you may face unique problems of your own when attempting it. Hopefully some of this has proved useful and eases the upgrade path when you finally choose to upgrade.

Thanks for reading, and don’t hesitate to reach out. We are always looking for like-minded developers who are ready to push us onward and upward.

https://seatgeek.com/jobs

Browser-Based Load Testing Without Breaking the Bank

Last month, droves of Bruce Springsteen fans lined up at SeatGeek.com to buy tickets to see the Boss. It was one of our highest demand onsales in SeatGeek history, and it went by like a breeze, a success owed in part to our investment in browser-based load testing. We thought this would be a good opportunity to share how we built our own browser-based load testing tool to keep our software in tip-top shape using AWS Batch and Playwright at an astonishingly low price.

In a load test, we simulate the behavior of multiple users interacting with an application to see how that application responds to traffic. These users can be, and often are, simulated on the protocol level: a burst of purchases during a flash sale comes in the form of thousands of HTTP requests against a /purchase endpoint, a spike in sign-ons to a chatroom comes in the form of websocket connections, etc. This method for load testing is called protocol-based load testing and comes with a vast ecosystem of tooling, such as JMeter, Taurus, and Gatling. It is also relatively cheap to run: I, for example, can run a test that spawns a few thousand HTTP requests from my laptop without issues.

Browser-based load testing, alternatively, simulates users on the browser level: a burst of purchases during a flash sale comes in the form of thousands of real-life browser sessions connecting to our website and submitting purchases on its checkout page. Browser-based load testing allows us to simulate users more closely to how they actually behave and to decouple our tests from lower-level protocols and APIs, which may break or be entirely redesigned as we scale our systems.

But browser-based load testing comes with a much sparser tooling ecosystem. And, importantly, it is not so cheap: I, for example, can barely run one browser on my laptop without overheating; I wouldn’t ever dare to run one thousand. Once we get into the territory of requiring tens or hundreds of thousands of users for a test, even modern cloud solutions present difficulties in running the necessary compute to power so many browser sessions at once.

This post details how (and why) we built our own browser-based load testing tool using AWS Batch and Playwright to simulate 50,000+ users without breaking the bank.

Browser-based load testing at SeatGeek

The desire for browser-based load testing at SeatGeek began as we developed our in-house application for queueing users into high demand ticket onsales. You can learn more about the application in our QCon talk and podcast on the subject. In short, users who visit an event page for a high demand ticket onsale are served a special web application and granted a queue token that represents their place in line for the sale. The web application communicates with our backend via intermediary code that lives at the edge to exchange that queue token into an access token once the user is admitted entry, which allows the user to view the onsale’s event page. Edge cases, bottlenecks, and competing interests pile on complexity quickly and demand rapid iteration on how we fairly queue users into our site. Notice for a high demand onsale can come to us at any time, and we need to always be prepared to handle that traffic.

The one thing that remains fairly constant during all of this is the user’s experience on the browser. It’s crucially important that while we may change a request from polling to websockets, make use of some new feature from our CDN, or restructure how a lambda function consumes messages from a DynamoDB table, we never break our core feature: that tens of thousands of browsers can connect to a page and, after some time waiting in queue, all be allowed access to the protected resource.

We initially set up some protocol-based load tests but found it increasingly difficult to rely on those tests to catch the kind of performance issues and bugs that occur when real browsers set up websocket connections, emit heartbeats, reload pages, etc. We also found that the tests added overhead to our iteration cycles as we frequently experimented with new APIs and communication protocols to deal with increasing scale. What we wanted was a test like this:

  1. I, a user, visit a page protected by our queueing software, say at /test-queue-page. I see some text like “The onsale will begin soon.”
  2. An orchestrator process begins to allow traffic into the protected page
  3. I wait some time and eventually see the protected page. Maybe I see the text “Buy tickets for test event.” If I don’t see the protected page within the allotted wait time, I consider the test to have failed.
  4. Multiply me by X.

Why our own solution?

The test we want is simple enough, and it obviously will be a lot easier to pull off if we run it using one of the many established vendors in the performance testing world. So why did we decide to build the solution ourselves?

The key reason is cost. Not every load testing vendor offers browser-based tests, and those who do seem to optimize for tests that simulate tens to hundreds of users, not tens of thousands or more. In meetings with vendors, when we asked what it would cost to run one test with ~50,000 browsers for ~30 minutes (the max time we’d expect a user to spend waiting for entry into an event), we were often quoted figures in the range of $15,000 to $100,000! And this was only a rough estimate: most vendors were not even sure if the test would be possible and provided an on-the-fly calculation of what the necessary resources would cost, which many times seemed like an attempt to dissuade us from the idea altogether.

But we weren’t dissuaded so easily. Of course, we couldn’t justify spending tens of thousands of dollars on a performance test. At that price we could just pay 50,000 people a dollar each to log onto our site and text us what happens. Unfortunately, though, none of us had 50,000 friends. Instead, we turned to every developer’s best friend: AWS.

Picking a platform

At a given moment, we want to spawn ~50-100 thousand browsers, have them perform some given commands, and, at the least, report a success or failure condition. We also want a separate orchestrator process that can do some set up and tear down for the test, including spawning these browsers programmatically.

🚫 AWS Lambda

What is the hottest modern way to spawn short-lived compute jobs into the cloud? Lambda functions! Using Lambda was our first hunch, though we quickly ran into some blockers. To list a few:

  1. Lambda functions have a strict 15 minute execution limit. We want to support browsers that live for longer than 15 minutes.
  2. We found (and validated via the web) that the lambda runtime is not great at running chromium or other headless browsers.
  3. Lambda doesn’t have a straightforward way of requesting a set number of invocations of one function concurrently.

✅ AWS Batch

AWS Batch, a newer AWS service intended for “[running] hundreds of thousands of batch computing jobs on AWS​​,” seemed to fit our requirements where Lambda failed to do so. According to their documentation:

AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems.

Unlike Lambda functions:

  1. AWS Batch jobs have no maximum timeout.
  2. EC2 instances, one of the available executors for Batch jobs, easily support running headless browsers.
  3. AWS Batch’s array jobs allow a straightforward pattern for invoking a set number of workloads of a job definition.

Even better, Batch allows workloads to be run on spot instances (spare EC2 capacity offered at a discounted price), allowing us to cut the cost of our compute up front.

If we ever urgently needed to run a high volume load test during low spot availability, we could always flip to on-demand EC2 instances at a higher price. To further save cost, we opted to run our Batch compute in a region with cheaper EC2 pricing, given that, at least in our initial iterations, we had no dependencies on existing infrastructure. Also, if we were going to suddenly launch thousands of EC2 instances into our cloud, we thought it would be better to have a little isolation.

Here is a simplified configuration of our AWS Batch compute environment in terraform:

resource "aws_batch_compute_environment" {
    compute_environment_name_prefix = "load-test-compute-environment"
    compute_resources {
        // “optimal” will use instances from the C4, M4 and R4 instance families
        instance_type = ["optimal"]
        // select instances with a preference for the lowest-cost type
        allocation_strategy = "BEST_FIT"
        // it’s possible to only select spot instances if available at some discount threshold,
        // though we prefer to use any compute that is available
        bid_percentage = 100
        max_vcpus = 10000
        type = "EC2"
    }
    // we let AWS manage our underlying ECS instance for us
    type = "MANAGED"
}

Implementation

Our load test consists of two components: the simulator and the orchestrator.

The simulator is a small Node.js script that simulates a user’s behavior using Playwright and chromium. The simulated user visits a URL that is protected by our onsale queueing software and follows a set of steps before either returning successfully after arriving at the event page or emitting an error. This script is baked into a Docker image and deployed to AWS Batch as a job definition.
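
A stripped-down sketch of what one simulated user does (the copy mirrors the test steps above, but names and timeouts here are illustrative; the real script runs many sessions in parallel and emits per-user logs):

import { chromium } from 'playwright';

async function simulateUser(protectedUrl: string, maxWaitMs: number): Promise<void> {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  try {
    await page.goto(protectedUrl);
    // While queued, the user sees the waiting-room copy...
    await page.getByText('The onsale will begin soon').waitFor();
    // ...and the run succeeds only if the protected page appears within the allotted time.
    await page.getByText('Buy tickets for test event').waitFor({ timeout: maxWaitMs });
  } finally {
    await browser.close();
  }
}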

The orchestrator is a Go app that orchestrates the entire test execution. It both interacts with our onsale queuing software to control the protected resource under test and dispatches concurrent executions of the simulator to AWS Batch as an array job.
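
Our orchestrator is written in Go, but for illustration, dispatching the simulators as an array job looks roughly like this with the AWS SDK for JavaScript (job names, queue, and region are illustrative):

import { BatchClient, SubmitJobCommand } from '@aws-sdk/client-batch';

async function dispatchSimulators(childJobs: number, usersPerJob: number) {
  const client = new BatchClient({ region: 'us-east-2' }); // illustrative region
  return client.send(
    new SubmitJobCommand({
      jobName: 'browser-load-test',
      jobQueue: 'load-test-queue',
      jobDefinition: 'load-test-simulator',
      // One child job per simulator instance; each runs several browser sessions.
      arrayProperties: { size: childJobs },
      containerOverrides: {
        environment: [{ name: 'USERS_PER_JOB', value: String(usersPerJob) }],
      },
    })
  );
}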

Outside of AWS Batch, we rely on our standard tooling to run the tests:

  • Gitlab CI runs the orchestrator as a scheduled CI/CD job, with some parameterization for toggling the number of simulated users (small-scale tests, for instance, run regularly as end-to-end tests; larger-scale tests require a manual invocation)
  • Datadog observes test executions and collects and aggregates data from browsers and orchestrator
  • Slack notifies stakeholders of test runs and results
  • Terraform provisions AWS Batch and related compute infrastructure

All together, our load test looks something like this:

Diagram

Multiple browser sessions per Batch job

One caveat illustrated in the diagram above is that each simulator instance runs multiple browser sessions in parallel rather than a single browser. This is for three reasons. First, there is a floor on the CPU that can be allocated to an AWS Batch EC2 job: 1vCPU. We found that our browser only required about a tenth of 1vCPU to run. Let’s say Batch is attempting to schedule our jobs onto a fleet of m4.large instances, which can provision 2vCPU of compute and cost $0.10 an hour. To keep things simple, we’ll say we want to run 100,000 browsers for an hour. If each Batch job runs one browser, and each Batch job requires 1vCPU, we can only run two browsers per EC2 instance and will ultimately require 50,000 EC2 instances to run our test, costing us $5,000. If each job, instead, can run 10 browsers, we can run twenty browsers per EC2 instance and will only need 5,000 instances, reducing our cost to 10% of our original price.

Second, array jobs have a maximum of 10,000 child jobs, so if we want to run more than 10,000 users, we need to pack more than one browser session into each job. (Though even if Batch were to raise the limit of array child jobs to 100,000, we’d still need to run multiple browser sessions in parallel for cost reasons.)

Third, running multiple sessions on one browser means we can get more users using fewer resources: ten sessions running on one browser is cheaper than ten sessions each running on individual browsers.

One drawback of this approach is that we can’t rely on a one-to-one mapping of users→jobs to report per-user test behavior. To help us isolate failures, the simulator returns a non-zero exit code if any of its users fails, and each user emits logs with a unique user ID.

Another drawback is: what happens if nine of ten users in a simulator job succeed in passing through the queue in a minute, but the tenth queues for 30 minutes keeping the job alive? This is quite common, and we inevitably pay for the unused resources required to run the 9 finished users, since AWS Batch will hold that 1vCPU (as well as provisioned memory) for the entirety of the job’s run.

For now, running multiple browsers per job is a tradeoff we’re willing to make.

Abstractions on abstractions

One challenge in implementing our solution on Batch is that Batch involves many nested compute abstractions. To run jobs on Batch, a developer must first create a Compute Environment, which contains “the Amazon ECS container instances that are used to run containerized batch jobs”. We opted to use a Managed Compute Environment, in which AWS automatically provisions an ECS cluster based on a high-level specification of what resources we need to run our batch jobs. The ECS cluster then provisions an ASG to manage EC2 instances.

When running small workloads on Batch, this level of abstraction is likely not an issue, but given the more extreme demands of our load tests, we often hit bottlenecks that are not easy to track down. Errors on the ECS or ASG plane don’t clearly percolate up to Batch. An array job failing to move more than a given percentage of child jobs past the pending state can require sifting through logs from various AWS services. Ideally, Batch would more clearly surface some of these issues from underlying compute resources.

💸 Results

We have been able to reliably run ~30 minute tests with 60,000 browser-based users at less than $100. This is a massive improvement compared to the tens of thousands of dollars quoted by vendors. We run our high-traffic load test on a weekly cadence to catch major regressions and have more frequently scheduled, low-traffic tests throughout the week to keep a general pulse on our application.

Here is the total cost from our AWS region where we run our tests (we don’t run anything else in this region):

Diagram

Conclusion

Browser-based load testing is a great way to realistically prepare web applications for high levels of traffic when protocol-based load testing won’t cut it. Moving forward we’d like to try extending our in-house testing tool to provide a more configurable testing foundation that can be used to test other critical paths in our web applications.

We hope our own exploration into this approach inspires teams to consider when browser-based load testing may be a good fit for their applications and how to do so without breaking the bank.


If you’re interested in helping us build browser-based load tests, check out our Jobs page at https://seatgeek.com/jobs. We’re hiring!

Allow Us to Reintroduce Ourselves

SeatGeek brand evolution

SeatGeek was started 12 years ago because we knew ticketing could be better. Since then, we’ve pushed the industry forward through product innovation: launching a top-rated mobile app, becoming the first to introduce fully dynamic maps, creating a metric to rate tickets by quality (now an industry norm) and introducing fully-interactive digital tickets with Rally. While our product is beloved by those who use it, the vast majority of fans have never heard of us.

So we think it’s time to bring the SeatGeek brand to the masses. To help us achieve this goal, we used this past year to rethink our brand strategy and reimagine our look and feel. We focused on creating something:

  • Bold like the events we ticket
  • Human like the emotions they evoke
  • Distinct from our competitive set
  • Confident in our expertise

See below for the full principles that guide our new brand:

 Brand Pillars

Today, we’re excited to share SeatGeek’s new look. From our logo to our app and everything in between, our new brand represents everything SeatGeek is and what we bring to ticketing. Here are some of the foundational elements:

Wordmark

Colors

Typography

While technology has changed how we experience live events, our “why” for loving them is timeless: they are unpredictable, emotion-driving, life in HD. Our tech expertise is lived out in the products we build, the services we provide, and the industry-shifting strategies we execute. To balance that, our new brand leans into the unchanging magic of live events. Retro concert posters, trading cards, tangible ticket mementos, lit-up marquees - we will take the opposite approach of the landscape right now, and “go back” to push forward.

Retro Inspiration

Retro Inspiration Colors

We believe the new brand balances the innovation with the history, the modern with the retro, the future with the past. We accomplish this through a bold, yet approachable wordmark, a tangible color palette, an inviting tone of voice and more. All at the service of the die hards, the Deadheads, the rodeo fans, and the Broadway patrons alike. See below for examples of the new system in action:

App Onboarding

Website Homepage

Out of Home Advertising Example

Icon and Splash Screen

Billboard Examples

Partner lockups

We married our own obsession with the ticketing space with a diverse roster of talented partners that brought their own perspectives and inspirations, including Mother Design, Hoodzpah and Mickey Druzyj, to provide our internal team the tools to bring the rebrand to life across our many products and channels.

We believe great brands belong to the full organization, so to that end we ensured a broad group from across the organization was involved in the rebrand process. We’re excited to launch this rebrand as live events come back, and we believe they’re better than ever with SeatGeek.