Code, Design, and Growth at SeatGeek


SeatGeek Adds Facebook as Distribution Partner to Fuel Event Discovery

We launched our primary ticketing platform a year ago with one overarching goal: to get more fans to live events. Through our open API approach, an open ticketing ecosystem will create opportunities to increase distribution, empower teams and artists to sell on the platforms of their choice, and eliminate fraud through a process of barcode verification. At the end of the day, our goal is to increase discovery of live events by using the power of the open web, putting tickets where fans are already spending time online.

Today we’re taking a major step toward that goal, and are thrilled to add Facebook as a distribution partner for SeatGeek. As the world’s largest social media network, Facebook powers an incredible amount of delightful discovery of information for their users, whether it is through a news post, long-lost childhood friend, or adorable cat video. We’re stoked to be able to add a piece of the live entertainment world to that.

SeatGeek primary ticketing client Sporting Kansas City will be the first to utilize this new partnership, selling tickets directly to fans through Facebook. We’re excited to expand this partnership to more SeatGeek partners in 2018 as the industry moves toward openness.

While we have already seen great success with SeatGeek distribution partners such as Gametime and TicketNetwork, adding the world’s largest social network to the mix is a game changer for both clients and fans. Exposing inventory directly to fans on Facebook will make for a seamless shopping experience, resulting in more tickets sold, more events attended, and most importantly, more fun for fans.

Our Facebook partnership comes at an exciting time for SeatGeek, coinciding with the recent announcement of SeatGeek Enterprise, our powerful stack of primary ticketing services solidified into one brand. SeatGeek Enterprise promotes our vision of an open ticketing world, in which venues and rightsholders enjoy unprecedented flexibility, transparency and monetization potential. Our open distribution approach is a crucial piece of the SeatGeek Enterprise offering and one of the most important ways we are empowering rightsholders and putting more control in the hands of teams and venues.

We’re thrilled for fans to begin to see this partnership in action in their newsfeeds, as we continue to put the fan first in an industry that for too long has been more than happy to sit on its hands when it comes to innovation.

Facebook-SeatGeek Integration Screenshot

Employee Spotlight: Erin Elsham, People Ops

Welcome to SeatGeek Employee Spotlights – an opportunity to meet the fantastic folks on our world-class team.

By day, we’re a group of talented developers, designers, marketers, and businessfolk working together to build something new and different. But we are also fans and live event junkies of every kind: diehard sports fans, passionate concert-goers, sophisticated theater enthusiasts, and more. From our lives outside the office and within, we all have interesting stories to tell.

Up next: Erin Elsham, on our People Ops team!

Erin Elsham

Where were you born?
Toledo, Ohio. Well, I was born there – we moved to Iowa after a year, then to Kansas. My dad worked in the agriculture business, so we moved around the midwest with him a bit.

Have you always lived in NYC?
Nope! I packed 2 suitcases and bought a one-way ticket from Kansas City 6 years ago.

It’s kind of funny – I went to college in Kansas as well. I was the odd one out of my really close friends and went to Kansas State, even though I was a Kansas University fan growing up, so that was a funny rivalry thing. New York City, for some reason, draws a lot of Kansas people – there are a couple of well-known KU bars here, and I have some friends from growing up who had moved out here right after school. I went to visit one of my best friends and her husband who were living in Williamsburg, and at the time I had just gotten out of school and was bored with what I was doing, so I moved out there. They were nice enough to let me stay with them for a bit. I literally packed two suitcases and moved, and six years later here I am.

Where did you go to school?
Kansas State University, Go Jayhawks! Oh wait, that’s KU (I grew up a KU fan). That’s the thing - I went to school at K-State, but was not into K-State sports.

Any funny roommate or apartment stories in NYC? Feels like everyone has at least one…
I’ve always lived in Williamsburg, and once we found an apartment with a backyard, we decided we’d stay there for a while. Because it’s a shared backyard, we’re friends with all of the tenants now, so if there’s ever a backyard party or barbecue, everyone is invited. One time, a friend of a friend who does photo shoots and works with modeling agencies asked if she could do a shoot back there. They brought in a really fancy model and props that totally confused the neighborhood – it was pretty funny.

Here’s a picture.

How would your friends describe you in 3 words?
Full of surprises!

I’ve been told that I’m like a stealth bomber – I’m pretty quiet for the most part, but I listen to everything, so I always kind of know what’s going on. Even with friends and family, I’ll know what’s going on and plan something that will surprise them in one way or another. For example, my boyfriend’s 30th birthday: he really likes Wet Hot American Summer, so I planned a themed party in our backyard and made everyone wear 70s camp gear. He had no idea. Also, the SeatGeek portraits – nobody knew that was coming. I like surprising people.

Best project you’ve worked on at SeatGeek?
There are a ton! I love to show off how fun we are as a company, so the employee portraits and the Life@SeatGeek Instagram account are at the top!

What is your dream project to work on at SeatGeek?
Workation was a blast to plan last year, and I can’t wait to plan an even better event this year. It’s a huge challenge planning an event for 150 people, but I’ve got some good ideas and am super excited.

What are three “fun facts” about yourself that people would be surprised to know?
I’m an aerialist. I perform trapeze in seasonal circus cabarets in Brooklyn, wear funky costumes and fly in the air – think that’s the most fun fact I’ve got. It’s all volunteer, and I choose my own songs and make the costumes I wear during shows. I’ve been doing it for about 5 and a half years at the same circus school in Williamsburg, am really close with my teacher and have made some amazing friends there.

Favorite place(s) to hang out in NYC?
Backyards and rooftops – there’s nothing better than living in the borough with the view! There are a lot of cool places, but they’re always too crowded – once you find someone with rooftop access, you’ve made it. The beach is fun to go to as well – I usually try Rockaway.

Best vacation you’ve ever taken?
So far, my first “real” vacation as an adult with no plans and no weddings was Puerto Rico with my fiance three years ago. I’ve been a bridesmaid like seven times, and have had a wedding almost every single year, so I haven’t taken a lot of vacations because it was all weddings for a while. Puerto Rico is great – it’s cheap, fun, and the water is clear.

Favorite SeatGeek snack?
Eggs? Do those count? Is that weird?

Why do you love SeatGeek?
It’s the people – everyone’s cool. They’re all so amazing, friendly and welcoming. So many smiles. There is not a single person I don’t feel like I could talk to.

Favorite part of the new office?
SG Portrait wall – sorry, I’m biased! That was the most fun I’ve had on a project. I had to be secretive about it (which I love). We hired an outside designer for it, worked with them to get the details right, and we continue to work together for SG one-year anniversaries!

Faster (Re)deploys with Docker-build-cacher

Builds a service with docker and caches the intermediate stages

At SeatGeek we use Multi-stage Dockerfiles to build the container images that we deploy to production. We have found them to be a great and simple way of building projects with dependencies in different languages or tools. If you are not familiar with multi-stage Dockerfiles, we recommend you take a look at this blog post.

In our first days of using them in our build pipeline, we found a few shortcomings that were making our deploys take longer than they should have. We traced these shortcomings to a missing key feature: It is not possible to carry statically generated cache files from one build to another once certain source files in the project change.

For example, when building our frontend pipeline we have to invoke yarn first to fetch all the npm packages. But this command can only be executed after adding the yarn.lock and package.json files to the Docker container. Because of how Docker caching works, this meant that each time either of those files was modified, the node_modules folder cached in previous builds was also trashed. As you may already know, building that folder from scratch is not a cheap operation.

Here’s an example that illustrates the issue.

Imagine you create a generic Dockerfile for building node projects

FROM nodejs

RUN apt-get update && apt-get install -y nodejs yarn

WORKDIR /app

# Whenever this image is used execute these triggers
ONBUILD ADD package.json yarn.lock ./

# Download npm packages
ONBUILD RUN yarn

# Build the assets pipeline
ONBUILD RUN yarn run dist

We can now build and tag a Docker image for building yarn-based projects:

docker build -t nodejs-build .

The tagged image can be used in a generic way like this:

# Automatically build yarn dependencies
FROM nodejs-build as nodedeps

# Build the final container image
FROM scratch

# Copy the generated app.js from yarn run dist
COPY --from=nodedeps /app/app.js .

# Rest of the Dockerfile
...

So far so good: we have built a pretty lean Docker image that discards the node_modules folder and only keeps the final artifact – for example, a set of JS bundles from a React application.

It’s also very fast to build! This is because each individual step is cleverly cached by Docker during the build process – that is, as long as none of the steps or the files they use have changed.

And that’s exactly where the problem is: whenever the package.json or yarn.lock files change, Docker will trash all the files in the node_modules directory as well as the cached yarn packages, and will start downloading, linking, and building every single dependency from scratch.

That’s far from ideal, as it takes significant time to rebuild all dependencies. What if we could make a change to the process so that changes to those files do not bust the yarn cache? It turns out we can!

Enter docker-build-cacher

We have built a slim utility that helps overcome the problem by providing a way to build the Dockerfile and cache all of the intermediate stages. On subsequent builds, it will make sure that the static cache files that were generated during previous builds will also be present.

The effect it has should be obvious: your builds will be consistently fast, at the cost of a bit of extra disk space.

Building and caching is done in separate steps. The first step is a replacement for the docker build command and the second step is the cache persisting phase.

export APP_NAME=fancyapp
export GIT_BRANCH=master # Used to internally tag cache artifacts
export DOCKER_TAG=fancyapp:latest

docker-build-cacher build # This will build the docker file
docker-build-cacher cache # This will cache each of the stage results separately

How It Works

The docker-build-cacher tool works by parsing the Dockerfile and extracting COPY or ADD instructions nested inside ONBUILD for each of the stages found in the file.

It will compare the source files present in such COPY or ADD instructions to check for changes. If it detects changes, it rewrites the Dockerfile on the fly, such that FROM directives in each of the stages use the locally cached images instead of the original base image.

The effect this FROM swap has is that disk state for the image is preserved between builds.
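To make that concrete, here is a minimal sketch of a second run, assuming the same fancyapp setup from above and that only application source files (not package.json or yarn.lock) have changed since the previous cache step:

export APP_NAME=fancyapp
export GIT_BRANCH=master          # Cache artifacts are tagged per branch
export DOCKER_TAG=fancyapp:latest

docker-build-cacher build   # Reuses the locally cached nodedeps stage, so yarn does not rebuild node_modules from scratch
docker-build-cacher cache   # Refreshes the cached stages for the next build

If package.json or yarn.lock do change, yarn runs again, but because the static cache files from the previous build are still present, it should only need to fetch and build the difference.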

docker-build-cacher is available now on GitHub under the BSD 3-Clause License.

Make sure to grab the binary files from the releases page.


If you think these kinds of things are interesting, consider working with us as a Software Engineer at SeatGeek. Or, if backend development isn’t your thing, we have other openings in engineering and beyond!

Managing Consul and Vault: Introducing Hashi-helper

Disaster Recovery and Configuration Management for Consul and Vault

This post is the first of a bonus series on tooling for the Hashi-stack - Consul, Nomad, Vault. We also recommend our previous series on using Vault in production.


Configuration Management for your Configuration

In our initial Vault rollout, one pain point we quickly came across was managing Consul and Vault configuration. We use both Hashicorp tools for managing secrets and access control across our entire infrastructure, and knowing what configuration was set up where in each cluster is quite critical. On top of this, disaster recovery quickly became an issue we knew we needed to tackle before a broader rollout.

One of our Systems Engineers started looking at our needs for properly managing Consul and Vault configuration, and came up with a wonderful workflow through the use of a tool we like to call hashi-helper. We use hashi-helper internally to manage multiple clusters in different environments via a git repository that contains our canonical configuration. It is now pretty trivial for us to:

  • Stand up a Vault cluster using our normal provisioning toolchain.
  • Unseal Vault with a GPG key via the normal Vault tooling, or with Keybase via hashi-helper vault-unseal-keybase.
  • Provision mounts, policies, and secrets using hashi-helper vault-push-all.
  • Provision custom registered Consul services via hashi-helper consul-push-services (see the sketch after this list).
  • Manage all of our configuration via either blackbox gpg-encrypted files or AWS KMS encryption through tooling such as sm.
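As a rough sketch of what running those commands looks like (the exact flags and environment selection are documented in hashi-helper itself, and the Vault address below is hypothetical):

export VAULT_ADDR="https://vault.service.consul:8200"

hashi-helper vault-unseal-keybase    # Unseal Vault using a Keybase-encrypted unseal key
hashi-helper vault-push-all          # Provision mounts, policies, and secrets from the config repository
hashi-helper consul-push-services    # Register the static Consul services from the config repository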

Example Workflow

For those curious about what a typical workflow might look like, the following directory structure may be suitable for a typical organization with a Consul/Vault cluster per environment:

  • /${env}/apps/${app}.hcl (encrypted) Vault secrets for an application in a specific environment.
  • /${env}/auth/${name}.hcl (encrypted) Vault auth backends for a specific environment ${env}.
  • /${env}/consul_services/${type}.hcl (cleartext) List of static Consul services that should be made available in a specific environment ${env}.
  • /${env}/databases/${name}/_mount.hcl (encrypted) Vault secret backend configuration for a specific mount ${name} in ${env}.
  • /${env}/databases/${name}/*.hcl (cleartext) Vault secret backend configuration for a specific Vault role belonging to mount ${name} in ${env}.

Here is an example for managing Vault Secrets:

# environment name must match the directory name
environment "production" {

  # application name must match the file name
  application "XXXX" {

    # Vault policy granting any user with policy XXXX-read-only read+list access to all secrets
    policy "XXXX-read-only" {
      path "secret/XXXX/*" {
        capabilities = ["read", "list"]
      }
    }

    # a sample secret; it will be written to secret/XXXX/API_URL in Vault
    secret "API_URL" {
      value = "http://localhost:8181"
    }
  }
}
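Once this file is pushed with hashi-helper vault-push-all, the secret should be readable with the standard Vault CLI, assuming you are logged in with a token carrying the XXXX-read-only policy (or broader access):

vault read secret/XXXX/API_URL    # returns value=http://localhost:8181 for the example above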

hashi-helper is available now on GitHub under the BSD 3-Clause License.


If you think these kinds of things are interesting, consider working with us as a Software Engineer at SeatGeek. Or, if backend development isn’t your thing, we have other openings in engineering and beyond!

TopTix Joins SeatGeek!

We believe that modern technology can be a force for good in live entertainment. That’s why we created SeatGeek Open – a platform which enables teams, artists, and fans to buy and sell tickets across the open web. Openness means harnessing the power of the internet to create better experiences.

In order to launch SeatGeek Open last August, we needed to find a partner that shared our vision, and whose powerful box office technology would enable a true API-driven entertainment platform. We did an exhaustive worldwide search, but in the end the selection of Israel-based TopTix was remarkably clear and unequivocal. They brought unprecedented technology and incredible talent to SeatGeek Open.

Today, we’re over-the-moon excited to announce that TopTix is joining SeatGeek.

TopTix was started in 2000 in Karmiel, Israel by two remarkable entrepreneurs, Eli Dagan and Yehuda Yuval. Since then, they’ve grown to 115 employees across four continents. As we’ve gotten to know the people at TopTix, we’ve been struck by how similar they are to the people at SeatGeek – bright, humble, and driven by a core belief in the transformative power of technology. We couldn’t be more stoked that this extraordinary group is now part of our team.

TopTix’s primary ticketing platform, called SRO, is by far the strongest and most modern backend ticketing platform on earth. It serves more than 500 venues, and processes over 80 million tickets a year in 16 countries across the globe. Current TopTix clients range from museums and theaters to festivals and sports teams, including well-known organizations such as the Ravinia Festival, the Royal Dutch Football Association, West End theaters, and several English Premier League clubs.

We’re excited to continue to support all current TopTix clients from TopTix’s seven global offices. The TopTix engineering team will remain 100% focused on SRO, working to further extend the platform with all of SeatGeek’s resources now behind them.

We believe the future of ticketing is open. Openness means enabling teams and venues to fill their venues. Most importantly, it means making fans happier.

Joining forces with TopTix will allow us to make SeatGeek Open an even more powerful platform and to create more great experiences for fans. SeatGeek and TopTix have already helped Sporting Kansas City, our first client, sell more tickets and reach new fans, while giving supporters easy access to the games they love. Now that we’re officially operating as one, we’re feverishly excited about the potential this unlocks and our future together as a single united company.

Using Vault: A Practical Guide

This post is the second of a two-part series on using Vault in production. Both posts are slightly redacted forms of internal documentation. This post covers day-to-day usage of Vault, while the previous post covered our specific workflow.

Please note that some of the referenced tooling is not currently publicly available.

Not all of our practices will apply to your situation, and there are certainly cases where our setup may be suboptimal for your environment.


Using Vault

This is the tl;dr you were looking for…

For Developers

Developers should expect credentials to live in environment variables that can be loaded into an app when needed. These credential names are specified within an app.json manifest file, and Ops should be contacted to place their values in Vault. The deploy process currently uses the Vault API to retrieve the appropriate credentials for a service, transparently to the development staff.

The basic flow for reading or writing credentials is the following:

  1. Login to Vault and receive a token
  2. Make a request with the token to read or write

Schema

Secrets are stored according to the following convention:

secret/ENVIRONMENT/APP/KEY value=VALUE
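For example, a database URL for a hypothetical app called myapp running in production would be written and read like this (both the app name and key name are made up for illustration):

vault write secret/production/myapp/DATABASE_URL value=postgres://user:pass@db.internal:5432/myapp
vault read secret/production/myapp/DATABASE_URL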

Login

Note: Developer Vault logins require a GitHub Access Token (https://github.com/blog/1509-personal-api-tokens)

CLI:

vault auth -method=github token=GITHUB_ACCESS_TOKEN

Upon success, a Vault token will be stored at $HOME/.vault-token.

HTTP API:

curl \
  -L http://vault.service.consul:8200/v1/auth/github/login \
  -d '{ "token": "GITHUB_ACCESS_TOKEN" }'

List

CLI:

vault list secret/path/to/bucket

This will use the token at $HOME/.vault-token if it exists. Otherwise, you will need to login via vault auth first.

HTTP API:

curl \
    -H "X-Vault-Token: VAULT_TOKEN" \
    -X GET \
    -L http://vault.service.consul:8200/v1/secret/path/to/bucket?list=true

Read

CLI:

vault read secret/path/to/key

This will use the token at $HOME/.vault-token if it exists. Otherwise, you will need to login via vault auth first.

HTTP API:

curl \
    -H "X-Vault-Token: VAULT_TOKEN" \
    -X GET \
    -L http://vault.service.consul:8200/v1/secret/path/to/key

Write

CLI:

vault write secret/path/to/key \
    value=THISISASECRET

This will use the token at $HOME/.vault-token if it exists. Otherwise, you will need to login via vault auth first.

HTTP API:

curl \
    -H "X-Vault-Token: VAULT_TOKEN" \
    -H "Content-Type: application/json" \
    -X POST \
    -d '{"value":"THISISASECRET"}' \
    -L http://vault.service.consul:8200/v1/secret/path/to/key

For Admins

Initialization

Required reading: https://www.vaultproject.io/intro/getting-started/deploy.html

vault init is used to bootstrap a new Vault cluster. This generates a number of keys and requires a majority threshold of these keys in order to unseal Vault (more on unsealing below).

For SeatGeek’s Vault cluster, vault init has been run as follows:

vault init \
    -key-shares=5 \
    -key-threshold=3 \
    -pgp-keys="keybase:OPS1,keybase:OPS2,keybase:OPS3,keybase:OPS4,keybase:OPS5"

Note: Local public key files can also be submitted for the pgp-keys option

Initializing Vault this way leverages its support for authorizing users to unseal Vault with their private GPG keys. This method was chosen because we were already using blackbox to encrypt secrets within certain repositories.

When Vault is initialized, an unseal token is printed for each PGP key specified. The tokens appear in the same order as the PGP keys, and each can only be decrypted by the corresponding private key. The unseal tokens should be securely distributed to the corresponding operations engineers and stored in a secure fashion. Losing enough keys that fewer than the threshold remain will make it impossible to unseal the cluster.

Vault should only need to be reinitialized if all of its data is lost, which for SeatGeek would mean a loss of the Consul cluster.

Unsealing

Vault boots up in a sealed state, and in this state no requests are answered. Each machine within a Vault cluster can be in a sealed or active state, and all must be unsealed before answering any requests. There is also a standby state, in which the machine is unsealed but not primary, and is ready for failover if the currently active primary dies.

A Vault machine can be unsealed via the following command:

export VAULT_ADDR="http://ip.for.vault.instance:8200"
echo "$VAULT_UNSEAL_KEY" | base64 -D | keybase pgp decrypt | xargs vault unseal

The VAULT_UNSEAL_KEY is specific to each user who was specified in the vault init command. All unseal keys were distributed at the time of initialization.

Note: This requires having the vault binary installed locally.

Root Token

An initial root token is created when the Vault Cluster is initialized. Other root tokens are created from this token, and as such, if a root token is needed it must be created by an existing holder of a root token.

A new root token can be created from an existing root token via the following command:

vault token-create -metadata "name=ADMIN_NAME" -display-name="ADMIN_USER_NAME" -orphan -no-default-policy

Note: You must be logged in with a root token in order to run this command

In the emergency case that a new root token needs to be created, the following command can be run:

vault generate-root

This operation requires a majority of unseal key holders to execute.

Note: At the time of writing, Vault 0.6.2 has deprecated this workflow surrounding root tokens, and our usage is subject to change in the future.

Provisioning a New Service

When provisioning a new service, secrets can simply be written to the appropriate bucket (secret/ENVIRONMENT/APP_NAME/KEY value=VALUE). Everything under secret/ is a “key” and all necessary paths will be created.
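For example, provisioning a hypothetical new service called reports in the production environment might look like this (names and values are purely illustrative):

vault write secret/production/reports/DATABASE_URL value=postgres://user:pass@db.internal:5432/reports
vault write secret/production/reports/API_KEY value=THISISASECRET

vault list secret/production/reports    # verify both keys are present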

Whitelisting a New Service

On first boot of infrastructure, the Jenkins Whitelist deploy job, a dependency of the Jenkins Configure job, will be run. If the new application does not already exist, it will be created and its corresponding IAM Role will be whitelisted. This job also ensures all other IAM Role whitelists are up to date in Vault.

To ensure that Jenkins has the correct permissions, a special role allowing it access to write auth and policy documents should be written to Vault. The following can be used to create the policy, which is stored with all other custom vault policies:

vault policy-write ENV-jenkins data/vault/policies/env-jenkins-policy.hcl

Policies must be audited on a regular basis, consistent with all other internal auditing processes.

When used in contexts that do not easily support passing roles, you can create a vault token for this or any policy. The following creates a renewable token that is valid for 60 seconds:

TOKEN_ID="$(uuidgen)"
vault token-create -policy="ENV-jenkins" -display-name="ENV-jenkins" -id="$TOKEN_ID" -ttl=60s

Rekey and Key Rotation

Vault’s usage of unseal keys is based on Shamir’s secret sharing algorithm.

https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing

The vault rekey command allows for the recreation of unseal keys as well as changing the number of key shares and key threshold. This is useful for adding or removing Vault admins.

The vault rotate command is used to change the encryption key used by Vault. This does not require anything other than a root token. Vault will continue to stay online and responsive during a rotate operation.
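As a sketch of both operations (the key-holder names are hypothetical, and the flags reflect the Vault 0.6.x-era CLI this guide targets, so double-check them against your version):

# Start a rekey that moves to 6 key shares with a threshold of 3
vault rekey -init -key-shares=6 -key-threshold=3 \
    -pgp-keys="keybase:OPS1,keybase:OPS2,keybase:OPS3,keybase:OPS4,keybase:OPS5,keybase:OPS6"

# Each current unseal key holder then submits an existing unseal key
vault rekey UNSEAL_KEY

# Rotate the underlying encryption key (requires a root token; Vault stays online)
vault rotate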

Disaster Response

In the case of an emergency, Vault should be sealed immediately via:

vault seal

This will prevent any actions or requests from being performed against the Vault server, and gives time to investigate the cause of the issue and determine an appropriate solution.

  1. A secret stored in Vault is leaked: a new secret should be generated and replaced in Vault, followed by a key rotation.
  2. Vault user credentials are leaked: the user credentials should be revoked and a key rotation should be performed.
  3. Vault unseal keys are leaked: a rekey should be performed.

If you think these kinds of things are interesting, consider working with us as a Software Engineer at SeatGeek. Or, if backend development isn’t your thing, we have other openings in engineering and beyond!

Friendly AWS Infrastructure Discovery with Haldane

SeatGeek open sourced seatgeek/haldane
A friendly HTTP interface to the AWS API

A common task when working in server infrastructure is to take inventory of what is available. This can be useful for figuring out what is out of date, when certain pieces were introduced to your environment, or even taking stock of what items might be hidden that you otherwise were not aware of.

For AWS users, there are a dozen ways to inspect your infrastructure:

  • Bare AWS APIs
  • AWS cli tools
  • Internal service discovery platforms
  • External dashboards such as Netflix’s Spinnaker

Early on in SeatGeek’s history, we relied heavily on the AWS API to figure out what ec2 instances were available in our infrastructure for the purposes of service discovery. As we grew in both traffic and footprint, this became unwieldy, and suffered from rate-limiting issues, retry bugs, and general auth errors across the various utilities that interacted with the AWS API. Thus was born haldane, a friendly http interface to the AWS API.

SeatGeek uses haldane to expose a simple HTTP interface to the AWS API that can be easily integrated into our toolchain. Here is an example haldane query:

curl "http://localhost/nodes?query=Api&tags.environment=production"

The following are a few of the resources exposed via haldane:

  • /amis: Corresponding to AWS EC2 AMIs
  • /instances: Corresponding to AWS EC2 Instances
  • /instance-types: Corresponding to available AWS EC2 Instance Types
  • /rds-instances: Corresponding to AWS RDS Instances
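As an illustration, and assuming /instances accepts the same tag filters as the /nodes query above (an assumption on our part, so check the haldane README), listing production EC2 instances might look like:

curl "http://localhost/instances?tags.environment=production"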

Under the hood, haldane queries the AWS API using Boto3, and caches the resultset in memory for a configurable amount of time. This ensures relatively fresh data from AWS, while reducing the probability of hitting the AWS rate-limit.

In the time since we initially developed haldane, SeatGeek has seen explosive growth, and it is no longer a solution we can depend upon for server discovery at scale. That said, it can be very useful for any of the following use-cases:

  • Static infrastructure
  • Generating CSV reports
  • Querying for outdated resources
  • Inspecting the state of small clusters
  • Retrieving the IP address of a random instance in a cluster

Internally, we’ve built a few such tools to support infrastructure spelunking, and while we may not rely on it as heavily as we used to, we hope that others can find utility in using haldane.

You can find haldane on GitHub, under the BSD 3-Clause License.


If you think these kinds of things are interesting, consider working with us as an Infrastructure Engineer at SeatGeek. Or, if infrastructure isn’t your thing, we have other openings in engineering and beyond!

The Ugly Truth: Why Legacy Ticketing Companies Love Fraud

by Jeff Ianello, EVP, SeatGeek Open Partnerships at SeatGeek. After more than a decade spent working on the team and league side of sports business, Jeff joined SeatGeek to help build the future of live event ticketing.

It’s five minutes until tipoff of a sold out game when my phone rings. If you’ve ever worked in a box office, you know that a phone call right before tipoff is never a good thing. If you haven’t, I can tell you that it almost always means there’s a problem.

All too often, that problem is that fans are walking up to customer service to tell them that their tickets couldn’t scan. They purchased from one of the largest ticket marketplaces in the world, but one that wasn’t partnered with our team. We explored further and found that these tickets were double sold. In some cases, the seller intentionally defrauded buyers by pushing the same ticket to multiple markets. Other times, it was merely an accident or a misunderstanding of marketplace policies.

Either way, the buyers got the short end of the stick. We received angry emails the next day from the fans who couldn’t get in. Twitter was ablaze. The fans’ feelings about our club, of which they are lifelong supporters, were tarnished. On what should have been one of the most exciting nights of the year, I was left frustrated as an executive.

When I led ticketing for the Phoenix Suns, this was an altogether too familiar experience. In a 19,000 person building, only a handful of double sold tickets may occur each game, but when it happens, a fan’s night can be ruined. Even when marketplaces such as SeatGeek and StubHub provide replacements, some of the magic of the experience is lost and both the team and marketplace’s brands can be damaged. More importantly, the prospect of a bad experience looms the next time that fan thinks about buying a ticket. Will this be the one percent chance where the tickets won’t work? This friction causes some fans not to buy tickets. When that happens, we all lose.

After the Suns, I joined the NBA league office, where I worked closely with every team’s ticketing department. The NBA, like other leagues, has multiple primary ticketing companies. Each primary ticketing company has its own “preferred” secondary site. Spectra and AXS partner with StubHub, Ticketmaster owns their own resale, and Veritix uses Flash Seats. Each “preferred” reseller carries “verified” tickets. When a customer buys through one of these preferred platforms, a new barcode is created. The old barcode is no longer valid, while the new one guarantees entry.

What I didn’t learn, but what I realized when I came to SeatGeek, is that none of this has to happen in an age of APIs. APIs are the glue that hold networks together; the way that information systems talk to one another to perform functions such as displaying airfares on travel sites, processing online credit card payments, or in this case, re-issuing a barcode when someone buys a resale ticket. All the legacy ticketing company has to do is expose this functionality to more than one party.

This led me to a realization even more insidious. Ticketing companies want fraud. It is only the existence of unverified tickets that makes verified tickets a valuable asset. Barcode verification can help prop up a ticketing company’s exchange or be an asset that can be sold for millions of dollars to the highest bidder. That status quo benefits ticketing companies at the expense of fans, marketplaces, and teams. They can’t keep that secret any longer.

As I talk to colleagues around the industry, there’s a growing frustration that barcode validation is being held hostage as a way to extract money. I expect in the next three years, the pendulum will swing the other way. Leagues will mandate that all barcodes must be re-issued across every site. Fans are the lifeblood of teams. Why would a team or league want their fans to buy fraudulent tickets when it is completely avoidable?

The time has come for teams and leagues to stand up for fans and demand that primary ticketing companies give up the fabricated spectre of unverified tickets. Instead, by making use of readily available technology to verify tickets for fans – regardless of where the purchase was made – we can all help ensure that when the lights go down, or the team steps out on the court, fans who’ve put their dollar down are there to enjoy the show.

Announcing Pano, an Immersive Venue Experience

Great technology at SeatGeek helps us give fans as much information as possible before making a purchase. Features such as Deal Score, which ranks tickets by quality in addition to price, and our interactive maps, which are detailed beyond any others in the industry, make the ticket-buying experience easy by giving customers a full understanding of what they are purchasing upfront. Today we’re thrilled to launch Pano, a new product feature that offers the absolute best way for fans to check out what their view at an event will look like before purchasing a ticket.

Pano is an immersive stadium experience that allows fans to digitally interact with and explore a venue. Built in partnership with our first primary ticketing client, Sporting Kansas City, Pano allows fans to see the view of the SKC field at Children’s Mercy Park from every section in the stadium. It offers full 360-degree views from each vantage point and the ability to click around the venue to “fly over” to a different area of the stadium and compare another view.

Pano

To build Pano, our team took photos from hundreds of locations around Children’s Mercy Park and used them to create a digital model of the venue. From those flat images, software helped us understand the depth of the images in a way that would truly represent each view and be most helpful to fans. Essentially, we created a three-dimensional world using photos of the stadium, which allows the customer to easily navigate and compare views from all across the venue.

Children's Mercy Park

The result is a much more immersive, and more realistic, experience than you see anywhere else on the market. It has the power to take all of the guesswork out of the process of buying a ticket, and is a powerful tool that enables fans to truly experience what they’re buying before they purchase.

This past summer, we launched SeatGeek Open, our vision for the future of ticketing. Like everything we do at SeatGeek, each piece of Open that’s being built has the fan experience in mind, from barcode verification that can eliminate fraud, to elegant APIs allowing for smooth ticket-buying integrations within fans’ favorite websites and apps. Expect to see additional venues released under Pano for future SeatGeek Open clients.

Stay tuned for a more technical look at the behind-the-scenes development of Pano.

SeatGeek is a 2017 Best Place to Work

Making SeatGeek a great place to work is serious business for our team, and it’s one of the things we’re most proud of as a company. We’re thrilled that Glassdoor has recognized us as one of the top five Best Places to Work in 2017.

The best thing about being included on the Glassdoor list may be that unlike some other, similar awards, there was no self-nomination process. Instead, it’s entirely based on feedback SeatGeek employees have voluntarily and anonymously shared as company reviews on Glassdoor over the past year.

Being part of the SeatGeek team has its perks. We frequently hold team outings and organize an annual retreat, our kitchen is fully stocked with more snacks and caffeine than most humans can handle, and a monthly ticket stipend helps employees attend the best live events New York City has to offer.

But while those perks are fantastic and make working at SeatGeek fun, what we’re really proud of - and believe is more unique - is that SeatGeek is an amazing place to contribute, grow, and build something special. There’s a belief here in the power of live entertainment to improve lives and make people happier, and a genuine passion for building a product that enables great experiences.

As the team continues to grow, we’re fortunate to have found people that continue to embody SeatGeek’s company values. Just as our product encourages transparency for our users – in the form of things like Deal Score, which labels listings as good values as well as bad ones – we value and practice transparency internally, by doing things such as presenting board meeting decks to employees, regularly requesting feedback on company performance, and communicating company news early and often.

While we operate in ticketing, SeatGeek is first and foremost a technology company, and our work reflects that. People at SeatGeek love to build things, and we leverage technology to do so faster and better. While our competitors see technology as a disturbance and a challenge to overcome, we embrace it as our competitive edge. An obsession with quality - every pixel, every line of copy, every customer interaction - makes the difference between good and amazing.

The people at SeatGeek that bring these values to life are our most valuable asset, and are what we hear about most often when employees talk about their favorite parts of working here. Feel free to browse examples of the feedback those employees shared on Glassdoor about what it’s like to work at SeatGeek.

SeatGeek Team


If this sounds interesting to you, come work with us at SeatGeek! We have a number of roles open across engineering, marketing, business development, and more.