ChairNerd

Code, Design & Growth at SeatGeek

React Infinite: A Browser-ready Efficient Scrolling Container Based on UITableView

We’re growing more every day, adding new brokers to our comprehensive list of ticket sources, and expanding our list of event tickets. With this, and our continuing focus on cross-event search, we’re showing more ticket listings to more people than ever before.

The default DOM scrolling implementation is, unfortunately, inefficient. Tens of thousands of DOM nodes that are out of the view of the user are left in the DOM. For cross-event comparisons in particular, this quickly makes the performance of our ticket listings unacceptable.

React Infinite solves this with an approach popularized by iOS’s UITableView. Only DOM nodes that are in view or about to come into view are rendered in full. This makes scrolling performance constant throughout the length of the entire list regardless of the number of items added.
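To make that concrete, here is a rough sketch of the windowing math behind the claim (assumed for illustration, not React Infinite’s actual source): with fixed-height rows, the set of rows worth rendering depends only on the scroll position and the container size, never on the total list length.

// A simplified sketch of the windowing idea: with fixed-height rows, only the
// rows intersecting the viewport, plus a small buffer, need real DOM nodes.
function visibleRange(scrollTop, containerHeight, elementHeight, totalCount, buffer) {
    var first = Math.max(0, Math.floor(scrollTop / elementHeight) - buffer);
    var last = Math.min(totalCount - 1,
        Math.ceil((scrollTop + containerHeight) / elementHeight) + buffer);
    return {first: first, last: last};
}

// e.g. 40px rows in a 250px container scrolled to 4000px: only items
// 97 through 110 of a 100,000-item list need to exist in the DOM.
visibleRange(4000, 250, 40, 100000, 3);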

We’re using React Infinite in production on our event map pages right now; because we only have pages for upcoming events, a permanent link here would quickly go stale. To see it in action, head to one of our team pages for the New York Giants, the New York Mets, or the New York Knicks, and click on the green button for an event to open the Omnibox.

To get you started, here is an example that implements an infinite scrolling list with a simulated loading delay of 2.5 seconds:

And the code to do it:

var ListItem = React.createClass({
    render: function() {
        return <div className="infinite-list-item">
        List Item {this.props.index}
        </div>;
    }
});

var InfiniteList = React.createClass({
    getInitialState: function() {
        return {
            elements: this.buildElements(0, 20),
            isInfiniteLoading: false
        }
    },

    buildElements: function(start, end) {
        var elements = [];
        for (var i = start; i < end; i++) {
            elements.push(<ListItem key={i} index={i}/>);
        }
        return elements;
    },

    handleInfiniteLoad: function() {
        var that = this;
        this.setState({
            isInfiniteLoading: true
        });
        setTimeout(function() {
            var elemLength = that.state.elements.length,
                newElements = that.buildElements(elemLength, elemLength + 1000);
            that.setState({
                isInfiniteLoading: false,
                elements: that.state.elements.concat(newElements)
            });
        }, 2500);
    },

    elementInfiniteLoad: function() {
        return <div className="infinite-list-item">
            Loading...
        </div>;
    },

    render: function() {
        return <Infinite elementHeight={40}
                         containerHeight={250}
                         infiniteLoadingBeginBottomOffset={200}
                         onInfiniteLoad={this.handleInfiniteLoad}
                         loadingSpinnerDelegate={this.elementInfiniteLoad()}
                         isInfiniteLoading={this.state.isInfiniteLoading}
                         >
            {this.state.elements}
        </Infinite>;
    }
});

React.renderComponent(<InfiniteList/>,
        document.getElementById('react-example-one'));

For the complete documentation, head over to the GitHub repo, or install it from npm with npm install react-infinite or from Bower with bower install react-infinite. We hope you’ll be able to use React Infinite in creating a better, faster, and smoother web.

The Next Five Years

We started SeatGeek nearly five years ago with the goal of helping people enjoy more live entertainment by building great software.

Our goal hasn’t changed, but its scope has. We’ve gone from a team of two to a team of forty. From the desktop web to iOS, Android and mobile web. And from a handful of active users (hi Mom!) to millions.

We think we’re onto something big. And we’ve decided to partner with some exceptional folks to get SeatGeek moving even faster. This past week we closed a $35M Series B round led by Accel Partners, alongside Causeway Media Partners, Mousse Partners, and a number of other great investors (full list here).

From going hoarse screaming for your favorite team, to dancing along with your favorite band, live entertainment is a deeply personal, aesthetic experience. We think the software that enables those moments should be too. We are a technology company. Everyone at SeatGeek is driven to create something elegant, intuitive and useful. This financing gives us one of the tools we need to do that more quickly and for more people than ever before.

The last five years have been a blast. The next five will be even better. We’re going to remain focused on building amazing software that helps people have fun. And we’re excited to partner with Accel and others to help us make it happen.

High Performance Map Interactions Using HTML5 Canvas

Before and after

Last week, you may have noticed that we released a facelift for our interactive maps. Our Deal Score markers have finally been brought up to 2014 design standards to match the Omnibox. However, what may not be as apparent is that our maps are now between 10 and 100 times faster, depending on the device.

Background

This blog post from March gives a good overview of how our maps used to work. Our maps consisted of three different layers: an image tile layer, an SVG layer, and a Leaflet marker layer.

Old style

This is how our map used to look. The actual stadium is an image tile, the blue section outline is an SVG layer, and the green dot is a Leaflet marker, an HTML element containing an image. There are a couple of drawbacks to this approach:

Performance

While Leaflet markers work well for maps with a small number of markers, we were pushing the limits of how many markers could be drawn on the map. At a row-level zoom, we can have thousands of markers on the screen at a given time. Since each marker is an individual DOM element, the browser must move around thousands of DOM elements at the same time when panning and zooming. This meant slow performance on even the fastest of computers and even worse performance on mobile.

Code Complexity

With the addition of section and row shape interactions, our code became incredibly complex. We were listening to mouse events coming from the tile layer, the SVG layer, and the marker layer. This resulted in a mess of code trying to handle every corner case, e.g. we receive a mouseout event from a marker and a mouseover event from the SVG layer.

Marker Clustering

A common way to handle large numbers of markers is to use clustering, such as the Leaflet markercluster plugin.

Marker Cluster

This is an effective way to reduce the number of DOM elements on screen. Unfortunately, clustering like this does not work for our use case. In our maps, the markers need to be specific to either a row or a section. Marker clusters, which are based only on marker positions, could result in some unintuitive ticket groupings, e.g. a VIP box and the front row of an upper level section. Therefore, we needed to come up with a solution that would maintain the section and row level detail views, while achieving the same performance as marker clusters.

HTML5 Canvas

A few months ago, we made the decision to drop support for Internet Explorer 8. In addition to making every engineer here very happy, this also opened up the possibility of using canvas for our map markers, something we had been looking forward to for a long time.

The HTML5 canvas element is basically a low-level drawing region. It supports basic drawing operations, but does not have the concept of a scene graph or event handling for anything drawn to it. Most importantly for us, modern browsers are incredibly fast at drawing to canvas elements, often using hardware acceleration.
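As a tiny illustration of what “low-level” means here, drawing one of our circular markers is just a handful of path commands on the 2D context; once drawn it is only pixels, so any later change means clearing and redrawing (the element id below is hypothetical):

// Drawing a circular marker straight onto a canvas. There is no retained
// scene graph: after fill(), the circle is just pixels in the bitmap.
var canvas = document.getElementById('map-tile');   // hypothetical <canvas> element
var ctx = canvas.getContext('2d');

ctx.beginPath();
ctx.arc(50, 50, 6, 0, 2 * Math.PI);   // center (50, 50), 6px radius
ctx.fillStyle = '#2ecc71';
ctx.fill();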

Canvas Tiles

Our plan was to move from using SVG section outlines and Leaflet markers to using tiled canvas elements. This means that instead of forcing the browser to move thousands of DOM elements when panning and zooming the map, we can draw the markers to the canvas tiles once per zoom level and move the canvas tiles themselves around. Browsers are much better at moving 16 elements around on the screen than 2,000.
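In Leaflet 0.7 (the version current at the time) this approach maps onto L.TileLayer.Canvas, which hands you a blank canvas for each tile to draw into. The sketch below assumes an existing map instance and a hypothetical drawMarkersForTile helper standing in for our real marker code:

// A sketch of the tiling approach using Leaflet 0.7's L.TileLayer.Canvas.
var markerTiles = new L.TileLayer.Canvas({tileSize: 256});

markerTiles.drawTile = function (canvas, tilePoint, zoom) {
    var ctx = canvas.getContext('2d');
    // Each tile is drawn once per zoom level; during pan and zoom, Leaflet
    // just translates the finished 256x256 canvas elements around.
    drawMarkersForTile(ctx, tilePoint, zoom);
};

markerTiles.addTo(map);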

Here is what the canvas tiles look like (with debugging on) at our lowest zoom level:

Canvas Debugging

And at our highest zoom level:

Canvas Debugging Zoomed

This is by no means a new idea. Leaflet itself supports basic canvas tiling and some cool things have been done with it. However, using canvas tiles for our purposes presents some very interesting challenges.

Hit Testing

By consolidating the SVG and marker layers into a single canvas tile layer, we were able to greatly consolidate our mouse interaction code. The bounding boxes of the section and row shapes as well as the markers were put into our favorite spatial data structure, the R-Tree, for fast lookup. As markers sometimes extend past the edge of the shape they are in, we first check for marker intersect and then fall back to shape intersect.
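A rough sketch of that lookup order, assuming a hypothetical rtree whose search() returns everything whose bounding box contains the point, plus hypothetical markerContains and pointInPolygon precise tests:

// Hit testing against the consolidated layer: markers first (they can
// overhang their shape), then the section/row polygons.
function hitTest(point) {
    var candidates = rtree.search({x: point.x, y: point.y, w: 0, h: 0});
    var i;

    for (i = 0; i < candidates.length; i++) {
        if (candidates[i].isMarker && markerContains(candidates[i], point)) {
            return candidates[i];
        }
    }

    for (i = 0; i < candidates.length; i++) {
        if (!candidates[i].isMarker && pointInPolygon(point, candidates[i].polygon)) {
            return candidates[i];
        }
    }
    return null;
}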

Drawing

In order to maintain a high frame rate, we need to make the drawing step as fast as possible. Every time Leaflet requests a tile to be drawn, we calculate the bounding box it covers on the map. Then, we look up what markers fall within that bounding box plus a small buffer, to avoid markers right next to the edge of a tile being clipped. We then iterate through the markers and draw them to the tile. We perform a similar process for drawing hovered and selected shape outlines.
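Sketched out, the draw step looks roughly like this (tileBounds, rtree.search, and drawMarker are hypothetical stand-ins for the real pieces):

// Per-tile drawing: find the markers inside this tile's (slightly padded)
// bounding box and paint them onto the tile's canvas.
var TILE_BUFFER = 20;   // px of padding so edge-straddling markers aren't clipped

function drawMarkerTile(canvas, tilePoint, zoom) {
    var ctx = canvas.getContext('2d');
    var bounds = tileBounds(tilePoint, zoom);   // map-space box this tile covers

    var padded = {
        x: bounds.x - TILE_BUFFER, y: bounds.y - TILE_BUFFER,
        w: bounds.w + 2 * TILE_BUFFER, h: bounds.h + 2 * TILE_BUFFER
    };

    rtree.search(padded).forEach(function (marker) {
        drawMarker(ctx, marker, bounds);        // draw relative to the tile origin
    });
}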

Tile Redrawing

There are a couple of events that cause tiles to need to be drawn or redrawn. On zoom, a new set of canvas tiles is requested and drawn at the correct scale. When a shape is hovered or selected, we also must redraw the tile or tiles that contain it. In order to minimize the number of tiles redrawn, we keep track of a redraw bounding box. Between each redraw, we update the redraw bounding box to contain the shapes that need to be drawn or cleared. Then, when the redraw function gets called, we redraw only the tiles that intersect the redraw bounding box. Now, we could clear and redraw only parts of each tile, but it turned out we got the performance we were looking for without introducing the extra code complexity of sub-tile redrawing.
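A minimal sketch of that bookkeeping (union, tilesIntersecting, and redrawTile are hypothetical helpers):

// Accumulate one dirty box between redraws, then repaint only the tiles it touches.
var dirtyBox = null;

function markDirty(shapeBounds) {
    dirtyBox = dirtyBox ? union(dirtyBox, shapeBounds) : shapeBounds;
}

function redraw() {
    if (!dirtyBox) return;
    tilesIntersecting(dirtyBox).forEach(function (tile) {
        redrawTile(tile);   // clear and repaint the whole tile, no sub-tile redraws
    });
    dirtyBox = null;
}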

Here you can see how the canvas tiles are redrawn. Each redraw pass tints the tiles it updated with a single debug color.

Canvas Redraw

And on mobile.

Canvas Redraw Mobile

Buffered Marker Drawing

All was going great until we decided the markers needed a slight drop shadow to help visually separate them from the underlying map. Drawing drop shadows in canvas is notoriously slow. However, drawing images or other canvas elements to a canvas element is quite fast. Therefore, while we are waiting for our tickets to load, we create small canvas elements for every marker color (and at two different sizes, since we enlarge the marker on hover). Then, when we need to draw the markers in the canvas tiles, we can pull from these buffered marker canvases. This way, we only incur the cost of shadow blur once and use the comparatively fast drawImage when performance counts.
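Roughly, the buffering looks like this (the sizes and colors are illustrative, not our production values):

// Pre-render each marker color and size, shadow included, to a small
// offscreen canvas once, then blit it with drawImage() at tile-draw time.
var markerBuffers = {};

function buildMarkerBuffer(color, radius) {
    var size = radius * 2 + 8;                  // leave room for the shadow blur
    var buffer = document.createElement('canvas');
    buffer.width = buffer.height = size;

    var ctx = buffer.getContext('2d');
    ctx.shadowColor = 'rgba(0, 0, 0, 0.3)';
    ctx.shadowBlur = 3;                         // the slow part, paid only once per color/size
    ctx.beginPath();
    ctx.arc(size / 2, size / 2, radius, 0, 2 * Math.PI);
    ctx.fillStyle = color;
    ctx.fill();

    markerBuffers[color + ':' + radius] = buffer;
}

// When a tile is drawn, the comparatively cheap drawImage does the work:
function drawBufferedMarker(ctx, marker) {
    var buffer = markerBuffers[marker.color + ':' + marker.radius];
    ctx.drawImage(buffer, marker.x - buffer.width / 2, marker.y - buffer.height / 2);
}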

Results

Flexibility

Because the markers are now drawn procedurally, we can change their styling whenever we want. Even the legend is a canvas element that correctly spaces the markers if we change their sizes.

Legend canvas

Code Complexity

By switching to canvas markers we were able to greatly reduce the complexity of our event handling code. An overall decrease in code is probably the best thing you can ever see in a GitHub pull request.

GitHub Diff

Performance

The Chrome timeline pretty much sums up the staggering performance increase.

Old map.

Old Performance

New map.

New Performance

As you can see, the main performance gain comes from greatly reducing the browser rendering time (purple). Across all devices, the maps now stay comfortably at 60fps, inertial panning works smoothly, and our mobile site is considerably more usable.

If this type of stuff gets you excited, we are always looking for engineers. Come join us!

A Lightweight iOS Image Cache

A flexible image caching library for image rich iOS applications

Our iOS app is image rich. To create appealing views we rely heavily on performer images, all of which must first be fetched from a remote server. If each image needed to be fetched from the server again every time you opened the app, the experience wouldn’t be great, so local caching of remote images is a must.

Version 1 - Ask for an image, get it from disk

Our first image cache was simple but effective. For each image view we’d ask for an image from cache, using its remote URL as the cache key. If it was available in the local disk cache a UIImage would be created from the file on disk, and returned immediately. If it wasn’t found on disk it would be fetched async from the remote URL, cached to disk, then a new UIImage returned.

For our purposes at the time this was perfectly adequate. But it had one point of unnecessary weakness: each cache request required the image to be loaded again from disk, which comes with the performance cost of disk access and image data decoding.

Version 2 - Memory caching

Thankfully, Apple’s UIImage has a built-in memory cache. So by changing a single line of code our image cache could go from being a disk-only cache to a disk and memory cache.

When you ask UIImage for an image via imageNamed: it first checks its own memory cache to see if the image has been loaded recently. If so, you get a new UIImage at zero cost. So instead of something like this:

return [UIImage imageWithContentsOfFile:[self absolutePathForURL:url]];

We could get memory caching for free, simply by doing this:

return [UIImage imageNamed:[self relativePathForURL:url]];

UIImage will search its memory cache and, if found, return the image at no cost. If it isn’t in the memory cache it will be loaded from disk, with the usual performance penalty.

Version 3 - Fetch queues, prefetching, and variable urgency

As the design of our app evolved we became increasingly image greedy, wanting to show richer, larger images, and more of them.

Getting these larger images on screen as quickly as possible is critical to the experience, and simply asking the cache for each image at display time wasn’t going to cut it. Larger images take longer to load over the network, and asking for too many at once will result in none of them loading until it’s too late. Careful consideration of when the image cache is checked and when images are fetched from remote was needed. We wanted precaching and fetch queues.

fastQueue and slowQueue

We settled on two queues, one serial and one parallel. Images that are required on screen urgently go into the parallel queue (fastQueue), and images that we’ll probably need later go into the serial queue (slowQueue).

In terms of a UITableView implementation, this means that a table cell appearing on screen asks for its image from fastQueue, and every off screen row’s image is prefetched by adding it to slowQueue.

We’ll need it later

Assuming we request a page of 30 new events from the server, once those results arrive we can queue up prefetching for each of their images.

- (void)pageLoaded:(NSArray *)newEvents {
    for (SGEvent *event in newEvents) {
        [SGImageCache slowGetImageForURL:event.imageURL thenDo:nil];
    }
}

The slowGetImageForURL: method adds each image fetch to slowQueue, allowing them to be fetched one by one without bogging down the network.

The thenDo: completion block is empty in this case because we don’t need to do anything with the image yet. All we want is to make sure it’s in the local disk cache, ready for immediate use once its table cell scrolls onto screen.

We need it now

Cells that are appearing on screen want their images immediately. So in the table cell subclass:

- (void)setEvent:(SGEvent *)event {
    __weak SGEventCell *me = self;
    [SGImageCache getImageForURL:event.imageURL thenDo:^(UIImage *image) {
        me.imageView.image = image;
    }];
}

The getImageForURL: method adds the image fetch to fastQueue, which means it will be done in parallel, as soon as iOS allows. If the image was already in slowQueue it will be moved to fastQueue, to avoid wasteful duplicate requests.

Always async

But wait, isn’t getImageForURL: an async method? If you know the image is already in cache, don’t you want to use it immediately, on the main thread? Turns out the intuitive answer to that is wrong.

Loading images from disk is expensive, and so is image decompression. Table cells are configured and added while the user is scrolling the table, and the last thing you want to do while scrolling is risk blocking the main thread. Stutters will happen.

Using getImageForURL: takes the disk loading off the main thread, so that when the thenDo: block fires it has a UIImage instance all ready to go, without risk of scroll stutters. If the image was already in the local cache then the completion block will fire on the next run cycle, and the user won’t notice the difference. What they will notice is that scrolling didn’t stutter.

Thought we needed it but now we don’t

If the user scrolls quickly down a table, tens or hundreds of cells will appear on screen, ask for an image from fastQueue, then disappear off screen. Suddenly the parallel queue is flooding the network with requests for images that are no longer needed. When the user finally stops scrolling, the cells that settle into view will have their image requests backed up behind tens of other non urgent requests and the network will be choked. The user will be staring at a screen full of placeholders while the cache diligently fetches a backlog of images that no one is looking at.

This is where moveTaskToSlowQueueForURL: comes in.

// a table cell is going off screen
- (void)tableView:(UITableView *)table
        didEndDisplayingCell:(UITableViewCell *)cell
        forRowAtIndexPath:(NSIndexPath*)indexPath {

    // we don't need it right now, so move it to the slow queue         
    [SGImageCache moveTaskToSlowQueueForURL:[[(id)cell event] imageURL]];
}

This ensures that the only fetch tasks on fastQueue are ones that genuinely need to be fast. Anything that was urgent but now isn’t gets moved to slowQueue.

Priorities and Options

There are already quite a few iOS image cache libraries out there. Some of them are highly technical and many of them offer a range of flexible features. Ours is neither highly technical nor does it have many features. For our uses we had three basic priorities:

Priority 1: The best possible frame rate

Many libraries focus heavily on this, with some employing highly custom and complex approaches, though benchmarks don’t show conclusively that the efforts have paid off. We’ve found that getting the best frame rates is all about:

  1. Moving disk access (and almost everything else) off the main thread.
  2. Using UIImage’s memory cache to avoid unnecessary disk access and decompression.

Priority 2: Getting the most vital images on screen first

Most libraries consider queue management to be someone else’s concern. For our app it’s almost the most important detail.

Getting the right images on screen at the right time boils down to a simple question: “Do I need it now or later?” Images that are needed right now get loaded in parallel, and everything else is added to the serial queue. Anything that was urgent but now isn’t gets shunted from fastQueue to slowQueue. And while fastQueue is active, slowQueue is suspended.

This gives urgently required images exclusive access to the network, while also ensuring that when a non urgent image later becomes urgently needed, it’s already in the cache, ready to go.

Priority 3: An API that’s as simple as possible

Most libraries get this right. Many provide UIImageView categories for hiding away the gritty details, and most make the process of fetching an image as painless as possible. For our library we settled on three main methods, for the three things we’re regularly doing:

Get an image urgently
__weak SGEventCell *me = self;
[SGImageCache getImageForURL:event.imageURL thenDo:^(UIImage *image) {
    me.imageView.image = image;
}];
Queue a fetch for an image that we’ll need later
[SGImageCache slowGetImageForURL:event.imageURL thenDo:nil];
Inform the cache that an urgent image fetch is no longer urgent
[SGImageCache moveTaskToSlowQueueForURL:event.imageURL];

Conclusion

By focusing on prefetching, queue management, moving expensive tasks off the main thread, and relying on UIImage’s built-in memory cache, we’ve managed to get great results in a simple package.

An iOS SDK for the SeatGeek Web Service

seatgeek open sourced seatgeek/SGAPI, the SeatGeek API SDK for iOS.

The SeatGeek Platform provides a web service for our massive database of live events, venues, and performers. If you want to build live event information into your app or website the SeatGeek Platform is the best way to do it. Until now, if you wanted to use it in an iOS app you had to handle all of the awkward network requests and response processing yourself. With today’s release we’ve made that a whole lot easier.

Since the first release of our iOS app we’ve been gradually evolving a handful of libraries to manage communicating with our API, progressively abstracting away the messy details so we can focus on writing features. Today’s CocoaPod release is that code, in the same form as we use it ourselves. The first line in our app’s Podfile is:

pod 'SGAPI'

Fetching and Inspecting Results

The SeatGeek Platform is all about events, venues, and performers, so the same is true of the iOS SDK. Individual result items are encapsulated in SGEvent, SGVenue, and SGPerformer objects, and query result sets are fetched with SGEventSet, SGVenueSet, and SGPerformerSet objects.

Objective-C

// find all 'new york mets' events
SGEventSet *events = SGEventSet.eventsSet;
events.query.search = @"new york mets";

events.onPageLoaded = ^(NSOrderedSet *results) {
    for (SGEvent *event in results) {
        NSLog(@"event: %@", event.title);
    }
};

[events fetchNextPage];

Swift

// find all 'new york mets' events
let events = SGEventSet.eventsSet()
events.query.search = "new york mets"

events.onPageLoaded = { results in
    for i in 0..<results.count {
        let event = results.objectAtIndex(i) as SGEvent
        NSLog("%@", event.title())
    }
}

events.fetchNextPage()

Output

New York Mets at San Diego Padres
New York Mets at Seattle Mariners
... etc

Query Building

SGAPI uses SGQuery to build all its URLs. If you’d prefer to use your own data models or HTTP request classes and just want a tidy way to build API queries, then SGQuery is what you’re looking for.

Objective-C

SGQuery *query = SGQuery.eventsQuery;
[query addFilter:@"taxonomies.name" value:@"sports"];
query.search = @"new york";

NSLog(@"%@", query.URL);

Swift

let query = SGQuery.eventsQuery()
query.addFilter("taxonomies.name", value: "sports")
query.search = "new york"

NSLog("%@", query.URL())

Output

http://api.seatgeek.com/2/events?q=new+york&taxonomies.name=sports

Additionally, every item set (SGEventSet etc) has a query property which you can modify directly to add filters and parameters, change perPage and page values, etc.

Conclusion

See the documentation on GitHub and CocoaDocs for more details. If anything doesn’t make sense or could be improved, let us know. We’ll be evolving the SDK over time, and are looking forward to seeing how you make use of it!

Improving the Search-by-price Experience

A slider for React

A few months ago we launched the Omnibox, a single reconfigurable ticket-buying interface that replaced our old static listings and a thicket of popup windows. The Omnibox is written entirely in React, Facebook’s user interface library, and in building it we had to come up with our own user interface components, innovating where we could.

One of the products of our work on the Omnibox was a price slider component, which allows users to filter tickets by price:

Price filter example

But for an event with large price ranges - the Super Bowl, for example - a simple linear slider would be unwieldy. Tickets are likely sparsely populated across the full domain of prices and, more importantly, users are far more interested in lower-priced tickets than the exorbitantly priced ones.

We solved this problem with two features of the slider. Firstly, the upper limit of the slider is truncated to the 90th percentile of ticket prices, and only dragging the handle all the way to the right reveals the tickets priced above that cutoff:

Price slider dragged to the right shows tickets exist above that price

Secondly, the slider’s scale is no longer assumed to be linear. The implementation currently deployed on the SeatGeek site positions the slider on the horizontal axis using the square root function, making lower prices take up more space than the less-desirable higher-priced tickets.

Non-linear price slider demonstration
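The mapping itself is small. Here is a sketch of the idea, where maxPrice would be the 90th-percentile cap described above and the numbers are illustrative:

// Map prices to slider positions with a square-root curve so the cheaper end
// of the range gets more pixels, and invert it when reading the handle back.
function priceToPosition(price, maxPrice, trackWidth) {
    return Math.sqrt(price / maxPrice) * trackWidth;
}

function positionToPrice(x, maxPrice, trackWidth) {
    var t = x / trackWidth;
    return t * t * maxPrice;
}

// e.g. on a 300px track capped at $400, a $100 ticket sits 150px across
// (halfway), rather than a quarter of the way as it would on a linear scale.
priceToPosition(100, 400, 300);   // 300 * sqrt(0.25) = 150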

Today we’re happy to open source this two-handled slider implementation written in React; it has no dependencies other than React itself.

Open Sourcing Our Admin Panel

The first version of the SeatGeek Dev Challenge. Crack open the beers.

In a land before time, SeatGeek created an extremely hackable admin panel. Its primary purpose was to stir the curiosity of developers who might be looking for a new job. You can read more about it in this previous post.

While the response to the dev challenge over the years was great, we retired it over a year ago. Please stop trying to hack into our backend! (If you do happen to be hacking our website, we’d appreciate a heads-up on any vulnerabilities you find at hi@seatgeek.com. Responsible disclosure and whatnot.)

In order to cater to the curious, I took the opportunity to open source the dev challenge. It’s a small Sinatra app that you can run locally and hack to your heart’s content.

A few notes about the panel:

  • Configuration is done using environment variables with a few sane defaults.
  • You’ll need a local SMTP server to send email notifications of new applicants. We used postfix at the time, but you can use whatever you’d like.
  • Applicant resumes are stored on disk. Yes, we know, there are probably better ways than what we did, but since the box was in a DMZ, it was probably okay. Not like we weren’t trying to have you hack us anyhow.
  • Ruby 1.9.3 is what we used to deploy—actually 1.9.1 at the time, but it works with 1.9.3—but no guarantees that it will work with a newer Ruby. Pull requests welcome!

We’d like to thank all the thousands of developers who have hacked our backend over the years. Don’t worry, we’ll have a new challenge soon.

In the meantime, we’re still hiring engineers.

Spatial Data Structures for Better Map Interactions

Last week we launched a feature that even the most die-hard SeatGeek fans probably didn’t notice. However, we think that this feature makes a huge difference in usability and overall user experience, even if only at a subconscious level. You can now interact with the section and row shapes themselves, rather than just section/row markers.

For anyone not familiar with our maps, here is an example of what one looks like:

Map example

Each of those markers represents a ticket or a group of tickets in that section. Until recently, all of the map interactions revolved around those markers. In order to find out more about the tickets in a section or row, the user would have to hover or click on the marker itself.

Fitts’s Law

One major concept in human-computer interaction is Fitts’s Law. Fitts’s law models the time it takes for a user to move a pointing device (e.g. cursor) over an object. In order to decrease the time to select an object, one can do one of two things: decrease the distance between the cursor and the object, or increase the size of the object. On SeatGeek’s maps we are constrained by the layout of venues, so our only option is to increase the marker size.
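In its common Shannon formulation, Fitts’s law predicts a movement time of roughly T = a + b · log2(D/W + 1), where D is the distance to the target and W is the target’s width along the axis of motion; since D is dictated by the venue layout, increasing W is the only term left for us to improve.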

The natural way to increase the target area of a marker is to expand it to the shape of its section. However, it turns out this isn’t straightforward.

How Our Map Pages Work

First, a little background on how our map pages work. We use an excellent library, Leaflet, as the foundation for our interactive maps. The maps themselves start out as vector files. These are then rasterized into a bunch of tiles, such as this:

Example of a tile

Leaflet handles the logic for mouse interactions and displaying the correct tiles depending on the current view. The markers are a custom Leaflet layer (for performance reasons, but that is a whole other blog post). Then, we overlay a vector path as an SVG layer when a marker is either hovered or clicked.

SVG Highlighting

First Attempt at Section Interaction

A while back, when we first implemented our current style of maps, we considered adding polygon interaction instead of just using the markers. Given that we had the SVG paths of all of the shapes for the highlighting purposes, we decided to add all of these SVG elements to the map so that we could use the built-in event handling that browsers provide.

Unfortunately, that resulted in terrible performance on the map page. At the row level, we can have as many as a few thousand SVG elements being drawn at the same time. Combine that with all the markers we have to draw, and the map grinds to a halt. We decided to shelve the section interaction and move on to other features.

A Renewed Attempt

With the launch of our new map design, called the Omnibox, click and hover interactions became much more central to the interface.

The breakthrough was realizing that we could implement our own logic for hit-testing, or checking if a given mouse position is inside of a polygon. This means we didn’t have to add any additional elements to the DOM (like we did before with the SVG elements).

The naive approach would be to iterate through every polygon and check if the mouse is inside it using the ray casting algorithm.
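For reference, the standard ray-casting test is only a few lines: shoot a horizontal ray from the point and count how many polygon edges it crosses, where an odd count means the point is inside (polygon here is an array of {x, y} vertices):

// Classic ray-casting point-in-polygon test.
function pointInPolygon(point, polygon) {
    var inside = false;
    for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
        var a = polygon[i], b = polygon[j];
        var crosses = (a.y > point.y) !== (b.y > point.y) &&
            point.x < (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x;
        if (crosses) inside = !inside;
    }
    return inside;
}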

However, we can do even better. By using a spatial data structure, such as an R-Tree, we can reduce the lookup complexity from linear to logarithmic. Below is an example of a 2D R-Tree. Imagine that the lowest level of the tree contains references to the actual polygons. Each node in the tree represents a bounding box that is the union of all the children below it in the tree.

R-Tree Example

Luckily enough, we were able to find a Leaflet-compatible implementation of an R-Tree by Calvin Metcalf. Basically, our use of it looks like this:

  1. On page load, convert all of the SVG shapes to geoJSON and insert into the R-Tree.
  2. On mouse events, transform the mouse position to the map coordinate system and make a throttled call to our intersection testing function.
  3. Call a search on the R-Tree with the transformed position of the mouse.
  4. The R-Tree will return a list of all the leaves (shapes) whose bounding boxes had been intersected.
  5. Iterate through the shapes and perform the ray casting algorithm to check for intersection.
  6. Return the intersected shape (sketched in code below).
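Condensed into code, steps 3 through 6 look roughly like this (rtree.search() and the candidate objects are simplified stand-ins for the actual library and geoJSON plumbing):

// Find the shape under the (already map-projected) mouse position.
function shapeAtPoint(mapPoint) {
    // Steps 3-4: bounding-box candidates from the R-Tree.
    var candidates = rtree.search({x: mapPoint.x, y: mapPoint.y, w: 0, h: 0});

    // Steps 5-6: precise ray-casting test on the handful of candidates.
    for (var i = 0; i < candidates.length; i++) {
        if (pointInPolygon(mapPoint, candidates[i].polygon)) {
            return candidates[i];
        }
    }
    return null;
}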

Results

The user can now hover over any point in the polygon, instead of just the marker! It works on sections:

And it works on rows:

Most importantly, all of this works without any impact on performance.

Event Pages Through the Ages

If you think about SeatGeek like we do, then in your head you probably picture an event page. You know, the page that has the venue map, the colorful dots, and the big list of tickets from all over the web, ranked by Deal Score. All the best reasons to know & love SeatGeek are encapsulated in this one single page. And it is not only the functional core of SeatGeek, it’s also our most highly-trafficked page type.

With so much riding on the event page, we’re constantly working on incremental and under-the-hood improvements. We normally avoid committing obvious, disruptive changes, but a few times in SeatGeek’s history we’ve launched major redesigns of our event page—the most recent of which happened earlier today.

Here I’ll give an overview of the latest changes and, for posterity, a quick tour through earlier SeatGeek event page history.

Today’s release

Inspiration

In the year and a half since we launched the last major version of the event page we started making mobile apps. Designing for mobile devices forced us to reconsider the SeatGeek experience from scratch, and once we launched our apps—in particular our iPad app—they became new sources of inspiration for the website. For example, we began to think much harder about conservation of screen real estate.

Internally, today’s milestone inherited the name “Omnibox” from an eponymous Google feature. Not Chrome’s address bar, but rather a more obscure reference to a CSS class found in the new Google Maps’ control panel. Although many people have griped about Google Maps’ recent update, we admired the idea of having a single page element whose content could change based on interactions with the underlying map.

What changed

In the main sidebar, we swapped our large, color-filled section headers and deal score labels for more elegant circles and lines that more closely resemble our latest iOS designs. We also moved the filter controls and box office link from the top of the sidebar to the bottom. The result is that ticket listings get higher precedence on the page.

Old sidebar vs. new sidebar, side by side

The new version of the section info view (below, on the right) looks very similar to the old, with the notable exception that it doesn’t appear in a popover overlaid on the map, but rather in the sidebar. Popovers had a lot of annoying characteristics, not least of which was that they were starting to feel decidedly web-1.0-y. As an added bonus, under the new sidebar scheme, it’s now possible to apply search filters to tickets within individual sections.

Section Info view, side by side

If you can believe it, the old version of the ticket info view (below, on the left) was actually a second popover that appeared beside the first popover containing section info. Now that all this information is in the sidebar, the map won’t get cluttered (which was especially problematic on smaller viewports), and the ticket details are much more legible.

Ticket Info view, side by side

Last but not least, we moved the old event info bar (seen in the top half of the image below) into the site header. This frees up more space for the venue map. In order to make room for event info in the new site header, we consolidated the category page links (i.e. “MLB”, “NFL”, etc.) into a dropdown off the main SeatGeek logo.

Event info

To really get a feel for the wonders of the new event page, you should see it for yourself. Go try a search, or visit one of these random event pages.

Earlier event pages

Here we take a walk down memory lane, through the annals of SeatGeek event page history. We’ll begin with the very first 2009-era event page before venue maps even existed and end on today’s latest event page release, ten notable iterations later.

Full disclosure: there’s a gratuitous, self-indulgent number of screenshots ahead. Only the most obsessive SeatGeek fans need read any further.

#1 The original SeatGeek event page was launched—along with SeatGeek itself—in September 2009. It contained no venue maps. SeatGeek was all about price forecasting, and making recommendations about whether to buy a ticket now, or wait to buy. If you wanted to buy, there was a table of tickets from various sellers, sorted by price.


#2 In early 2010, SeatGeek licensed the rights to use venue maps, provided by a third party named SeatQuest (now defunct). According to engineer #1, working with SeatQuest maps was a nightmare.


#3 Before long, ticket listings and venue maps started stealing screen real estate away from the price forecast part of the page.


#4 The event page’s first major redesign happened in Summer 2010.


#5 Soon after the Summer 2010 redesign, we scrapped SeatQuest in favor of our own venue maps, which should look a lot more familiar to current SeatGeek users. Also worth pointing out that by now the price forecast feature is relegated to a small-ish button area above the map, and restricted to signed-in users.


#6 Sometime in early 2011, we made the long-standing switch from a lefty sidebar to a righty—a change that would persist all the way until yesterday.


#7 In mid/late 2011, we redesigned the site again. Note the dark blue primary colors, and the new styling for the sidebar.


#8 In the first half of 2012, the dark blue from the previous version softened into the lighter SeatGeek blue of today.


#9 This update featured some new sidebar styling and abolished the permanently overlaid search bar in favor of a more compact event info bar. This version reigned supreme from Fall 2012 all the way until March 12, 2014.


#10 Omnibox: The cleanest SG event page yet. (Note the lefty sidebar, a clear throwback to the year 2010.)


Celebrating Valentine’s Day Late With Some iPad Love

Back in December, we released an epic update to our iOS app that featured tons of big additions and improvements for iPhones. But this time, it’s all about the iPad.*

This is much more than just a quick catch-up release. We took stock of all the additions from v2.0 for iPhone, and thought holistically about what the ideal experience for each one would be like on a tablet. So, iPad users, sorry about the wait. But we think you’ll find that it was well worth it.

* Well, technically it’s not all about the iPad. We also made a few UI tweaks to the iPhone app, and added a “Recent Search” list to both apps so you don’t have to keep typing the same things over and over again. Anyway, the rest of this blog post is focused on big, new iPad features.

Demo time

Like gifs? Great! Here’s one:

iPad demo gif

If that was too fast for you, we’ve also got a few higher-res screenshots that you can inspect more closely down below. But to really get the best feel for the new app, we’d strongly suggest installing it now →

Logging In

Now that you can log in (you couldn’t before), we can remember things for you – like artists, teams, and events that you’re tracking. Later, when you’re logged in to SeatGeek somewhere else like on your laptop or your iPhone, you’ll have access to all the same preferences.

You can log in with a SeatGeek account or a Facebook account. (One nice thing about Facebook accounts is that we can automatically track performers for you if you’ve already liked them on FB.)

Log-in modal screenshot

Tracking

As noted above, you can now track artists, teams, and events right in the app. Tracked items are easily accessible in your new My Events and My Performers screens. We’ll even send you reminders when new shows are announced or when a tracked event is approaching. Plus, now that we know a bit more about what you like, our event recommendations for you will get better and better.

Here’s what the My Performers screen looks like:

My Performers screen

To track an event, just tap the heart in the corner of any event screen:

Event view with heart

What! JT’s coming to town?!

Push notification screenshot

Redesigned Home Screen

We thought to ourselves: why bother making an update if you can’t tell the difference right away? So we also added some new hotness to the home screen:

New home screen

Tapping the Explore area near the bottom lets you browse all upcoming events near you:

New explore view

Go get it

Now that you know what’s in store for you, all that’s left is to hit the App Store to make sure you’re running the latest version.

As always, we hope you like it and will let us know if you find bugs or have any cool ideas.