The most popular applications have real-time user experiences. People expect to see their taxi cab moving on a map, to see live updates of their colleague’s edits to a document and to be notified instantly if their post is liked. But building the infrastructure for this is a hard problem, according to Justin Karneges, founder and CEO of Fanout.

The Mountain View, Calif.-based company’s push technology has been described as a cross between a reverse proxy and a message broker.

Fanout grew out of Karneges’ frustration while working as chief technology officer at Livefyre, which provides live commenting functionality.

“This was 2010, 2011, but the space of SaaS for push was pretty early. And in general, the tools weren’t designed for APIs, which was a big part of what we were doing at Livefyre,” he said, adding that he set out to build the tool he wished he had back then. “Essentially, we built Akamai for push.”

“Our belief is that as the world becomes more API-connected, data will increasingly need to be pushable across API boundaries to meet the expectations of downstream users. We’re seeing early evidence of real-time APIs at big companies like Twitter, Dropbox and Slack. Our goal is to provide reusable tools so that any organization can do the same, and at scale,” he said.


Pushpin at the Core

Fanout has open sourced its core technology, Pushpin, a drop-in proxy server that holds (“pins”) client connections open.

It’s available as a cloud service or self-hosted. Customers use their existing HTTP-based API backends to authenticate connections and ensure reliable delivery.

“We’re purely a server-side solution — server-side assist. The consuming clients don’t have to know we exist. So there’s no stand-up client library. The API contract that’s spoken between the client and our edge servers is defined by our customers. It works a little differently than some of the more end-to-end systems like Pusher and PubNub,” said Karneges.

The integration is a little more involved: you need a backend of your own, which can be serverless/function-as-a-service (FaaS), he explained, and Fanout runs as a proxy service in front of it. The advantage is that the customer controls the API contract, which is especially useful for exposing an API publicly.

“The actual flow is when the client connects to our edge, we actually make a request to our customers’ back-end server to say, ‘Hey, a connection just came in and what should we do with this?’ and that’s how we get that kind of transparent magic,” he said.

An example Karneges likes to cite is CBIX.ca, a Fanout customer with a Bitcoin-related API that doesn’t require authentication. Requests go to streaming.CBIX.ca, which has separate endpoints for pricing and for trades as they occur. The protocol is similar to Twitter’s streaming API: each line of the response is a minimal JSON payload.
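For a consuming client, that streaming API is just HTTP. Here is a minimal sketch in Python using the requests library, with a hypothetical /ticker path standing in for CBIX’s real endpoints (which are documented in their own API docs):

```python
import json
import requests

# Open a long-lived streaming request; the exact path here is hypothetical.
# Each non-empty line of the response body is a minimal JSON payload,
# much like Twitter's streaming API.
with requests.get("https://streaming.cbix.ca/ticker", stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # skip keep-alive blank lines
            update = json.loads(line)
            print(update)
```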

“What’s really interesting about their API is that we power it, but you’d never know it. If you go to their API doc, it’s their API. If you do a lookup on the DNS domain, you can see that it actually does point to us, but if they ever didn’t want to use us for some reason, they could point the API to their own servers and it would be the exact API contract and their clients wouldn’t care. We wanted to build a more transparent system that doesn’t muck with the API contract.”


Other customers include Mozilla, Medium and Appbase.

The proxy is a key component, but there are two main integration points, he explained.

The first is answering proxy requests from Fanout: when a proxied request comes in, your backend has to tell Fanout what to do with it. In the case of an HTTP request, the backend server might respond, “Turn this into a streaming connection subscribed to these pub/sub channels.” Fanout does use pub/sub as its data-push pattern, but it stays behind the scenes.
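Concretely, that answer is carried in response headers; Pushpin’s open convention for this is the GRIP protocol, with headers such as Grip-Hold and Grip-Channel. A rough sketch of such a backend handler using Flask, with a /stream endpoint and a “prices” channel chosen purely for illustration:

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/stream")
def stream():
    # Fanout/Pushpin proxies the client's request here. The response tells the
    # proxy to hold the connection open as an HTTP stream and to subscribe it
    # to the "prices" channel. The Grip-* headers are consumed by the proxy,
    # so the client never sees the channel negotiation.
    return Response(
        "stream opened\n",
        mimetype="text/plain",
        headers={
            "Grip-Hold": "stream",
            "Grip-Channel": "prices",
        },
    )

if __name__ == "__main__":
    app.run(port=8080)
```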

“The client doesn’t say, ‘Hey server, I want to listen to channel ABC,’ it says ‘Hey domain.com, here’s an HTTP request’ and then the channel is then negotiated between our edge and the customer’s back end. The client doesn’t even know it’s being subscribed to anything. It’s unaware of the channel schema. Once the connection is set up, as a separate process, the customer can publish data to Fanout and we’ll [send] it out through our network and inject it through any open connection. When you publish data to Fanout, you include basically the data payload, which is transport-specific based on what kind of listener you have, but you can include one or more representations of the data,” he said.
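On the publishing side, the open-source Pushpin accepts an HTTP POST to its local publish endpoint (port 5561 by default), and Fanout’s hosted service exposes an equivalent publish API. A sketch against a local Pushpin, reusing the illustrative “prices” channel and showing two transport-specific representations of the same payload:

```python
import json
import requests

payload = json.dumps({"symbol": "BTC", "price": 43000})

# One published item, with a representation per transport: HTTP-streaming
# listeners get a newline-terminated line, WebSocket listeners get a message.
item = {
    "channel": "prices",
    "formats": {
        "http-stream": {"content": payload + "\n"},
        "ws-message": {"content": payload},
    },
}

resp = requests.post("http://localhost:5561/publish/", json={"items": [item]})
resp.raise_for_status()
```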

Fanout supports sending data over five low-level transports: HTTP long-polling, HTTP streaming, WebSockets, Webhooks and XMPP.

The company uses Redis as an in-memory database, ZeroMQ for moving messages between servers, Django for its website and API, and C++ for Pushpin itself.

Going forward, Karneges said, the company wants to expand its reliability feature, which for now works only with HTTP, and to add more regions beyond its current three: San Francisco, London and Singapore. It also plans to flesh out an on-premises offering by better packaging its cloud components for local use.


Feature Image: “Fan Out” by Justin Tallaksen, licensed under CC BY-SA 2.0.