As the distributed database market continues to expand, with competitors eager to meet growing business demand for databases that can perform high-volume transactions at scale with low latency, benchmarking tests are becoming a key promotional and proof-of-concept tool that helps newer providers argue their business case.

While established players like Basho and Redis have a fairly stable and growing grip on the enterprise-level market, newer players like Aerospike, FoundationDB and MemSQL are hoping that publishing benchmark tests of their transactional processing capabilities will woo new customers to their products.

FoundationDB: 14.4 Million Write Transactions per Second

FoundationDB has released a new version of their database product, aimed at enabling a new generation of Internet of Things and device-driven interactive applications that keep a single view of a massive distributed database while allowing a constant stream of reads and writes to the data.

“One of the hardest things to scale is write transactions,” says Dave Rosenthal, CEO and Founder of FoundationDB, who has been working on version three of the database for the past year. “Scaling transactions with lots of writes happening all at the same time is difficult: in the past, we have been able to manage 300,000 to 400,000 random writes to the database every second. That’s a pretty good number, but there have been some businesses pushing bigger numbers than that.”

Rosenthal cites a recent Netflix post that last year stood out as the industry’s best practice. In the documented test, Netflix were able to run Cassandra at scale on a thousand-core cluster that maintained 1.1 million writes per second.

“That was one of the really cool benchmarks that caused a lot of people to stand up,” says Rosenthal. “It was about three times faster than our 2.0 product.”

According to Rosenthal, many doubted FoundationDB’s ability to take on that level of transactional capacity, especially given FoundationDB’s architecture, which was built around a single node.

[Chart: FoundationDB transactional writes per second]

With the announcement of Version Three, FoundationDB launched their new transactional processing engine: “This has been a massive project for us. It is based on a totally new scalable design. The benchmark we are showing is running 14.4 million transactions per second, so that is an order of magnitude faster than the Netflix test.”

To enable this sort of transaction processing, FoundationDB has also released an update to its own language — Flow 2 — that is a blend of C++ and Erlang. Flow 2 provides some new batch and scale algorithms that work with the transactional processing engine to reduce latency and increase scalability.

“For example, starting a transaction with the highest level of guarantees would be a 3 millisecond operation in FoundationDB 2. Maybe you would get that down to 2 milliseconds. Now in FoundationDB 3.0 it is going to take 300-400 microseconds, with no loss in latency as you build up the volume of transactions,” says Rosenthal.
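For a sense of what that transaction path looks like from the client side, here is a minimal sketch using FoundationDB’s Python bindings; the key layout and values are illustrative assumptions, not taken from Rosenthal’s benchmark.

```python
import fdb

fdb.api_version(300)  # targets the FoundationDB 3.0-era API
db = fdb.open()       # connects via the default cluster file

@fdb.transactional
def record_reading(tr, device_id, reading):
    # Runs as a single fully isolated transaction; the bindings
    # retry the function automatically on conflicts.
    tr[fdb.tuple.pack(('reading', device_id))] = str(reading).encode()

record_reading(db, 'sensor-42', 21.5)
```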

Rosenthal argues that this is much faster than the tests shared by other competitors. Write transactions slow processing down because much more work needs to be done to maintain a single dataset in real time, especially if it is distributed over a cluster of machines.

For some early adopters, the 4GB of memory needed to run FoundationDB has been an obstacle. Users in the FoundationDB community forum note that the 4GB ECC RAM minimum requirement “seems unreasonably high”. One user notes they are used to other database options, for example Neo4j, where maximum memory limits can be set low if memory use is expected to be low.

This user is looking for a solution that has a web front-end that fits in less than “1GB RAM even under heavy stress loads, so remaining RAM would be left for FDB. We read once and cache immutable objects so FDB would be used to read/write mutable objects only (low load) this is why we imagine sharing CPU/RAM/SSD resources for front-end and backend on the same server would work well enough.”


So for them, the official 4GB minimum amount of memory required to run a FoundationDB process is too high a price at present. FoundationDB are currently running further tests to see what may be possible.

Meanwhile, Rosenthal sees the new FoundationDB release as having the potential to empower a whole new range of Internet of Things products, but to date actual use cases have been limited to gaming and email marketing analysis.

Customer.io, an email-focused marketing automation provider, is currently trialing FoundationDB’s capacities using the database provider’s free up-to-six-node production cluster plan. Customer.io have been using a MySQL relational database to manage their customers’ account subscriptions and email campaigns, Redis for queuing and caching, and MongoDB for clickstream data. But as they have scaled from 5 to over 400 customers, their current data architecture has been unable to quickly identify which email should be sent to each customer based on each target’s online behavior: across all of Customer.io’s customers, this has already become a 1TB+ dataset that needs to be analyzed and transacted upon in real time. In a case study paper, FoundationDB say they have helped “sort what emails to send and to whom based on actions within the web apps monitored. FoundationDB is able to do easy range scans for quick retrieval of ID information.”
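The range scan pattern mentioned in the case study is straightforward to sketch with FoundationDB’s Python bindings; the campaign/subscriber key layout here is a hypothetical illustration, not Customer.io’s actual schema.

```python
import fdb

fdb.api_version(300)
db = fdb.open()

@fdb.transactional
def subscribers_for(tr, campaign_id):
    # Keys are assumed to be packed ('campaign', campaign_id, subscriber_id)
    # tuples, so every subscriber for one campaign sorts contiguously
    # and a single range scan retrieves them all.
    prefix = fdb.tuple.pack(('campaign', campaign_id))
    return [fdb.tuple.unpack(key)[-1] for key, _ in tr.get_range_startswith(prefix)]

print(subscribers_for(db, 'welcome-series'))
```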

While marketing and gaming may be popular use cases, they are hardly world-changing. This is where database platforms like Basho have the true advantage, regardless of benchmark tests. Adam Wray, CEO of Basho, counts a third of the top Fortune 50 enterprises as customers and points to the use of Basho’s Riak product by the UK and Danish health services as examples of the kinds of deployments distributed database providers must win if they are to compete in the market.

In recent weeks, Paris has proposed eliminating private cars from the urban core of the city to deal with ever-growing traffic congestion that is creating gridlock for the distribution of goods and services to the city hub, not to mention hurting labor economics as the city’s workforce gets stuck in cars. More efficient energy usage drawing on big data analytics of household and industry energy consumption; realtime, demand-driven ticket pricing for events and travel; global marketing campaigns that identify when a brand or music artist is trending; financial trading; and online and mobile electoral campaigns could all plausibly use the sort of transactional processing power that FoundationDB are claiming to offer. For now, FoundationDB have yet to make headway in leveraging their benchmarks to demonstrate the power of their database platform to these enterprise and government markets.

Aerospike: Focusing on Speed at Scale

Meanwhile, Redis competitor Aerospike, started in 2010, have positioned themselves as the “first flash-optimized, in-memory NoSQL database.” Last year, they open sourced their database software, and “we are now seeing a much broader range of applications and adoption in unexpected corners,” confirms Monica Pal, Chief Marketing Officer for Aerospike.

Pal and Tech Lead Sunil Sayyaparaju say the goal of Aerospike is to provide the speed of Redis at the scale of Cassandra, but with the economies of flash. “We have built a database where the indexes are in RAM, and the data can be in RAM, flash, or a combination, so speed at scale: this is what we are talking about,” says Sayyaparaju. “We started as a simple key-value store, and then added additional indexes and user-defined functions, but we wanted to make sure that as we added features we didn’t lose performance. Our latest tests show, in fact, improvements.”
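At the API level that model stays simple. Here is a minimal sketch using Aerospike’s Python client, with the namespace, set, and bin names as invented examples; whether the data lives in RAM, on flash, or both is a server-side namespace configuration, and the client code is the same either way.

```python
import aerospike

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

# A key is a (namespace, set, user key) triple.
key = ('test', 'profiles', 'user-123')
client.put(key, {'score': 87, 'segment': 'high-value'})

(_, _, bins) = client.get(key)
print(bins)  # {'score': 87, 'segment': 'high-value'}

client.close()
```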

“Aerospike has traditionally been used as a front-edge operational database with a Hadoop cluster at the back,” says Sayyaparaju. “We are accelerating traditional databases like MySQL. A second use case is where hot data is in Aerospike and historical data is in Hadoop and decisions are being made drawing on a combination of both.”


Sayyaparaju and Pal say that because of these speed efficiencies, they are seeing customers swap database solutions where they had to manage sharding themselves for Aerospike, which handles sharding automatically.

Advertising fraud detection service Forensiq, for example, made the switch from Redis to Aerospike. Their services need to determine in an instant whether an online ad is being viewed by a real person or by a bot, in order to identify the true value of internet advertising opportunities. This involves continuously scoring more than 50 million users, mobile devices and IP addresses.
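Continuous scoring of that kind typically leans on atomic counters. As a sketch of the pattern, with invented namespace, set, and bin names rather than Forensiq’s actual schema, Aerospike’s Python client can bump a per-entity, per-day counter that a scoring job later reads:

```python
import aerospike

config = {'hosts': [('127.0.0.1', 3000)]}
client = aerospike.client(config).connect()

def record_event(entity_id, day):
    # One atomic counter bump per observed event; the rolling window
    # of daily counts later feeds the scoring algorithm.
    key = ('test', 'daily_counts', '%s:%s' % (entity_id, day))
    client.increment(key, 'events', 1)

record_event('ip-203.0.113.9', '2015-03-02')
client.close()
```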

Matt Vella, CTO at Forensiq, explains that they were handling half a million transactions per day, and their existing solutions were coming to a standstill.

“We’ve been growing our number of transactions at a rate of 100 or 1000 times, and we will need to scale another 1000 times to deal with the massive amounts of traffic that we need to scan every day,” he said.

Vella says Forensiq’s two main products, a premium solution aimed at identifying when customers should purchase ads and a JavaScript tagging solution that sits on a customer’s website and collects data points from visitors, are now both transacting a billion-plus requests per day.

“Many of the databases available now are ready for production use; it is not the same world as ten years ago when there were just a handful of big players. So it really comes down to what you are architecting and your needs for a specific application. Not many companies are using just one database for everything they are doing. We are using a combination of databases for each of those applications. For example, we are using Aerospike for our realtime analytics and aggregating data. We are collecting characteristics in real time, looking at how they change, say over 90 days, and then we are running algorithms to determine a score.

“We were convinced Redis was the right solution for us. However, it doesn’t have built-in sharding; there are companies that have pieced solutions together to provide those features, but we decided that it would not work for us in production. That’s what led us to Aerospike. It is still pretty new but it has a good feature set — not the most robust — but for us it was enough, and it does have the sharding that could help us with low latency and performance at scale, and the costs are extremely worth it.

“We are also using MongoDB to store original datasets. That works pretty well, it has lower needs in terms of updates it needs per second, so for longer term data it is mostly adequate for our needs.”
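The built-in sharding Vella refers to is what teams otherwise had to piece together client-side. A minimal sketch of that stopgap, assuming the redis-py client and three invented node addresses, shows both how it works and why it is brittle: adding or removing a node remaps most keys.

```python
import hashlib
import redis  # assumes the redis-py client library

# Client-side sharding: the application, not the database, decides
# which node owns each key.
nodes = [redis.StrictRedis(host=h) for h in ('10.0.0.1', '10.0.0.2', '10.0.0.3')]

def node_for(key):
    # Hash the key and map it onto one of the nodes.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

node_for('user:42').set('score', 87)
```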

Vella describes a situation that reflects the way in-memory database services need to constantly evolve: “We are developing something that will allow us to store full data for massive clients, especially for those with hundreds of billions of impressions per day. We need to store all of that — or at least samples of that data — for analysis, so we have been using MongoDB for that purpose, but the volumes are getting so massive that we are building an in-memory database to handle it. As each ad impression has multiple data points that come in at different times, we want a database that can collect the data points as they come in, and when we have that data, we can then analyze it.”
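A toy illustration of the access pattern Vella describes, with invented field names: data points for one impression arrive at different times, are buffered, and are analyzed only once the record is complete.

```python
REQUIRED = {'bid', 'render', 'viewability'}
impressions = {}  # stands in for the in-memory store itself

def add_point(impression_id, field, value):
    # Merge each arriving data point into the impression's record.
    record = impressions.setdefault(impression_id, {})
    record[field] = value
    if REQUIRED <= record.keys():  # all data points present?
        analyze(impressions.pop(impression_id))

def analyze(record):
    print('complete impression:', record)

add_point('imp-1', 'bid', 0.42)
add_point('imp-1', 'render', True)
add_point('imp-1', 'viewability', 0.9)
```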

[Chart: Aerospike benchmark results]

In December, Aerospike published a benchmark on the Google Compute Engine blog demonstrating their capability to process one million write transactions per second on 50 nodes. As Aerospike’s benchmark uses different variables than FoundationDB’s, a direct comparison is tricky, but on the face of it FoundationDB are claiming it will cost $150 an hour to do 54 million write transactions, whereas Aerospike calculate costs of $41.20 to process 1 million write transactions.

MemSQL: Benchmarks as Proof of Concept for Customers

Conor Doherty, Technical Marketing Engineer at MemSQL, sees transactional processing benchmarking as an essential tool for new customer acquisition: “Benchmarks and bake-offs are a big part of our proof of concepts and evaluations for customers. Very often they’ve taken their current database to (and beyond) what it can handle, and are looking for something that can keep scaling and/or reduce latencies. It’s common for a customer to merge their transactional and analytical workloads on a single MemSQL cluster, and to go from minutes to seconds or milliseconds.


“The need to scale transaction processing comes up with our customers all the time,” says Doherty, who points to customers like media company Ziff Davis and Canadian telco Novus, who are seeing transactional processing that is “orders of magnitude faster” than MongoDB and other previously used solutions.

[Chart: MemSQL benchmark results]

MemSQL use an Oracle demo as a benchmark to show how they outperform the legacy enterprise provider, calculating that they are ten times faster, but Doherty confirms that in that benchmark test only read transactions were being performed.

However, Doherty points to other benchmarks MemSQL has run showing that the database “can handle 10-20,000 writes per second per CPU in the cluster.”

While benchmarks seem a useful way to demonstrate performance, the results can still be confusing: each database provider uses a different architecture and server cluster size, often does not clearly differentiate between read and write transactional processing, and may even present cost savings in different metrics, leaving customers to do their own comparisons.

“Most of the industry standard benchmarks are designed to test either purely transactional or purely analytical workloads. These are important metrics – and we continue to use them for internal regression testing and proof of concepts – but they are not a useful facsimile of modern real-time analytics workloads that require converging transaction processing (INSERT, UPDATE, DELETE, simple SELECT queries) with analytical processing (complex SELECT queries with aggregates, joins, etc.). At MemSQL, we believe that we need a modern benchmark that tests a database’s ability to analyze a dataset that changes in real time,” says Doherty.
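Because MemSQL speaks the MySQL wire protocol, the converged workload Doherty describes can be sketched with any stock MySQL client. Here is a minimal Python example using the pymysql library, interleaving a transactional insert with an analytical aggregate on the same live table; the connection details and schema are invented for illustration.

```python
import pymysql  # MemSQL is MySQL wire-compatible, so a stock client works

conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', db='app')

with conn.cursor() as cur:
    # Transactional side: a steady stream of writes...
    cur.execute("INSERT INTO events (user_id, amount) VALUES (%s, %s)", (42, 9.99))
    conn.commit()

    # ...and analytical side: an aggregate over the same, live table.
    cur.execute("SELECT user_id, SUM(amount) FROM events GROUP BY user_id")
    for row in cur.fetchall():
        print(row)

conn.close()
```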

Another difficulty in making comparisons is that MemSQL is not a distributed database architecture per se, so their benchmark tests are all run on the same servers as the MemSQL cluster. Built by former Facebook employees who saw the need for real-time database transactional processing at scale, MemSQL might run on multiple machines or across virtual networks, but it has been designed to act as a single database rather than as a distributed architectural platform. Doherty sees this as a strength of the way MemSQL has been built:

“MemSQL is designed specifically to accommodate mixed read/write and online transactional/analytical processing workloads, so it is no longer necessary to maintain and synchronize many copies of data. MemSQL offers cross-datacenter replication for disaster recovery, which allows users to create arbitrarily many copies of a database. The secondary (replica) clusters can be used for analytical and read-heavy workloads. With the MemSQL approach, there are a couple of advantages: (a) there is less concern about best case/worst case latency since all transaction processing happens in a single MemSQL cluster with predictable latency and (b) it is bound by one-way network latency (rather than round-trip network latency) since transaction processing does not require communication between geo-distributed database instances. The data needs to travel across the network at least once, no matter what. But it’s better to send the data once than to send it and then have to wait for a reply (and block reads and writes in the meantime).”
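A back-of-the-envelope model makes the one-way versus round-trip point concrete; the 40 millisecond inter-datacenter latency here is an assumed figure for illustration.

```python
one_way_ms = 40  # assumed one-way latency between data centers

# Geo-coordinated commit: send the write, then wait for the reply.
coordinated_commit_ms = 2 * one_way_ms   # blocked for the full round trip

# Single-cluster commit with asynchronous cross-datacenter replication:
local_commit_ms = 1                      # commit stays inside one cluster
replica_lag_ms = one_way_ms              # data still crosses the wire once

print('coordinated commit blocks for ~%d ms' % coordinated_commit_ms)
print('local commit ~%d ms; replica visible after ~%d ms'
      % (local_commit_ms, replica_lag_ms))
```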

For now, it remains up to potential end users to decide among the available database services based on their own needs and likely scaling trajectory. Gary Orenstein, CMO at MemSQL, believes this will need to change, with new industry benchmarks urgently needed to show how well the various database providers can really handle high-volume transactions at scale, supporting applications that have a globally distributed customer base.

“I don’t think the benchmarks have caught up,” says Orenstein. “Most have focused on transactional and analytical processing. But end customers are needing the ability to do transactions and analytics simultaneously. They need to be able to run analytics on those transactions. They need to be able to query the data up to the last click… that has been impossible before.”

In the final Part Three of our series on High Volume Transactional Databases, we look at how application developers can manage growth and maintain scalability in the lead up to implementing high performance database architectures.

Basho is a sponsor of InApps Technology.

Feature image via Flickr Creative Commons.