What do you do when your business depends not only on running Hadoop in a multitenant configuration, reliably and at scale, but also on eking out every last bit of efficiency from the system? For Altiscale, provider of a Hadoop-as-a-Service offering, the answer was simple: run Hadoop in Docker containers.

Simple, but certainly not easy, explained Nasser Manesh, orchestration product lead at Altiscale, who shared the company’s experiences running Hadoop on Docker at last week’s Strata + Hadoop World conference.

Running Docker in production is bleeding-edge as it is. Running the company’s petabyte-scale Hadoop clusters on Docker, in production, is a whole different beast, and one for which few blueprints existed when they began their journey 18 months ago.

Why Docker?

As a workload-specific platform-as-a-service (PaaS) offering (my words, not theirs), the Altiscale Data Cloud provides what the company calls “Hadoop dialtone.” Users are able to load data into Hadoop Distributed File System (HDFS) via a variety of bulk- and event-oriented mechanisms and APIs, and deploy MapReduce, Pig, Hive and other applications to operate directly on that data, without ever having to deploy or configure Hadoop.
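
To give a sense of what that “dialtone” means in practice, here is a rough sketch from the user’s side, using the standard Hadoop command-line clients rather than anything Altiscale-specific; the file, directory, table and query are hypothetical examples.

```python
# A rough sketch of the "Hadoop dialtone" from the user's side: load data into
# HDFS and query it with Hive, with no cluster to deploy or configure.
# The standard `hdfs dfs` and `hive` client commands are assumed to be on the
# PATH; the file, directory, table and query are hypothetical examples.
import subprocess

def load_and_query():
    # Bulk-load a local file into an HDFS directory.
    subprocess.run(
        ["hdfs", "dfs", "-put", "events.tsv", "/data/events/"],
        check=True,
    )
    # Run a Hive query directly against the loaded data.
    subprocess.run(
        ["hive", "-e", "SELECT user_id, COUNT(*) FROM events GROUP BY user_id"],
        check=True,
    )

if __name__ == "__main__":
    load_and_query()
```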

While the company does allow users to monitor job performance on the basis of infrastructure metrics, such as memory and CPU consumption, it otherwise abstracts away the notion of nodes, clusters and other infrastructure-level concerns. This gives the company the ability to partition and allocate resources at its discretion. In fact, getting this resource allocation right is a key determinant of the company’s margins and is thus of critical importance. As a result, optimization of this aspect of the company’s technology platform is a business mandate.

Virtualization is the well-established way to solve this partitioning and allocation problem for cloud providers, but the overhead it brings is deadly for Altiscale’s margins.

“Every single CPU cycle that goes to your hypervisor is money wasted,” says Manesh. “Likewise every byte of RAM.”

Enter Docker. Container technologies like Docker were attractive to Altiscale for their ability to offer lightweight isolation and virtualization, yielding reduced overhead, faster deployments and restarts, and simplified migrations.
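
As a rough illustration of that lightweight isolation (not Altiscale’s own tooling), here is a minimal sketch using the Docker SDK for Python: the container gets an explicit memory cap and CPU weight enforced by the host kernel, with no hypervisor in the path. The image name and resource figures are made up.

```python
# A minimal sketch of container-level isolation with the Docker SDK for Python
# (docker-py). The image name and resource figures are hypothetical; this is
# an illustration, not Altiscale's tooling. Resource limits are enforced by
# cgroups on the host kernel, with no hypervisor taking CPU cycles or RAM.
import docker

client = docker.from_env()

container = client.containers.run(
    "hypothetical/hadoop-worker:latest",  # placeholder image name
    command="sleep infinity",
    mem_limit="4g",       # hard memory cap for the container
    cpu_shares=1024,      # relative CPU weight under contention
    detach=True,
    name="worker-demo",
)

print(container.name, container.status)
```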

Docker on Hadoop or Hadoop on Docker?

One of the first decisions Altiscale engineers needed to make was whether to run Docker in Hadoop or run Hadoop on Docker.

Running Docker in the Hadoop environment would allow the company to take advantage of YARN, the data processing framework introduced in Hadoop 2.0. This would have been ideal since the Altiscale team knows YARN well and is a contributor to the open source project. YARN’s increasingly robust resource management capabilities would allow Altiscale systems to simply ask Hadoop to launch and manage containers, eliminating a great deal of architectural complexity. Unfortunately, as the team learned, getting Docker on Hadoop to work would require support in both YARN and Docker, which would take time to materialize.

Running Hadoop in Docker containers, on the other hand, presented an immediate opportunity for the company, without the dependency on adding new features to either project. And while the resulting solution would be more complex without taking advantage of YARN — Altiscale systems would need to provision and manage Docker containers directly — Manesh and his team found it to be both repeatable and automatable.

The Hadoop on Docker approach allowed Altiscale to move HDFS’s DataNode and YARN’s NodeManager processes — traditionally paired on each of the slave nodes in its Hadoop clusters — into individual containers. Deploying these processes as containers would provide Altiscale with a number of important benefits, including giving the company a high degree of flexibility over the allocation of these processes across their infrastructure, allowing the compute and memory resources allocated to each process to be managed independently, and supporting greater overall elasticity and burstability.
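
A hedged sketch of what that one-process-per-container layout might look like, again using the Docker SDK for Python rather than Altiscale’s actual provisioning code; the image names, limits and use of host networking are assumptions for illustration.

```python
# A hedged sketch of the one-process-per-container layout described above:
# the HDFS DataNode and YARN NodeManager run in separate containers whose
# memory and CPU can be sized independently. Image names, limits and the use
# of host networking are assumptions, not Altiscale's actual provisioning code.
import docker

client = docker.from_env()

slave_processes = {
    # DataNode: storage-oriented, modest memory footprint.
    "datanode": {
        "image": "hypothetical/hdfs-datanode",
        "mem_limit": "2g",
        "cpu_shares": 512,
    },
    # NodeManager: hosts YARN compute containers, needs more memory headroom.
    "nodemanager": {
        "image": "hypothetical/yarn-nodemanager",
        "mem_limit": "8g",
        "cpu_shares": 2048,
    },
}

for name, spec in slave_processes.items():
    client.containers.run(
        spec["image"],
        detach=True,
        name=name,
        mem_limit=spec["mem_limit"],
        cpu_shares=spec["cpu_shares"],
        network_mode="host",  # Hadoop daemons are commonly run on the host network
    )
```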

Where Docker Fell Short

Altiscale ran into its share of challenges getting this architecture up and running, and keeping it running at scale. Many of these were operational issues, according to Manesh.

“While Docker gets a lot of visibility from the development and DevOps communities, its operational maturity still leaves a lot to be desired,” said Manesh. “It’s not operations friendly.”

“Most of the issues that we experienced arise from the fact that Docker was not designed to support the long-running containers that are needed to support production systems, as opposed to applications in development. The support is not there.”

Manesh outlined a few of the challenges Altiscale ran into:

  • Docker required its own orchestration, provisioning and automation, beyond what would have been needed running on top of YARN.
  • Logging is difficult in a distributed Docker environment. Things you would expect to work, like using syslog to collect logs, don’t, because container-specific log entries aren’t cleanly separated from those of the host OS (a minimal illustration follows this list).
  • Networking was downright painful. Docker makes standard Linux networking tasks, like managing IP addresses and using iptables, hard. Altiscale engineers found numerous race conditions in how the Docker IP stack tries to allocate things, resulting in the frustrating experience of things working sometimes, but not others.
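
To illustrate the logging point above, here is a minimal sketch that sidesteps host syslog entirely by reading each container’s stdout/stderr through the Docker API and tagging it with the container’s name; it is an illustration only, not the workaround Altiscale built.

```python
# A minimal illustration of the log-separation problem: instead of relying on
# the host's syslog, where container entries mix with those of the host OS,
# each running container's stdout/stderr is read through the Docker API and
# tagged with the container's name before being printed or forwarded.
import docker

client = docker.from_env()

for container in client.containers.list():
    # Tail the last few lines of each running container, prefixed with its
    # name so entries stay attributable when aggregated downstream.
    for line in container.logs(tail=5).decode(errors="replace").splitlines():
        print(f"[{container.name}] {line}")
```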

“We experienced a couple of major setbacks due to these issues,” says Manesh. “We had to rearchitect our nodes’ startup process three times.”

“Some of the things we ran into are issues that developers won’t see on laptops,” he added. “They only show up on real servers with real BIOSes, 12 disk drives, three NICs, etc., and then they start showing up in a real way. It took quite some time to find and work around these issues.”

The Path to Production

Altiscale released the first containerized nodes into production about nine months ago. The percentage of containerized nodes is still small at this time compared to physical nodes, as the company gradually rolls out the new architecture.

Manesh notes that while a significant amount of work has gone into getting this project up and running, Altiscale engineers have not made changes to the Docker core that would remain proprietary.

“We’ve tried to stay away from that. Docker will continue to evolve and move forward. We don’t want to maintain our own fork.”

Rather, company engineers are active contributors to the Docker open source project and have contributed several of the scripts and tools they’ve created back to that community.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.

Featured image: “Elephants at Sunrise” by RayMorris1 is licensed under CC BY-NC-ND 2.0.