Containers are shifting the equilibrium of the new stack ecosystem we follow at InApps Technology. They represent a new way to think about app development and management. In the meantime, the host still dominates, as it has for many years. But how do we manage app development and operations when the traditional hosted environment does not map cleanly onto distributed architectures? These are the issues that keep surfacing as we explore this fast-moving ecosystem.

Data-centric architectures serve as a backdrop to this developing scenario of mixed environments. App development and management at scale is something every enterprise has to confront, and managing data at scale is simply a reality of doing business. Google, Etsy, Facebook, Airbnb and other Internet companies are now the models for how to scale development and operations; they are who the rest of us look to for guidance. These companies teach the enterprise that it is no longer a strictly siloed world.

Sharing data across a networked platform is the real goal. IT policies have to be semantically integrated into the data structures themselves. Trust is paramount. Antiquated, inherently restrictive systems do nothing but force the enterprise to pay for more boxes. And who the hell has the time, space or resources to keep buying more expensive boxes?

A scaled-out architecture should not fundamentally change as the amount of data that is managed increases over time, said Datadog CTO and Co-Founder Alexis Lê-Quôc in notes from a recent podcast.

#34: Monitoring Distributed Architectures

Listen to all TNS podcasts on Simplecast.

Everything has to be loosely coupled. Streaming architectures become essential, built around central queues such as Apache Kafka and event-driven services such as AWS Lambda, Lê-Quôc writes. The only formal contract is the queue and its API, which sits between producers and consumers. In this environment, application development and management cannot exist in an isolated world. The entire technology stack becomes far more relevant to the developer, opening up capabilities as software abstracts away the hardware environment. Vivek Juneja echoes this new thinking in his post about Lambda as the new normal:

As a developer, I would want to optimize how long a lambda function takes to execute along with the memory I provision for it. A smaller microservice should complete the execution of the function in milliseconds with minimal memory at its disposal. Consider this another methodology helping architects rethink their microservices architecture composition.
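To make that concrete, here is a minimal, hypothetical Python Lambda handler. The work it does is deliberately tiny; the knobs Juneja is talking about, memory size and (indirectly) duration, live in the function's configuration rather than in the code. The function name and event shape are illustrative, not taken from his post.

```python
import json

def handler(event, context):
    """Hypothetical Lambda handler: parse a small payload and return a result.

    Memory (and the CPU share that comes with it) and the timeout are set on
    the function's configuration, not in this code; billing scales with
    duration multiplied by memory, which is why both are worth keeping small.
    """
    order = json.loads(event.get("body", "{}"))  # assumed event shape
    total = sum(item.get("price", 0) for item in order.get("items", []))
    return {"statusCode": 200, "body": json.dumps({"total": total})}
```

The same exercise works for Lê-Quôc's point that the only formal contract is the queue and its API. In the sketch below, the producer and the consumer share nothing but a topic name and a message format; the broker address and the "orders" topic are assumptions for illustration, written against the kafka-python client.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "orders"            # hypothetical topic; the topic and payload shape are the contract

# Producer side: publish an event and move on.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "items": [{"sku": "abc", "price": 9.99}]})
producer.flush()

# Consumer side: a completely separate process, coupled only by the topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print("received:", message.value)
    break  # read a single message, just for the example
```

The producer and consumer never reference each other; either side can be rewritten, scaled or containerized independently, as long as the topic and payload stay stable.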

Developers are increasingly attentive to the lower levels of the stack, and to the monitoring demands that arise as distributed architectures change how we think about hosts and how they relate to containers.

In his notes, Lê-Quôc explains that in the mainframe world there was one host. The client/server era broke that up into many smaller machines. Now the host is becoming simply a recipient of compute: with containers, the compute may be distributed across hundreds or even thousands of containers on one host.

Host lifetimes have historically been measured in days, weeks or even months. With cloud services such as Amazon Web Services (AWS), those lifetimes shrank, with instances disappearing at short notice. Containers push this further: they behave more like processes, coming and going at a moment's notice.
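A quick way to feel how process-like containers are is to run one as if it were a command. This sketch uses the Docker SDK for Python against a local daemon; the image and command are placeholders.

```python
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a container the way you would spawn a process: it starts, does one
# job, and is deleted as soon as it exits (remove=True).
output = client.containers.run(
    "alpine:3.19",                                   # placeholder image
    ["echo", "hello from a short-lived container"],  # placeholder command
    remove=True,
)
print(output.decode().strip())
```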

This means compute will come in patterns, swarming to the host, which simply serves up CPU. It's this speed, and how it fits into a distributed architecture, that is worth thinking about.

That means thinking differently about monitoring. The host is no longer the center of the monitoring universe, Lê-Quôc said. Hosts are just recipients of compute.
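One way to picture that shift: metrics still arrive stamped with the host they happened to run on, but the questions we ask are about services and containers. The sketch below is plain Python with made-up metric points and a made-up "service" tag; it aggregates by service rather than by host, which is roughly the mental model tag-based monitoring encourages.

```python
from collections import defaultdict

# Hypothetical metric points: each carries tags for host, container and service.
points = [
    {"metric": "request.latency_ms", "value": 12, "host": "i-0a1", "container": "web-1", "service": "web"},
    {"metric": "request.latency_ms", "value": 48, "host": "i-0a1", "container": "api-7", "service": "api"},
    {"metric": "request.latency_ms", "value": 15, "host": "i-9f2", "container": "web-4", "service": "web"},
    {"metric": "request.latency_ms", "value": 52, "host": "i-9f2", "container": "api-2", "service": "api"},
]

# Group by service, not by host: the host is just where the compute landed.
by_service = defaultdict(list)
for p in points:
    by_service[p["service"]].append(p["value"])

for service, values in sorted(by_service.items()):
    avg = sum(values) / len(values)
    print(f"{service}: avg latency {avg:.1f} ms across {len(values)} containers")
```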

The developer has a more complex task. How does the app get deployed? How do containers affect how the app behaves across a distributed environment? Does the developer need to know these things, or is that the responsibility of the operations team that manages the architecture? The role of the backend engineer is sophisticated work, but increasingly the developer has to be aware of operations to make sure the app is performing accordingly. In many respects, this explains why monitoring now carries so much more importance. It's the customer experience that matters. If the app is performing poorly, the developer should want to know why. That means getting access to dashboards that give a picture of how the app is behaving and what needs to be done to solve the problem.

With all of this in mind, we have some questions for a series of posts we want to do. We will be doing more on this topic for our container month coverage in June, so expect us to address these questions in more depth as the spring progresses.

For Now, Here are My Questions

  • How do containers change the way users have to think about networking?
  • What about mixed container/VM environments and how they work in a distributed architecture?
  • What are the challenges this mixed environment poses?
  • How does monitoring change our concept of a host?

These questions illustrate how concepts are changing to adapt to distributed environments. For example, it’s as if the advent of containers almost leapfrogged over SDN, creating a new set of challenges for networking.

VMware taught the world how not to deal with bare metal. Now it is access to bare metal that offers speed and flexibility. What happens to the hypervisor and its role in a hosted environment as containers become a way to deliver microservices? That seems to be the challenge for the enterprise and VM environments as they meet the demands that come with distributed architectures. Does the cloud service abstract the complexity? Are the services that sit on top of the network really what matter most? I think of how much etcd is being used by CoreOS, Kubernetes, Docker, Cloud Foundry and Red Hat. The etcd environment is not something the end user should have to worry about, but it is something they want to use for managing what they are running across distributed environments.
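For anyone who has not touched it, etcd is a distributed key-value store that these platforms use to hold shared configuration and cluster state. A minimal sketch of the idea, using the python-etcd3 client and assuming an etcd endpoint on localhost:2379 (the key name is made up):

```python
import etcd3

# Connect to a local etcd endpoint (assumed to be running on the default port).
client = etcd3.client(host="localhost", port=2379)

# Store a piece of shared configuration that any node in the cluster can read.
client.put("/config/payments/feature_flags/new_checkout", "enabled")

# Any other service, on any host, reads the same value through the same API.
value, metadata = client.get("/config/payments/feature_flags/new_checkout")
print(value.decode())  # -> "enabled"
```

The point is that any service on any host reads and writes through the same small API, which is why the platforms above lean on it, and why end users rarely need to see it directly.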

It’s really the service providers that have to think through this, abstracting the middle tier, as Joe Emison explained in a post for us this week.

There are any number of angles on the questions we pose. Share your thoughts if you wish. We are always looking for more perspective.

Datadog and Red Hat are sponsors of InApps Technology.

Feature image via Flickr Creative Commons.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.