Microservices Made Easier with Cloud Foundry’s Lattice and Diego
Microservices have been the talk of the software development and operations world recently. As more and more software development teams look into using microservices, the ability to orchestrate large groups of containers running multiple services at once becomes crucial.
At SpringOne 2GX in September, Cloud Foundry platform engineer Matt Stine spoke about how two open source Cloud Foundry projects, Diego and Lattice, can work together to offer a new, simplified way to manage microservices in a Cloud Foundry environment.
Diego provides scheduling for container-based workloads. It can manage both one-off tasks and long-running processes (LRPs), including those that run indefinitely, such as database systems. Lattice is a lightweight, cluster-based workload manager, and it includes Loggregator to aggregate logs on the health and status of running containers.
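To make the distinction concrete, the sketch below models a one-off task and an LRP as plain Go structs. The types and field names here are illustrative assumptions, not Diego's actual API.

```go
package main

import "fmt"

// Task models a one-off unit of work: it runs once, reports a result, and exits.
// LRP models a long-running process that the scheduler keeps alive at a desired
// instance count. Both types are illustrative only, not Diego's real models.
type Task struct {
	GUID   string
	RootFS string   // container root filesystem to run in
	Action []string // command to execute once
}

type LRP struct {
	ProcessGUID string
	RootFS      string
	Start       []string // command to start each instance
	Instances   int      // desired number of running instances
}

func main() {
	stage := Task{GUID: "stage-app-1", RootFS: "docker:///ubuntu", Action: []string{"./stage.sh"}}
	web := LRP{ProcessGUID: "web", RootFS: "docker:///myorg/web", Start: []string{"./server"}, Instances: 3}
	fmt.Printf("task %s runs once; LRP %s is kept at %d instances\n", stage.GUID, web.ProcessGUID, web.Instances)
}
```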
Together, these components form a complete platform for managing large numbers of microservices, one without a steep learning curve.
Brains of the Operation
Lattice clusters are made up of VMs, called cells, each running individual containers. Together the cells form a distributed system for executing tasks inside containers, and they are monitored by software that Diego controls, called the brain, which schedules workloads onto cells and orchestrates their functions.
All cells are monitored by the brain to ensure that services continue to run. Cells that are not performing well, or that have failed altogether, have their workloads rebalanced elsewhere so that overall system uptime remains consistent. Diego uses a bulletin board system (BBS) that keeps track of both the actual state of the system as it currently operates and a set of specifications for the desired states for building, testing, or running an app in production.
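As a rough illustration of that bookkeeping, the sketch below models the BBS as two in-memory maps, one for desired state and one for reported actual state. The real BBS is a distributed store; these types and method names are hypothetical.

```go
package main

import "fmt"

// bbs is a toy stand-in for Diego's bulletin board system: it records the
// desired instance count for each process alongside the instances actually
// reported as running by cell representatives.
type bbs struct {
	desired map[string]int // process GUID -> desired instances
	actual  map[string]int // process GUID -> instances reported running
}

func newBBS() *bbs {
	return &bbs{desired: map[string]int{}, actual: map[string]int{}}
}

// Desire records what was asked for.
func (b *bbs) Desire(processGUID string, instances int) { b.desired[processGUID] = instances }

// Report records what a cell representative says is actually running.
func (b *bbs) Report(processGUID string, running int) { b.actual[processGUID] = running }

func main() {
	b := newBBS()
	b.Desire("web", 3)
	b.Report("web", 2) // one instance is missing, e.g. its cell failed

	for guid, want := range b.desired {
		if got := b.actual[guid]; got != want {
			fmt.Printf("%s: desired %d, actual %d -> needs convergence\n", guid, want, got)
		}
	}
}
```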
The brain has an unusual approach to assigning workloads: it auctions a particular container’s workload to cells based on the resources available in the system. The auctioneer, as this feature is called, offers cells the opportunity to bid on the right to run a task or LRP and thus take responsibility for it.
A bid is a report of the resources the cell’s VM has available, such as CPU capacity. The auctioneer collects bids from the Lattice cells, scores them with a scoring algorithm, and chooses a winner; the winning cell runs the workload.
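Diego's actual scoring algorithm is more sophisticated, but the auction flow can be pictured roughly as follows. The types and the deliberately naive scoring rule here are assumptions made for illustration.

```go
package main

import "fmt"

// bid is what a cell reports back to the auctioneer:
// how much room the cell's VM currently has.
type bid struct {
	cellID    string
	freeMemMB int
	running   int // containers already on the cell
}

// score favours cells with more free memory and fewer running containers.
// This is a naive stand-in for Diego's real scoring algorithm.
func score(b bid) int {
	return b.freeMemMB - 100*b.running
}

// runAuction collects bids and hands the workload to the highest scorer.
func runAuction(bids []bid) string {
	best := bids[0]
	for _, b := range bids[1:] {
		if score(b) > score(best) {
			best = b
		}
	}
	return best.cellID
}

func main() {
	bids := []bid{
		{cellID: "cell-1", freeMemMB: 2048, running: 12},
		{cellID: "cell-2", freeMemMB: 4096, running: 3},
		{cellID: "cell-3", freeMemMB: 1024, running: 1},
	}
	fmt.Println("workload goes to", runAuction(bids)) // cell-2 wins in this example
}
```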
Cell representatives are limited in that they can only run what a task’s API tells them to. The instructions are passed along as JSON-formatted messages, and the cell representative hands them to an executor, which runs the requested action.
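The exact wire format belongs to Diego, but the shape of such a message might look something like the sketch below, with hypothetical field names standing in for the real ones.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// runAction is an illustrative stand-in for the JSON instructions a cell
// representative hands to its executor: which binary to run, with what
// arguments and environment, inside the container.
type runAction struct {
	Path string            `json:"path"`
	Args []string          `json:"args"`
	Env  map[string]string `json:"env"`
}

func main() {
	action := runAction{
		Path: "/app/server",
		Args: []string{"--port", "8080"},
		Env:  map[string]string{"ENVIRONMENT": "production"},
	}

	// The representative would serialize the action and pass it along;
	// the executor deserializes it and runs the described process.
	msg, _ := json.MarshalIndent(action, "", "  ")
	fmt.Println(string(msg))
}
```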
Lattice works well with Garden, a container runtime available for those running Linux-based systems. Written in Go, Garden also allows containers to be deployed using BOSH with a Linux-based backend.
Because Garden is a series of Go interfaces, running it on Linux requires a bit of setup before deploying one’s first Garden container. The BTRFS tools make setting up a loopback filesystem simple, allowing developers to then build out and test their Garden containers for functionality.
To further enhance a development stack running Garden with Lattice, a Warden RootFS can be provisioned. Warden is a crucial component of container management, offering a simple API for handling isolated environments. Diego dovetails with the Garden API to orchestrate the namespaces and processes running in containers. A task asking for a workload has no idea it is being run on Linux, which keeps setup simple when using a VM environment for development.
Cell representatives can ask Garden to run tasks in containers in a way that is independent of any particular platform. Lattice handles the specifics surrounding the tasks to be run before spinning up containers, which allows each layer of a Lattice cluster to evolve independently of the others. The result is more flexibility: developers can update the tasks to be run or assign new LRPs without changing individual namespaces.
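Because Garden is defined as a set of Go interfaces, that platform independence can be pictured as an interface with interchangeable backends. The interface and method names below are simplified assumptions, not Garden's real API.

```go
package main

import "fmt"

// containerBackend is a simplified, hypothetical stand-in for the kind of
// interface Garden exposes: callers create containers and run processes in
// them without knowing which operating system backend does the work.
type containerBackend interface {
	Create(handle string) error
	Run(handle string, path string, args ...string) error
}

// linuxBackend is one possible implementation; a caller such as a cell
// representative only ever sees the containerBackend interface.
type linuxBackend struct{}

func (linuxBackend) Create(handle string) error {
	fmt.Println("creating namespaced container", handle)
	return nil
}

func (linuxBackend) Run(handle, path string, args ...string) error {
	fmt.Println("running", path, args, "in", handle)
	return nil
}

func main() {
	var backend containerBackend = linuxBackend{}
	backend.Create("web-instance-0")
	backend.Run("web-instance-0", "/app/server", "--port", "8080")
}
```

Programming against the interface rather than a concrete backend is what lets a task remain unaware of the operating system underneath it.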
Lattice and Diego Workload Management
When a task is posted to Diego, the BBS attempts to match, as closely as possible, the desired state of the cluster against its actual state at that moment.
When events are scheduled, Diego’s Receptor informs the BBS of the desired workload, while the cell representatives inform the BBS of the actual workload; other events are captured and stored as well. When running multiple microservices, some events take longer to run or to return user data, so Lattice polls to assess the state of the system and reconcile, as closely as possible, the desired state of a workload with its actual operational state at the moment.
Another feature of the brain, called the converger, works to bring the workload’s actual state as close as possible to the desired state. It informs cell representatives of tasks that can be stopped, while instructing the auctioneer to start contacting cell representatives for bids on new workloads.
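One way to picture the converger's job is a loop that compares desired and actual instance counts and decides whether to auction off missing instances or stop extras. This is a minimal sketch with hypothetical names, not Diego's actual converger.

```go
package main

import "fmt"

// converge compares desired and actual instance counts for each process and
// decides what corrective action to take: auction off missing instances or
// tell cell representatives to stop extras.
func converge(desired, actual map[string]int) {
	for guid, want := range desired {
		got := actual[guid]
		switch {
		case got < want:
			fmt.Printf("%s: %d instances missing -> send to auctioneer\n", guid, want-got)
		case got > want:
			fmt.Printf("%s: %d extra instances -> tell cell reps to stop them\n", guid, got-want)
		default:
			fmt.Printf("%s: converged at %d instances\n", guid, want)
		}
	}
}

func main() {
	desired := map[string]int{"web": 3, "worker": 2}
	actual := map[string]int{"web": 2, "worker": 4}
	converge(desired, actual)
}
```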
Working with Lattice at Scale
As a container scheduler, Lattice not only works well for smaller installations but can also be used to deploy multiple clusters, using Terraform, a tool for configuring and launching large-scale infrastructure deployments.
Working with Lattice and Diego requires setting up a virtual machine. Developers who prefer not to work in the Lattice command line interface can use the X-Ray.CF dashboard to view a particular cluster visually. X-Ray displays pulsing cells for containers whose workloads are starting up, and load balancing is then shared across the different containers.
As more cells are fired up, the number of available cells will exceed the number needed. Lattice also allows users to deploy a single cluster, so there is no stringent access control eating into development time while users wait for permission to deploy, debug, or manage an active cluster.
Lattice is a structure that enables software developers to accomplish many tasks at once: a set of scaffolding on which users can run their microservices, including large fleets of servers, user-facing applications, NoSQL or SQL databases, and much more. Depending on a user’s needs, Lattice can serve both the enterprise level and smaller teams looking to experiment with microservices for their application management.
Feature image: “Chrome Alum Crystals” by Paul is licensed under CC BY-SA 2.0.