Security in the Cloud Native Era
Shlomo Zippel
Shlomo Zippel is a developer advocate at Twistlock. He helps DevOps and security users better leverage Twistlock technologies and helps shape product roadmaps with user requirements. He previously led the applications and UX team at PrimeSense, which built the 3-D sensor behind the Microsoft Kinect. He later led the engineering team at FundersClub, an online venture capital firm.
Every few years the world of web applications and services goes through a paradigm shift. The first such shift came when we moved from physical servers to virtualization. The latest shift that is happening right now is the transition from virtualization to Cloud Native architecture.
Every time such a shift happens, it changes the level of abstraction we deal with: from hardware, to operating systems, and ultimately up the stack to applications. Each shift opens up areas for innovation that were previously not possible. While the development, security, and maintenance challenges remain largely the same, the way we solve them (how we deploy code, scale the application, and respond to threats) can fundamentally change when moving to a cloud native architecture.
Let’s make this a bit more concrete.
The Trend — from Physical Servers to Containers
Once upon a time, companies had physical servers in a server room. They would host the internal company services as well as the publicly facing ones. Scaling meant buying more servers. Securing meant buying a firewall and configuring a DMZ. The unit of abstraction was physical hardware.
Fast forward 20 years and infrastructure became increasingly hosted by Infrastructure as a Service (IaaS) companies. Instead of dealing with physical servers, you deal with virtual servers hosted offsite. The unit of abstraction now became the operating system. This shift from physical machine to virtual machine let us solve the deployment, scaling, and security challenges in new and powerful ways, such as spinning up new virtual machines on demand or dynamically establishing a virtual private network.
Jump another 10 years to the present and you get Cloud Native: containerized applications and services. This is a bit of an oversimplification, of course, and there were intermediate steps between the shifts I just described, but the trend is there. The unit of abstraction is now a single process: the container.
So, to recap, we went from dealing with physical servers (actual hardware) to virtual machines (operating system) to containers (single process). The challenges didn’t change but the interfaces for solving them did.
Similar Challenges, Different Interfaces
The set of challenges every operations professional has to deal with remained largely unchanged throughout the paradigm shifts:
- Deployment, provisioning: How should the server be configured? Installed? Connected? How are updates handled?
- Orchestration, scaling: How should multiple services, or an app with multiple moving parts, be coordinated? How should new instances of the app spin up when they are needed?
- Security: How do we scan for and manage vulnerabilities? How can we detect and block threats and unwanted activity? How do we protect data in this infrastructure/system?
- Compliance / Policy enforcement: How can we enforce regulatory compliance and company policies related to our applications and services?
But the interfaces through which we tackle these challenges have changed significantly.
Deployment, Orchestration and Scaling
With the container abstraction, the tasks of deploying, orchestrating and scaling applications are in some ways simpler because the environment — the host and the operating system — is increasingly abstracted away from the application. This makes it easier to outsource the former and focus on the latter.
Consequently, orchestration and scaling tasks do not have to happen inside the application; they can be decoupled and solved externally. This has unleashed a whole slew of innovation in application orchestration and automatic scaling, much of which has helped advance the notion that infrastructure and operations tasks should remain application-agnostic.
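To make the decoupling concrete, here is a minimal sketch of an external autoscaler, assuming a hypothetical Orchestrator interface that stands in for whatever platform API actually manages the containers. The point is that none of this logic lives in the application itself.

```typescript
// Minimal sketch of an external autoscaler. The Orchestrator interface is
// hypothetical; in practice it would be the API your container platform
// exposes. The application being scaled knows nothing about any of this.
interface Orchestrator {
  getReplicaCount(service: string): Promise<number>;
  setReplicaCount(service: string, replicas: number): Promise<void>;
  getAverageCpuUtilization(service: string): Promise<number>; // 0.0 to 1.0
}

async function autoscaleOnce(
  orch: Orchestrator,
  service: string,
  targetCpu = 0.6,
  minReplicas = 2,
  maxReplicas = 20,
): Promise<void> {
  const current = await orch.getReplicaCount(service);
  const cpu = await orch.getAverageCpuUtilization(service);

  // Proportional scaling: keep average utilization near the target.
  const desired = Math.min(
    maxReplicas,
    Math.max(minReplicas, Math.ceil(current * (cpu / targetCpu))),
  );

  if (desired !== current) {
    await orch.setReplicaCount(service, desired);
  }
}
```

Run on a timer, a loop like this scales a service up and down without the service's own code ever changing.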
Security
The container abstraction embodies three fundamental characteristics that are important to security: minimalistic (e.g., single process), declarative, and immutable. These characteristics combined make it possible to secure applications in ways that in the past were difficult or manual.
Minimalistic: Containers are supposed to be single process. A complex application may be composed of many different containers, each serving a single purpose. Being minimalistic makes a container's behavior much easier to analyze.
Declarative: Containers are “baked.” In the baking process, the container image stipulates a great deal of runtime information. For instance, a Dockerfile may declare which network ports, file paths, and dependencies the container needs at runtime. This means it’s possible to process a container image statically and determine relevant runtime behavior, a task that is nearly impossible with traditional software or with virtual machines.
Immutable: This property goes hand-in-hand with containers being “declarative.” Essentially, once an image is baked, its instantiations — the containers — do not deviate from the image throughout their lifetime.
So right off the bat, either through static analysis or launch-time metadata, it is possible to determine a great deal of the container's runtime behavior: which distributed application it belongs to, which other containers it may communicate with, and, to some extent, detailed runtime information such as system calls and OS capabilities. We can then build a reliable baseline for the application automatically, with very little manual effort, and use this baseline to whitelist acceptable behavior and detect deviations at runtime.
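As a rough sketch of what such a static profile can look like, the snippet below pulls the declared ports, volumes, and entrypoint out of image metadata. It assumes the metadata was exported beforehand with `docker inspect <image> > image.json`; the field names follow Docker's image config format, but treat this as illustrative rather than a complete schema.

```typescript
import { readFileSync } from "fs";

interface RuntimeProfile {
  exposedPorts: string[]; // e.g. ["8080/tcp"]
  volumes: string[];      // declared mount points
  entrypoint: string[];   // the single process the container should run
  user: string;           // runtime user, if declared
}

function profileFromInspect(path: string): RuntimeProfile {
  // `docker inspect` emits a JSON array with one entry per image.
  const [image] = JSON.parse(readFileSync(path, "utf8"));
  const config = image.Config ?? {};
  return {
    exposedPorts: Object.keys(config.ExposedPorts ?? {}),
    volumes: Object.keys(config.Volumes ?? {}),
    entrypoint: [...(config.Entrypoint ?? []), ...(config.Cmd ?? [])],
    user: config.User || "root",
  };
}

// Anything observed at runtime that falls outside this profile, such as an
// unexpected listening port or a write outside the declared volumes, is a
// candidate for alerting or blocking.
console.log(profileFromInspect("image.json"));
```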
A good example of this is with Node applications. With Node, it’s possible to develop a mapping between the Node APIs and Linux system calls. Once you have that mapping, you can parse the Node app statically and determine the exact set of system calls the application will execute at runtime; anything not on the list is a suspect system call that could indicate an active compromise. Putting a Node application in a container ensures that the application will not change from underneath you, so the system call profile you have developed stays reliable throughout the container's lifetime.
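A toy version of that idea might look like the sketch below. The API-to-syscall table is illustrative and incomplete, and a real implementation would parse the application's AST and trace calls through libuv rather than search for method names, but it shows the shape of the approach: derive the whitelist statically, then flag anything observed at runtime that falls outside it.

```typescript
import { readFileSync } from "fs";

// Illustrative, incomplete mapping from Node APIs to the Linux system calls
// they ultimately issue.
const NODE_API_TO_SYSCALLS: Record<string, string[]> = {
  "fs.readFile": ["openat", "read", "close"],
  "fs.writeFile": ["openat", "write", "close"],
  "http.createServer": ["socket", "bind", "listen", "accept4"],
  "child_process.exec": ["clone", "execve", "wait4"],
};

// Statically derive the whitelist: which system calls should this app ever make?
function buildSyscallWhitelist(sourcePath: string): Set<string> {
  const source = readFileSync(sourcePath, "utf8");
  const whitelist = new Set<string>();
  for (const [api, syscalls] of Object.entries(NODE_API_TO_SYSCALLS)) {
    // Naive detection: look for the method name in the source text.
    const method = api.split(".")[1];
    if (source.includes(method)) {
      syscalls.forEach((s) => whitelist.add(s));
    }
  }
  return whitelist;
}

// At runtime (e.g. from a seccomp or tracing profile), flag anything off the list.
function isSuspect(observedSyscall: string, whitelist: Set<string>): boolean {
  return !whitelist.has(observedSyscall);
}

const whitelist = buildSyscallWhitelist("app.js");
console.log(isSuspect("execve", whitelist)); // true if the app never spawns processes
```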
Summary
Application and service development is a moving target with a constantly shifting landscape. At the end of the day the goal doesn’t change: we want tools to help us develop and deploy applications in a quick and secure way. The latest paradigm shift towards containers is a great opportunity to solve the same old challenges in creative new ways.
Twistlock is a sponsor of InApps.
Feature image via Pixabay.