Nick Chase
Nick Chase is head of technical and marketing content for Mirantis. He is a former software developer and Oracle instructor, the author of hundreds of tutorials, and the author or co-author of more than a dozen books on various programming topics, including ‘Understanding OPNFV,’ the ‘OpenStack Architecture Guide’ and ‘Machine Learning for Mere Mortals.’

If you’re reading this article, I don’t have to tell you that computer technology moves pretty fast. It seems like just yesterday we were adjusting to cloud computing, then debating virtualization versus containerization and cloud native computing. Now we’re debating the next wave of cloud technology.

Just as thousands of developers were using containers before they ever bubbled up to the attention of the trend makers, the next evolution of the data center is right under our noses.

For more than a decade now we’ve been used to “-as-a-Service,” as in “Infrastructure as a Service,” where the data center provided resources such as compute (as a Service), networking (as a Service) or storage (as a Service).

And this kind of architecture has been a good thing for plenty of reasons:

  • Lowered costs: Because you only need to instantiate resources when you need them, the as-a-Service model enabled companies to shed tens or even hundreds of thousands of dollars in unused equipment, not to mention the operational and overhead cost of maintaining that equipment. Instead, they were able to maintain a pool of resources that was closer to what was actually needed on an ongoing basis.
  • Increased speed: Because developers and other stakeholders could request resources as needed, they were no longer limited by the months-long process of requisitioning hardware and then waiting for IT personnel to set it up. Once self-service options became available, the time to availability shrank to almost zero: users could simply instantiate their own resources rather than putting in a request and waiting, sometimes for weeks, for an overburdened IT department to get to it.
  • Increased flexibility: With resources provided virtually and instantiated on demand, users were no longer stuck with whatever they had. If they needed a larger server, a different operating system or other changes, they could get them by simply spinning up new resources.
  • Access to innovation: Perhaps the greatest form of flexibility, providing resources as a Service meant that companies and users could avail themselves of new innovations much more easily because operators could simply add new services, and users could take advantage of them. In fact, this may explain part of the rapid rise of containers; with virtualized resources firmly in place, developers were in a better position to take advantage of this new paradigm.

Infrastructure as a Service also democratized development, as companies who couldn’t afford to create their own infrastructure could “rent” it to create applications that then grew — or didn’t — without massive up-front investment.

Just providing resources on an as-a-Service basis isn’t enough, however.

We Need the Data Center as a Service

The fact is that whether we realize it or not, we’ve gotten used to thinking of the data center as a fluid thing, particularly if we use cluster paradigms such as Kubernetes. We think of pods as tiny individual computers running individual applications, and we start them up and tear them down at will. We create applications using multicloud and hybrid cloud architectures to take advantage of the best situation for each workload.
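
To make that start-up-and-tear-down cycle concrete, here is a minimal sketch using the official Kubernetes Python client; the pod name, container image and namespace are placeholders chosen purely for illustration.

```python
# Minimal sketch of treating a pod as a tiny, disposable computer,
# using the official Kubernetes Python client (pip install kubernetes).
# The pod name, image and namespace below are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx:1.25")]
    ),
)

# Start the "tiny computer" up...
v1.create_namespaced_pod(namespace="default", body=pod)

# ...and tear it down again at will.
v1.delete_namespaced_pod(name="demo-app", namespace="default")
```

The point is not the specific calls but the lifecycle: compute appears and disappears as an API operation rather than a hardware project.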

Edge computing has pushed this analogy even further, as we literally spin up additional nodes on demand, with the network adjusting to the new topology.

Rightly so: with the speed of innovation, we need to be able to tear down a data center that is compromised, or bring up a new one to replace or enhance it, at a moment’s notice.

In a way, that’s what we’ve been doing with public cloud providers: instantiating “hardware” when we need it and tearing it down when we don’t. But we’ve been doing it on the cloud providers’ terms, with each public cloud racing to the bottom on cost to lock in as many companies and workloads as possible and control the conversation.

Just as we’ve gotten used to spinning up cloud workloads without worrying about the specific servers they will run on, what we need is the ability to spin up a new data center – or resources for an existing data center. We need to be able to do it without agonizing over where those resources will actually go, because we know that whatever requirements we have in place, whether they are cost-based, regulatory or geographical, will be satisfied.
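
There is no standard Data Center as a Service API today, so the following is only a hypothetical sketch of what such requirements-driven placement could look like; the DataCenterRequest and Site types and the catalog data are invented for illustration.

```python
# Hypothetical sketch only: the request/site shapes and the catalog below
# are invented to illustrate requirements-driven placement, not a real API.
from dataclasses import dataclass
from typing import List


@dataclass
class DataCenterRequest:
    max_hourly_cost: float  # cost-based requirement
    data_residency: str     # regulatory requirement, e.g. "EU"
    max_latency_ms: int     # geographical/latency requirement


@dataclass
class Site:
    name: str
    hourly_cost: float
    region: str
    latency_ms: int


def place(request: DataCenterRequest, catalog: List[Site]) -> Site:
    """Return the cheapest site that satisfies every stated requirement."""
    candidates = [
        s for s in catalog
        if s.hourly_cost <= request.max_hourly_cost
        and s.region == request.data_residency
        and s.latency_ms <= request.max_latency_ms
    ]
    if not candidates:
        raise RuntimeError("no site satisfies the stated requirements")
    return min(candidates, key=lambda s: s.hourly_cost)


# The caller states *what* it needs; the placement logic decides *where*.
site = place(
    DataCenterRequest(max_hourly_cost=40.0, data_residency="EU", max_latency_ms=30),
    [Site("helsinki-1", 32.0, "EU", 25), Site("oregon-2", 28.0, "US", 90)],
)
print(site.name)  # helsinki-1
```

The details would differ in practice, but the shape is the point: requirements go in, placement decisions come out.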

Only then will we have true Infrastructure as a Service.

Advantages of the Cloud Native Data Center

Once we leverage Data Center as a Service, we find that our initial Infrastructure as a Service advantages still hold at that level:

  • Lowered costs: Before, companies could reduce the amount of hardware in their possession to a level that matched their overall usage, plus some margin of safety. With the ability to spin up new hardware at will, companies only need to support the actual resources in use.
  • Increased speed: Just as users no longer needed to wait for IT to spin up a new virtual machine, now the company no longer needs to wait for IT to “rack and stack” new hardware; it can be available in a matter of minutes.
  • Increased flexibility: The ability to match resources to needs increases at the data center level, as users can create entire clusters as needed.
  • Access to innovation: Where Infrastructure as a Service enables a company to add new, more innovative resources for users, it still requires users to wait for that to happen. With Data Center as a Service, users can deploy these innovations without waiting, as long as they are available from the company’s providers.

Data Center as a Service also separates a company from its physical infrastructure, which brings additional advantages:

  • A single provider API: With Data Center as a Service, users only need to understand the API for their actual data center. Individual provider APIs are abstracted away, so companies no longer need to maintain a separate set of experts for each provider’s API (see the sketch after this list).
  • Unmanned data centers: Data Center as a Service lessens the need for large, staffed data centers in two ways: It takes advantage of external cloud providers, and it makes it possible for a company’s remaining private data centers to be largely unmanned. Without people constantly moving through the facility, we are free to create data centers that are more easily serviced by robots, enabling a more densely packed footprint.
  • Data gravity: With the ability to create more densely packed data centers, we can more easily accommodate the onslaught of data we expect in the coming years. This means not only storing huge amounts of data; it also means creating data centers close to users and to edge nodes – and doing it on the fly.
  • Stable, reliable power: Obviously, data centers can’t operate without power, which means that companies have to allow not only for weather events such as hurricanes but also for rolling blackouts such as those in California. While one way to prevent problems is to maintain the capacity to generate power on site, another, more reliable option is to be prepared to spin up and move to a data center in another location at a moment’s notice, just as you’d do with a cloud native application.
  • Climate advantages: Without the need for a data center on site, many companies are opting to locate their data centers in more climate-stable areas of the world, such as the Nordic countries, where power is plentiful and the milder climate reduces the need for cooling.
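
The “single provider API” idea from the list above can be sketched in a few lines. This is a hypothetical illustration, not a real SDK: the CloudProvider protocol, the stub providers and the DataCenter facade are all invented names.

```python
# Hypothetical sketch of a single data center API that hides provider
# differences. The protocol, stub providers and facade are invented names.
from typing import Protocol


class CloudProvider(Protocol):
    def create_instance(self, cpu: int, memory_gb: int) -> str: ...


class ProviderA:
    def create_instance(self, cpu: int, memory_gb: int) -> str:
        # A real implementation would call provider A's SDK here.
        return f"provider-a-vm-{cpu}x{memory_gb}"


class ProviderB:
    def create_instance(self, cpu: int, memory_gb: int) -> str:
        # A real implementation would call provider B's SDK here.
        return f"provider-b-node-{cpu}x{memory_gb}"


class DataCenter:
    """The one API users learn; provider differences stay behind it."""

    def __init__(self, provider: CloudProvider):
        self._provider = provider

    def add_capacity(self, cpu: int, memory_gb: int) -> str:
        return self._provider.create_instance(cpu, memory_gb)


# Users write against DataCenter regardless of which provider backs it.
dc = DataCenter(ProviderA())
print(dc.add_capacity(cpu=8, memory_gb=32))
```

Swap ProviderA for ProviderB and nothing above the facade changes, which is exactly the expertise-consolidation benefit described in the list.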

Perhaps most of all, just as Infrastructure as a Service enabled smaller companies to get started without huge investment, Data Center as a Service democratizes the data center itself, enabling companies that could never have afforded the huge investments that got us this far to take advantage of these developments.

These are just the advantages we see today.

We’ve Just Begun to See What the Future Holds

While we build out Data Center as a Service, we can begin to see around the corner. It’s not a stretch to imagine a world in which artificial intelligence and machine learning continually optimize the data center for the lowest cost, the best performance or some balance of the two, or in which smart monitoring provides advance warning of security or infrastructure issues. We can even see the data center itself expanding to include Internet of Things devices that monitor everything from temperature to moisture to vibration, in addition to security.

The infrastructure is in place. We just need to make use of it.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Mirantis.

Featured image via Pexels.