In-memory computing, and how it works in distributed systems, reflects a new reality for developers and software engineers, and it challenges the way databases and their role in application architectures have traditionally been viewed.

It’s also an example of how software providers are always looking for the best hardware to run their software. It touches on the advancement of data stores for microservices, the role of machine learning in application development and the use of artificial intelligence in scientific and commercial use cases.

On Dec. 7 at 11:30 a.m. PT, I will host a livestream discussion on in-memory computing with Intel Design Engineer Christine Wang, covering the basics and the latest in how in-memory computing is being used and applied. Intel has developed Optane, a persistent memory platform that has been in development for years and is now gaining attention. We will discuss Intel’s approach and take questions about how Optane’s persistent memory technology affects how we view development, deployment and management of at-scale technologies.

It is becoming clear that scale-out architectures require deeper integration between hardware and software. That raises several questions and discussion points:

  • What is actually viable with new in-memory capabilities? Streaming architectures running on container technologies are driving deeper demand for in-memory computing, with data stores that balance memory against the CPU and the network.
  • How will applications best be architected to use persistent memory? Applications are only beginning to be built for persistent memory environments, and the considerations come down to questions such as how to use persistent memory in combination with RAM (see the sketch after this list).
  • What does in-memory computing mean in the context of persistent memory? We hear more about the value of in-memory computing for managing microservices. The differences come down to what a single node can manage, and to comparative price when considering memory density and the general growth in data.
  • How are distributed, in-memory technologies managed in cloud native environments, and who does this kind of work? There is growing interest in how in-memory architectures fit with Kubernetes and cloud native technologies.
  • What are the integrations with Apache Software Foundation projects such as Spark and Kafka?
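To make the persistent-memory-plus-RAM question above a little more concrete, here is a minimal sketch in C using Intel’s PMDK (libpmem). It assumes a DAX-mounted persistent memory filesystem; the path, region size and message are hypothetical placeholders for illustration, not a definitive implementation.

```c
/* Minimal sketch: writing durable state through PMDK's libpmem.
 * Assumes a DAX-mounted pmem filesystem; the path below is hypothetical.
 * Build (with PMDK installed): cc pmem_sketch.c -lpmem -o pmem_sketch
 */
#include <libpmem.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define POOL_PATH "/mnt/pmem/example"   /* hypothetical DAX-mounted path */
#define POOL_SIZE (4 * 1024 * 1024)     /* 4 MiB region for this sketch  */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on the pmem device directly into the address space. */
    char *addr = pmem_map_file(POOL_PATH, POOL_SIZE, PMEM_FILE_CREATE,
                               0666, &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return EXIT_FAILURE;
    }

    /* An ordinary store, as if writing to RAM... */
    const char *msg = "state that survives a restart";
    strcpy(addr, msg);

    /* ...followed by an explicit flush to the persistence domain. */
    if (is_pmem)
        pmem_persist(addr, strlen(msg) + 1);   /* cache flush, no msync needed */
    else
        pmem_msync(addr, strlen(msg) + 1);     /* fallback for non-pmem mounts */

    pmem_unmap(addr, mapped_len);
    return EXIT_SUCCESS;
}
```

The point the sketch tries to capture is the hybrid model raised in the list: the data sits in a byte-addressable region that reads and writes like RAM, but an explicit flush makes it durable, and that trade-off against pure DRAM is exactly what application architects will have to weigh.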

In-memory computing is in the midst of its most significant growth to date. The market is expected to grow 25 percent annually from 2020 to 2025. But why is the market so ready for that growth? That’s a question I look forward to posing to Wang on Dec. 7. Join us on InApps’s Periscope channel.

Intel is a sponsor of InApps. Alex Williams is participating in the Intel influencer program.

Feature image by Michal Jarmoluk via Pixabay