oneAPI and Your New Hardware
Machine learning (ML) is beginning to have a major impact on how applications are developed, as data scientists and ML engineers play an increasingly important role in DevOps. The hardware and infrastructure choices on which those applications run remain as critical as ever, even as they reshape how the data center is conceived and managed.
The industry-wide cross-architecture specification oneAPI, led by Intel, was created with these new dynamics in mind, as a way to meet developers’ coding and data modeling needs for ML- and data science-centric CI/CD. With it, DevOps teams’ skill sets and their relationships with underlying data center infrastructure are evolving as well. These were key themes of InApps Technology’s live streaming series “A Day of Machine Learning: oneAPI and the Future of the Data Center.”
“It’s not enough to just focus on ‘I want to just optimize that piece of AI algorithm or technique really fast,’ but think of the whole ecosystem, almost as if a data engineer is handing [development] off to a data scientist, who’s handing it off to an ML engineer” throughout the entire production pipeline, said Meena Arunachalam, principal engineer at Intel, who discussed scaling ML applications and the hardware required to do so during the live streaming broadcast. “We used to say, ‘hardware-software co-design,’ and I’m going to say [rephrase] it as ‘hardware-AI co-design’: I’m going to understand my algorithms, and my data, and my modeling and all these [ML] techniques… so that, ultimately, I can make the best and most pleasant application experience for the user, and for the developer as well to be the most efficient and productive.”
In this article, we’ll cover, from the developer’s perspective, where oneAPI fits into the evolution of APIs, its use for at-scale ML application development, and how DevOps’ interactions with hardware, and with the data center in general, are evolving as a result.
The oneAPI Promise
During the past couple of decades, APIs have emerged to play an integral role in how developers, as well as operations team members, build and manage applications at scale. In the course of that evolution, the oneAPI specification serves as a framework of library code, documentation and other resources that developers use to address the challenges of developing applications for accelerator architectures. Saumya Satish, AI product manager in the Architecture, Graphics and Software organization at Intel, described oneAPI as a layer of software with its own programming language and libraries: an open industry specification to which “other vendors can contribute and implement it for their different architectures.”
Intel oneAPI, as its name implies, has served as the Intel hardware-specific implementation of the framework since its creation in 2018. The Intel oneAPI product, Satish said, “provides the implementation of oneAPI to exploit the best features of our diverse architectures, while Intel also has been investing a lot in diversifying architectures, and providing a range of different options and choices.”
In this way, Intel oneAPI offers developers and data scientists who create ML systems and applications Intel-optimized frameworks and libraries for performance across multiple domains: math, analytics, machine learning, deep neural networks, video processing and more. Instead of having to choose and mix different associated libraries and frameworks, developers can access resources optimized for specific Intel XPUs through simple, easy-to-use, domain-centric toolkits.
For AI and machine learning, oneAPI offers the Intel AI Analytics Toolkit for accelerating end-to-end data science and ML pipelines. The toolkit includes popular open source deep learning frameworks (TensorFlow, PyTorch) and ML libraries (scikit-learn, XGBoost), along with Modin, a lightweight, scalable dataframe library, all optimized for Intel XPUs using oneAPI libraries and providing drop-in acceleration with minimal to no code changes.
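The drop-in pattern is worth making concrete. Below is a minimal sketch, assuming the Intel Extension for Scikit-learn (scikit-learn-intelex) and Modin are installed and that a features.csv file exists; everything else is the unchanged scikit-learn- and pandas-style code a team would already have:

```python
# A minimal sketch of "drop-in acceleration," assuming scikit-learn-intelex
# and Modin are installed (e.g. via the AI Analytics Toolkit or pip).
from sklearnex import patch_sklearn
patch_sklearn()  # must run before sklearn imports; reroutes supported
                 # estimators to oneAPI-optimized kernels

import modin.pandas as pd           # drop-in replacement for pandas
                                    # (Modin auto-starts a local engine such as Ray)
from sklearn.cluster import KMeans  # unchanged scikit-learn import

df = pd.read_csv("features.csv")    # hypothetical input file
model = KMeans(n_clusters=8).fit(df.to_numpy())
print(model.inertia_)
```

Aside from the two-line patch and the changed import, the training code is identical to the unaccelerated version, which is the point of the toolkit.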
“oneAPI is a way to access some of the internal hardware and be able to exploit some of the capabilities in terms of instruction sets that it offers to accelerate AI and machine learning,” said AG Ramesh, principal engineer of deep learning software at Intel. “The backend can be customized for the specific hardware,” so that a single DNN kernel works on multiple platforms.
With the Intel oneAPI Deep Neural Network Library (oneDNN), for example, a developer can target Nvidia graphics processing units (GPUs), Intel central processing units (CPUs), field-programmable gate arrays (FPGAs) and other devices for computationally intensive ML applications through a single API.
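Frameworks expose this backend indirectly rather than through oneDNN calls in user code. As one illustration (not an official oneAPI sample), PyTorch’s CPU path dispatches many operators to oneDNN, so the same model code runs unchanged while the backend selects hardware-appropriate kernels; a minimal sketch, assuming a standard PyTorch CPU build:

```python
# A sketch of backend-transparent execution, assuming a PyTorch build with
# oneDNN support (historically "MKL-DNN"), the default in CPU wheels.
import torch
import torch.nn as nn

print(torch.backends.mkldnn.is_available())  # True when oneDNN kernels are present

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    y = model(x)  # convolution and ReLU dispatch to oneDNN kernels on CPU
print(y.shape)
```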
Without such a framework, many developer teams can struggle, especially when they lack the in-house skill sets and other resources to build ML applications end to end on their own.
“Many ML projects die at an early stage when developers or data scientists cannot figure out how to create the required proof of concept within a reasonable time period and budget. There is also a certain tragedy associated with how they often have the data, algorithms, code libraries and even the knowledge of how to train and validate a model available, but fail due to their inability to get everything set up for their specific use case,” Torsten Volk, an analyst for Enterprise Management Associates (EMA), said. “Intel oneAPI addresses exactly this issue of connecting [DL and] ML frameworks with the underlying processing hardware so that developers only need to code once to the oneAPI and the oneAPI then decides what hardware is needed for optimal performance model training or inference.”
As a case study example, Google relies on Intel oneAPI for its use of TensorFlow as an open source platform for its ML application-development needs. Google uses Intel oneAPI for “the code that is necessary to marshal the data between what TensorFlow uses and the oneDNN for that specific platform,” Ramesh said.
Google, like other customers, benefits from Intel oneAPI in order to achieve “the right setting for the hardware and to get all the benefits from the underlying hardware, without the user having to [completely] understand what system they’re working on,” Ramesh said.
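In recent TensorFlow releases, the oneDNN optimizations Ramesh describes are gated behind the documented TF_ENABLE_ONEDNN_OPTS environment variable (and are enabled by default on newer versions). A minimal sketch; the workload itself is illustrative:

```python
# A sketch of toggling TensorFlow's oneDNN-optimized kernels, assuming a
# TensorFlow release that honors TF_ENABLE_ONEDNN_OPTS. The flag must be
# set before TensorFlow is imported.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

a = tf.random.normal([1024, 1024])
b = tf.random.normal([1024, 1024])
c = tf.linalg.matmul(a, b)  # runs through oneDNN-backed kernels on supported CPUs
print(c.shape)
```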
Buttons and Knobs
As application development continues to shift to cloud environments, hardware device selection will have a direct effect on ML application performance. Developers and engineers will thus have to offer input into the hardware selection process to a similar degree as if they were selecting devices for an onsite data center, whether it is Amazon Web Services (AWS), Google or Microsoft managing their infrastructure. This hardware selection process is one that Intel oneAPI is particularly well suited to support.
“Developers essentially want to control the machine that they’re working with. They want all access to all the levers and knobs, and so I think bare metal is turning into one of those solutions there that they’re looking for: processor-based or with Intel Accelerator,” Paul Teich, director of product management-infrastructure at Equinix, said. “But because it is within a deep learning software framework, you want access to all the performance that you can get.”
A developer team might experiment and create an application to test in a serverless environment, but an ambitious ML application-development project will eventually require very specific libraries, documentation and hardware to run properly once deployed.
“As Intel oneAPI and (industry spec) oneAPI offer hardware acceleration as code, enterprises can leverage the centralized utilization data of the individual API functions to plan future investments in data center and cloud resources,” Volk said. “As long as the target infrastructure supports oneAPI, the same AI/ML models should optimally run on AWS, Azure, Google Cloud or anywhere else.”
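One hedged way to verify that a target host “supports oneAPI” in Volk’s sense is to enumerate the devices its runtime exposes, for example with Intel’s dpctl package (an assumption here: dpctl and a oneAPI runtime are installed on the host):

```python
# A sketch of enumerating the oneAPI (SYCL) devices visible on a host,
# assuming Intel's dpctl package from the oneAPI Python stack is installed.
# Device names and counts depend on the local runtime and drivers.
import dpctl

for device in dpctl.get_devices():
    print(device.name)  # e.g. a CPU, GPU or accelerator the runtime can target
```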
End-to-End Frameworks for ML
ML application development can be notoriously difficult, and developer teams that do not make use of frameworks and libraries, such as those oneAPI offers, can easily find themselves in trouble, as mentioned above.
“Some companies will be big enough that they will put together their own sort of data pipeline [for ML application development] — but I also think there’s a much larger group of companies that are going to look for these solutions,” Rachel Roumeliotis, vice president of AI and data at O’Reilly Media, said. “I think they’re still going to need to think through them, but they’re going to need the help that these frameworks,” such as oneAPI, provide.
The vendor that makes developers and data scientists working on ML applications the most productive should thus “win over whichever vendor squeezes more performance, density or cost efficiency from their hardware infrastructure,” Volk said. Since developers and data scientists are generally not experts in determining what hardware-specific accelerator functions or instruction sets are available for a given hardware component, the “vast majority of these instructions do not add the value [they were] designed to deliver,” Volk said.
“oneAPI aims to save developer time by offering these hardware-specific acceleration capabilities through a unified API platform, while at the same time increasing app speed and infrastructure utilization — essentially, ‘ML-Acceleration as Code,’” Volk said.