Intel Makes AI A Reality For Businesses
We unpack Intel’s comprehensive AI solutions stack, with libraries and frameworks that facilitate development and scale adoption across its hardware assets — CPUs, FPGAs, VPUs and the soon-to-be-released NNP product line. Intel’s objective is a broader roadmap: domain-specific architectures coupled with simplified software tools that enable abstraction and faster prototyping. That is why Intel focuses on multiple architectures, whether scalar (CPU), vector (GPU), matrix (AI) or spatial (FPGA). “From general purpose processors like Xeon to purpose-built AI compute like Movidius VPUs, FPGAs and the soon to be released Nervana Neural Processors, a diversified architecture helps rethink the AI value chain and offers more value to the end user,” said Mallya. For example, you can use a general purpose architecture, but a purpose-built FPGA is also a strong offering for low-power usage models that require high-speed inferencing at scale, he added. Looking ahead, Intel has positioned itself as a full-stack partner across silicon, platforms, tools and libraries, enabling ease of application development and differentiation.
As the new world order requires a diverse set of products and software, the AI major is reimagining computing with three distinct pillars:
a) Hardware: Deploy AI anywhere with unprecedented hardware choice
b) Software: capabilities that sit on top of the hardware
c) Community: rich support to get up to speed with the latest tools
The Intel team is working towards closer synchrony between all the hardware layers and the software. For example, Intel Xeon processors improve with each generation, and we are now seeing a shift towards instructions that are very specific to AI, shared Austin Cherian, Head, High Performance Computing Business, Intel. “VNNI [Vector Neural Network Instructions] is something that is part of Intel Xeon processors, but we have another interesting key aspect known as Advanced Vector Extensions. These are instructions that have been there on Intel Xeon for the last five years; AVX allows you to get the performance on the Xeon processor, and the VNNI instruction enables data scientists and Machine Learning engineers to maximise AI performance,” he said. Here’s where Intel is upping the game in terms of heterogeneity — from generic CPUs (2nd Generation Intel® Xeon® Scalable Processors) running AI-specific instructions to a complete product built for both training and inference. “Intel Nervana Neural Network Processor (NNP) is designed from the ground up to run full AI workloads that you cannot run on GPUs, which are more general purpose,” said Cherian.
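To make the VNNI point concrete: the instruction fuses the multiplication of 8-bit integers with accumulation into 32-bit results, which is the core arithmetic of quantised inference. The following is a minimal numpy sketch of that arithmetic pattern, not Intel code — it shows what the hardware accelerates, not how the SIMD instruction itself is invoked:

```python
import numpy as np

# Quantised inference multiplies int8 activations by int8 weights and
# accumulates into int32 -- the arithmetic pattern that AVX-512 VNNI
# collapses into fewer instructions on 2nd Gen Xeon Scalable CPUs.
rng = np.random.default_rng(0)
acts = rng.integers(-128, 128, size=(4, 64), dtype=np.int8)      # activations
weights = rng.integers(-128, 128, size=(64, 10), dtype=np.int8)  # weights

# Widen before multiplying so the products do not overflow int8,
# then accumulate in int32, as the hardware does internally.
acc = acts.astype(np.int32) @ weights.astype(np.int32)

print(acc.shape, acc.dtype)  # (4, 10) int32
```

Doing this with int8 inputs roughly quarters the memory traffic versus float32, which is where much of the inference speed-up comes from.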
One API To Rule Them All
Let’s dive into Intel’s software offerings and how they help the developer and data scientist communities drive maximum performance gains by enabling broad support for hardware deployment. The latest developer tool on the block, which truly takes away the complexity of heterogeneous architectures, is One API: a unified programming model across diverse architectures. One of Intel’s most ambitious multi-year software projects, One API offers a single programming methodology across heterogeneous architectures.
What’s the end benefit to application developers? This set of developer tools eliminates the need to maintain separate code bases, multiple programming languages, and different tools and workflows, enabling application developers to get the maximum performance out of the hardware. “The magic of One API is that it takes away the complexity of the programme and gives developers and end users the power of heterogeneity. Developers can take advantage of the heterogeneity of architectures, which means they can use the architecture that best fits their usage model or use case. It is an ambitious multi-year project and we are committed to working through it every single day to ensure we simplify and do not compromise on performance,” shared Mallya.
According to Akanksha Bilani, Country Lead, India, Singapore & ANZ at Intel Software, the bottom line of leveraging One API is that it provides an abstracted, unified programming language that delivers one view/one API across all the various architectures. One API will be out in beta in October. Another great developer tool for distributed environments is BigDL, a deep learning library for Apache Spark*. “This distributed deep learning library helps data scientists accelerate deep learning inference on CPUs in your Spark environment. It is an add-on to your machine learning pipeline and gives incredible performance gains,” said Bilani. For application developers, who spend a predominant amount of time building an application, it is really important to be able to deploy that application seamlessly across hardware. The Intel® Distribution of OpenVINO™ Toolkit enables developers to create intermediate representations (IR) that are hardware agnostic and can be deployed on any hardware — CPUs, FPGAs or the Intel® Movidius™ VPU.
Semiconductor majors are at an inflection point. The question facing the hardware players is no longer where the opportunities lie in the AI ecosystem but how best they can realign their current products and solutions to increase the adoption of AI. For Intel, the winning factor has been staying closely aligned with its ‘no one size fits all’ strategy and ensuring its evolving portfolio of solutions and products stays AI-relevant. The hardware behemoth has been at the forefront of the AI revolution, helping enterprises and startups operationalise AI by reimagining computing and offering full-stack AI solutions that add value for customers. Intel has also built up a complete ecosystem of partnerships and made significant inroads into industry verticals and applications such as healthcare and retail, which is helping the company drive long-term growth. As Mallya sums up, the way forward is through meaningful collaborations, making the vision for India a reality using powerful tools for AI.