The reason Intel is partnering with more than 100 software developers on more than 300 AI-accelerated features is a simple one: Intel has built AI capabilities into its new “Meteor Lake” Core Ultra chips for laptops, and it needs them to do, well, something.
That’s not being facetious. AI has become synonymous with Bing Chat, Google Bard, Windows Copilot, and ChatGPT — all AI tools that live in the cloud. Intel’s new AI Acceleration Program, debuting ahead of Meteor Lake’s official launch on Dec. 14, will try to convince consumers that AI should run locally on their PCs.
That may be a tough sell to consumers, who may not know — or care — where these functions are being processed. Intel, though, desperately does — and has tried to get that message across at its Intel Innovation conference, in earnings reports, and elsewhere. Intel is encouraging developers either to write natively for its AI engine, known as the NPU, or to use the OpenVINO toolkit, which Intel created and has released as open source.
Those developers won’t necessarily be publicly calling out all of the AI capabilities of their software. But Intel did highlight several that would be: Adobe, Audacity, Blackmagic Design, BufferZone, CyberLink, Deep Render, MAGIX, Rewind AI, Skylum, Topaz Labs, VideoCom, Webex, Wondershare Filmora, XSplit, and Zoom.
Further reading: Intel’s Core Ultra CPUs kickstart the AI PC era. Software will determine its future
Only a few of those developers, however, are calling out specific AI features. Deep Render, which uses AI to shrink file sizes, claims its algorithm can achieve roughly five times the video compression of conventional approaches, according to a statement from Chris Besenbruch, the company’s co-founder and CEO. Topaz Labs, which uses AI to upscale photos, said it can use the NPU to accelerate its deep-learning models.
XSplit, which makes the Vcam app for removing and manipulating webcam backgrounds, also claimed that it can tap the NPU for greater performance. “By utilizing a larger AI model running on the Intel NPU, we are able to reduce background removal inaccuracies on live video by up to 30 percent, while at the same time significantly reducing the overall load on the CPU and GPU,” said Andreas Hoye, chief executive at XSplit.
Many of the others, however, said that they see the NPU as just another resource to take advantage of. That, in fact, is in keeping with Intel’s own perspective: while the NPU may be the most power-efficient way to accelerate specific AI features, the combination of the NPU, GPU, and CPU may complete a given task the fastest.
And some developers may combine local processing with cloud-based AI. For example, Adobe’s Generative Fill uses the cloud to suggest new scenes based upon text descriptions the user enters, but applying those scenes to an image happens on the PC. Nevertheless, it’s in Intel’s best interest for you to start thinking of “Intel Inside” and “AI Inside” in the same sentence.