
Gurman: iOS 18 AI features will run on an LLM "entirely on device", offering privacy and speed benefits

As evidenced by the research Apple has been publishing in recent months, the company is investing heavily in all kinds of artificial intelligence technology. Apple will announce its AI strategy in June at WWDC as part of iOS 18 and its other new OS versions.

In the latest Power On newsletter, Mark Gurman reports that he expects the new iPhone AI features to be implemented entirely using a standalone large language model developed by Apple and built into the device. Expect Apple to tout the privacy and speed benefits of this approach.

9to5Mac previously discovered code references in iOS 17.4 that referred to an on-device model called “Ajax.” Apple is also working on server-side versions of Ajax.

The disadvantage of on-device LLMs is that they cannot be as powerful as models running on huge server farms with tens of billions of parameters and constantly updated data.

However, Apple's engineers can likely take advantage of the full vertical integration of their platforms, with software tuned to the Apple silicon chips inside their devices, to get the most out of the on-device approach. On-device models typically respond much faster than a request routed through a cloud service, and they have the added advantage of working offline in places with little or no connectivity.
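Apple hasn't said how such a model would be exposed to the system, but as a rough illustration of the "tuned to Apple silicon, fully local" idea, here is a minimal Swift sketch using Core ML's existing APIs. The model name "AjaxLLM" and the "prompt"/"text" feature names are purely hypothetical placeholders, not anything Apple has shipped or documented.

```swift
import CoreML
import Foundation

// Hypothetical sketch: load a compiled on-device language model with Core ML
// and steer inference toward the Neural Engine so it stays fast and local.
// "AjaxLLM.mlmodelc" is a placeholder name, not a real Apple asset.
func loadOnDeviceModel() throws -> MLModel {
    let config = MLModelConfiguration()
    // Prefer the CPU and Neural Engine; no network round trip is involved.
    config.computeUnits = .cpuAndNeuralEngine
    guard let url = Bundle.main.url(forResource: "AjaxLLM", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}

// Run a prediction entirely on device: the prompt text never leaves the phone.
// The "prompt" and "text" feature names are assumptions for illustration only.
func reply(to prompt: String, using model: MLModel) throws -> String? {
    let input = try MLDictionaryFeatureProvider(dictionary: ["prompt": prompt])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "text")?.stringValue
}
```

Because everything runs through Core ML's local compute units, this kind of call keeps working with no connectivity at all, which is exactly the offline advantage described above.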

Top comment by Jason McMinn


Real features, with Apple and privacy first. The blueprint: build a personal language model on all of your data – files, photos, email, texts, notes, calendar, and so on. A model that "is your memory," which can be supplemented by an LLM from Google if necessary. What I need most is a private language model.


While an on-device LLM may not have the same rich built-in knowledge base as something like ChatGPT for answering questions about all sorts of random trivia, it can be tuned to perform many tasks. You can imagine an on-device LLM generating complex automated replies to messages, or improving the interpretation of many common Siri requests, for example.

This also fits neatly with Apple's strict adherence to privacy. There's nothing wrong with processing all your downloaded emails and text messages using an on-device model, as long as the data remains local.

On-device models can also handle generative AI tasks, such as creating documents or images from prompts, with decent results. There is still the possibility that Apple will partner with a company like Google and use something like Gemini on the server for certain tasks.
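As a hedged illustration of what such a hybrid split might look like, the Swift sketch below routes personal-data tasks to a local model and falls back to an optional server model only for open-ended generation. Every type name and task category here is an assumption for the sake of example, not anything Apple or Google has described.

```swift
import Foundation

// Hypothetical task split: personal data stays on device, broad world-knowledge
// requests may go to a cloud partner model (something Gemini-like).
enum AITask {
    case summarizeMessage(String)    // personal data: keep on device
    case draftReply(String)          // personal data: keep on device
    case openEndedGeneration(String) // broad knowledge: may need the cloud
}

protocol LanguageModel {
    func respond(to prompt: String) async throws -> String
}

struct HybridAssistant {
    let onDevice: any LanguageModel      // local model, works offline
    let server: (any LanguageModel)?     // optional cloud partner model

    func handle(_ task: AITask) async throws -> String {
        switch task {
        case .summarizeMessage(let text), .draftReply(let text):
            // Personal content never leaves the device.
            return try await onDevice.respond(to: text)
        case .openEndedGeneration(let prompt):
            // Use the server model only when one is available; otherwise
            // degrade gracefully to the local model.
            let model = server ?? onDevice
            return try await model.respond(to: prompt)
        }
    }
}
```

The design choice the sketch tries to capture is simply that the privacy-sensitive paths never have a network dependency, while the cloud model remains an optional add-on rather than a requirement.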

We'll know exactly what Apple plans to do when it officially announces its artificial intelligence strategy at WWDC. The keynote kicks off on June 10, where the company will reveal all the new software features coming to iPhone, iPad, Mac, Apple Watch, Apple TV, Vision Pro, and more.
