As indicated by much of the research material Apple has been publishing in recent months, the company is investing heavily in all sorts of artificial intelligence technologies. Apple will announce its AI strategy in June at WWDC, as part of iOS 18 and its other new OS versions.
In the latest Power On newsletter, Mark Gurman says to expect the new iPhone AI features to be powered entirely by an offline, on-device large language model developed by Apple. You can expect Apple to tout the privacy and speed benefits of this approach.
9to5Mac previously found code references in iOS 17.4 to an on-device model called “Ajax”. Apple is also working on server-hosted versions of Ajax.
The downside to on-device LLMs is that they can’t be as powerful as models running on huge server farms, with tens of billions of parameters and continually updated data behind them.
However, Apple engineers can take advantage of the full stack, with software tuned to the Apple silicon chips inside its devices, to make the most of an on-device approach. On-device models usually respond much more quickly than routing a request through a cloud service, and they have the added advantage of working offline in places with no or limited connectivity.
While on-device LLMs may not have the same rich embedded knowledge base as something like ChatGPT for answering questions about all sorts of random trivia, they can be tuned to be very capable at many tasks. You can imagine an on-device LLM generating sophisticated auto-replies in Messages, or improving the interpretation of many common Siri requests, for instance.
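For a rough sense of what on-device inference looks like in practice, here is a minimal Swift sketch using Core ML, the framework apps already use to run models locally on Apple silicon. Apple hasn’t published any interface for its rumored model, so the model name (“Ajax”) and the feature names (“prompt”, “reply”) below are purely illustrative assumptions, not a real API.

```swift
import CoreML
import Foundation

// Hypothetical sketch: running a bundled, compiled on-device text model with Core ML.
// "Ajax.mlmodelc" and the "prompt"/"reply" feature names are assumptions for illustration.
func generateReply(to prompt: String) throws -> String {
    guard let url = Bundle.main.url(forResource: "Ajax", withExtension: "mlmodelc") else {
        return ""
    }

    // Allow Core ML to schedule work on the CPU, GPU, or Neural Engine.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    let model = try MLModel(contentsOf: url, configuration: config)

    // Everything stays on the device: no request leaves the phone.
    let input = try MLDictionaryFeatureProvider(dictionary: ["prompt": prompt])
    let output = try model.prediction(from: input)
    return output.featureValue(for: "reply")?.stringValue ?? ""
}
```

The point of the sketch is the shape of the pipeline, not the specifics: the model ships with (or is downloaded to) the device, and inference is a local function call rather than a round trip to a server, which is where the speed and privacy benefits come from.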