Apple knows how to make an LLM work on an iPhone


Apple researchers have recently published two papers describing techniques for running LLM technology in memory-constrained environments. Chatbots built on large language models (LLMs), such as ChatGPT, need a huge amount of memory to run, which is why much research is under way to bring this technology to smartphones, including the iPhone, whose memory capacity is limited.


To address this constraint, Apple researchers have developed a technique that uses flash memory to store the AI model's data. In a paper titled "LLM in a Flash: Efficient Large Language Model Inference with Limited Memory", Apple explains that flash memory is far more abundant on mobile devices than the DRAM traditionally used to run an LLM, so model parameters can be kept in flash and brought into DRAM only when they are needed.


With the LLM integrated on the iPhone this way, the device can reuse data that has already been loaded instead of fetching new data for every step. Reusing part of the already-processed data sharply reduces the volume of transfers between flash and DRAM.
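To make the idea concrete, here is a minimal sketch of that load-and-reuse pattern. It is not Apple's implementation, and the file name, class name, and row indices are all hypothetical: it simply keeps a large weight matrix on flash-backed storage and pulls only the rows needed for the current token into a small DRAM-resident cache, reusing rows that recent tokens already loaded.

```python
# Minimal sketch (not Apple's code): keep the large weight matrix in flash
# storage (a NumPy memmap stands in for a file on flash) and pull only the
# rows needed for the current token into a small DRAM cache, reusing rows
# that were already loaded for recent tokens.
from collections import OrderedDict

import numpy as np

FLASH_PATH = "ffn_weights.npy"  # hypothetical weight file kept on flash

# Create a small dummy weight file so the sketch runs end to end; in the real
# setting this would be a model's large feed-forward weight matrix.
np.save(FLASH_PATH, np.random.randn(100_000, 64).astype(np.float32))
flash_weights = np.lib.format.open_memmap(FLASH_PATH, mode="r")  # not copied into DRAM


class DramRowCache:
    """Small LRU cache holding only the weight rows recently used for inference."""

    def __init__(self, capacity_rows: int):
        self.capacity = capacity_rows
        self.rows = OrderedDict()  # row index -> row resident in DRAM

    def get_rows(self, needed: list[int]) -> np.ndarray:
        out = []
        for idx in needed:
            if idx in self.rows:           # already in DRAM: reuse, no flash read
                self.rows.move_to_end(idx)
            else:                          # miss: read just this row from flash
                self.rows[idx] = np.array(flash_weights[idx])
                if len(self.rows) > self.capacity:
                    self.rows.popitem(last=False)  # evict least recently used row
            out.append(self.rows[idx])
        return np.stack(out)


# Usage: for each token, only the rows predicted to be active are fetched,
# so consecutive tokens mostly hit rows already cached in DRAM.
cache = DramRowCache(capacity_rows=4096)
active_rows = [10, 42, 77]           # hypothetical indices of rows needed right now
w = cache.get_rows(active_rows)      # shape (3, 64), ready for the matrix multiply
```

Apple's paper describes additional techniques on top of this basic pattern, but the core saving comes from the same place: rows that stay in the DRAM cache never have to be read from flash again.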

Towards a smarter Siri

Together, these methods make it possible to run models up to twice the size of the available DRAM, with inference four to five times faster than naive loading on the CPU and 20 to 25 times faster on the GPU, the researchers said.


Photo: CNET


Apple is reportedly organizing an internal AI event next February to brief employees on its in-house work on LLMs. Bloomberg also reports that Apple is developing a smarter Siri that incorporates generative AI technology.


Apple is also working on an in-house AI model called "Ajax". Designed to compete with OpenAI's GPT-3 and GPT-4 models, Ajax aims to unify machine learning development within Apple. Ajax is said to be more capable than GPT-3.5, although OpenAI's latest model, GPT-4, would still lead on performance.


Apple is expected to introduce generative AI on its iPhones and iPads with the release of iOS 18, according to Haitong International Securities analyst Jeff Pu and to Bloomberg. Pu said last October that Apple had deployed hundreds of AI servers this year and would deploy more next year.

Apple tends to avoid trendy terms like "AI" when describing its products' features, preferring to talk about machine learning. These research papers, however, suggest a deeper commitment to new AI technologies. Even so, Apple has not publicly acknowledged integrating generative AI into its products and has not yet officially confirmed its work on Apple GPT.


Source: "ZDNet Korea" and "ZDNet.com"
