Apple's New OpenELM Hints at the Future of AI in the iPhone

Apple has released a new family of large language models, fully open-sourcing them and publishing them on the popular AI platform Hugging Face for other developers to experiment with and adapt.
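For developers who want to try the models, the checkpoints can be loaded through the Hugging Face transformers library. A minimal sketch follows; the model id, the trust_remote_code flag, and the Llama 2 tokenizer pairing are taken from the published model cards, so verify them against the current listings before relying on this.

```python
# Minimal sketch: loading an OpenELM checkpoint from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smallest instruction-tuned variant (id per the Hugging Face model card).
model_id = "apple/OpenELM-270M-Instruct"

# OpenELM ships custom modeling code, so trust_remote_code must be enabled.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# The model card pairs OpenELM with the Llama 2 tokenizer; that repo is
# gated, so Hugging Face access must be approved first.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Running language models on-device matters because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```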

The iPhone maker has become very active in open source artificial intelligence in recent months, and with its latest release the company hopes to help shape the direction of apps built with language models on its devices.

OpenELM is a family of models designed to work well on edge devices such as smartphones and laptops. This matters to Apple because running AI locally is more secure and offers better privacy than sending data to the cloud.

It is unclear whether these models will be part of Apple's plans for on-device AI in iOS 18 or Siri upgrades, but they are indicative of the company's AI direction.

Officially named Open Source Efficient LLMs, they are instruction-tuned models designed to let third-party developers and researchers retrain, adapt, and integrate them into other projects. The new models are built to be more accurate and efficient, and Apple has initially focused on supporting the research community, as OpenELM can be used to investigate model bias, risk, and reliability.

The family includes four models, pre-trained using Apple's open source CoreNet training library. All are small language models, ranging from 270 million to 3 billion parameters; the largest is similar in size to Microsoft's new Phi-3 small language model.
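For reference, here is a sketch of the four base checkpoints and the Hugging Face repository ids they are assumed to live under, as listed on the public model cards (instruction-tuned variants add an -Instruct suffix):

```python
# Base OpenELM checkpoints and their assumed Hugging Face repository ids.
OPENELM_CHECKPOINTS = {
    "270M": "apple/OpenELM-270M",
    "450M": "apple/OpenELM-450M",
    "1.1B": "apple/OpenELM-1_1B",
    "3B": "apple/OpenELM-3B",
}
```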

A major differentiator is that these models can match the performance of other open source language models while being trained on smaller data sets. This makes them well suited to niche use cases and to research.

In a paper on the new models, Apple's researchers write: "With a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2× fewer pre-training tokens."

With the release of the new models, Apple also provided code for using them with its MLX library, the toolkit Apple uses to run AI models such as Stable Diffusion on its own chips.
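To give a flavor of what MLX looks like, here is a minimal sketch of its array API, assuming the mlx package is installed via pip on an Apple silicon Mac. It is not OpenELM-specific, just an illustration of the lazy, unified-memory computation model the library exposes:

```python
import mlx.core as mx

# Two random matrices allocated in the unified memory shared by CPU and GPU.
a = mx.random.normal((1024, 1024))
b = mx.random.normal((1024, 1024))

c = a @ b       # lazily records the matrix multiply; nothing runs yet
mx.eval(c)      # forces evaluation on the default device (the GPU on M-series)
print(c.shape)  # (1024, 1024)
```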

The ability to deploy models on edge devices running Apple's own chips could also change wearable technology. In the future, the AI in a pair of Apple AR glasses could provide information about the wearer's surroundings, even offline.

OpenELM is primarily a research project: a way for data scientists and those investigating the safety and accuracy of AI models to run code more efficiently.

However, it further demonstrates Apple's commitment to creating AI models that can run efficiently on devices like the iPhone, iPad, and MacBook without compromising their capabilities.

One of the reasons Siri has been seen as lagging behind other established AI assistants such as Alexa and Google Assistant is that Apple ran much of its functionality on the device itself, and so could not draw on as much computing power for complex tasks.

Much of Apple's recent work on AI, including improving memory usage efficiency, running models on the Neural Engine, and developing new language models that work from a single prompt, has been directed toward this goal, and OpenELM is no different. It could even lead to a framework that developers can use for AI in their apps.
