On-device AI, or edge AI, is the capability of local devices to perform AI tasks independently of cloud-based infrastructure.

These local devices can include smartphones, laptops, and wearables. The Google Pixel 8 Pro’s suite of features powered by Google’s Gemini Nano exemplifies the latest advancements in on-device AI, including writing assistance, live translation, and camera improvements.

The drivers behind on-device AI adoption

Energy consumption and sustainability: The foremost reason for a focus on localised, on-device capabilities is the staggering – and growing – energy cost associated with generative AI (GenAI). Predictions of AI’s energy usage, potentially consuming 25% of US power requirements by 2030 or equalling the energy consumption of the Netherlands by 2027, call the sustainability of cloud-based AI into question. On-device AI, however, operates with a fraction of the energy, aligning with environmental goals and reducing the need for resource-intensive cooling systems.

Financial costs: The financial costs associated with cloud-based AI are significant. The International Energy Agency reported that a request to OpenAI’s ChatGPT requires nearly 10 times as much electricity as an average Google search. The adoption of large language model (LLM)-based search queries, even if they represent a fraction of the daily search volume, could result in billions of dollars in additional annual costs. On-device AI offers a potential solution to the financial and environmental challenges posed by cloud-based AI.

Privacy considerations: Privacy is a paramount concern driving the adoption of on-device AI. By processing data locally, sensitive information remains confined to the user’s device, mitigating the risk of exposure through cloud storage. This is particularly crucial for sectors dealing with confidential data, such as healthcare and government.

Low latency, high performance, and personalisation: On-device AI offers low latency for swift responses in applications like chatbots, improving reliability and performance by eliminating the dependence on cloud servers. It can reduce network load for operators by handling data-intensive tasks locally, minimising data exchange with the cloud and reducing network traffic and latency. On-device AI can also create personalised experiences by learning from user behaviour and contextual data, enhancing the relevance and utility of AI interactions.


Industry adoption and key players

Companies are actively developing their own LLMs. However, the development of these models is both resource-intensive and time-consuming. To navigate these challenges, companies are partnering with multiple existing entities to establish competitive differentiation. For example, Samsung has developed Gauss, its own LLM, but it also uses Google’s on-device Gemini Nano to facilitate translation on Samsung’s Galaxy S24 series of AI-equipped phones. Honor uses Gemini Nano to facilitate eye tracking on its Magic 6 Pro flagship phone, while Oppo and Xiaomi use Gemini Nano for phone cameras and photo editing tools.

With more than three billion monthly active users, Android’s recent integration of Gemini AI positions Google to potentially dominate the AI ecosystem. Meanwhile, Apple has been on an AI acquisition spree, the most recent being Canadian AI startup DarwinAI in March 2024. DarwinAI specialises in making AI models smaller and more efficient, signalling Apple’s focus on edge AI. Apple is widely expected to announce a partnership with OpenAI at its developer conference on June 10, 2024, bringing ChatGPT’s capabilities to iOS 18 outside of China, and is also reported to be in talks with Google over Gemini, with its own AI voice assistant Siri in the limelight.

The immediate market impact may be muted

While on-device AI offers numerous benefits, a widespread shift to such edge AI offerings will take time, simply because the technical demands of running these models locally cannot realistically be leapfrogged in the near future.

The size of language models is growing faster than the ability to miniaturise and pack more memory into small devices, suggesting that a hybrid approach combining on-device and cloud AI may be the most viable path forward in the near term.
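The hybrid approach described above can be sketched as a simple request router: keep lightweight or privacy-sensitive requests on-device, and fall back to the cloud for heavier ones. This is a hedged illustration only, not any vendor’s actual implementation; every function name and threshold here is hypothetical.

```python
# Hypothetical hybrid on-device/cloud router (illustrative sketch only).

LOCAL_TOKEN_BUDGET = 512  # assumed context limit of a small on-device model


def run_local_model(prompt: str) -> str:
    # Placeholder for inference with a small, quantised on-device model.
    return f"[local] {prompt[:40]}"


def call_cloud_api(prompt: str) -> str:
    # Placeholder for a request to a cloud-hosted LLM.
    return f"[cloud] {prompt[:40]}"


def route(prompt: str, privacy_sensitive: bool = False) -> str:
    """Keep private or lightweight requests on-device; send the rest to the cloud."""
    token_count = len(prompt.split())  # crude whitespace token estimate
    if privacy_sensitive or token_count <= LOCAL_TOKEN_BUDGET:
        return run_local_model(prompt)
    return call_cloud_api(prompt)
```

A real router would weigh battery state, connectivity, and model capability alongside prompt size, but the core trade-off is the same: local processing for privacy and latency, cloud processing for capacity.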