Meta has released Muse Spark, the first large language model from Meta Superintelligence Labs, launching a new direction for the company’s AI capabilities.

Muse Spark now underpins the latest functionality in the Meta AI assistant, released publicly in the Meta AI app and through the meta.ai platform. This marks Meta’s move to a staged model development approach, in which each generation of the Muse series is designed to deliver incremental improvements while laying the groundwork for larger shifts in model scaling.


The company indicated that the development cycle for Muse Spark lasted nine months, during which Meta Superintelligence Labs reconstructed Meta’s AI stack.

Muse Spark is engineered to deliver rapid responses to queries while maintaining sufficient complexity to address questions in science, mathematics, and healthcare. The architecture favours performance efficiency and serves as a base for subsequent, larger-scale models under development.

Deployment of Muse Spark brings major changes to the Meta AI assistant’s operation. Users now interact with an upgraded assistant capable of toggling between response modes, depending on whether a quick reply or in-depth reasoning is needed.

The system’s design introduces parallelised workflows. It can invoke multiple, specialised “subagents” to manage different aspects of a single inquiry simultaneously.

For instance, when planning a family holiday, one subagent might assemble potential itineraries, a second could compare regional destinations, and another might research child-oriented activities. The subagents run in parallel, shortening end-to-end delivery.
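The fan-out pattern described above can be sketched with ordinary async concurrency. The three subagent functions below mirror the holiday-planning example from the text, but they are illustrative stand-ins, not Meta's actual subagent interface.

```python
import asyncio

# Hypothetical subagents; each would wrap its own model/tool calls.
async def assemble_itineraries(request: str) -> str:
    await asyncio.sleep(0.01)  # stands in for real model latency
    return f"itineraries for: {request}"

async def compare_destinations(request: str) -> str:
    await asyncio.sleep(0.01)
    return f"destination comparison for: {request}"

async def find_child_activities(request: str) -> str:
    await asyncio.sleep(0.01)
    return f"child-friendly activities for: {request}"

async def plan_holiday(request: str) -> list[str]:
    # Fan out to the three subagents concurrently, then merge results.
    return await asyncio.gather(
        assemble_itineraries(request),
        compare_destinations(request),
        find_child_activities(request),
    )

results = asyncio.run(plan_holiday("family holiday in June"))
for line in results:
    print(line)
```

Because the subagents are awaited together via `asyncio.gather`, the slowest one bounds the total latency, rather than the sum of all three.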

A significant component of this update is the introduction of multimodal perception. Muse Spark’s model can interpret and analyse visual data as well as text-based input. According to Meta, this enables scenarios like photographing product shelves to gain nutritional insights or scanning items to compare them with available alternatives.

The company reports that this multimodal capacity will extend to wearable devices with the integration of Muse Spark into Meta’s AI-enabled glasses, adding contextual responsiveness based on real-world images.

Meta has given specific attention to health applications in this rollout. The company’s development team collaborated with physicians to refine Muse Spark’s handling of health-related topics, including extracting information from images and charts.

Meta cited heightened public reliance on AI for healthcare information as a major factor driving these features.

Beyond health and visual processing, Muse Spark enhances the Meta AI assistant’s coding abilities. Users can prompt the assistant to generate custom websites, mini-games, or dashboards, with the generated code intended for direct deployment or sharing.

The updated Meta AI experience also incorporates a shopping mode, drawing on brand content and creator recommendations from across Meta platforms. When users research products or trends, the assistant can present additional contextual information, including posts and insights from the broader community or from locals familiar with particular locations.

The rollout of Muse Spark-powered services initially covers the United States, with plans to extend in the period ahead to other countries and to additional Meta properties, including Instagram, Facebook, Messenger, WhatsApp, and the aforementioned AI glasses.

Meta will also release the underlying Muse Spark technology to a limited group of partners via an API under a private preview, with intentions to eventually open-source future iterations of the model.
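Meta has not published the preview API's interface. If it follows the chat-completions pattern common to hosted model APIs, a call might be constructed along these lines; the endpoint URL, payload fields, mode parameter, and model identifier below are all hypothetical.

```python
import json
import urllib.request

# Placeholder endpoint and key; the real interface is unpublished.
API_URL = "https://api.example.com/v1/muse-spark/chat"
API_KEY = "YOUR_PREVIEW_KEY"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a POST request in a generic chat-completions shape."""
    payload = {
        "model": "muse-spark",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "mode": "fast",         # assumed fast/deep-reasoning toggle
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarise today's science headlines.")
print(req.full_url)
```

Partners in the private preview would presumably receive real credentials and documentation; this sketch only shows the general request shape such an API tends to take.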

Meta stated: “We are building toward personal superintelligence — an AI that does not just answer your questions but truly understands your world because it is built on it.”

The social technology company also referenced ongoing enhancements to its risk and security frameworks, aiming to address safety and privacy concerns as these AI systems scale.

This launch sets a baseline for further iterations in Meta’s AI development roadmap, with plans to enhance the incorporation of contextual content. For example, Meta aims to integrate Reels, photos, and creator posts directly into AI-driven responses as the Muse model series expands.

Meta said that it will continue refining model safety and reliability frameworks in parallel with technical improvements.

Last month, Meta acquired Moltbook, a social networking site built for AI agents. Moltbook’s founders Matt Schlicht and Ben Parr joined Meta Superintelligence Labs following the acquisition.