
Nvidia has partnered with various model builders and cloud service providers across Europe and the Middle East to enhance the development of sovereign large language models (LLMs).
The collaboration aims to accelerate AI adoption in industries such as manufacturing, robotics, healthcare, finance, energy, and creative sectors.
Key partners include Barcelona Supercomputing Center (BSC), Bielik.AI, Dicta, H Company and Domyn. Other key players include LightOn, the National Academic Infrastructure for Supercomputing in Sweden (NAISS), KBLab at the National Library of Sweden, the Technology Innovation Institute (TII), University College London, the University of Ljubljana, and UTTER.
These partners are using Nvidia Nemotron techniques to enhance their models, focusing on cost efficiency and accuracy for enterprise AI workloads, including agentic AI.
The models support Europe’s 24 official languages and reflect local languages and cultures, Nvidia said.
Models developed by H Company and LightOn in France, Dicta in Israel, Domyn in Italy, Bielik.AI in Poland, BSC in Spain, NAISS and KBLab in Sweden, TII in the UAE, and University College London in the UK specialise in national languages and cultures.

The optimised models will run on AI infrastructure from Nvidia Cloud Partners (NCPs) like Nebius, Nscale, and Fluidstack through the Nvidia DGX Cloud Lepton marketplace.
The LLMs will be distilled using Nvidia Nemotron techniques, including neural architecture search, reinforcement learning, and post-training with Nvidia-curated synthetic data.
These processes aim to reduce operational costs and improve token generation speed during inference.
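For readers unfamiliar with distillation, the core idea is to train a smaller, cheaper "student" model to reproduce the output distribution of a larger "teacher". The sketch below shows a generic temperature-scaled distillation loss in PyTorch, purely as an illustration of that idea; it is not Nvidia's Nemotron pipeline, which the article says also involves neural architecture search, reinforcement learning, and synthetic-data post-training.

```python
# Generic knowledge-distillation loss -- illustrative only, not Nemotron.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    output distributions; the student learns to mimic the teacher."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Random logits standing in for real model outputs (batch of 4, 32k vocab).
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
print(distillation_loss(student, teacher))
```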
Developers can deploy these models as Nvidia NIM microservices in AI factories, both on-premises and across cloud platforms; NIM microservices support more than 100,000 LLMs hosted on Hugging Face.
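NIM microservices expose an OpenAI-compatible API, so calling a deployed model typically looks like a standard chat-completions request. The sketch below assumes a local deployment at a placeholder URL and a hypothetical model name, neither of which is confirmed for the sovereign models described here.

```python
# Illustrative sketch of querying a NIM-style, OpenAI-compatible endpoint.
# base_url and model name are placeholders, not confirmed endpoints.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # assumed local NIM deployment
    api_key="not-used-for-local-deployments",
)

response = client.chat.completions.create(
    model="example/sovereign-llm",          # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarise this report in Swedish."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```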
A new Hugging Face integration with DGX Cloud Lepton will allow companies to fine-tune models on local NCP infrastructure.
Perplexity, an AI-powered answer engine processing over 150 million questions weekly, will integrate these models to enhance search query accuracy and AI outputs.
Nvidia founder and CEO Jensen Huang said: “Together with Europe’s model builders and cloud providers, we’re building an AI ecosystem where intelligence is developed and served locally to provide a foundation for Europe to thrive in the age of AI — transforming every industry across the region.”
Recently, Nvidia announced multiple partnerships in the UK to boost AI capabilities, aligning with the start of London Tech Week.