
Is open source the answer to the ‘AI space race’? It might be the right answer, but the question is fundamentally wrong. The space race and the current desire to ensure AI sovereignty both reflect strategic national interests. National leaders view technological innovation as critical to national security, economic prosperity, and global influence. The US-Soviet space race of the 1950s-60s was driven by geopolitical competition beyond mere scientific advancement.
Today’s pursuit of AI sovereignty by the US, China, and EU countries reflects a similar belief that technology leadership ensures prominence and potential power in a tech-driven future. However, despite years of intense competition, the space race eventually transformed into an ongoing international collaboration. The race itself proved untenable. As Charles Darwin purportedly said, “In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.”
Sovereignty versus democracy: A new “coopetition”
AI sovereignty and AI democratisation are often pitted against each other. The latter promotes accessible tools and open source models to encourage widespread participation in AI development across geographic and economic boundaries. AI sovereignty discussions often focus on national or regional control over AI technologies, data resources, and regulatory frameworks. But the key to AI sovereignty is the promotion of self-sufficiency, strategic advantage, and alignment with local values or interests. As such, the two concepts are not mutually exclusive. Just as recent political history has demonstrated the coexistence of both sovereignty and democracy, so too can this coexistence be achieved with AI. The need for AI sovereignty does not sound the death knell of AI democratisation.
Back in the early 90s, the tech industry coined the term “coopetition” to describe a common phenomenon of cooperation between competitors. Businesses cooperate or collaborate with competitors to build capabilities and gain better leverage by sharing resources. These coopetition-based business relationships increase the overall value for their customers and themselves. This is the proverbial rising tide that lifts all boats. Moving away from a winner-takes-all approach, everyone is better off.
Open source: A tool for AI coopetition
Open source software is often touted as fostering innovation and collaboration. Open source AI projects encourage developers, researchers, and practitioners to contribute their expertise. This collaboration accelerates innovation, as individuals can build upon existing code, share improvements, and collectively solve complex challenges. These diverse perspectives lead to more robust and versatile AI models and solutions. Open source platforms lower barriers to entry and allow newcomers to learn and contribute. The open source community furthers democratisation through valuable learning resources for students and aspiring AI professionals. Moreover, open source data formats enable greater data diversity, mitigating the risk of hallucination and bias.
Just as open source facilitates democratisation, it also enables sovereignty. If we see sovereignty as the ability to promote homegrown or nationally based innovation and expertise, open source allows that to happen. Not only does open source promote data and AI democracy; it also empowers the self-sufficiency, strategic advantage, and alignment with local values or interests necessary for AI sovereignty.

Coopetition establishes guardrails, not roadblocks
In our global world, no single organisation or government will dictate AI regulation. Governments and industry leaders come together regularly to discuss both security and implementation, at Bletchley, in Seoul, and more recently at the AI Action Summit in Paris. Moreover, research into AI explainability is ongoing, and the European Union’s AI Act has established a risk-based approach that regulates outcomes rather than the technology itself. While not everyone agrees, the regulation puts a stake in the ground to spark discussion and future collaboration. Just as AI models need to be trained on diverse datasets to avoid bias, diverse participation in these conversations improves the likelihood of effective and enforceable regulation.
Promoting open source as part of these regulatory guidelines also improves the likelihood of desired outcomes: transparency, security, and oversight. Open source code allows anyone to inspect the algorithms and processes behind AI systems, fostering trust and accountability and mitigating risks. Moreover, researchers can replicate and validate AI experiments, leading to more reliable results. And open source communities can identify and address potential issues or vulnerabilities in AI software.
We are not in a space race to develop AI. This is not a cold war with a zero-sum outcome. AI innovation can be shared, and sharing can foster innovation. Open source AI tools and data formats enable this collaboration, even across industry competitors or countries. Pooling data improves prediction models. Diverse data sources mitigate risk of bias and hallucination. Everyone benefits from the collective effort, and from the competitive spirit. Open source principles are fundamental to the future of AI – not to win a race but to enable both AI democratisation and sovereignty.