On 28 January, the Pope’s warnings about AI over recent years were set out publicly in a missive called Antiqua et Nova, a papal position statement on the challenges raised by the relationship between AI and human intelligence. The message was addressed to “those entrusted with transmitting the faith”, but also to “those who share the conviction that scientific and technological advances should be directed toward serving the human person and the common good”.

Did Big Tech take note? Antiqua et Nova is a demand that those directing the development of AI consider the implications for the whole of humanity, to ensure that all benefit. The document will no doubt grow in historical significance as AI development continues at breakneck speed. Whether anyone heeds its warnings is another matter.

AI’s potential to worsen existing inequality

Antiqua et Nova followed the Pope’s address to world leaders in June 2024 at the G7 Summit in Puglia, Italy, where he spoke specifically about an AI-driven cognitive-industrial revolution that promises great opportunity but also threatens grave disaster. He raised AI’s potential for entrenching and exacerbating inequity in what he saw as an “epochal change” for mankind unlike any other, AI being a far more complex tool than those of previous historical transitions, such as fire, knives or the steam engine.

He warned that the concentration of power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns. Evidence to date suggests that digital technologies have increased global inequality, not just in material wealth but also in access to political and social influence, he noted, warning that in this way AI could be used to perpetuate marginalisation and discrimination, create new forms of poverty and widen the digital divide, worsening existing social inequalities.

Do not eliminate human decision making

Taking away human decision-making is to condemn humanity, said Pope Francis. Maintaining human control, particularly in warfare and in the judicial system, is paramount, and he urged world leaders to act, because sound political leadership is needed now more than ever.

Pope Francis said that attention needs to be given to the nature of accountability processes in complex, highly automated settings, where results may only become evident in the medium to long term. Above all it is important that there is accountability for the use of AI at each stage of the decision making process.

Insofar as AI can assist humans in making decisions, the algorithms that govern it should be trustworthy, secure, robust enough to handle inconsistencies, and transparent in their operation to mitigate biases and unintended side effects. Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability. Moreover, those using AI should be careful not to become overly dependent on it for their decision making, a trend that increases contemporary society’s already high reliance on technology.

Avoid weaponisation of AI in warfare

Pope Francis warned that the weaponisation of AI could be deeply problematic. The ability to conduct military operations through remote-control systems can lessen the perception of the devastation caused by those weapon systems, as well as the burden of responsibility for their use, resulting in an even colder, more detached approach to the immense tragedy of war. The ease with which autonomous weapons could make waging war more viable goes against the principle of war as a last resort in legitimate self-defence. They could expand the instruments of war well beyond the scope of human oversight and precipitate a destabilising arms race, with catastrophic consequences for human rights.

Beware the abdication of moral responsibility to AI

Full moral causality belongs only to human agency, not to AI. It is crucial to be able to identify and define who bears responsibility for the processes involved in AI, particularly systems capable of learning, correction and reprogramming. In outlining the limits of AI, Antiqua et Nova notes that AI cannot currently replicate moral discernment or the ability to establish authentic relationships. Human intelligence is formed both intellectually and morally through lived experience, the document says.

AI’s potential for misinformation

The Pope warned of AI’s potential role in “the growing crisis of truth in the public forum” and cautioned against fake AI-generated content. Such misinformation can arise unintentionally, as with AI hallucinations, or deliberately through AI-generated fake media, which, he warned, can gradually undermine the foundations of society. Pope Francis said that careful regulation is required, as misinformation, especially through AI-controlled or AI-influenced media, can spread unchecked, fuelling political polarisation and social unrest. Those who produce and share AI-generated content should always exercise diligence in verifying the truth of what they disseminate, he recommended.