According to research from the Massachusetts Institute of Technology (MIT), a staggering 95% of GenAI pilot programmes are delivering no discernible impact on their companies’ profit and loss.
As AI investment grows, many enterprises risk wasting resources on projects that fail to deliver measurable value. The real question for leaders across a wide range of industries is why projects that delivered promising results in the test phase failed to live up to expectations when reality kicked in. I believe I know the answer.
Regulation plays a role in this situation. Our own research found that the same percentage (95%) of EU businesses say that complex regulatory requirements have held back their GenAI projects. It’s obviously important that governments ensure AI is used responsibly, but for businesses, those fast-evolving requirements can cause confusion and hesitation. Should we crack on and risk being non-compliant down the line? Or do we wait to see how legislation changes and risk falling behind our competitors?
But ultimately, AI isn’t failing because of flaws in the models themselves. Rather, the issue lies with the underlying data AI depends on. Poor data management leads to poor results: bad input, bad output. And it’s not just a question of feeding the right information into the system. Even when firms have clean, high-quality data, it often lacks the context, connectivity, and governance needed to power AI successfully. This includes metadata and semantic context, as well as ontologies that map relationships between business terms.
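To make the idea of "ontologies that map relationships between business terms" concrete, here is a minimal, hypothetical sketch in Python. The terms, definitions, and relationship types are illustrative assumptions, not a real enterprise vocabulary; the point is simply that each term carries semantic context plus typed links an AI system can follow.

```python
# A minimal, hypothetical sketch of a business-term ontology: each term
# carries a definition (semantic context) plus typed relationships to
# other terms. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Term:
    name: str
    definition: str                              # semantic context for the model
    relations: dict[str, list[str]] = field(default_factory=dict)

ontology = {
    "customer_churn": Term(
        name="customer_churn",
        definition="A customer cancelling all active subscriptions.",
        relations={"derived_from": ["subscription", "cancellation_event"],
                   "reported_in": ["monthly_revenue_report"]},
    ),
    "subscription": Term(
        name="subscription",
        definition="A recurring contract between a customer and the firm.",
        relations={"owned_by": ["customer"]},
    ),
}

def related_terms(term_name: str, relation: str) -> list[str]:
    """Follow one typed relationship from a business term."""
    return ontology[term_name].relations.get(relation, [])

print(related_terms("customer_churn", "derived_from"))
# ['subscription', 'cancellation_event']
```

Even a lightweight structure like this tells a model what "churn" means for this particular business and which data it connects to, which is exactly the context raw tables lack.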
All of this is critical to providing domain-specific, enterprise-specific, and use-case-specific context. In short, AI needs to be fed with data that’s not just accurate, but relevant, responsible, and reliable. A truly holistic approach to the data underpinning AI is needed if businesses are to do better than a 5% success rate.
A rush to failure
Many leaders feel pressured to adopt AI quickly, increasing the risk of joining the 95% who fail. In the rush to get projects live and embedded in the company’s workflow, work on the right data strategy can often fall by the wayside. Yet this stage is a crucial element of the project’s overall development.
Companies should start by defining clear business use cases and identifying the data needed to support them. Business leaders must be actively involved in this process and become data literate, rather than treating this as solely a data office or IT responsibility.
From there, the next step is to identify all potential data sources and design a data management architecture that can draw from them. This architecture should compile, clean, standardise, and deliver the right data in the right format at the right time. Achieving this requires the ability to capture and analyse metadata at scale. A simplified sketch of this flow follows below.
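The following is a deliberately simplified sketch of that compile, clean, standardise, and deliver flow, assuming pandas and illustrative source files and column names; a real architecture would span many more systems and checks.

```python
# A simplified sketch of the compile -> clean -> standardise -> deliver
# flow described above, using pandas. File paths and column names are
# illustrative assumptions, not a real enterprise architecture.

import pandas as pd

def compile_sources(paths: list[str]) -> pd.DataFrame:
    """Pull all candidate sources into one raw dataset."""
    return pd.concat([pd.read_csv(p) for p in paths], ignore_index=True)

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop exact duplicates and rows missing mandatory fields."""
    return df.drop_duplicates().dropna(subset=["customer_id", "amount"])

def standardise(df: pd.DataFrame) -> pd.DataFrame:
    """Normalise formats so every source speaks the same language."""
    df = df.copy()
    df["currency"] = df["currency"].str.upper().str.strip()
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    return df.dropna(subset=["date"])

def deliver(df: pd.DataFrame, path: str) -> None:
    """Hand the model-ready data over in the agreed format."""
    df.to_parquet(path, index=False)

raw = compile_sources(["sales_eu.csv", "sales_us.csv"])   # hypothetical files
deliver(standardise(clean(raw)), "model_ready.parquet")
```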
Given the vast scale of AI’s data requirements, this data management process will likely need to be provisioned in the cloud, enabling flexibility and scalability and ensuring the project’s data needs can be met reliably.
It’s important to remember that creating a high-quality data foundation for AI doesn’t have to delay project delivery by months or years. This fear often stops companies from investing the time and effort needed for proper data management. In fact, many of the crucial data management processes can now be delivered to a high standard at increasing speed, because AI itself is the secret weapon for managing AI data.
By automating data governance, quality, and integration at scale, companies can perform the crucial tasks that deliver great data without sacrificing either standards or timelines. AI can carry out repetitive information processing far faster than humans, unlocking value while preserving a rapid time to deployment.
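As a hedged illustration of what automated, rule-based quality checks can look like in such a pipeline, here is a small Python sketch. The rules, column names, and pass/fail behaviour are assumptions for illustration; production platforms apply far richer checks continuously and at scale.

```python
# A hedged sketch of automated, rule-based data quality checks that can
# run as a gate in a pipeline. Rules and column names are illustrative.

import pandas as pd

QUALITY_RULES = {
    "no_missing_ids": lambda df: df["customer_id"].notna().all(),
    "amounts_positive": lambda df: (df["amount"] > 0).all(),
    "dates_not_future": lambda df: (df["date"] <= pd.Timestamp.now()).all(),
}

def run_quality_gate(df: pd.DataFrame) -> dict[str, bool]:
    """Evaluate every rule; a failing rule should block delivery downstream."""
    return {name: bool(rule(df)) for name, rule in QUALITY_RULES.items()}

df = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "amount": [9.99, 14.50, 20.00],
    "date": pd.to_datetime(["2024-01-05", "2024-02-11", "2024-03-02"]),
})

results = run_quality_gate(df)
print(results)                    # e.g. {'no_missing_ids': True, ...}
assert all(results.values()), "Quality gate failed; halt delivery."
```

Because checks like these are cheap to run on every batch, quality enforcement becomes a continuous, automated step rather than a one-off manual audit.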
Companies that seek to bypass the data management phase of GenAI projects are pursuing a false economy. Proper data management requires a little more time and effort up front, but skipping it creates a value deficit that will bite much harder in the long run. By ensuring their models are getting the right data, in the right format, at scale, organisations stand the best chance of achieving value both quickly and sustainably.
