IT decision makers are overwhelmingly concerned that the issue of AI ethics in business is not being given adequate attention, with 94% saying there needs to be greater corporate responsibility.

This is according to research conducted by Vanson Bourne on behalf of SnapLogic, which sought the opinions of 300 IT decision makers across a wide selection of industries in both the US and the UK.

Notably, 87% of those surveyed also thought that AI should be regulated to ensure better AI ethics in business, with 32% believing this should come from a combination of governmental and industry efforts.

However, 25% thought regulation should be managed solely by independent industry consortiums.

Who is responsible for AI ethics in business?

When it comes to the ethics that govern artificial intelligence and how it is used by businesses, just over half (53%) felt that the developers of AI systems, whether they are companies or academic institutions, bore primary responsibility.

However, 16% felt an independent global consortium made up of governments, academia and businesses should take primary responsibility, while 11% saw the issue as solely the domain of governments.


Overall, 17% felt the responsibility for ensuring AI ethics in business lay with the individuals working on specific projects, although this varied dramatically between the US and the UK, with 21% backing this idea in the former and just 9% in the latter.

For Gaurav Dhillon, CEO at SnapLogic, the research highlights the need for organisations to be more aware of the issue of corporate responsibility and wider AI ethics in business.

“AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes. We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way,” he advised.

“Data quality, security and privacy concerns are real, and the regulation debate will continue. But AI runs on data — it requires continuous, ready access to large volumes of data that flows freely between disparate systems to effectively train and execute the AI system.

“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained. Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”


Read more: UK minister Matt Hancock: new data and ethics centre will position UK as global leader in AI