March 26, 2019, updated 25 Mar 2019 6:55pm

AI ethics in business: 94% of IT decision makers want greater corporate responsibility

By Lucy Ingham

IT decision makers are overwhelmingly concerned that the issue of AI ethics in business is not being given adequate attention, with 94% saying there needs to be greater corporate responsibility.

This is according to research conducted by Vanson Bourne on behalf of SnapLogic, which sought the opinions of 300 IT decision makers across a wide selection of industries in both the US and the UK.

Notably, 87% of those surveyed also thought that AI should be regulated to ensure better AI ethics in business, with 32% believing this should come from a combination of governmental and industry efforts.

A quarter (25%), however, thought regulation should be managed solely by independent industry consortiums.

Who is responsible for AI ethics in business?

When it comes to the ethics that govern artificial intelligence and how it is used by businesses, just over half (53%) felt that the developers of AI systems, whether they are companies or academic institutions, bore primary responsibility.

However, 16% felt an independent global consortium made up of governments, academia and businesses should take primary responsibility, while 11% saw the issue as solely the domain of governments.

Overall, 17% felt the responsibility for ensuring AI ethics in business lay with the individuals working on specific projects, although this varied dramatically between the US and the UK, with 21% backing this idea in the former and just 9% in the latter.

For Gaurav Dhillon, CEO at SnapLogic, the research highlights the need for organisations to be more aware of the issue of corporate responsibility and of wider AI ethics in business.

“AI is the future, and it’s already having a significant impact on business and society. However, as with many fast-moving developments of this magnitude, there is the potential for it to be appropriated for immoral, malicious, or simply unintended purposes. We should all want AI innovation to flourish, but we must manage the potential risks and do our part to ensure AI advances in a responsible way,” he advised.

“Data quality, security and privacy concerns are real, and the regulation debate will continue. But AI runs on data — it requires continuous, ready access to large volumes of data that flows freely between disparate systems to effectively train and execute the AI system.

“Regulation has its merits and may well be needed, but it should be implemented thoughtfully such that data access and information flow are retained. Absent that, AI systems will be working from incomplete or erroneous data, thwarting the advancement of future AI innovation.”


Read more: UK minister Matt Hancock: new data and ethics centre will position UK as global leader in AI