As the capabilities of artificial intelligence (AI) improve, so does its prevalence in everyday life, including how businesses interact with customers. Although AI has much to offer this area, the way in which it is deployed can have a significant impact on customer trust and loyalty.

This is according to a new study from the Capgemini Research Institute, which has found that both employees and consumers have a number of concerns about the ethical use of AI.

To shed light on the situation, Capgemini surveyed 1,580 executives from large organisations across 10 countries, and over 4,400 consumers across six countries.

62% of consumers said they would have more trust in a company whose AI interactions they thought of as ethical, with 61% saying they would be more likely to share positive experiences with friends and family if this was the case.

Furthermore, 59% said that they would be more loyal to a company that uses AI in an ethical way, and 55% said that they would purchase more products and provide more positive feedback through social media.

At the other end of the scale, 41% said they would complain if an AI interaction resulted in ethical issues, and 34% would stop interacting with the company if this happened.


Beyond customers, businesses have concerns over ethical AI

The research also highlighted that those within businesses have concerns about using AI incorrectly.

Executives from nine out of 10 organisations believe that ethical issues have resulted from the use of AI systems over the last two to three years. Examples include the collection of personal patient data without consent in healthcare, and over-reliance on machine-led decisions without disclosure in finance, underlining the importance of deploying AI in a way that avoids unethical outcomes.

The research found that 41% of senior executives said they have abandoned an AI system altogether when an ethical issue had been raised.

As a result, the majority believe greater transparency and regulation are necessary to prevent such situations from occurring. Some 75% of those surveyed said they want more transparency when a service is powered by AI, indicating that organisations should do more to ensure customers are aware of when they are interacting with AI. Furthermore, over three quarters of consumers think there should be further regulation of how companies use AI.

Capgemini advises that organisations follow three key principles when deploying AI. Firstly, it is essential to establish a code of ethics for employees to refer to when AI is being used.

Secondly, those in customer and employee-facing roles should do more to educate and inform users to build trust in AI. Lastly, IT leaders should aim to make AI systems as transparent and understandable as possible.

Anne-Laure Thieullent, AI and Analytics Group Offer Leader at Capgemini, believes that ethical AI is essential to gaining and maintaining customer trust:

“Many organizations find themselves at a crossroads in their use of AI. Consumers, employees and citizens are increasingly open to interacting with the technology but are mindful of potential ethical implications. This research shows that organisations must create ethical systems and practices for the use of AI if they are to gain people’s trust.

“This is not just a compliance issue, but one that can create a significant benefit in terms of loyalty, endorsement and engagement. To achieve this, organisations need to focus on putting the right governance structures in place, they must not only define a code of conduct based on their own values, but also implement it as an ‘ethics-by-design’ approach, and, above all, focus on informing and empowering people in how they interact with AI solutions.”

Read more: AI with heart: How Pegasystems is bringing empathy into AI