Businesses should do more to prevent artificial intelligence (AI) from “entrenching existing unfairness and barriers”, a new report by the Confederation of British Industry (CBI) has said.
Although AI is often assumed to remove bias from business processes such as recruitment, the technology can in fact reflect or even amplify existing biases.
According to MIT, there are two main ways that bias shows up in training data: either the data you collect is unrepresentative of reality, or it reflects existing prejudices. In other words, if the data used to train AI reflects biases such as racism or sexism, the AI will “learn” these biases.
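The second failure mode – prejudice baked into historical labels – can be illustrated with a toy sketch. The scenario, data and names below are entirely invented for illustration: a simple "model" estimates hiring rates per group from past decisions, and because the past decisions were skewed, its recommendations reproduce the skew.

```python
# Hypothetical illustration: a toy "model" that learns hiring decisions
# purely from historical outcomes. All data here is invented.
from collections import defaultdict

# Historical records: (group, hired). Group B was hired far less often
# for reasons unrelated to ability -- the prejudice lives in the labels.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": estimate P(hired | group) from the biased data.
rates = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, hired in history:
    rates[group][0] += hired
    rates[group][1] += 1

def predict_hire(group):
    hired, total = rates[group]
    return hired / total >= 0.5  # recommend if a majority were hired

print(predict_hire("A"))  # True  -- the model reproduces the skew
print(predict_hire("B"))  # False -- equally able candidates rejected
```

Nothing in the code mentions race or gender; the bias enters solely through the labels it was trained on, which is precisely why scrutinising training data matters.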
According to research by PwC, just a quarter of organisations that are deploying AI are prioritising the issue of AI ethics.
The report, AI: Ethics Into Practice, has called for businesses to “challenge unfair bias in data and ensure teams designing the technology are diverse”. Currently, women make up just 22% of all professionals in AI, with women of colour accounting for an even smaller share. When the teams behind AI lack diversity, it is easier for biases to creep in.
It has also urged organisations to scrutinise the data fed into AI systems for prejudice against particular groups, with employees monitoring this more closely than they currently do.
The business case for tackling AI bias
Furthermore, the CBI said that businesses that embed an ethical approach to AI will also become more competitive, as “diverse businesses are more likely to outperform their rivals” – demonstrating a clear business case for adopting good practice in this area.
Agata Nowakowska, AVP at Skillsoft, believes that organisations should be doing more to ensure that a more diverse group of people is involved in the training of AI:
“Organisations need to make sure they are using AI technology responsibly, and this means recognising and preventing the potential for bias. While it’s alarming that AI can be trained to become racist or sexist, it’s not surprising. The workers creating these algorithms are predominantly white males, who are likely to programme their own subconscious bias about gender and race into the algorithms. It’s a sad fact, but it’s true, and it could become dangerous for society as a whole if it goes unchecked.
“As business leaders, we need to take action now. By encouraging a more diverse group into STEM fields we may be able to re-address the balance before it’s too late. Latest figures show that women, for example, make up just 14.4% – this is nowhere near equal. It indicates, unfortunately, that things are likely to get worse before they get better.”