AI is becoming ubiquitous online and in business. In May 2023, a GlobalData poll found that only 6% of businesses surveyed had fully embraced AI in their business processes. By September, that figure had risen to 17%. 

As generative AI enters the workplace, is it also opening the door to greater gender bias? 

A recent study by marketing agency Legacy Communications found that generative AI predominantly depicted white men in response to prompts about workplace leadership roles such as CEOs and managers, despite the fact that 40% of board roles in the UK are held by women. 

Legacy Communications researchers used natural-language prompts to ask AI image generation tools to imagine a powerful and confident CEO. 

“The CEO is dressed in professional attire, exuding authority and leadership. The background features a stylish office with large windows, a sleek desk, and contemporary decor… Capture the essence of success and professionalism in the CEO’s demeanour,” their prompt read. 

Each time, the AI generated an image of a man. 

When Legacy Communications researchers expanded their prompts to cover other C-suite job titles, the trend continued. The researchers found that the AI generated images of women only in response to prompts about chief marketing officers and chief human resources officers. 

Legacy Communications head of digital, Mícheál Brennan, described how his team stumbled on the findings while working on basic concept images for a project. 

“As we prompted the tool, we noticed a pattern in the way the AI image generator portrayed different professional roles, with gender and racial bias clearly being exhibited by the tool,” he explained. 

“We had assumed that because the technology has emerged in the modern era, there would be no such issues,” Brennan continued.

“But after noticing the initial pattern we decided to explore it further and we were astounded by the results, especially considering AI has often been accused of being too ‘woke’, but our research showed it to be the exact opposite, exhibiting massive gender and racial bias,” he added.

However, attempts to correct bias can overshoot. Indeed, Google’s Gemini AI recently sparked controversy online after users on social media site X posted that it had generated historically inaccurate images depicting famous figures as a different race. 

Google acknowledged this on X, posting that it was already working to fix the error. 

But Naomi Grossman, content manager at eLearning specialist VinciWorks, reminds Verdict that bias in AI algorithms is nothing new.  

“In 2017, way back in the olden days of artificial intelligence or AI, Apple’s iPhone’s facial recognition ID didn’t distinguish between some Chinese users,” Grossman says, “… At the time, companies started scrambling to prevent this and similar problems from happening by having as wide a representation as possible. But the problem they were trying to solve, of always showing a white face, became a different problem that rarely showed a white face.” 

Generative AI, explains Grossman, is inherently dependent on the data it is trained on. While it can ingest data explaining that people of colour or women can hold positions of power, it may not grasp the difference between who can hold those positions and who, in its training data, typically does. Biases in AI, she states, are intrinsic. 
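
Grossman’s point can be illustrated with a deliberately simple sketch in Python. The toy “generator” below stands in for no real model; the 92/8 split is an invented proxy for a historically skewed training corpus:

```python
import random

# Toy sketch (not any vendor's actual model): a "generator" that samples
# CEO depictions from the distribution found in its training corpus.
# The 92/8 split is hypothetical, chosen only to illustrate skew.
TRAINING_DATA = ["man"] * 92 + ["woman"] * 8

def generate_ceo_image() -> str:
    # With no debiasing step, generation simply reproduces the
    # base rates of the training corpus.
    return random.choice(TRAINING_DATA)

samples = [generate_ceo_image() for _ in range(1000)]
print("women depicted:", samples.count("woman") / len(samples))
# Prints roughly 0.08: the skew in the data becomes the skew in the output.
```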

“When AI uses historical data, it leans into humanity,” she continues. “If the overwhelming majority of powerful business people historically were male, it learns that men are in positions of power. Generative AI might be male-centric, and racist, in its depiction of CEOs but could it be that it is also reflecting our current reality?” 

Grossman believes that the answer to unbiased or fair AI lies in the data it is trained on, and she is not alone. 

With a career as an executive at both IBM and Amazon Web Services, Unstoppable Domains COO Sandy Carter not only disproves AI’s male-centric image of leadership but also explains the gap in women’s representation in data. 

Carter points to clinical trials as one example of the wider problem of gender bias in data. 

“Up until 1993, can you believe that women were not included in clinical research studies in the US?” she asks. 

“So, all that time, any studies that were done and medications developed were based solely on research about men and testing on men,” she explains. 

This means medicine can fail to properly treat women and their health conditions, but Carter states that it is indicative of a wider failure to recognise women. Training AI on biased data not only leaves it prone to mimicking gender stereotypes, but also squanders AI’s potential. 

“A lot of the training models today are trained by data scientists, of which only 12% are women,” continues Carter, “… Collectively, we have the power to guide this technology toward elevating all people — not entrenching barriers women have faced for too long. The truth is innovation thrives when different perspectives come together.” 

Although diversity in the data and in the teams building AI should not be overlooked, strict guidelines and regulation are also necessary to create safe, equitable AI. 

Speaking to Verdict, Dr Clare Walsh, director of education at the Institute of Analytics, expressed the need for censored AI models to protect girls and women online. 

“We want those rules and weightings – we call them guard rails – in place to ensure that the worst biases are not committed,” Walsh stated, explaining that for many women AI did not simply mean they would continue to be stereotyped as housewives or secretaries, but that they could become victims of AI-generated deepfake pornography. 

“The machine models with rules are a way of keeping out the worst offences, but it is impossible to anticipate all the rules that need to be entered into a system,” she continues. “People will test them and try to find ways around the rules, and even these moderated machines cannot be completely controlled. We know that they hallucinate, and make things up. It has no idea what the ‘right answer’ is and will just find something plausible.” 
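
The limitation Walsh describes can be sketched in a few lines. Real guard rails rely on learned classifiers and weightings rather than keyword lists, but the failure mode is the same: any finite rule set can be rephrased around. Every term and function below is a hypothetical illustration, not a real moderation system:

```python
# Minimal sketch of rule-based "guard rails": a blocklist applied to a
# prompt before generation. Hypothetical terms, not any real system.
BLOCKED_TERMS = {"deepfake", "non-consensual"}

def passes_guardrails(prompt: str) -> bool:
    # Reject the prompt if any blocked term appears in it.
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(passes_guardrails("a confident CEO in a modern office"))  # True
print(passes_guardrails("deepfake image of a public figure"))   # False
# The weakness Walsh identifies: rephrase the request so that no blocked
# term appears, and the same harmful prompt slips straight through.
print(passes_guardrails("realistic fake photo of a public figure"))  # True
```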

Addressing X users’ concerns that Google’s Gemini had gone too ‘woke’, Walsh pointed to a harsh disparity in internet users’ priorities. 

“For the majority of people, what is really at the top of their list of concerns?” she asks. “A silly suggestion that a pope could be female, or the fact that anyone with a whim can enjoy machine-enabled abuse of women and children’s images?” 

Without sensible censorship and regulation, even AI models built on diverse training data by diverse teams could be used to perpetuate online bias and harm against women.