Big Tech giants are joining the generative AI party, with Google, Microsoft, Baidu, and Alibaba all rumoured to be releasing platforms to challenge OpenAI’s popular ChatGPT chatbot. One venture capital firm estimated that over 450 start-ups are working on generative AI. This is unsurprising, as the technology has real applications for enterprises and individuals, including summarizing, translating, copywriting, and stock image compiling.

However, there are some huge drawbacks to the technology that Big Tech companies are brushing under the rug.

Lies come as standard with generative AI

Generative AI programs like ChatGPT generate prose, not facts. Many people have been confused by their inaccuracies; however, the technology is not yet meant to be a provider of information. The algorithm is focused far more heavily on writing prose that appears human than on fact-checking. It is a creative tool, not a factual one. And depending on how the AI is trained, it will stay this way.

The falsity that already exists in society will also be reflected by ChatGPT. False claims will exist as long as people do, because, as Tim Harford writes in the Financial Times, “If you care about being right, then yes, you should check. But if you care about being noticed or being admired or being believed, then truth is incidental”.

Generative AI is the first case of an AI that is not mechanically trying to solve problems. It is, by nature, creative. This is inherently the opposite of how we usually think of artificial intelligence. We must keep in mind that it is a creator, not a calculator. We asked ChatGPT “Do you lie?” It responded as follows:

“As an AI language model developed by OpenAI, I do not have personal opinions, motivations, or emotions. I am programmed to provide helpful and accurate information to the best of my abilities based on the input I receive. However, I can sometimes provide incorrect or misleading information if the data I was trained on is incorrect or if the information requested is outside of my training data.”


Google has cited worries about inaccuracies as the reason it has not yet released its own AI chatbot, Bard, regarding itself as carrying more ‘reputational risk’. This has led it to take a more conservative approach than OpenAI.

Inbuilt bias and generative AI

Another related issue is bias. In the same way that inaccuracies can filter through from training sets, so can bias. Some users have found issues around bias in ChatGPT, for example, its associating women with housework and men with scientific endeavors. A TIME investigation found that, to tackle this bias, OpenAI outsourced work to Kenyan laborers earning less than $2 an hour, who sifted through tens of thousands of snippets of text to ensure that toxic content was detected and filtered. Laborers interviewed described themselves as “disturbed” and “traumatized”, and the outsourcing company canceled its contract with OpenAI eight months early.

Emission creep

It is no secret that servers and data centers are a huge source of emissions, especially those that provide computing power for AI platforms. According to the International Energy Agency, data centers account for around one percent of all greenhouse gas emissions. Third-party analysis by researchers at Berkeley estimated that training ChatGPT consumed 1,287 MWh and led to emissions of more than 550 tons of carbon dioxide equivalent. This is the same amount as a single person taking 550 roundtrips between New York and San Francisco.
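The arithmetic linking these figures can be checked with a rough back-of-envelope sketch. Note that the grid carbon intensity and per-passenger flight emissions used below are illustrative assumptions, not figures from the cited research:

```python
# Back-of-envelope check of the training-emissions figures cited above.
# ASSUMPTIONS (not from the article): an average grid carbon intensity
# of ~0.43 kg CO2e per kWh, and ~1 tonne CO2e per passenger for a
# New York-San Francisco round trip.

TRAINING_ENERGY_MWH = 1_287          # cited estimate for training
GRID_INTENSITY_KG_PER_KWH = 0.43     # assumed grid carbon intensity
ROUNDTRIP_TONNES_PER_PERSON = 1.0    # assumed NY-SF round trip, per passenger

energy_kwh = TRAINING_ENERGY_MWH * 1_000
emissions_tonnes = energy_kwh * GRID_INTENSITY_KG_PER_KWH / 1_000
roundtrips = emissions_tonnes / ROUNDTRIP_TONNES_PER_PERSON

print(f"Estimated training emissions: {emissions_tonnes:.0f} t CO2e")
print(f"Equivalent NY-SF round trips: {roundtrips:.0f}")
```

Under these assumptions the training run works out to roughly 550 tonnes of CO2 equivalent, consistent with the estimate quoted above; a cleaner grid would lower the figure proportionally.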

It also needs to be considered that it is not just the training of these platforms that leads to emissions, but also maintaining them and serving millions of users. Some ways to counter these emissions include powering servers and data centers with renewable energy or siting them in climates where they do not need to be cooled (such as the Arctic). More energy-efficient models can also be developed, for example by reducing the computation each query requires at inference time.

It is important not to get carried away by the latest hyped-up technology to come out of Silicon Valley, whether it be the metaverse or generative AI, and to keep asking the important questions about its wider impacts on society.