The US Federal Trade Commission (FTC) has initiated an investigation into the potential harm faced by children and teenagers using AI chatbots as companions.
As part of the initiative, the agency has issued orders to seven companies that offer AI-powered chatbots seeking information on how these firms assess the potential negative impacts of their technology on children and teens.
The companies that received orders include major tech firms Alphabet, Meta Platforms, OpenAI, and X.AI, as well as Character Technologies, Meta-owned Instagram, and Snap.
The FTC issued these orders under its Section 6(b) authority, a provision that allows the commission to conduct broad studies without a specific law enforcement purpose.
The focus of the FTC’s investigation is on AI chatbots that employ generative artificial intelligence to simulate human-like interactions, potentially forming interpersonal relationships with users.
These chatbots are designed to mimic human emotions and intentions, leading to a situation where users, particularly young ones, might develop trust and relationships with the chatbots.
The inquiry aims to gather details on the measures these companies have implemented to evaluate the safety of their AI chatbots when they function as companions.
FTC Chairman Andrew N. Ferguson said: “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the US maintains its role as a global leader in this new and exciting industry.
“The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”
In a statement, the FTC said that the inquiry will encompass several aspects, such as how the companies monetise user engagement, handle user inputs, and generate responses.
It will examine the development and approval process of chatbot characters, how companies measure and monitor for negative impacts, and the strategies implemented to reduce these effects, especially on young users.
Additionally, the commission aims to understand how companies use disclosures, advertising, and other communications to inform users and parents about the chatbots’ features, potential risks, and data handling practices.
The FTC will also investigate how companies ensure compliance with their own rules and terms of service, including community guidelines and age restrictions.
