Following a recent inquiry by the Federal Trade Commission into the safety of AI chatbots and mounting pressure from grief-stricken families [[1]], Meta Platforms Inc. is enacting new limitations on adolescent access to its AI-powered chatbots. The company announced February 29 that it will restrict use of the features for users under 18, citing growing concerns about potential risks to young people. The change, which affects platforms including Facebook, Instagram, and WhatsApp, comes amid broader industry scrutiny and evolving parental control options [[2]].
Meta Limits AI Chatbot Access for Teenagers
Meta Platforms Inc. is restricting access to its artificial intelligence chatbots for users under the age of 18, a move that affects its global user base. The company confirmed the change on February 29, citing safety concerns and a desire to protect younger users.
The decision affects Meta’s AI features across its platforms, including Facebook, Instagram, and WhatsApp. Meta stated it is implementing age verification measures to enforce the restriction, though specific details of these measures have not been disclosed. This comes as concerns grow regarding the potential risks of AI interactions for adolescents, including exposure to harmful content and inappropriate interactions.
According to reports, the policy change aims to prevent underage users from engaging with AI personas that could potentially exploit or manipulate them. The company has not specified whether the restriction applies to all AI-powered features or only to conversational chatbots.
The move by Meta follows similar actions by other tech companies grappling with the ethical and safety implications of rapidly advancing AI technology. The company’s decision underscores the increasing scrutiny surrounding the deployment of AI tools, particularly those accessible to vulnerable populations.
Meta’s actions are being closely watched by regulators and privacy advocates, who have been calling for greater oversight of AI development and deployment. The company has faced increasing pressure to demonstrate its commitment to user safety and responsible innovation.
The restriction is being rolled out globally, affecting millions of teenage users. While Meta has not provided a specific timeline for full implementation, the company indicated it is prioritizing the protection of younger users. The policy shift reflects a broader industry trend toward safety-focused, responsible AI practices.