Elon Musk: ChatGPT Linked to User Suicides, Not Grok – OpenAI Accusations

Jakarta

Elon Musk asserted in a filing related to his lawsuit against OpenAI that the company’s artificial intelligence chatbot, ChatGPT, has been linked to user suicides, contrasting it with his own company’s AI, Grok. Musk claims that no users have taken their lives as a result of using Grok, which is integrated into his social media platform X, formerly known as Twitter.

The xAI CEO criticized OpenAI's safety record, arguing that his own company handles safety more effectively. "Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT," Musk stated.

The comments emerged during questioning related to a public letter Musk signed in March 2023, calling for a pause in the development of AI systems more powerful than OpenAI’s GPT-4 for at least six months. The letter, signed by over 1,100 individuals including AI experts, expressed concerns about insufficient planning and management within AI labs, citing an “out-of-control race” to develop increasingly sophisticated and unpredictable digital intelligence.

The transcript of Musk’s video testimony, recorded in September, was made publicly available in late February ahead of a jury trial expected next month. The lawsuit centers on OpenAI’s transition from a non-profit AI research lab to a for-profit entity, which Musk alleges violates its founding agreements. He argues that OpenAI’s commercial interests could compromise AI safety by prioritizing speed and revenue over safety considerations.

However, xAI has also faced scrutiny regarding safety. Last month, Musk’s social network X was flooded with non-consensual nude images generated by Grok, some reportedly depicting minors. This prompted an investigation by the California Attorney General’s office, as well as inquiries from the European Union and other governments, with some entities imposing blocks and bans.

In a recent filing, Musk maintained that he signed the AI safety letter because he believed it was a sound idea, not simply because he had founded a competing AI company. “I signed it, like many others, to urge caution in the development of AI,” Musk said. “I just wanted… AI safety to be prioritized.”

Musk also addressed questions about artificial general intelligence (AGI), AI capable of performing any intellectual task that a human being can, stating that it carries inherent risks. He recalled the origins of OpenAI, explaining that it was formed out of concern that Google was becoming a monopoly in the AI field. He described conversations with Google co-founder Larry Page as troubling, alleging that Page did not take AI safety seriously enough. OpenAI, Musk claims, was intended to serve as a counterbalance to that threat.


(ask/hps)
