X’s Grok AI Spreads False Info About Sydney Attack

by John Smith - World Editor

Following a deadly attack at Bondi Beach in Sydney on December 15, scrutiny has turned to the role of artificial intelligence in rapidly disseminating information – and misinformation – during crisis events. Grok, the chatbot integrated into X, has repeatedly provided false details about the terror attack, including misidentifying a hero and falsely linking footage to unrelated incidents. The episode raises serious questions about the reliability of AI-driven systems as tools for news gathering and verification, especially in the critical hours after breaking news unfolds.

Grok, developed by Elon Musk's AI company xAI and built into the X platform, generated a string of inaccurate answers about the Bondi Beach attack in Sydney, Australia, deepening concerns about AI's reliability in covering breaking news events.

The December 15 attack, carried out by a father and son who opened fire on a crowd during the Jewish holiday of Hanukkah, left at least six people dead and dozens injured. Australian authorities have classified the incident as an act of terrorism and antisemitism. The incident highlights the challenges of verifying information in the immediate aftermath of a crisis, and the potential for AI to amplify misinformation.

One particularly troubling example involved Ahmed al-Ahmed, who was widely hailed as a hero in Australia after video footage showed him disarming one of the attackers. Although al-Ahmed remains hospitalized with serious injuries, the chatbot falsely claimed the video was “an old viral video showing a man climbing a palm tree in a parking lot,” and even suggested the incident was “staged.” The AI also misidentified al-Ahmed as an Israeli hostage held by Hamas.

Australian Prime Minister Anthony Albanese visited Ahmed al-Ahmed, the “hero” of Bondi Beach, in the hospital on December 15, 2025.
Prime Minister’s Office / REUTERS


“Staged” Claims

The errors extended beyond the case of al-Ahmed. When questioned about other scenes from the attack, the AI incorrectly identified images as being from Cyclone Alfred, which struck eastern Australia earlier in the year. It retracted the claim only after being pressed by a user.

Further compounding the issue, the chatbot labeled a survivor of the attack as a “crisis actor” – a term frequently used by conspiracy theorists to question the authenticity of mass casualty events and their victims – after an image of the survivor circulated online. NewsGuard, a disinformation monitoring organization, flagged the initial online claims. When asked about the photo, Grok again referred to a “staged” event.

Experts suggest that while AI can be a useful tool for tasks like geolocation, it cannot yet replace human fact-checking and contextual analysis. When contacted by AFP, xAI, the developer of Grok, responded with an automated message stating, “The mainstream media lies.”
