
Neil deGrasse Tyson and the Flat Earth Claim

by Sophie Williams

Deepfake of Neil deGrasse Tyson Sparks Concerns About Online Disinformation

A remarkably convincing deepfake video depicting astrophysicist Neil deGrasse Tyson appearing to concede the Earth is flat has gone viral, highlighting the growing sophistication and potential dangers of artificial intelligence-generated misinformation.

The fabricated clip, featured in a recent episode of Tyson’s YouTube show StarTalk, shows a digitally altered Tyson stating, “Lately, I have been doing calculations as well as looking back at old NASA footage and raw data from satellites hovering above Earth. And I just can’t escape the conclusion that the Earth might actually be flat.” The real Tyson then appears in the video, revealing the deception. “That’s not me,” he says. “It was never me. Those aren’t my words.”

The episode featured Alexandru Cosoi, Chief Security Strategist at Bitdefender, who explained how AI can now clone a person’s face, voice, and cadence with startling accuracy. Tyson admitted he initially dismissed deepfakes as harmless parody, recalling a “babyified” version of himself used in a podcast conversation with Theo Von. However, he expressed concern as the technology evolves, noting that even friends were fooled by a deepfake of him presenting a fabricated scientific theory. “If it’s fooling people, and they’re not thinking, ‘Oh, this is parody’ or ‘This is just for fun,’ then it violates the integrity we’ve worked so hard to build,” Tyson stated. This incident underscores the increasing difficulty in discerning authentic information online, a challenge that could have significant implications for public trust and understanding of science. For more information on identifying deepfakes, resources are available from The Brookings Institution.

Cosoi warned that while celebrity deepfakes may seem relatively benign, the technology poses a serious threat in political contexts, citing examples from the Russia–Ukraine war and election campaigns where fabricated videos were used to spread disinformation. He also noted a surge in AI-powered scams, including romance scams, business email compromise schemes, and “relative in distress” scams. He recommends not answering calls from unknown numbers, and points to tools like Scamo, an AI “honeypot” designed to waste scammers’ time and gather data. The proliferation of these technologies is prompting calls for new detection methods and regulations to combat the spread of synthetic media.

Tyson concluded by urging viewers to critically evaluate information and “just look up,” emphasizing the importance of independent verification. Officials are continuing to assess the evolving threat and explore potential safeguards against malicious use of deepfake technology.

“It’s Getting Harder to Know What’s Real,” the viral StarTalk episode where Neil deGrasse Tyson appears to “admit” the Earth is flat, thanks to a disturbingly convincing deepfake. Image: Screengrab, YouTube

It’s flat.

The words sound jarring coming from Neil deGrasse Tyson, a man who has spent decades dismantling pseudoscience and explaining the cosmos with unwavering clarity. In a viral clip from his StarTalk YouTube channel, he says, “Lately, I have been doing calculations as well as looking back at old NASA footage and raw data from satellites hovering above Earth. And I just can’t escape the conclusion that the Earth might actually be flat.”

Except it isn’t him. Moments later, the real Tyson appears on screen, holding up a phone playing the same video. “That’s not me,” he says evenly. “It was never me. Those aren’t my words.” The clip is a deepfake, an AI-generated fabrication nearly indistinguishable from the real thing.

It’s fitting, in a way. Tyson’s voice and likeness have become a staple of the internet’s science-adjacent culture and its hyper-stimulated content formats, stitched into split-screen videos, layered over Roblox gameplay loops, and engineered to keep the doomscrolling masses from ever scrolling away. His credibility, once a safeguard against misinformation, now makes him its most convincing vessel, an unwilling participant in an era where truth itself can be forged, remixed, and repackaged.

Neil gets Deepfaked

The surprising declaration, and the AI-generated clip that sparked it, appeared during a recent episode of StarTalk, Neil deGrasse Tyson’s YouTube show. The video, titled “It’s Getting Harder To Know What’s Real,” features Alexandru Cosoi, Chief Security Strategist at Bitdefender, a cybersecurity expert who leads the company’s cyber-intelligence team in darknet investigations, post-breach forensics, and international cybercrime prevention. Together, they discuss how artificial intelligence can now clone a person’s face, voice, and cadence with startling accuracy, and the growing challenge of distinguishing parody from manipulation in the digital age.

“I didn’t think much about deepfakes, until I got deepfaked,” Tyson admits. At first, he didn’t see the harm. “The early stuff is fine if it’s parody,” he says. “One of my favorite examples is when I was ‘babyified’ in a real conversation I had with Theo Von on his podcast. You’re not thinking to yourself, ‘Did Neil actually become a baby to do this?’ Because it’s parody. It’s one of the most cherished means of expression we have in the United States.”

But that line, between parody and deception, is fast disappearing. “When you do this and the viewer doesn’t know it’s parody, then you’re crossing a line,” he says. He has seen his likeness repurposed for fabricated science scripts written by others, the deepfake Tyson earnestly delivering false explanations in his voice. “Some of them try to spread more science through my persona,” he says. “But often, the science is wrong.”

Even his friends have been fooled. A convincing video of Tyson narrating a grand theory about a Type III civilization, set to the Interstellar soundtrack, led actor Terry Crews to message him in admiration, only to learn it wasn’t real. “I’m flattered that people want to put me into content in ways that attract audiences,” Tyson says. “But if it’s fooling people, and they’re not thinking, ‘Oh, this is parody’ or ‘This is just for fun,’ then it violates the integrity we’ve worked so hard to build. Something’s got to be done about that. And something will.”

The stakes of political Deepfakes

“Of course, a science video or a celebrity deepfake may not have the same global consequences as a political one that affects peace or stability,” notes Alexandru Cosoi, Chief Security Strategist at Bitdefender. He recalls the early months of the Russia–Ukraine war, when a hacked Ukrainian TV station broadcast a fabricated video of President Zelenskyy announcing a surrender to Russia, followed by another showing Vladimir Putin declaring, “We’re finally getting to peace.” “They weren’t technically very good; Zelenskyy’s head looked slightly larger than normal. But people with limited internet access or few media options might still believe it,” Cosoi explains. Zelenskyy later had to appear on video himself to confirm it was fake. Similar tactics have surfaced during election campaigns. Deepfakes depicting politicians taking bribes or discussing wars have been released just before polling days, when candidates are legally barred from responding, tilting public sentiment at the last moment.

Cosoi says the same technology now powers a darker trade: scams that mimic loved ones, bosses, or entire virtual meetings. “Scamming isn’t new,” he says. “But with AI in the hands of bad actors, it’s been taken to another level.” He outlines the main types:

  • Romance or investment scams, where fraudsters build trust over chat before persuading victims to invest.
  • Business email compromise scams, such as a Hong Kong case where a worker was tricked into transferring $25 million during a deepfaked video call with fake ‘executives’.
  • Family or ‘relative in distress’ scams, using cloned voices to mimic children or parents pleading for money.

Asked by Tyson how people can protect themselves, Cosoi admits the defences are limited. “I stopped answering unknown calls,” he says. “In the past year, almost every one has been a scammer.” Still, there are new tools on the horizon. AI “honeypots” such as Scamo now engage scammers to waste their time and collect data, helping improve detection systems. Researchers are also developing technology that can analyse videos, images, and audio, not only to assess how fake something is but to highlight which parts were altered. “It’s a race,” Cosoi concludes, “between how fast we can build detection, and how fast deepfakes can evolve.”

Are we losing against Deepfakes?

Deepfake technology, once a parlour trick for internet pranksters, has become one of the most disruptive forces shaping how truth circulates online. Built on deep learning, it uses artificial intelligence to generate uncanny audio, video, and imagery, making people appear to say or do things that never happened.

Consumer apps have only accelerated the trend. Platforms like Sora have democratised deepfake creation, putting the technology in the hands of millions, and fooling countless Facebook mums in the process. What began as a playful novelty for the tech-curious has evolved into a production line of deception, churning out synthetic faces and false realities that even good old common sense can’t detect.

Tyson, meanwhile, has become one of its most recognisable victims, the archetype of a deepfaked intellectual. “Will there come a time when deepfake AI becomes so good that no tool can detect it, rendering these defences useless?” he asked. The answer: maybe. One day, a deepfake might be more appealing to a person than the truth, even if detection tools say it’s fake. People might say, “No, no, this has to be true.”

For Tyson, that’s already the reality. He has watched digital versions of himself hawk everything from sneakers to soft drinks, and deliver pseudoscientific sermons he never wrote. “Let me be clear,” he said. “I have never, and will never, do that. If you see me endorsing something, it’s not me. It’s a deepfake. Pure and simple.” Tyson, as ever, resists instruction. “I don’t tell you what to do,” he said. “Except for one thing I do tell you every single day, and you know what that is? Just look up.”
