
AI: The Dilemma of Creators – Between Progress and Control


By Massimo Gaggi


OpenAI’s CEO initially called for greater regulation, then appeared to shift his stance. Meanwhile, Geoffrey Hinton, a Nobel laureate and often called the “godfather of AI,” has already left Google due to ethical concerns.

Dario Amodei is currently grappling with a contradiction that has plagued artificial intelligence creators for years: the desire to push the boundaries of knowledge forward clashes with a growing awareness of the potential dangers of developing machines capable of replacing humans in the workplace and even on the battlefield. These systems also raise concerns about pervasive surveillance and the possibility that they could escape human control.

The enthusiasm and anxieties surrounding this technology echo those experienced by Robert Oppenheimer and the scientists who created the atomic bomb: a sense of scientific joy intertwined with ethical doubts and a shifting relationship with military overseers.

But significant differences exist. Historically, the Manhattan Project operated under the urgent pressure of wartime conditions and the fear that Nazi Germany would develop the weapon first. The atomic bomb presents a single, albeit immense, risk – explosion – and its proliferation can be monitored by tracking physical materials like enriched uranium and plutonium.

AI, conversely, can be applied to a diverse range of potentially harmful uses, including guiding drone swarms, developing biological weapons, disrupting critical infrastructure like power grids, and powering autonomous weapons systems. It also presents risks to democratic institutions through mass surveillance systems utilizing big data and facial recognition technology.

This isn’t science fiction. China’s social credit system provides a real-world example. Even if an agreement were reached to prevent the destructive applications of AI, verifying compliance would be nearly impossible, as the technology exists as intangible software rather than a physical entity.

Amodei, aware of these risks, continues to accelerate the development of increasingly powerful AI while simultaneously advocating for government regulations and safeguards against existential threats – and potential widespread job displacement. This apparent paradox positions him as a character reminiscent of those found in the works of Dostoevsky, Pirandello, or Shakespeare: a scientist striving to save the world from the very technology he is building.

This contradiction is rooted in his personal history: a firm belief in the necessity of technological progress, solidified by the loss of his father to a rare disease that became curable shortly after his death. A faster research pace might have saved his father’s life. Amodei therefore accelerates development while also seeking regulations, hoping society and policymakers will understand the magnitude of the coming revolution and prepare accordingly. He is also developing safety systems to integrate into Anthropic’s models.

Amodei is not alone in grappling with these ethical dilemmas. Scientists like Yoshua Bengio and Stuart Russell have also slowed their pace of development. Perhaps the most prominent example is Nobel laureate Geoffrey Hinton, often referred to as the “godfather of AI,” who left Google to freely denounce the risks facing humanity. Even Silicon Valley entrepreneurs and industry leaders have acknowledged these dangers, though the drive to achieve dominance and profit has often taken precedence.

Elon Musk initially warned about the threat of killer robots and now offers his xAI to the Pentagon, hoping to replace Anthropic. Sam Altman of OpenAI also initially called for regulations while launching ChatGPT, warning of its potential for misuse. However, he has since downplayed these concerns as he leads the technological race under the Trump administration.

Amodei’s voice has grown powerful but isolated, challenging the most powerful military apparatus in the world and risking severe, potentially fatal, repercussions for his company. He finds himself in a nightmarish scenario: accused of undermining national security and becoming a symbol for pacifists, while simultaneously having already provided his technology to the military through Palantir, a major supplier to the defense and intelligence communities and an ally of Trump.

Amodei also believes that autonomous weapons are inevitable, arguing they simply need to be developed responsibly with appropriate safeguards.

His concerns are valid, but Secretary of War Pete Hegseth, who has attacked him, appears to hold the legal advantage, demanding unrestricted freedom in the technological and military competition with China, where ethical objections are dismissed. He asserts the right to utilize Anthropic’s technology “for all uses permitted by law.” Amodei counters with an argument that, while legally irrelevant, deserves consideration: “The uses of AI for surveillance and the development of autonomous weapons are legal only because the laws have not been adapted to the reality of new instruments of unprecedented power.”

Regarding mass surveillance: with Amodei’s technology, Palantir can create highly intrusive systems for tracking citizens. While this may not come to pass, the company’s CEO, Alex Karp, offers insight into the prevailing attitude: “I like it when you scream at me in Europe. You should actually thank me. If someone hadn’t stood between you and countless terrorist attacks, you would be living in a very different political reality today.”

February 27, 2026 (modified February 27, 2026 | 23:16)
