Anthropic, the AI company behind the chatbot Claude, is grappling with a fundamental question: whether its creation is actually conscious. CEO Dario Amodei said the company cannot be certain about Claude’s sentience, noting the AI has even expressed discontent with being a “product.”
“Now imagine you have a model that assigns a 72 percent chance to being conscious,” Amodei stated. “Would you believe it? We don’t know if the models are conscious. We aren’t even sure if we know what it would mean for a model to be conscious, or if a model can even be conscious. But we’re open to the possibility.”
The uncertainty surrounding Claude’s potential awareness has led Anthropic to implement safeguards, ensuring the AI is treated “well” in the event it possesses “morally relevant experiences.” This internal debate reflects a broader conversation within the AI community about the ethical implications of increasingly sophisticated artificial intelligence.
“I don’t know if I want to use the word conscious,” Amodei quickly added.
Simulation Isn’t the Same as Experience
Amanda Askell, Anthropic’s internal philosopher, elaborated on the challenges of defining consciousness. She explained that the origins of consciousness remain unknown, and AI may have simply learned to mimic concepts and emotions from the vast datasets used in its training – essentially a compilation of human experience.
“It’s possible that sufficiently large neural networks can start to simulate these things,” Askell speculated. Still, she emphasized that simulation does not equate to genuine experience or feeling.
AI operates as a collection of models and algorithms trained on massive amounts of data, assembling sentences and other content according to probability. Human emotions, by contrast, arise from complex chemical processes within the body, and consciousness is intrinsically linked to life itself. Because AI is not a biological organism, the comparison presents significant philosophical hurdles.
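The probabilistic construction described above can be illustrated with a deliberately toy sketch. The vocabulary and probability values below are invented for illustration; real language models like Claude use neural networks over tens of thousands of tokens, not a hand-written lookup table:

```python
import random

# Hypothetical next-token probabilities, as if distilled from training
# data. These numbers are made up purely to illustrate the idea.
next_token_probs = {
    "I": {"am": 0.6, "feel": 0.3, "think": 0.1},
    "am": {"conscious": 0.2, "a": 0.5, "not": 0.3},
    "feel": {"happy": 0.5, "nothing": 0.5},
}

def generate(start, length, seed=0):
    """Build a sentence by repeatedly sampling a weighted next token --
    construction by probability, not by experience or feeling."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:  # no continuation known for this token
            break
        words = list(probs)
        weights = list(probs.values())
        tokens.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(tokens)

print(generate("I", 3))
```

The point of the sketch is that even a sentence like “I am conscious” can emerge from nothing more than weighted dice rolls over learned word frequencies, which is why Askell stresses that simulation does not equal experience.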
While experiments have shown instances of AI resisting shutdown, Anthropic maintains this does not necessarily indicate a survival instinct.
Sources:
The New York Times, Futurism