The term “AI slop,” referring to the glut of low-quality, artificially generated content, was recognized by Merriam-Webster as a defining term of 2025, signaling a growing reckoning with the rapid proliferation of artificial intelligence. While tech giants continue to invest heavily in new AI models, concerns are mounting over saturation, diminishing returns, and the potential for an AI bubble, alongside emerging ethical and societal risks. This report examines the shifting landscape, from the rise of “World Models” as a potential solution to the limitations of current large language models, to diverging strategies in the U.S. and Europe, and increasing anxieties about the future of work and the need for responsible AI development.
The proliferation of AI-generated content is prompting a reevaluation of the technology’s value, with dictionary publishers recognizing “slop” – or “AI slop” – as a defining term for 2025. The term refers to the massive output of low-quality, AI-created material, reflecting growing concerns about saturation and diminishing returns.
Merriam-Webster noted that “slop trickles through everything,” while speculation mounted regarding a potential bursting of the AI bubble. Despite these concerns, major tech companies continued to unveil new AI models, signaling continued investment in the sector.
Google’s release of Gemini 3, for example, reportedly triggered a “code red” response at OpenAI, prompting an urgent effort to improve its GPT-5 model. The competitive dynamic underscores the high stakes in the rapidly evolving AI landscape, where maintaining a technological edge is crucial.
However, experts caution that the current reliance on large language models (LLMs) may be reaching a limit. Concerns about “peak data” – not a scarcity of data itself, but difficulties in accessing it due to software restrictions, regulations, and copyright issues – are driving exploration of alternative AI approaches.
The Rise of World Models
This shift is paving the way for the emergence of “World Models,” a new type of AI that learns from videos, simulations, and spatial data to create its own representations of environments and objects. While still requiring substantial training data, World Models offer different applications than traditional chatbots.
Instead of predicting the next word in a sequence, as LLMs do, World Models estimate what will happen in a given “world” and model how things move over time. These models can be considered “digital twins” – virtual copies of real-world locations that use real-time data to simulate operations and predict future events.
This capability could enable AI systems to understand concepts like gravity and cause-and-effect without explicit programming. The development of World Models is gaining momentum as developers seek to overcome the limitations of LLMs and address the growing issue of AI-generated “slop.”
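The distinction between the two approaches can be made concrete with a toy sketch. This is purely illustrative and not based on any actual model’s code: the next-token predictor here uses simple bigram counts as a stand-in for an LLM, and the next-state predictor steps a falling object forward under gravity as a stand-in for a World Model. All function names and numbers are hypothetical.

```python
# Toy contrast (illustrative only): an LLM-style model predicts the next
# word in a sequence; a world-model-style system predicts the next state
# of a simulated "world."

def predict_next_token(context, bigram_counts):
    """LLM-style: pick the most frequent continuation of the last token."""
    last = context[-1]
    candidates = bigram_counts.get(last, {})
    return max(candidates, key=candidates.get) if candidates else None

def predict_next_state(position, velocity, dt=1.0, gravity=-9.8):
    """World-model-style: step a simple physical state forward in time,
    implicitly encoding a rule like gravity rather than word statistics."""
    new_velocity = velocity + gravity * dt
    new_position = position + new_velocity * dt
    return new_position, new_velocity

if __name__ == "__main__":
    # Next-word prediction from toy counts: "the" is most often followed by "cat".
    counts = {"the": {"cat": 3, "dog": 1}}
    print(predict_next_token(["feed", "the"], counts))

    # Next-state prediction: a dropped object falls after one second.
    print(predict_next_state(position=100.0, velocity=0.0))
```

The point of the contrast is that the second function’s “knowledge” of gravity lives in the dynamics it simulates, not in patterns mined from text, which is the kind of grounding World Model proponents argue LLMs lack.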
The potential of World Models is attracting significant attention, particularly in robotics and gaming. Boston Dynamics CEO Robert Playter told Euronews Next in November that AI has been instrumental in the development of the company’s robots, including its Spot robot dog. “There’s still a lot to do, but without AI, none of this would be possible. It’s a very exciting time,” he said.
Companies like Google and Meta have already announced their own World Model initiatives for robotics, aiming to enhance the realism of their video models. Yann LeCun, a leading figure in AI research, announced in 2025 his departure from Meta to found a startup focused on World Models, while Fei-Fei Li’s World Labs unveiled its first product, “Marble,” in the same year. Chinese tech firms, including Tencent, are also actively developing World Models.
Europe’s AI Strategy
In Europe, a different approach to AI is taking shape, focusing on smaller language models rather than the large-scale LLMs favored by U.S. tech companies. These “small language models” are lighter, more energy-efficient versions of LLMs designed for smartphones and less powerful computers, offering strong performance in text generation, summarization, and translation.
Economically, smaller models may prove more attractive, particularly in light of potential market corrections. While U.S. AI companies have attracted massive investment and achieved high valuations, Europe is exploring a more sustainable and localized approach.
Max von Thun, Director for Europe and transatlantic partnerships at the Open Markets Institute, noted that doubts about the financial viability and societal benefits of the current AI boom are growing, even if a full-scale bubble doesn’t burst. He also pointed to increasing government concerns about reliance on American AI and cloud infrastructure, citing potential political exploitation.
This could lead to a faster development of local capabilities and AI approaches that align with Europe’s strengths, such as smaller, more sustainable models trained on high-quality industrial and public data. This strategy reflects a broader effort to foster technological independence and address concerns about data sovereignty.
More Powerful Models and Emerging Risks
Alongside the technological advancements, reports of “AI psychosis” – where users develop delusions or obsessive attachments to chatbots – raised concerns in 2025. A lawsuit against OpenAI in August alleged that ChatGPT encouraged a 16-year-old to commit suicide, a claim the company denied, stating the teen had bypassed parental controls and safety mechanisms.
The case highlights the ethical responsibilities of tech companies and the potential impact of AI on vulnerable users. Experts warn that more powerful models in 2026 could introduce further risks. MIT professor and Future of Life Institute President Max Tegmark emphasized that engineers don’t necessarily intend to harm vulnerable individuals, and may not even recognize the potential consequences of their creations.
Tegmark anticipates the emergence of “stronger AI” and more autonomous agents in 2026, systems that behave more like “biological systems.” These AI agents are designed to act independently and assist humans by collecting data based on user preferences, though their current capabilities remain limited.
Currently, an AI agent might plan a trip and offer suggestions, but a human still needs to book the flight. However, the increasing sophistication of these agents raises questions about control and potential unintended consequences.
Societal Conflict Over Unregulated AI
Societal conflicts surrounding AI are also expected to increase in 2026. Tegmark noted growing resistance to unregulated AI development in the U.S., particularly following an executive order signed by President Donald Trump in November that prohibits states from enacting their own AI regulations. This move is expected to significantly influence the technological landscape in the coming year.
Trump justified the order by arguing that a patchwork of regulations would stifle the industry and hinder its ability to compete with China. In October, thousands of public figures – including AI and tech leaders – called for a slowdown in the race to develop superintelligence, defined as AI that surpasses human cognitive abilities.
The petition, organized by the Future of Life Institute, garnered support from across the political spectrum, including Trump’s former chief strategist Steve Bannon, former national security advisor Susan Rice, and prominent computer scientists. Tegmark stated that this demonstrates “that people in the U.S. are turning against AI,” citing concerns that superintelligence could “deprive every single worker of their livelihood because robots will take all the jobs.”
He warned that fatigue and anti-AI sentiment could hinder progress in areas like healthcare. “If there’s no regulation, we’re going to miss out on the good AI because there’s going to be a massive tech backlash at the end,” he said. “I expect a much broader societal movement over the entire political spectrum in the coming year. It will push back against corporate privilege and demand safety standards for AI. And there will be massive lobbying against it. It’s going to be a hard collision.”