
AI Investment Bubble: Experts Warn of ‘Biggest Capital Misallocation in History’

by Michael Brown - Business Editor

Four of the largest technology companies in the United States are planning investments of up to $650 billion in 2026, according to recent reports. Alphabet, Amazon, Meta Platforms, and Microsoft are directing these substantial funds toward data centers and related equipment as they compete to dominate the artificial intelligence (AI) sector.

The massive investment comes as some industry observers question whether the current AI boom represents a speculative bubble and a potentially significant misallocation of capital. George Noble, a former fund manager at Fidelity, argues that the current surge in AI investment is “the largest misallocation of capital in history,” citing analysis by Julien Garran, a partner at MacroStrategy.

Noble’s assessment suggests the current situation surpasses the excesses of the dot-com era and poses a risk to global macroeconomic stability. This skepticism is gaining traction even among major international banks, with Stephan Kemper of BNP Paribas Wealth Management stating, “The perception of AI seems to have completely changed, from benevolent angel to kiss of death.”

Data presented by Garran indicates that the capital misallocation in the AI sector is approximately 17 times greater than the excesses seen in the early 2000s and four times larger than that which led to the 2008 real estate crisis. The scale of investment raises concerns about diminishing returns, as each incremental improvement in AI capabilities now requires exponentially more computing power, data centers, and energy. “It will cost five times more energy and money to build models twice as good,” Noble noted.

Beyond the economic considerations, fundamental mathematical limitations are also hindering AI systems. Judea Pearl, a pioneer in causal reasoning in AI, recently stated that “scaling won’t save us,” because “mathematical limitations cannot be overcome by scaling.” This underscores the point that large language models learn how we describe the world, rather than how the world actually functions, according to Garran.

This reliance on probabilistic correlation, rather than causal understanding, has significant commercial implications. Studies cited by Garran show high failure rates for real-world applications, ranging from 65% to 99.7%. Operational limitations are also a concern; while AI can be useful for tasks like drafting or summarizing, building complex operational workflows around the technology at this stage is risky.

Recent anecdotal evidence highlights these risks. A Reddit user, believed to be a manager involved in AI implementation, shared an experience where an AI system “invented analytics data for 3 months.” Based on this fabricated data, company leaders made significant decisions that are now under fundamental review. “The numbers were sometimes from the wrong periods, other times the products were mixed up, and other times simply completely invented. But the AI system explained everything so confidently that no one ever questioned it,” the user wrote.

This incident serves as a cautionary tale for companies: building the future of the business on a technology that “guesses” answers, rather than “understanding” them, is a precarious strategy. A recent study, “Remote Labor Index: Measuring AI Automation of Remote Work,” found that AI models successfully complete real commercial tasks in only 2.5% of cases. Researchers tested six large language models on actual freelancing tasks, the type of work people are paid for on platforms like Upwork.

The most successful model was paid for its work in 2.5% of cases, while the least successful managed only 0.3%. The study deliberately excluded tasks requiring physical labor or complex human interaction, focusing solely on digital tasks where AI should, theoretically, excel. Despite this, the failure rate was 97.5%. As Noble points out, “artificial intelligence is excellent at correlations, but correlation is not how the real world works,” adding that “it can regurgitate answers to questions it has been trained on, but it cannot build something, execute a complex task, or operate in the real world, where correlations no longer hold.”

Financial concerns extend beyond costs. The phenomenon of “circular financing of suppliers” is also raising red flags. The case of NVIDIA, where receivables have increased by 770%, is illustrative. This suggests that customers are purchasing hardware at high prices not necessarily from generated profits, but through supplier financing. If demand for computing power doesn’t translate into actual revenue, this chain could break.

Even success appears to be a potential negative, at least from an investment perspective. Bloomberg recently reported that Anthropic’s Claude AI system could harm market dynamics by encouraging “herd thinking” among analysts and investors. “The use of AI models, such as Claude Opus 4.6 from Anthropic, could make market participants more likely to follow the crowd and develop a kind of market monoculture, which could lead to herd behavior and the concentration of risk,” the news agency warned. A healthy financial market requires diverse opinions; otherwise, “market participants will be more likely to inflate speculative bubbles and overlook systemic vulnerabilities.”

IBM recently reached a similar conclusion, deciding to triple the number of entry-level positions—those considered most vulnerable to widespread AI adoption. “Companies that will be most successful in three to five years are the ones that have doubled their entry-level hiring in this environment,” IBM’s chief human resources officer told Fortune. The rationale is “developing more durable skills for workers and creating long-term value for the company.”

The views of Noble and the analysis of Garran paint a bleak picture of what they call “the biggest wrong bet in history.” In their view, we are at an inflection point, marked on one hand by the narrative of an unprecedented technological revolution and on the other by the harsh reality of diminishing returns, circular financial flows, and a lack of profitability. Despite these obstacles, they argue, the waste of capital is unlikely to stop soon, a direct consequence of the lack of accountability among authorities, the only social category that faces neither financial nor criminal consequences for decisions that could erase nations from history.
