
Has the AI hype bubble finally burst?


After months of breathless anticipation and escalating promises about GenAI, the release of GPT-5 served as an unexpected reality check for an industry high on its own supply. Perhaps it’s time to see AI for what it really is: not magic, but normal technology. Can we find this middle ground, or is the bubble about to dramatically pop?

The AI hype cycle is, for now, over.

It died earlier this month when OpenAI at long last released GPT-5.

Rather than the landmark release some were anticipating, this model’s USP is the introduction of a “router” that automatically chooses the best model for any given task from its grab-bag of existing, confusingly named releases. The verdict is that GPT-5 is “more of a server resource optimizer than a new, revolutionary intelligence.”
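The routing idea itself is simple. As a rough sketch – with invented model names and a crude heuristic, not OpenAI’s actual design – a router just classifies each prompt and dispatches it to whichever backend model seems cheapest and adequate:

```python
# Toy sketch of a model "router": classify an incoming prompt and
# dispatch it to whichever model seems best suited. The model names
# and the heuristic below are illustrative assumptions only.

def route(prompt: str) -> str:
    """Pick a (hypothetical) model name for a prompt."""
    reasoning_cues = ("prove", "step by step", "debug", "why")
    if any(cue in prompt.lower() for cue in reasoning_cues):
        return "slow-reasoning-model"  # slower, more expensive
    if len(prompt) > 500:
        return "long-context-model"    # bigger context window
    return "fast-cheap-model"          # default: cheapest option

print(route("Why does my code deadlock? Walk me through it step by step."))
```

Note that nothing in this dispatch logic makes any individual model smarter – which is precisely the criticism: it optimises serving costs rather than advancing capability.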

It was hardly OpenAI’s “New Coke” moment. But months of vague, unsubstantiated hype talking up GPT-5 as a leap towards AGI meant that anything short of the birth of a machine god was bound to disappoint. Instead, OpenAI delivered an overhyped product that underdelivered, while Altman pivoted to talking up GPT-6.

Meanwhile, negative sentiment began to spread. Despite breathless talk of a recent GenAI explosion, it was beginning to dawn on many that progress is slowing.

Some met the news with anger, others with optimism, but the consensus was that the release might represent the high-water mark of the AI cycle that started in 2022.

Is it now time to admit that AI might be a tad overhyped, like Vodafone CTO Scott Petty did last year?

Let’s lay out just a few reasons why:

Customer trust in AI remains low

While customers do appreciate the benefits of GenAI-powered customer support, especially around 24/7 availability, speed, and personalised interactions, 54% are wary of trusting AI.

Many people still prefer human interaction for sensitive or complex issues, especially when the AI experience is poorly designed or lacks transparency; half of consumers feel that their support issues are only resolved when dealing with a human agent.

For all the developments in reasoning that models have made in various benchmarking tests, few of these advancements have had much impact on the average customer, who cares most about cost, speed and accuracy.

Gartner now predicts that half of the companies that set out to replace customer service with AI will abandon those plans: “The human touch remains irreplaceable in many interactions, and organizations must balance technology with human empathy and understanding.” The smarter approach is to streamline operations by integrating customer service agents with AI.

This is in no small part because hallucination rates are getting worse despite advances in newer models. According to an OpenAI technical report, its o3 and o4-mini models had significantly higher hallucination rates than the earlier o1 model, and hallucinations are increasing even as the models are packed with more data. In fact, every LLM output is a hallucination – some are just more coherent and accurate than others.

Hallucinations aren’t a bug that can be patched out; they’re baked into how LLMs work. They output what’s most statistically probable, which isn’t the same as what’s correct.
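The gap between “probable” and “correct” can be made concrete with a toy example (the probabilities here are invented for illustration): if the training data happens to make a wrong continuation more common than the right one, greedy decoding will confidently return the wrong answer.

```python
# Minimal illustration with invented numbers: an LLM scores
# continuations by probability, not truth. If the wrong answer is
# more common in the training data, greedy decoding returns it.

next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # frequent in text, but wrong
        "Canberra": 0.40,  # correct, but less common
        "Melbourne": 0.05,
    }
}

def greedy_next(prompt: str) -> str:
    """Return the most probable continuation – not the most correct one."""
    probs = next_token_probs[prompt]
    return max(probs, key=probs.get)

print(greedy_next("The capital of Australia is"))  # "Sydney" – a fluent hallucination
```

Real models sample from distributions over tens of thousands of tokens, but the failure mode is the same: fluency and confidence track frequency, not factuality.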

For communications services providers, this should serve as a warning: deploying GenAI for customer support won’t solve everything. The value lies in augmentation, not replacement.

GenAI isn’t very good at tasks or reasoning

By design, LLMs don't actually “reason” – they simply predict the next word based on statistical patterns in massive datasets. This can look impressive, but it isn’t thinking or understanding.

  • In May, researchers at Carnegie Mellon University released a paper showing that even the best-performing AI agent, Google's Gemini 2.5 Pro, failed to complete real-world tasks 70% of the time.

In short, the promise is bigger than the payoff.

AI is doing very sophisticated pattern matching, but it’s not actually thinking or understanding. Quite the opposite: their overfitting on common maths problems and brainteasers to dodge known gotchas only exposes the disconnect between user inputs and chatbot outputs.

The results can be impressive, but to call it “reasoning” is misleading; it’s just using algorithms to mash up data from its training material.

AI is too expensive to run

As journalist Ed Zitron has reported, while big AI firms are seeing revenues – not profits – in the billions of dollars, those revenues come on the back of tens of billions in expenditure on training and running these models. It costs a lot to run big AI models – far more than the average user is willing to pay.

Of ChatGPT’s reported billion-plus users, fewer than 1% pay for the privilege. This might explain why such a highly valued company lost $5 billion in 2024.

The AI industry as a whole is predicted to need additional revenues of $600 billion every year to offset the costs of training models and footing the enormous energy requirements.

Even Sam Altman has admitted that the current AI phase is a bubble – one that is already showing signs of popping. Meanwhile, The New York Times is reporting [paywalled] that Meta is slashing its AI division.

Surely the significant investment is worth it if AI generates revenue in the long run? It turns out that few companies are deriving any meaningful value from these products so far: a new report from MIT finds that only 5% of enterprise AI pilot programmes deliver measurable returns.

And if these models are plateauing in capability while still costing billions, the hype cycle looks a lot less like progress and more like a bubble. Any sort of crash could have devastating consequences for the wider economy.

We (likely) aren’t getting super-intelligence soon

Artificial General Intelligence (AGI) is so passé. Artificial Super-Intelligence (ASI) is now the real dream – an AI that doesn’t just match human capabilities but surpasses them across the board.

Altman claimed in January that AGI was achievable in 2025, but since GPT-5’s launch he’s been backpedalling on that promise. AGI is being redefined from an intelligence aware of itself and its environment, with the capacity to have its own thoughts independent of any human direction, to just a chatbot that’s better at following instructions.

While short-term value may be real, the assumption that a great intelligence will emerge from these stochastic systems seems far-fetched.

We may continue to see linear progress in AI capability for decades, but only because compute keeps growing exponentially – exponential inputs yielding ever-smaller gains in intelligence. Think of the leap between GPT-3 and GPT-4 versus the incremental improvements delivered by GPT-5.

Meta Chief AI Scientist Yann LeCun has long argued that simply building bigger models won’t spontaneously spawn true intelligence.

However, some in Silicon Valley are dogmatically set on birthing this machine god.

What has been dubbed the “TESCREAL movement” sees the development of AI not as an exercise in technology but as the next step in human evolution.

Humanity is nothing more than a “biological bootloader for digital superintelligence” according to Elon Musk, while Palantir co-founder Peter Thiel seems to view all of humanity as a sacrificial lamb to see through this prophecy.

Could they truly believe this, or do they just see it as a means to boost their companies’ valuations? Such statements are hard to separate from the bombastic claims of any firm that bolts AI onto its workflows so investors might mistake it for a growth company.

The companies that built their entire business models around LLMs are ultimately going to struggle as those models become a commodity.

Treating AI as normal technology

Whilst telcos are still optimistic about AI delivering value, that value will come in specific use cases.

The productivity gains are real but not dramatic; the models are capable but not reliable.

Instead, we should consider AI as normal technology, “a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions or technical breakthroughs.” We should treat it like any other tool: useful when well-suited to the problem, but not a solution to every situation.

AI isn’t collapsing – but the hype is. What remains is a very useful, but limited tool. The companies that thrive won’t be those chasing AGI fantasies, but those applying AI pragmatically – like Cerillion, where we’ve integrated GenAI into our BSS/OSS suite as a tool, not a miracle.

About the author

Adam Hughes

Content Specialist, Cerillion
