
Back to the Future: how accurate were 1995’s predictions about the future of telecoms?


In 1995, two futurologists made a series of projections for the state of telecoms in the 2020s. We revisit their forecasts in 2025 to see how well – or not – their predictions held up.

When it comes to predicting the future, the only certainty is uncertainty, especially if technology is involved.

From visions of the future in the likes of 2001: A Space Odyssey to off-the-mark predictions on shows such as Tomorrow’s World – and, yes, even our own telecoms trends blogs – many have struggled to offer anything more than a vague stab in the dark.

To make claims about next year’s technology trends is one thing, but trying to predict 30 years of progress is folly. Nevertheless, in an article published in the January 1995 edition of British Telecommunications Engineering, a quarterly trade journal published in the UK by the Institution of British Telecommunications Engineers, two futurologists decided to do just that.

Peter Cochrane and Ian Pearson of BT Laboratories’ Systems Research Division put together 200 futures for 2020, a list of predictions “considering possible future developments in telecommunications into the 21st century.”

Certainly, some of their predictions – deep underground cities, cyber-nations with “significant economic muscle,” and regular manned missions to Mars – remain fanciful, but many about the future of communications and beyond have indeed come to pass, including:

  • Everyone in advanced nations will be computer literate
  • Virtual companies to be dominant
  • Electronic shopping for many products will be the norm
  • Global electronic currency in use; paper and coins largely replaced by electronic cash
  • Various forms of electronic addiction will be a big problem
  • Distance learning widespread

… and, perhaps most sagely of all, that “network will still be a significant bottleneck for some services.”

We’ve selected a handful of those new telecoms technologies to see how accurate the predictions have proven:

Broadband ISDN and ATM switches (in widespread use by 1998)

Broadband ISDN (B-ISDN) aimed to provide higher data rates and support services including video and multimedia. It allowed for faster data transfer rates than traditional ISDN, making it more suitable for emerging high-bandwidth applications.

Asynchronous Transfer Mode (ATM) had emerged in the late 1980s, offering high-speed transmission using fixed-size 53-byte “cells” and the ability to carry voice, video and data over the same network efficiently. ATM had some niche use (e.g. in telecom cores and early DSL) in the 1990s, but even then, IP/Ethernet was already supplanting it. BT used ATM technology in its early broadband network infrastructure, but the “broadband ISDN” vision was quickly overtaken by Internet technologies in the mid-2000s.
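ATM’s fixed cells made switching fast, but they also imposed the infamous “cell tax”: every 53-byte cell carries 5 bytes of header, and packets must be padded to fill the last cell. A quick back-of-the-envelope sketch (assuming AAL5, the adaptation layer commonly used for data traffic, which adds an 8-byte trailer before segmentation) shows why IP over Ethernet looked more attractive:

```python
import math

ATM_CELL = 53       # bytes per cell: 5-byte header + 48-byte payload
CELL_PAYLOAD = 48
AAL5_TRAILER = 8    # AAL5 appends an 8-byte trailer before segmentation

def atm_cells_needed(packet_bytes: int) -> int:
    """Number of 53-byte cells needed to carry one packet over AAL5."""
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

def atm_overhead(packet_bytes: int) -> float:
    """Fraction of on-the-wire bytes that are not user payload."""
    wire_bytes = atm_cells_needed(packet_bytes) * ATM_CELL
    return 1 - packet_bytes / wire_bytes

# A standard 1500-byte IP packet needs 32 cells (1696 bytes on the wire),
# so roughly 12% of the link is spent on headers, trailer and padding:
print(atm_cells_needed(1500), f"{atm_overhead(1500):.0%}")
```

This is a simplification – real deployments vary by adaptation layer – but it captures the overhead argument that helped IP win out.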

BT announced in 2015 that its ISDN network would be shut down for good by 2025. While ATM may still be used in niche applications or legacy systems, its presence in network infrastructure has diminished significantly. ATM now survives mostly as a low-level transport layer (for example inside some DSL systems) rather than a front-end service, and the B-ISDN concept has largely faded.

Living area use of virtual reality (1999)

In the mid-90s, the tech industry was going through a VR hype cycle not unlike those we’ve seen around other emerging technologies. But by 1999, consumer VR was essentially non‑existent.

Nintendo’s Virtual Boy, released in 1995, was the first home system to attempt a stereoscopic 3D experience. Its head-mounted display rendered games like Mario’s Tennis in harsh red monochrome LED graphics, relying on parallax effects rather than true 3D rendering. The device didn’t track head movement, and interaction with gameplay was still done via controller. Players reported eye strain and headaches after prolonged use, and, combined with a lack of compelling titles, the Virtual Boy’s commercial life was short.

The first home VR headset to see any measure of commercial success was the Oculus Rift, whose Kickstarter-funded DK1 developer kit shipped in 2013. Now, in 2025, VR is enjoying much more success, though it remains relatively niche.

Computers with speed exceeding 10 TFLOPS (2004)

IBM’s ASCI White was the first supercomputer to exceed 10 teraflops (trillion floating-point operations per second), reaching a theoretical peak of 12.3 teraflops. Housed at Lawrence Livermore National Laboratory, it was unveiled in 2001, three years ahead of the prediction. Just as the earliest electronic computers of the 1940s were put to work on nuclear weapons calculations, ASCI White was designed to simulate the entire testing procedure after then-President George H. W. Bush signed a moratorium on live testing in 1992.

By 2004 the top supercomputer, IBM’s Blue Gene/L, was already at around 70 TFLOPS, and even in 2002 Japan’s Earth Simulator reached 35.9 TFLOPS. In February this year, El Capitan was unveiled as the world’s fastest supercomputer, with a peak performance of 2.75 exaflops.
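The gulf between the prediction and where we are today is easier to grasp as a ratio. A trivial calculation, using the figures quoted above, shows El Capitan running some 275,000 times faster than the machine the futurologists imagined for 2004:

```python
TERA = 1e12
EXA = 1e18

predicted_flops = 10 * TERA   # the 1995 prediction for 2004
asci_white = 12.3 * TERA      # actual record-holder, 2001
el_capitan = 2.75 * EXA       # peak performance, 2025

# How far reality has outrun the prediction:
print(f"{el_capitan / predicted_flops:,.0f}x the predicted speed")
```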

Natural language home information retrieval and interaction (2005)

When it comes to milestones in natural-language information retrieval, the butler did it: Ask Jeeves. Launched in 1996, it allowed users to ask questions in natural language – an early, if flawed, step toward conversational search. This was achieved through extensive manual tuning by a team of human editors, who hand-picked results to pair with common questions.

However, this proved less effective than Google’s keyword-based algorithm. Natural language processing was still in its early stages of development, and accurately interpreting the meaning of complex questions proved difficult. Ask Jeeves often struggled to provide precise and satisfactory results, and it was eventually retooled into a more traditional keyword-based search engine – and Jeeves was given the boot.

Since then, more advanced conversational AI systems, such as chatbots and virtual assistants, have brought natural language interactions back to the fore – and a degree of vindication for Jeeves’ natural language method.

The first natural-language application to see widespread adoption was Siri, introduced in October 2011 with the release of the iPhone 4S. Though somewhat behind the prediction, Siri’s development began in 2007 as a standalone app at SRI International in California, spun out of DARPA’s Personalized Assistant that Learns (PAL) programme.

Voice-interactive devices are now commonplace. Tens of millions of households use smart speakers or virtual assistants every day; Amazon had sold over 100 million Alexa devices by 2019, and surveys suggest roughly a third of U.S. adults own a smart speaker.

Many scattered people sharing a virtual space (2005)

Again, many of the duo’s predictions on VR were a decade or two ahead of schedule.

Meta’s metaverse was proudly announced around 15 years behind schedule, but if you broaden the definition of a virtual space, the massively multiplayer online (MMO) game Second Life was released in 2003 – two years ahead of the prediction.

Unlike traditional MMOs, Second Life wasn’t built around quests or missions, focusing instead on social interaction and user-generated content. Residents could engage in various activities, such as attending live concerts, socialising at virtual gatherings, and participating in communities. Though its heyday has passed, a small but dedicated community was still keeping Second Life going as recently as 2023.

However, if you were waiting to interact with your peers in VR, you would’ve had to wait until 2017 for the release of VRChat, which on launch supported a maximum of around 50 to 60 simultaneous users per instance.

Currently, several platforms support multi-user virtual spaces (e.g. VRChat, Roblox), but adoption is still limited; Meta reported only 200,000–300,000 monthly users for its Horizon Worlds in 2022, and VRChat peaks at a few tens of thousands of concurrent users.

Audio transmission at 2-4 kbit/s with quality equal to analogue telephony (2008)

When the prediction was made, achieving intelligible voice transmission at such low bitrates was possible only in military or research contexts, using early forms of CELP (Code-Excited Linear Prediction) codecs, and the quality fell far short of conventional analogue telephony. By the early 2000s, commercial codecs balancing quality and compression were common in VoIP and mobile networks.

Now, modern codecs and neural-network-based models can compress voice down to 2–4 kbit/s while remaining intelligible, and some are deployed in constrained environments such as satellite or IoT communications. However, commercial voice networks and VoIP systems continue to favour higher bitrates for superior audio quality, typically 12–24 kbit/s or more.
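For context, the implicit baseline here is G.711, the 64 kbit/s PCM codec behind conventional digital telephony (8,000 samples per second at 8 bits each). A quick calculation shows just how aggressive a 2–4 kbit/s target is relative to that baseline:

```python
# G.711 PCM: the codec behind conventional digital telephony
SAMPLE_RATE_HZ = 8_000    # 8 kHz sampling
BITS_PER_SAMPLE = 8       # 8-bit companded samples
G711_KBPS = SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1000   # = 64 kbit/s

def compression_ratio(codec_kbps: float) -> float:
    """How many times smaller a codec's stream is than G.711 PCM."""
    return G711_KBPS / codec_kbps

for kbps in (2, 4, 12, 24):
    print(f"{kbps:>2} kbit/s -> {compression_ratio(kbps):.0f}x smaller than PCM")
```

So the predicted 2–4 kbit/s range demands a 16–32x reduction over plain PCM, which is why it took perceptual and, latterly, neural coding techniques to get there.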

Personal numbering (2009)

Personal numbering, in which multiple contact points (mobile phone, landline, VoIP, email, etc.) are interconnected or mapped to a single identifier, didn’t quite come about by 2009, and it still hasn’t fully arrived.

Some countries experimented (e.g. the UK set up “070” personal numbers around 2001), but these services failed to gain widespread traction. Indeed, regulators note that the UK 070 range gained a “poor reputation” due to fraud and high costs and clamped down on it.

For managing multiple numbers on a single device though, eSIMs now let users switch between profiles without having to change SIM card, making it easy to maintain separate lines or international numbers when abroad.

Speech dialling; recognition in switch equipment (2011)

Speech dialling was envisioned to simplify call initiation by allowing users to speak names or numbers directly into the telephone system, with recognition handled by central infrastructure. However, this architecture never became common.

In the late 1990s and early 2000s, speech recognition was still computationally expensive and unreliable, and when it was deployed, such as in voice-activated voicemail or simple menu systems, it typically resided at the application layer, not the switching fabric. Speech recognition is now ubiquitous thanks to advances in machine learning, but rather than being integrated into telecom switches, recognition is performed on user devices or in the cloud.

Three-dimensional video conferencing (2023)

Holoxica debuted a real-time, two-way 3D video conferencing platform at COP26 at the end of 2021. Founded in 2008, the Scottish firm specialised in dynamic holographic displays without the need for headsets or glasses. However, the firm went into liquidation earlier this year – perhaps the world wasn’t ready for 3D video conferencing after more than two years of pandemic-mandated Zoom calls.

Google has now picked up the mantle with Project Starline – will it see greater success?

Research groups have presented demo systems, such as NVIDIA’s AI-driven 3D-telepresence prototype at SIGGRAPH 2023, while Telefónica, Ericsson and Matsuko have completed a proof-of-concept using an IMS Data Channel that allows smartphones to make (one-way) holographic calls.

True holographic or fully 3D video calls remain experimental.


As these and the other predictions show, foresight is hard, and vulnerable to the biases and blind spots of the time in which it is exercised – while some “new” tech concepts are just reheated ideas that failed previously because the technology wasn’t up to scratch.

For more of Cochrane and Pearson’s predictions, read the full issue of British Telecommunications Engineering below (pages 312–318):

About the author

Adam Hughes

Content Specialist, Cerillion
