
Composable AI integration: building for a multi-model future


With new models emerging every few months, betting on a single AI provider is increasingly risky. Cerillion Product Director, Brian Coombs, explains why composable AI architectures matter and how our AI Management Centre enables future-proof, secure and flexible GenAI adoption.

As generative AI accelerates, one of the greatest challenges for enterprises isn’t how to use AI, but how to keep up with it.

New large language models (LLMs) are released every few months, each promising better performance, lower cost or new reasoning capabilities. Choosing a single vendor can feel like betting the company on today’s front-runner, only to be locked into tomorrow’s legacy.

This is why composable AI architectures are rapidly becoming essential.

From monolithic to composable AI

McKinsey recently described this evolution in its Agentic AI Mesh framework – an architectural approach that connects multiple AI “agents” and models together, allowing organisations to mix and match capabilities while maintaining governance and control. It’s a shift away from monolithic AI integrations towards a mesh of interchangeable intelligence components, each with clear roles, boundaries and accountability.

The challenges

Moving from experimentation to real operational AI exposes the limitations of traditional integration approaches. CSPs and enterprises face several obstacles that can stall progress if AI isn’t embedded with the right architectural foundations:

  • Vendor dependency
    Most AI integrations tie you to one API or ecosystem.
  • Data governance
    Ensuring personal data and prompts are handled securely.
  • Role-based access
    Managing who can trigger which AI functions.
  • Auditability
    Tracking every interaction for compliance and transparency.
  • Flexibility
    Adapting to new models without major code changes.
  • Cost-effectiveness
    Not every request requires the most expensive model.

At Cerillion, we’ve seen these challenges first-hand while embedding AI into our BSS/OSS Suite, so we built a dedicated framework to solve them.

The Cerillion approach: AI Management Centre

Our AI Management Centre is designed around composability and trust: a structured way to embed GenAI into workflows without locking into a specific provider. At its core are several key layers:

  • Model Abstraction Layer
    A unified interface to any LLM provider (OpenAI, Anthropic, Gemini, Meta, Azure, etc.), so models can be swapped or combined through configuration, not code – see the sketch after this list.
  • Prompt Creator
    Template-based prompt definitions, connecting to the Cerillion MCP Server when required, enabling reusable, testable prompt designs that evolve with business needs.
  • Intent Engine and Results Interpreter
    Components that understand user requests and normalise AI outputs into structured and usable business data.
  • Trust Layer
    The security and governance foundation, handling anonymisation, data masking and audit of every AI interaction.
  • User Interface Integration
    Seamlessly exposes AI capabilities through Cerillion’s product suite, with feature access controlled by user roles.
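
To make the abstraction layer concrete, here is a minimal Python sketch of the pattern – assuming hypothetical class and function names, not Cerillion’s actual implementation. Each provider sits behind a common interface, and a configuration value selects which adapter handles a request:

    from abc import ABC, abstractmethod

    class LLMProvider(ABC):
        """Common interface that every provider adapter implements."""

        @abstractmethod
        def complete(self, prompt: str, **params) -> str:
            """Send a prompt to the underlying model and return its reply."""

    class OpenAIProvider(LLMProvider):
        def complete(self, prompt: str, **params) -> str:
            # Provider-specific API call would go here; stubbed for illustration.
            raise NotImplementedError("wire up the OpenAI client here")

    class LocalLlamaProvider(LLMProvider):
        def complete(self, prompt: str, **params) -> str:
            # Call to a locally hosted Llama-based model; stubbed for illustration.
            raise NotImplementedError("wire up the local model runtime here")

    # The registry maps configuration values to adapters, so swapping
    # models is a configuration change rather than a code change.
    PROVIDERS = {
        "openai": OpenAIProvider,
        "local-llama": LocalLlamaProvider,
    }

    def get_provider(config: dict) -> LLMProvider:
        return PROVIDERS[config["provider"]]()

Adding a new model then means registering one more adapter; application code that calls get_provider(config).complete(prompt) never changes.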

Productised AI functions

To operationalise this architecture, we introduced a productised model structure, exposed as an AI Service to the rest of the system. It consists of five elements that define how models are organised, integrated, configured and monitored:

  • Groups classify models as “Frontier” (e.g. GPT-5, Claude, Gemini) or “Local” (e.g. Llama-based).
  • Specifications define how to interface with each provider: API formats, authentication keys, parameters. This no-code approach future-proofs integrations against changes in API structures.
  • Instances capture the configuration details: endpoint URLs, temperature, top-p, etc.
  • Prompt Specifications define reusable prompts and link them to LLM groups for flexibility and control.
  • AI Request Audit captures every transaction and detail, ensuring full traceability.
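
As a hedged illustration only – the field names below are hypothetical, not Cerillion’s actual schema – the five elements might come together in a single configuration record like this:

    # Hypothetical configuration record tying the five elements together.
    ai_service_config = {
        "group": "Frontier",                  # Groups: Frontier vs Local
        "specification": {                    # Specifications: provider interface
            "provider": "openai",
            "auth": {"api_key_env": "OPENAI_API_KEY"},
            "request_format": "chat-completions",
        },
        "instance": {                         # Instances: runtime configuration
            "endpoint": "https://api.openai.com/v1/chat/completions",
            "temperature": 0.2,
            "top_p": 0.9,
        },
        "prompt_specification": {             # Prompt Specifications
            "name": "summarise-ticket",
            "template": "Summarise this support ticket: {ticket_text}",
            "allowed_groups": ["Frontier", "Local"],
        },
        "audit": {                            # AI Request Audit
            "log_request": True,
            "log_response": True,
        },
    }

Because the provider details live in the specification and instance rather than in code, retargeting the same prompt at a different model is an edit to this record, not a software release.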

The result is a composable AI framework that lets customers innovate safely, experiment freely, and evolve continuously, without needing deep AI expertise or code changes for every new use case.

Why composability matters

As the AI landscape evolves at breakneck speed, the ability to adapt without rebuilding systems becomes a competitive advantage. Composable architectures give organisations that flexibility by decoupling model choice from application logic, unlocking several key benefits:

  • Future-proofing against rapid LLM evolution.
  • Operational control over governance, security and auditing.
  • Faster experimentation with new providers and techniques.
  • Better cost optimisation through intelligent model routing, sketched below.
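
A generic sketch of such routing – assuming a hypothetical complexity score and provider tiers, not Cerillion’s actual logic – might look like this:

    def route_request(complexity: float, providers: dict) -> str:
        """Pick a model tier from an estimated task complexity in [0, 1].

        Thresholds are illustrative; real routing could also weigh
        latency, data sensitivity and per-token cost.
        """
        if complexity < 0.3:
            return providers["local"]     # cheap, locally hosted model
        if complexity < 0.7:
            return providers["standard"]  # mid-tier hosted model
        return providers["frontier"]      # most capable, most expensive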

In short: the freedom to choose the best model for each task, today and tomorrow.

Looking ahead

Generative AI will only become more decentralised and agent-based – what McKinsey calls “the rise of the AI mesh.” As models become more specialised and workloads more distributed, enterprises will need architectures that can coordinate multiple AI capabilities reliably and transparently.

At Cerillion, our AI Management Centre provides the foundation for that world: composable, secure, and ready for whatever comes next.

Read more about how we are putting AI at the heart of telecoms transformation at the Cerillion AI Hub.


About the author

Brian Coombs

Product Director, Cerillion
