Sirma helps businesses build corporate AI platforms with full control over their data
The corporate artificial intelligence market offers a variety of solutions, but many still force you to choose between power and security, or innovation and control. The Sirma.AI Enterprise Platform offers a different approach, integrating the flexibility of multiple LLMs with comprehensive, self-managed infrastructure. To gain insights into the creation of the platform and its unique features, we interviewed Nikolay Kondikov, SVP of Sirma Incubator & R&D Lab, who leads its development. This division focuses on developing innovations and incubating new business units.
What key challenges do you see in your sector that could be addressed through artificial intelligence?
Actually, we chose to solve our own problem first. As a company with multiple offices and hundreds of employees, we faced the same challenges as our customers: delayed communication between teams, manual document processing, inefficient HR workflows, and duplication of effort – all typical of the chaos often found in large organizations. When we examined existing AI solutions, we identified three primary barriers that needed to be overcome:
First, there’s data sovereignty. We work with banks, insurance companies, and healthcare providers – organizations that simply can’t send sensitive data to the OpenAI cloud. Beyond GDPR, many of them have strict internal policies that forbid it. So from the very start, most corporate AI tools just weren’t an option for us.
Second, there’s the issue of relying on a single provider. Building your entire infrastructure around one provider’s large language models is risky. What happens if prices go up, quality drops, or the service disappears altogether? We’ve already seen that happen with some AI platforms, and we didn’t want our clients exposed to that kind of uncertainty.
And third, there’s integration. Every company already has its own systems, processes, and data that have been built up over the years. Most off-the-shelf AI solutions don’t fit neatly into that environment. We needed something that could adapt deeply without requiring an army of machine-learning specialists.

What are the biggest barriers to the wider adoption of AI – technological, organizational, or cultural?
Technology is no longer the primary barrier to adopting AI. Language models have reached maturity, and the necessary infrastructure is now in place. From our perspective, the biggest obstacle is trust and security. Senior managers are eager to implement AI, but are understandably concerned about potential breaches or regulatory fines. This is why deploying AI on proprietary infrastructure is essential; it ensures that data remains securely within a controlled environment.
The second barrier is the lack of a practical adoption model. Companies see impressive demos, but don’t know how to transform their own processes. It’s not just about technology; it’s about rethinking workflows, training teams, and measuring results.
Lastly, there is the concern about dependency. Many companies have faced negative experiences due to excessive reliance on external suppliers. This highlights the importance of a multi-LLM (Large Language Model) architecture – it allows you to collaborate with OpenAI today, use a local model tomorrow, and employ a specialized model for specific tasks. Flexibility is now the foundation of security.
What are the main challenges or inefficiencies that Sirma.AI Enterprise addresses in the B2B context?
Our approach is a multi-LLM architecture, which addresses all three barriers simultaneously. The platform supports OpenAI, Google Gemini, Anthropic Claude, Amazon Bedrock, as well as local models such as Llama, Mistral, and other open-source solutions. This means that we can:
- use cloud models for non-critical operations;
- host local models on our own infrastructure for sensitive data;
- switch dynamically between models depending on the task;
- avoid dependence on a single provider.
For example, one of our clients uses GPT-4 for general queries, while all transactional analyses are conducted on a local Llama model hosted on their servers – the data never leaves the premises.
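The platform’s internal routing logic isn’t shown in this interview, but the pattern is easy to sketch. Below is a minimal, hedged example that assumes the local Llama model is served through an OpenAI-compatible endpoint (as vLLM or Ollama provide); the routing function, endpoint URL, and task labels are illustrative, not Sirma’s API.

```python
# Sensitivity-based model routing – an illustrative sketch, not Sirma's code.
# Assumes the on-prem Llama model is exposed via an OpenAI-compatible endpoint;
# the URL, model names, and task labels below are placeholders.
from openai import OpenAI

cloud = OpenAI()  # hosted model for non-critical queries (reads OPENAI_API_KEY)
local = OpenAI(base_url="http://llama.internal:8000/v1", api_key="unused")

SENSITIVE_TASKS = {"transaction_analysis", "customer_pii"}  # hypothetical labels

def route(task: str, prompt: str) -> str:
    """Send sensitive workloads to the on-prem model; everything else to the cloud."""
    if task in SENSITIVE_TASKS:
        client, model = local, "llama-3.1-70b-instruct"
    else:
        client, model = cloud, "gpt-4o"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(route("transaction_analysis", "Flag anomalies in yesterday's transfers."))
```

In production, such a router would also log which model handled each request, which feeds directly into the audit trails described below.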
On a technical level, the platform offers:
- scalable microservices;
- end-to-end encryption;
- role-based access control;
- full audit trails;
- 99.9% guaranteed uptime.
The most significant advantage, however, is deployment flexibility: the platform can run as a cloud service, on your own infrastructure, in a hybrid configuration, or even in isolated environments for maximum security.
What technologies do you use in your B2B solution?
This is where the agent architecture, the platform’s core, plays a crucial role. It’s more than a chatbot or a simple question-and-answer system: we employ state-of-the-art language models not as standalone tools, but as part of an integrated system of intelligent agents.
We have three main layers:
- Personalized AI Agents – specialized agents with expertise in specific fields. For example, an HR agent that sorts through CVs, puts together job descriptions, and leads the interview process, or a legal agent that focuses on contracts and compliance documents.
- Teams of Agents – multiple agents working together. For instance, a recruitment team may include one agent who reviews CVs, another who conducts phone interviews, a third who analyses technical skills, and a fourth who schedules in-person interviews. This creates an autonomous, end-to-end process.
- Agent Workflows – entire business processes automated with intelligent decision points. Workflows learn from operations and improve over time.
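Conceptually, the three layers compose like this – a schematic Python sketch under our own naming, not the actual Sirma AI Studio API:

```python
# Schematic sketch of the three layers. These classes are our illustration,
# not the platform's API; a real agent's run() would call an LLM.
from dataclasses import dataclass, field

@dataclass
class Agent:                          # layer 1: a personalized, domain agent
    name: str
    system_prompt: str                # encodes the agent's expertise

    def run(self, task: str) -> str:
        return f"[{self.name}] handled: {task}"

@dataclass
class AgentTeam:                      # layer 2: agents collaborating on a goal
    members: list[Agent]

    def run(self, task: str) -> list[str]:
        return [agent.run(task) for agent in self.members]   # naive hand-off

@dataclass
class Workflow:                       # layer 3: a process with decision points
    steps: list[AgentTeam] = field(default_factory=list)

    def execute(self, task: str) -> None:
        for team in self.steps:
            for result in team.run(task):
                print(result)

screener = Agent("cv-screener", "You review CVs against a job description.")
scheduler = Agent("scheduler", "You book interview slots.")
Workflow([AgentTeam([screener, scheduler])]).execute("Senior Java role")
```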
All this is built through a visual, low-code/no-code interface – Sirma AI Studio. You don’t need machine learning specialists to create a personalized agent; business users can do it themselves.

How do agents access corporate data?
This is a critical question. An AI agent without context is useless. That is why we use a sophisticated RAG (Retrieval-Augmented Generation) system. The platform includes:
- Vector stores – high-performance vector databases for semantic search;
- Document processing – optical character recognition (OCR) and intelligent document analysis;
- Knowledge graphs – structured representation of organizational knowledge;
- Model Context Protocol (MCP) integrations – we connect to ERP, CRM, databases, files, APIs.
In practical terms, this means agents can search petabytes of documentation and return answers in seconds. We have observed a 70% reduction in the time it takes to locate information within our organization. Importantly, all of this runs entirely on our own infrastructure: vector storage resides on our servers, documents never leave our premises, and all vector representations are generated locally.
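The interview doesn’t name the exact embedding model or vector database, but the local-only RAG flow can be sketched as follows – assuming, purely for illustration, a sentence-transformers model that runs entirely on-prem:

```python
# Minimal local RAG sketch. Sirma's actual vector store and embedding model
# aren't disclosed; all-MiniLM-L6-v2 here is our assumption. Everything runs
# on the local machine, so no document or embedding leaves the premises.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # small model, runs on CPU

documents = [
    "Expense reports above 5,000 EUR require CFO approval.",
    "New hires receive laptops within three business days.",
    "Invoices are archived for ten years per local regulation.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most semantically similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# The retrieved passages are then injected into the LLM prompt as context.
print(retrieve("Who approves large expenses?"))
```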
You mentioned MCP servers and their importance for connecting AI agents to the outside world. To what extent does your solution require individual integration with customer systems (ERP, CRM, etc.), and how are you managing it?
This is where the Model Context Protocol (MCP) becomes essential. MCP is a standard that enables AI agents to communicate with external systems in a structured manner. You can think of it as an “API for AI agents,” allowing the agent to call functions, read data, and trigger actions within your corporate systems.
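As a minimal sketch, here is what an MCP server can look like with the official Python SDK (the `mcp` package); the CRM lookup tool below is hypothetical, not a Sirma component:

```python
# A minimal MCP server built with the official Python SDK ("mcp" package).
# The CRM lookup tool is hypothetical – an illustration only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-tools")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic CRM data for a customer (stubbed for the example)."""
    # A real server would query the CRM here; the data stays on the network.
    return {"id": customer_id, "name": "ACME Ltd", "status": "active"}

if __name__ == "__main__":
    mcp.run()   # serve over stdio so an agent host can call the tool
```

Once such a server is registered, any agent on the platform can discover and call `lookup_customer` as a structured function rather than parsing free text.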
At Sirma, we specialize in developing MCP servers – a key factor in making AI agents truly effective. An agent that can only read and generate text is limited in its capabilities. In contrast, an agent that can search ERP data, create tasks in Jira, send emails, update CRM records, and simultaneously extract database insights while generating reports becomes an incredibly powerful tool capable of transforming workflows and enhancing business control.
The platform provides centralized management of MCP servers, allowing users to deploy, configure, monitor, and update these servers directly from Sirma AI Studio. We provide a library of pre-built MCP servers designed for popular systems, alongside a framework for creating custom integrations.
MCP servers can operate in various environments – cloud, local, or isolated – supporting full data sovereignty and ensuring that agents can integrate with enterprise systems without data ever leaving the controlled environment.
Can you give a specific example of how this works in practice?
A real example of our HR automation involves an HR agent that processes applicants for a specific position. This agent utilizes several MCP servers:
- ATS Integration MCP Server - connects to the applicant tracking system and extracts structured information from CVs in various formats, such as PDF or Word documents.
- Calendar MCP Server - checks for available interview slots.
- Email MCP Server - sends automatic emails to candidates.
- Voice Call MCP Server - initiates phone calls for preliminary interviews.
The agent manages all of these tools autonomously. It receives a CV, analyzes it, and matches it against the job description. If suitable, the agent records the information, books an interview slot, sends the invitation, and, if needed, conducts a phone interview – all without human intervention, while maintaining full auditability and oversight.
We have developed deep expertise in building MCP servers because each customer has unique systems and workflows. The greatest value often lies not in general integrations but in tailored MCP servers that meet the specific needs of each organization. This is why we also provide custom MCP development services as part of our platform.
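The candidate flow described above might be orchestrated roughly like this; every client object and tool name in the sketch is a hypothetical stand-in for the MCP servers listed earlier:

```python
# Hedged sketch of the HR agent's tool use. The client objects (ats, calendar,
# email, voice) and their tool names are hypothetical stand-ins for the MCP
# servers above; StubTool just prints what a real MCP call would do.
class StubTool:
    def __init__(self, name: str):
        self.name = name
    def __getattr__(self, tool: str):
        return lambda *args, **kw: print(f"{self.name}.{tool} {args} {kw}") or {}

ats, calendar, email, voice = (StubTool(n) for n in
                               ("ats", "calendar", "email", "voice"))

def matches(candidate: dict, job: str) -> bool:
    # Stand-in for the LLM step that compares a CV to the job description.
    return job.lower() in candidate["skills"].lower()

def process_candidate(candidate: dict, job: str) -> None:
    """One candidate through the pipeline: match, book, invite, screen."""
    if not matches(candidate, job):
        email.send_rejection(candidate["address"])        # Email MCP server
        return
    slot = calendar.book_next_free_slot()                 # Calendar MCP server
    email.send_invite(candidate["address"], slot=slot)    # Email MCP server
    if candidate.get("needs_screening"):
        voice.start_call(candidate["phone"])              # Voice Call MCP server

# Candidate data would normally come from the ATS Integration MCP server.
process_candidate({"address": "jane@example.com", "phone": "+359...",
                   "skills": "Java, Spring", "needs_screening": True},
                  job="Java")
```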
Which industries are showing the strongest demand for these kinds of solutions?
Automation platforms and intelligent assistants are widely used across industries, each with its specific requirements and challenges. In financial services, the primary focus is on data sovereignty and compliance automation, with a key emphasis on fraud detection and delivering real-time investment advice. In healthcare, assistants reduce clinical documentation time by up to 75% and support patient management and insurance processing. In hospitality and tourism, there is demand for personalized concierge and reservation assistants, guest services, and dynamic pricing powered by AI.
In logistics and transportation, solutions are being implemented for route optimization, predictive analysis for fleet management, automated order processing, and cargo tracking with intelligent agents. In retail, personalized recommendations, inventory management, and customer behavior analysis are used, while in the public sector, the focus is on complete proprietary infrastructure, automated citizen services, and emergency response coordination.
A key advantage is the platform’s customizability – it can be tailored to any industry rather than forcing a one-size-fits-all approach.
What do you think are the long-term effects of implementing AI, beyond operational efficiency?
AI transformation is a marathon, not a sprint. We began two years ago with basic automation, and today we have developed complex multi-agent teams, autonomous workflows, and voice agents. However, we are not stopping there – we continue to experiment with new models, practical applications, and integrations.
Security should never be compromised. Too often, companies are forced to choose between innovation and security. With the right architecture, you can achieve both. On-premise implementations with local models, multi-LLM flexibility, and enterprise-level security can coexist.
Furthermore, AI acts as a multiplier of human expertise rather than a replacement. Our HR specialists have not been replaced; instead, they have been freed from repetitive tasks and can now concentrate on strategic work. Our developers aren’t redundant – they are able to build more complex systems because AI takes care of the boilerplate code.
The Sirma.AI Enterprise Platform embodies this philosophy. It empowers organizations with the best AI models while ensuring complete control, uncompromised security, and zero vendor lock-in. For us, this marks the beginning of a long-term journey – creating and adapting AI tools that serve businesses, rather than dictating how they should work.