Can you give us some examples of current usage and the developments likely to emerge in the future?
Poggemann: We use LLMs to gain a real-time understanding of what users are saying, how they are saying it, and what they mean. Based on this, the agents can identify what information is missing, which systems need to be involved, and what the next steps are. Let me give you an example from aviation. When a user rebooks a flight, the AI communicates with them to enable authentication via the booking code, gathers the travel details, and makes personalized suggestions – always within the framework of defined rules. The agents can also incorporate digital content such as photos, in what are known as multimodal systems. This combination of language and visual understanding brings us one step closer to human-like interaction.
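To make the flow Poggemann describes more concrete, here is a minimal Python sketch of how such an agent might track missing details and decide its next step; the field names, the booking-code check, and the rebooking logic are purely illustrative assumptions, not Cognigy's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agentic rebooking flow: the agent checks which
# details are still missing, asks for them, and only acts within defined rules.

@dataclass
class RebookingRequest:
    booking_code: str | None = None
    new_date: str | None = None
    preferences: dict = field(default_factory=dict)

REQUIRED_FIELDS = ("booking_code", "new_date")

def missing_fields(req: RebookingRequest) -> list[str]:
    """Identify which pieces of information the agent still has to collect."""
    return [f for f in REQUIRED_FIELDS if getattr(req, f) is None]

def authenticate(booking_code: str) -> bool:
    """Placeholder for authentication against the airline's booking system."""
    return len(booking_code) == 6  # assumption: 6-character PNR-style code

def next_step(req: RebookingRequest) -> str:
    """Decide the next action within the defined rules."""
    gaps = missing_fields(req)
    if gaps:
        return "ask_user_for:" + ",".join(gaps)
    if not authenticate(req.booking_code):
        return "escalate_to_human_agent"
    return "propose_rebooking_options"

if __name__ == "__main__":
    req = RebookingRequest(booking_code="ABC123")
    print(next_step(req))   # -> ask_user_for:new_date
    req.new_date = "2025-10-01"
    print(next_step(req))   # -> propose_rebooking_options
```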
Schwabe: At LHIND, we are already using agentic AI in customer projects and internal processes – such as for customer service chatbots, automated reordering systems in the supply chain, and for the real-time tracking of packages as part of IoT integration. Tasks that were previously handled by expediters are increasingly being automated. There are also new use cases in IT operations and cybersecurity, where AI agents are able to respond proactively.
How do we ensure that development does not stray into unethical practices, and how do we safeguard acceptance?
Schwabe: The challenge lies in achieving the right balance. Solid foundations are already in place in the form of governance rules, transparency requirements and ethical guidelines. From a technical standpoint, sandbox environments are a good option – that is, controlled, restricted areas where the agents can operate safely. AI agents should be regarded as a new technology. Nothing more and nothing less. As with any new technology, the risks and opportunities must be considered and the possible consequences assessed.
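As a rough illustration of the sandbox idea Schwabe mentions, the following Python sketch confines an agent to an explicit allow-list of actions and logs every call; the action names and checks are hypothetical placeholders, not a description of LHIND's setup.

```python
# Minimal sketch of a "sandboxed" agent: it may only invoke actions from an
# explicit allow-list, and every request is logged for auditability.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-sandbox")

# Illustrative allow-list: each entry maps an action name to a safe handler.
ALLOWED_ACTIONS = {
    "lookup_order_status": lambda order_id: f"status of {order_id}: shipped",
    "draft_customer_reply": lambda text: f"DRAFT: {text}",
}

def run_in_sandbox(action: str, *args):
    """Execute an agent-requested action only if it is explicitly allowed."""
    if action not in ALLOWED_ACTIONS:
        log.warning("Blocked disallowed action: %s", action)
        raise PermissionError(f"Action '{action}' is outside the sandbox")
    log.info("Executing allowed action: %s", action)
    return ALLOWED_ACTIONS[action](*args)

if __name__ == "__main__":
    print(run_in_sandbox("lookup_order_status", "A-4711"))
    try:
        run_in_sandbox("issue_refund", "A-4711")  # not on the allow-list
    except PermissionError as err:
        print(err)
```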
Poggemann: Companies are primarily deploying AI in areas with repetitive processes. But there are also innovative applications in many other areas. For agentic AI, we recommend a case-based analysis of the use cases, ROI, and risks. Even seemingly simple use cases, like collating user inputs, can have a clearly quantifiable ROI. Given the rapid pace of development, proactive and agile action is required – rigid rules will no longer suffice. Companies should either build up their AI expertise internally or collaborate with experienced partners.
Do standards exist or is everything moving too fast for that, with developments too specific to individual customers’ needs?
Schwabe: Standards are crucial for scaling – but they need to arrive at the right time. Technological developments in other fields have shown that standards are important, often emerging and taking effect at the end of a phase of highly dynamic innovation. Agentic AI and conversational agents are still in this phase. Initial standards are now emerging in certain technical sub-areas.
Poggemann: We draw a distinction between two forms of standardization: one is formal certifications and standards that arise from regulatory processes; the other is practice-driven industry standards. Formal standardization, which entails audits, can barely keep up with the pace of innovation. Industry-driven standards – such as the Model Context Protocol (MCP), which makes it easy to link AI models with external services – often come about more quickly. Google's initiative on communication between AI agents also appears to be establishing itself as a de facto standard. As a solutions provider, it's important that we're able to respond quickly to such topics. That's why we have structured our software development around open interfaces and a microservice architecture, so that we can introduce and offer new functions as quickly as possible.
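To illustrate the kind of practice-driven standard Poggemann refers to, here is a minimal sketch of exposing a service to AI models via the Model Context Protocol's official Python SDK (the `mcp` package); the flight-status tool is a hypothetical placeholder for a real backend service.

```python
# Minimal sketch of exposing an external service to AI models via the
# Model Context Protocol (MCP), using the official Python SDK ("mcp" package).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("flight-tools")

@mcp.tool()
def flight_status(flight_number: str) -> str:
    """Return the current status of a flight (placeholder implementation)."""
    return f"Flight {flight_number} is on time."

if __name__ == "__main__":
    # Serve over stdio so any MCP-capable client or agent can call the tool.
    mcp.run()
```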
How are responsibilities divided between Cognigy and LHIND in their joint projects?
Poggemann: We focus on the further technological development of our platforms – in close alignment with the customer’s needs. For this, we strategically engage partners such as LHIND to handle implementation and end-to-end customer consulting. We see that these technologies are rarely just a component of a single application – rather, they are usually part of a broader transformation. That is what makes active collaboration with our partners so important.
Schwabe: That’s right. Intensive collaboration and end-to-end consulting are key. This is because, in addition to an AI solution like Cognigy’s, there are always many systems around it that need to be considered and integrated as well. A true AI transformation is much more than a software update where the button just happens to be in a different place.
What’s next – and what pitfalls must be avoided?
Schwabe: The next steps are scaling and broad availability. New qualifications are the order of the day, not heavy-handed job cuts. The aim must be for interactions with AI agents to be intuitive, helpful and perfectly natural, and – personally speaking – for them to be entertaining and enriching at the same time. Consider, for example, a genuinely intelligent assistant in a day-to-day setting such as local public transport. When deployed in companies, AI agents should unleash employees' superpowers: repetitive tasks can be delegated to the AI, making it possible to prioritize the human-to-human interactions that people appreciate and that generate value.
Poggemann: AI technology has graduated from the test phase. It's now suitable for mainstream use: all leading companies are introducing AI into different processes and operations, or deploying it to support their employees. This creates new requirements in terms of scalability, system functions, infrastructure, processes, and security. The key point is that AI will not replace personal connections with customers or among employees; instead, it will transform them. This could include context-specific information that makes better-informed communication possible. Initial problems, like hallucination, have now been largely solved, while cost barriers are also falling. Improvements in speech synthesis, latency, and integration are now making smooth real-time operation possible.
Schwabe: When we look back in a few years, the way we searched for information and prepared meetings in 2025 will seem downright archaic. We will be amazed and frustrated by how much time we once wasted at work and in our private lives before reclaiming it with the aid of AI technology.