Is there a need for more pragmatism and a willingness to experiment when it comes to AI?
Michael Koch: Overall, AI is subject to greater skepticism in Europe than in the USA. In companies’ day-to-day operations, three-quarters of AI applications are met with resistance from employees. The RAI Alliance can help to foster trust. Many companies are hesitant to deploy AI widely because they feel unsure about how to implement it responsibly. The key is a pragmatic approach: not every company needs its own AI department right away, but they do need an AI strategy. Training and internal skills development are important first steps.
Alois Krtil: Rapid technological development intensifies this trend. While generative AI was an experimental field just a few years ago, it’s now part of everyday life. Companies shouldn’t allow regulatory uncertainty to paralyze them; instead, they should test AI integration in controlled environments. The RAI Alliance can help to establish best practices and actively shape discourse.
Maximilian Kiener: AI development is a marathon, not a sprint. It’s about sustainable systems, not short-term efficiency gains. Innovation and ethics are inextricably linked: ethics is part of value creation, extending far beyond purely economic targets. The image of a circus tent illustrates the challenge: it’s quickly put up and attracts visitors, but leaves behind nothing more than some flattened grass. The alternative is a permanent building, which takes longer to construct but also lasts far longer. Europe needs both: agile experiments for short-term innovation combined with structures designed to endure.
The next big step forward in artificial intelligence is agentic AI. What risks and opportunities does it hold?
Michael Koch: Agentic AI opens up impressive possibilities. AI agents can not only provide information but also make decisions and perform actions independently. Economic pressure is driving companies towards these fully automated solutions. Yet there are key questions to tackle: How do we foster trust in these systems? And what control mechanisms will we need?
Maximilian Kiener: A central problem is that we still don’t have a suitable legal category for AI agents. This creates uncertainty. Who is liable if an AI agent causes damage? What is the responsibility of the company using the AI agent? The language we use also leads to misunderstandings, suggesting that AI is underpinned by some kind of consciousness – which isn’t the case. Ultimately, these are definitely not systems capable of “thinking” or “feeling”.
Michael Koch: Despite the challenges, there’s enormous potential for companies to make efficiency improvements. Many employees are already in a position to optimize their own processes, because AI can take on routine day-to-day tasks. However, this requires an appropriate balance between innovation and security: companies must be daring enough to experiment with AI while also taking responsibility. Although the AI Act provides guidance here, translating it into practicable strategies is down to companies themselves.
Alois Krtil: This is where the opportunity lies: integrating responsible AI into a company’s strategy at an early stage fosters trust and creates market advantages. Companies that rely on safe and transparent AI are already in a better position. Europe can take on a pioneering role and show that innovation and ethics are complementary.
Finally, what advice would you give to companies in relation to AI and its regulation?
Maximilian Kiener: To sum up, I think three points are decisive. First, developing an AI strategy proactively rather than waiting for the AI Act. Second, treating ethics as a driver of innovation rather than just a compliance obligation. Third, focusing on collaboration, whether in alliances or through an open exchange between companies.
Michael Koch: Companies also need to do one thing above all: get started! They don’t have to develop everything themselves, either. Networks and collaborative endeavors facilitate an exchange of experiences and best practice. Internally, companies need to create a culture of experimentation for safe AI testing.
Alois Krtil: Ultimately, it’s about responsible and courageous use of AI. Rather than just complying with regulations, proactive companies will see regulation as a driver of innovation. Europe has the opportunity to play to its strengths – if we’re smart about how we do it.