Artificial Intelligence

The future of AI in Europe: Regulation as an opportunity for innovation

While the USA is banking on an increasingly unregulated market and China is pushing its AI industry forward under state control, Europe faces a decisive question with the EU’s AI Act: Will regulation act as a drag on AI innovation or as a driving force behind it?

This issue was recently discussed by Michael Koch (Lufthansa Industry Solutions), Prof. Maximilian Kiener (Institute for Ethics in Technology, TU Hamburg) and Alois Krtil (ARIC e.V.). All three represent organizations that teamed up with other partners to form the Responsible AI Alliance. Their message? Companies must not become paralyzed in the face of statutory provisions – instead, they should take an active role and secure commercial success with ethically responsible AI.

Prof. Dr. Maximilian Kiener (Institute for Ethics in Technology, TU Hamburg)

Situated between Stargate and DeepSeek, how can Europe keep pace with the competition in the global AI race?

Maximilian Kiener: The Chinese AI model DeepSeek achieves comparable quality to major players from the USA at lower cost. Even if the details currently remain unclear, this development demonstrates that it is possible to remain competitive even with fewer resources. We can see three different approaches around the world: the USA is banking on a deregulated market, China relies on state control and Europe is setting out legal framework conditions in the EU’s AI Act. Problems are already emerging in the USA, including a drop-off in fact-checking, while major companies increasingly take on social functions and government responsibilities.

Michael Koch: In the USA, industry stakeholders and the authorities are warning of rising crime rates driven by unregulated AI technologies. This makes clear that a completely free market is not an option. Political stakeholders aren’t the only ones calling for regulation – business representatives are, too.

Alois Krtil: What’s remarkable about DeepSeek is its publication as an open-source application. This is unusual for China, which is usually a closed shop. It shows that open-source development remains a driver of innovation. Europe is often underestimated in this regard. The open-source AI scene in Paris alone demonstrates our strength. We’re one of the world’s leading regions for generative AI and open source – even though we lack the financial resources of the big players. Alliances can play a key role in this context: learning from each other, rather than working in isolation on AI regulation and innovation.

Michael Koch: Germany boasts some of the world’s foremost AI experts, which presents an enormous opportunity. It’s common to hear people spread the narrative that we need to invest heavily in hardware to be competitive. DeepSeek disproves this. In China, trade restrictions – especially US export controls on Nvidia hardware – led researchers to adopt innovative approaches, which might even have been the key to DeepSeek’s success. Europe’s regulatory barriers could also serve as a catalyst for creative solutions.

Michael Koch (Lufthansa Industry Solutions)

So, is the accusation that the EU’s AI Act will stall innovation justified?

Maximilian Kiener: The AI Act doesn’t have to be a drag on innovation; instead, it offers opportunities to forge competitive advantages. It calls for exactly the aspects that characterize responsible AI: security, responsibility and governance. Europe’s strength lies in its holistic assessment of AI – incorporating technological, ethical and regulatory perspectives. In any case, companies need to handle AI responsibly rather than blindly integrating APIs.

Alois Krtil: Here’s a concrete example: just today, I was at a workshop with a medium-sized company focusing on AI-based turn assistance systems – a safety-critical area with stringent requirements in terms of security, safety and robustness. The company regards regulatory requirements as a quality factor – a hallmark of trustworthiness. It carries the logo of the Responsible AI Alliance with pride because membership strengthens its position as a dependable provider. Membership also provides a market advantage, attesting that its models are free from dubious data sources. The company’s AI solutions meet European standards and have demonstrated their safety and reliability.

Maximilian Kiener: While the RAI Alliance can accomplish a great deal, it also faces complex challenges. Responsible AI is often understood as an overarching principle because successful regulation encompasses many different values and requirements. Safety shouldn’t be seen as the sole standard – that would only constrain discourse. The strength of the RAI Alliance lies in upholding and promoting this diversity of values.

Alois Krtil (ARIC e.V.)

Is there a need for more pragmatism and a willingness to experiment when it comes to AI?

Michael Koch: Overall, AI is subject to greater skepticism in Europe than in the USA. In companies’ day-to-day operations, three-quarters of AI applications are met with resistance from employees. The RAI Alliance can help to foster trust. Many companies are hesitant to deploy AI widely because they feel unsure about how to implement it responsibly. The key is a pragmatic approach: not every company needs its own AI department right away, but they do need an AI strategy. Training and internal skills development are important first steps.

Alois Krtil: Rapid technological development intensifies this trend. While generative AI was an experimental field just a few years ago, it’s now part of everyday life. Companies shouldn’t allow regulatory uncertainty to paralyze them; instead, they should test AI integration in controlled environments. The RAI Alliance can help to establish best practices and actively shape discourse.

Maximilian Kiener: AI development is a marathon, not a sprint. It’s about sustainable systems, not short-term efficiency gains. Innovation and ethics are inextricably linked: ethics is part of value creation, extending far beyond purely economic targets. The image of a circus tent illustrates the challenge: it’s quickly put up and attracts visitors – but all it leaves behind is some flattened grass. The alternative is a permanent building, which takes longer to construct but also lasts far longer. Europe needs both: agile experiments for short-term innovation combined with structures designed to endure for the long term.

The next big step forward in artificial intelligence is agentic AI. What risks and opportunities does it hold?

Michael Koch: In truth, agentic AI opens up impressive possibilities. Not only can AI agents provide information, they can also make decisions and perform actions independently. Economic pressure is driving companies towards these fully automated solutions. Yet, there are key questions to tackle. How do we foster trust in these systems? And what control mechanisms will we need?

Maximilian Kiener: A central problem is that we still don’t have a suitable legal category for AI agents. This creates uncertainty. Who is liable if an AI agent causes damage? What is the responsibility of the company using the AI agent? The language we use also leads to misunderstandings, suggesting that AI is underpinned by some kind of consciousness – which isn’t the case. Ultimately, these are definitely not systems capable of “thinking” or “feeling”.

Michael Koch: Despite the challenges, there’s enormous potential for companies to make efficiency improvements. Many employees are already in a position to optimize their own processes because AI can take routine day-to-day tasks off their hands. However, this requires an appropriate balance between innovation and security. Companies must be daring enough to conduct AI experiments while also taking responsibility. The AI Act provides guidance on this, but translating it into workable strategies is down to companies themselves.

Alois Krtil: This is where the opportunity lies: integrating responsible AI into a company’s strategy at an early stage fosters trust and forges market advantages. Companies that rely on safe and transparent AI are already in a better position. Europe can take on a pioneering role and show that innovation and ethics can be complementary.

Finally, what advice would you give to companies in relation to AI and its regulation?

Maximilian Kiener: To sum up, I think three points are decisive. First, adopting proactive strategic development regarding AI rather than waiting for the AI Act. Second, regarding ethics as a driver of innovation rather than just a compliance obligation. Third, focusing on collaboration – whether in alliances or through an open exchange between companies.

Michael Koch: Companies also need to do one thing above all: get started! They don’t have to develop everything themselves, either. Networks and collaborative endeavors facilitate an exchange of experiences and best practice. Internally, companies need to create a culture of experimentation for safe AI testing.

Alois Krtil: Ultimately, it’s about responsible and courageous use of AI. Rather than just complying with regulations, proactive companies will see regulation as a driver of innovation. Europe has the opportunity to play to its strengths – if we’re smart about how we do it.

Participants
  • Michael Koch is Director Artificial Intelligence at Lufthansa Industry Solutions (LHIND) and a member of the Management Board of ARIC e.V.
  • Prof. Dr. Maximilian Kiener is a philosopher and university lecturer with expertise in ethics, philosophy of law and issues of digitalization. He is Head of the Institute for Ethics in Technology at TU Hamburg.
  • Alois Krtil holds degrees in computer science and industrial engineering. He is CEO of the Artificial Intelligence Center Hamburg (ARIC) e.V., which was founded in 2019.
About Lufthansa Industry Solutions

Lufthansa Industry Solutions is a service provider for IT consulting and system integration. This Lufthansa subsidiary helps its clients with the digital transformation of their companies. Its customer base comprises more than 300 companies in various lines of business, both within and outside the Lufthansa Group. The company is based in Norderstedt and employs more than 2,600 members of staff at several branch offices in Germany, Albania, Switzerland and the USA.