EU AI Act: What companies need to be aware of

Risk classes and deadlines: clear guidelines with a transition period

German companies use AI across all industries – from customer service and logistics to creative work. The EU AI Act increases the obligations of companies that want to use AI. In the second part of our series on Cybersecurity Awareness Month 2025, we show what this means in concrete terms – and where companies should take action now.

 

Norderstedt, October 16, 2025 – The “Regulation on Artificial Intelligence” – better known as the “EU AI Act” – has been in force in the EU since August 2024; since February of this year, systems posing an unacceptable risk have been prohibited. The transition period runs until August 2, 2026. The requirements depend on the risk profile of the AI systems used. For this purpose, the EU has established a classification system that assesses the risk AI poses to society and individuals: from unacceptable to high, limited, and minimal. “The EU AI Act sets clear guidelines – and it gives companies time to comply,” says Philipp Arndt, IT security expert at LHIND. “The first step seems trivial, but it is crucial: companies need risk management that quickly identifies the AI systems affected.”

From paper to practice: document, label, supervise

For high-risk systems, the EU AI Act requires companies to maintain robust documentation throughout the entire life cycle – from development through operational changes to decommissioning. “Transparency is created right from the start: logging and monitoring help to understand how the system works and how data flows – and to explain this to users,” emphasizes Philipp Arndt. For purchased systems, responsibilities must be contractually delineated; internally, the scope of functions must be clarified with the co-determination bodies before commissioning. Even when a system is decommissioned, i.e. taken out of active operation, it must be documented where, for example, personal data remains.

Obligations also exist beyond the high-risk class: interactive systems such as chatbots must always clearly indicate that they are AI-based. “Honest labeling is not optional, but mandatory,” says Philipp Arndt. At the same time, quality and test criteria for accuracy, security, and robustness must be defined and efficiently integrated into existing development and quality management processes. “Training is mandatory. And especially with high-risk systems, there must be no fully autonomous decisions: human oversight remains mandatory,” he warns.

Shortly before a system classified as high-risk goes live, the obligation to register it with the EU Commission takes effect; in certain fields, an approval procedure may also be necessary. “In the end, the company must issue a declaration of conformity – in writing and verifiable,” says Philipp Arndt.

About Lufthansa Industry Solutions

Lufthansa Industry Solutions is a service provider for IT consulting and system integration. The Lufthansa subsidiary supports its clients in the digital transformation of their companies. In addition to companies within the Lufthansa Group, its customer base includes more than 300 companies in various industries. The company is headquartered in Norderstedt and employs more than 3,000 members of staff at several branch offices in Germany, Albania, Switzerland and the USA.