
How telcos can prepare for the EU AI Act

With the EU AI Act now in force, any business operating in the EU and selling AI-related products or services must navigate new compliance requirements. But what do these look like for tech resellers and telecom operators? Here, we unpack the key regulations, insights from industry leaders, and practical steps to help organisations prepare.

AI has significantly altered how global companies operate, sitting at the heart of business-critical functions such as product development, customer support, and logistics. The telecom sector is no different. From predictive maintenance to network optimisation, the technology helps providers deliver smarter solutions while meeting high customer expectations. And as it continues to evolve, so does the obligation to use it responsibly and transparently – especially in ways that don’t infringe on privacy or ethical standards.

The rise of the regulation

Concerns about the impact of AI have grown almost as rapidly as the technology itself. While some speculate about its sentience and worry it could displace human jobs, others raise concerns about the data privacy, security, and algorithmic bias of AI-powered platforms and products. Despite polarised views, one thing is consistent: the desire to power progress and foster innovation that benefits society and economic growth, while minimising risk. This is how the EU AI Act came to fruition.

On 21 May 2024, after more than three years of legislative debate, the Council of the European Union cast its final vote on the Act, and it was published in the EU Official Journal on 12 July 2024. Entering into force on 1 August 2024, it establishes a comprehensive set of risk-based rules, applicable to all players in the AI ecosystem, to be phased in over the next three years. It joins the growing body of regulation that telecoms players must navigate.

What does the EU AI Act govern?

The EU AI Act is the first major framework that governs how AI is used, with a focus on responsible applications rather than monitoring the technology itself. Its primary aim is to manage risks from AI applications, especially in high-stakes areas like healthcare, infrastructure, and public safety.

The Act defines an “AI system” broadly as a machine-based system that operates with a degree of autonomy to generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments. This broad scope means the Act applies to a wide variety of AI-driven products, from high-stakes applications like biometric identification to everyday AI functionality embedded in software.

The Act categorises AI systems based on these risk levels:

  • Unacceptable risk. AI systems posing an unacceptable risk, such as those used for social scoring or deceptive practices that could harm individuals, are not subject to compliance requirements; they are prohibited outright.
  • High risk. This covers systems integral to product safety or used in critical sectors like education, employment, public services, law enforcement, and justice, where failure could have severe consequences; they are subject to stringent compliance requirements.
  • Limited risk. This includes systems that interact directly with natural persons, such as chatbots, as well as synthetic content like deepfakes. Users must be told they are dealing with AI or artificially generated content, except in lawful criminal investigations or clearly creative contexts (e.g., art or satire).
  • Low or minimal risk. Any AI system not caught by the above falls into this category, which carries no specific obligations under the Act.

While some AI systems are excluded – including those used solely for military or national security purposes, for purely personal, non-professional activity, or for research and development prior to product launch – the Act itself is not sector-specific. The Act also distinguishes between products based solely on human-defined rules and those capable of autonomous decision-making; the latter fall within scope, while the former do not.

For tech resellers and telecom operators, understanding this nuanced scope is essential, as it impacts the classification and required compliance level of each product or service.
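To make that triage concrete, here is a minimal, hypothetical Python sketch of how a compliance team might take a first pass at sorting a product catalogue into the Act’s four tiers. The boolean flags and their names are illustrative assumptions, not terms from the Act; a real classification requires legal analysis of each system.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "stringent compliance requirements"
    LIMITED = "transparency and disclosure duties"
    MINIMAL = "no specific obligations"

def triage(system: dict) -> Optional[RiskTier]:
    """Rough first-pass triage of a product against the Act's risk tiers.

    `system` is a hypothetical feature record; real assessments would
    replace these boolean flags with proper legal analysis.
    """
    # Products driven solely by human-defined rules, with no autonomous
    # decision-making, fall outside the Act entirely.
    if not system.get("autonomous_decision_making"):
        return None
    # Tiers are checked from most to least severe; the first match wins.
    if system.get("social_scoring") or system.get("deceptive_practices"):
        return RiskTier.UNACCEPTABLE
    if system.get("safety_component") or system.get("critical_sector"):
        return RiskTier.HIGH
    if system.get("interacts_with_persons") or system.get("synthetic_media"):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage({"autonomous_decision_making": True,
              "interacts_with_persons": True}))  # RiskTier.LIMITED
```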

Penalties for breaching AI standards

The EU AI Act enforces a significant penalty structure for non-compliance, highlighting the serious implications of failing to adhere to its requirements. Fines scale with the severity of the infraction and the nature of the AI system involved: violations of the prohibited-practices rules carry fines of up to €35 million or 7% of global annual revenue, whichever is higher, while failures to meet obligations for high-risk AI systems attract fines of up to €15 million or 3%. For SMEs, each cap is instead the lower of the relevant fixed amount and percentage, offering some relief while still enforcing accountability.
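As a back-of-the-envelope illustration of that arithmetic (a sketch, not legal advice; actual fines are set case by case by regulators), the caps work out as follows. The function name and example revenue figures are ours:

```python
def fine_cap_eur(global_annual_revenue_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 pct_of_revenue: float = 0.07,
                 is_sme: bool = False) -> float:
    """Upper bound on an AI Act fine for a given tier.

    Defaults model the top tier (prohibited practices): the HIGHER of
    EUR 35m or 7% of worldwide annual revenue. SMEs instead face the
    LOWER of the two figures. Illustrative only, not legal advice.
    """
    revenue_cap = pct_of_revenue * global_annual_revenue_eur
    return (min if is_sme else max)(fixed_cap_eur, revenue_cap)

# Large operator, EUR 2bn revenue: 7% (EUR 140m) exceeds EUR 35m.
print(f"{fine_cap_eur(2_000_000_000):,.0f}")                    # 140,000,000
# SME, EUR 20m revenue: capped at 7% (EUR 1.4m) rather than EUR 35m.
print(f"{fine_cap_eur(20_000_000, is_sme=True):,.0f}")          # 1,400,000
# High-risk obligations tier: up to EUR 15m or 3%, whichever is higher.
print(f"{fine_cap_eur(2_000_000_000, 15_000_000, 0.03):,.0f}")  # 60,000,000
```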

Lesser infringements, such as supplying incorrect or misleading information to authorities, may result in reduced fines. Enforcement is shared between the newly established EU AI Office, which coordinates oversight at EU level, and national regulatory bodies in each member state.

Timelines for compliance

While the Act is already in force, the full regulations will be implemented in phases, starting with the prohibition of unacceptable-risk practices from February 2025. By May 2025, new codes of practice for AI systems will be in place to support compliance, and general-purpose AI models – software intended by the provider to perform generally applicable functions – will need to meet standards by August 2025.

The primary obligations for high-risk AI systems, especially those impacting critical sectors, come into full effect by August 2026. For some existing high-risk and general-purpose AI models already on the market, extended compliance deadlines reach as far as August 2027.

With this phased timeline, businesses have time to assess and adapt their practices in line with the Act’s requirements, supporting a smoother transition toward full compliance.
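For planning purposes, those milestones can be captured as a simple checklist. A minimal sketch follows; the specific days shown follow the Act’s published timetable, but treat them as indicative and verify exact dates and obligations against the Official Journal text:

```python
from datetime import date

# Phased application dates as summarised above (indicative; verify
# against the Official Journal text before relying on them).
MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk practices apply",
    date(2025, 5, 2): "Codes of practice due, to support compliance",
    date(2025, 8, 2): "Obligations for general-purpose AI models apply",
    date(2026, 8, 2): "Core obligations for high-risk AI systems apply",
    date(2027, 8, 2): "Extended deadlines for certain pre-existing systems end",
}

def upcoming(today: date) -> list[str]:
    """List the milestones on or after a given date, in order."""
    return [f"{d.isoformat()}: {label}"
            for d, label in sorted(MILESTONES.items()) if d >= today]

print("\n".join(upcoming(date(2025, 1, 1))))
```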

As a complement to the Act, the AI Liability Directive is also currently in draft form and awaiting consideration by the European Parliament and the Council. Once enacted, it aims to ensure that liability rules are effectively applied to AI-related claims, enabling individuals harmed by AI software to seek compensation from manufacturers through the forthcoming Product Liability Directive, which will replace Directive 85/374/EEC.

Need help navigating compliance? With the EU AI Act now in force, any business operating within the EU and selling AI-related products or services must navigate a complex web of requirements that can significantly impact its operations. Talk to our legal experts about how you can pivot accordingly, while keeping business objectives in firm focus.
