EU Implements Tough New AI Regulations: Significant Compliance Risks for Businesses

Brussels, September 2024 – In a significant development for businesses across Europe, the European Union has adopted a comprehensive regulation governing artificial intelligence (AI) systems, known as the AI Act. The legislation represents a major step toward regulating AI technologies and creating a robust compliance framework, but it also presents a host of new risks for companies operating in the EU.

The AI Act, which entered into force on 1 August 2024, classifies AI systems into risk categories ranging from minimal to unacceptable risk. The new rules will apply in stages, with prohibitions on unacceptable-risk systems taking effect in early 2025 and most obligations for high-risk systems following by 2026. They impose strict requirements on companies using or developing high-risk AI systems, including mandatory risk assessments, transparency standards, and governance structures. High-risk systems include AI used in critical infrastructure, employment decisions, education, and law enforcement.

For businesses, the compliance burden is substantial. Companies found to be in violation of the AI Act face fines of up to €35 million or 7% of their global annual turnover, whichever is higher, for the most serious breaches. These penalties exceed even the maximum fines available under the EU's General Data Protection Regulation (GDPR), underscoring the seriousness of the new rules.

Compliance Challenges
While many businesses see AI as a key driver of innovation, the legislation poses several compliance challenges. First, the act requires companies to conduct detailed risk assessments for their AI systems. These assessments must address the potential harm to individuals’ health, safety, and fundamental rights. In addition, high-risk AI systems must be designed with strict traceability, ensuring that their decision-making processes are explainable and auditable.
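
By way of illustration only, the short Python sketch below shows one way a company might record individual AI-driven decisions so they can be explained and audited later. The record fields, the "CreditScoringModel" name, and the JSON Lines log file are hypothetical assumptions for the sketch, not requirements drawn from the Act.

"""Illustrative sketch: a minimal decision-traceability record for an AI system.
All names and values are hypothetical."""

import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One auditable entry per automated decision."""
    system_name: str
    model_version: str
    inputs: dict          # features used for the decision
    output: str           # the decision or score produced
    explanation: str      # human-readable reasoning summary
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a JSON Lines file for later audit or inspection."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example usage with hypothetical values:
record = DecisionRecord(
    system_name="CreditScoringModel",
    model_version="2.3.1",
    inputs={"income": 42000, "employment_years": 5},
    output="approved",
    explanation="Score above approval threshold of 0.7",
)
log_decision(record)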

For companies in industries such as healthcare, financial services, and recruitment, this means revisiting existing AI models and deploying significant resources to ensure compliance. Internal risk management frameworks will need to be adapted to meet these new requirements, and this could be especially costly for smaller firms.

Increased Regulatory Scrutiny
The AI Act also expands the scope of regulatory oversight. Each member state will be required to establish a national supervisory authority responsible for enforcing the law. These authorities will have wide-reaching powers, including the ability to audit companies’ AI systems and demand information on data processing techniques. Companies that fall under the “high-risk” category will face periodic reviews and inspections by these authorities.

Compliance risks are not limited to European companies. Any firm that markets AI products or services within the EU will also need to comply with these regulations, regardless of their country of origin. For multinational corporations, this adds another layer of complexity to managing global risk and compliance frameworks.

Strategic Implications
Strategically, the new regulations signal a shift in the regulatory environment for AI technologies worldwide. The EU’s move to establish a robust AI regulatory regime could set a global precedent, with other jurisdictions likely to follow suit. This raises the stakes for companies in terms of future compliance risk and operational flexibility.

Businesses will now need to weigh the potential benefits of deploying AI technologies against the risks of regulatory non-compliance. Failure to properly assess and mitigate these risks could lead to reputational damage, significant fines, and loss of market share. 


Next Steps for Businesses
Companies operating in the EU or offering AI-related products and services in the region should begin preparing now for the forthcoming regulations. Key steps include:

• Conducting a comprehensive audit of all AI systems to determine risk classification (a minimal inventory sketch follows this list).
• Developing or updating AI risk management frameworks and ensuring transparency and traceability in AI decision-making.
• Engaging with external risk and compliance experts to navigate the complexities of the new regulations.
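
As a purely illustrative starting point, the Python sketch below shows what a first-pass inventory of AI systems with assumed risk tags might look like. The category names loosely mirror the Act's risk tiers, and the example systems ("cv-screening-tool", "support-chatbot") are hypothetical placeholders, not legal classifications.

"""Illustrative sketch: a simple AI-system inventory with assumed risk tags."""

from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystem:
    name: str
    purpose: str
    category: RiskCategory


# Hypothetical inventory entries for a first-pass audit.
inventory = [
    AISystem("cv-screening-tool", "ranks job applicants", RiskCategory.HIGH),
    AISystem("support-chatbot", "answers customer FAQs", RiskCategory.LIMITED),
]

# Surface the systems that would need the heaviest compliance work.
high_risk = [s for s in inventory if s.category is RiskCategory.HIGH]
for system in high_risk:
    print(f"{system.name}: flag for risk assessment and documentation")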

In the coming months, businesses will need to carefully monitor developments in the implementation of the AI Act. National regulatory authorities will begin issuing guidance on enforcement, and early indications of enforcement priorities will provide valuable insights into the practical challenges of compliance.

Conclusion
The passage of the EU’s AI Act marks a critical juncture in the regulation of emerging technologies. For businesses, the new law introduces a host of risks, particularly around compliance and regulatory scrutiny. While the rules are intended to safeguard individuals from the risks of AI, they also pose considerable challenges for businesses, requiring swift action to mitigate risks and ensure compliance.

Hagan Smith can help you prepare for the new regime and upgrade your risk management frameworks to ensure compliance with the AI Act.


Hagan Smith 

September 2024