Regulation meets innovation: the EU AI Act and its implications for Swiss companies

After lengthy discussions between the Council of the European Union (“EU”) and the European Parliament, the EU finally published the EU Artificial Intelligence Act (“AI Act”) on 12 July 2024. The act comes into force on 1 August 2024 and represents a significant step by the European Union towards the regulation of artificial intelligence (AI). This legislation not only affects companies within the EU but also has an impact on Swiss companies, in particular companies that operate in the EU or export products to the EU. This article provides an overview of the main aspects of the AI Act and its implications for Swiss companies.

Background and objectives of the AI Act

Artificial intelligence is omnipresent and is already used for a wide range of functions. Yet the use of AI harbours not only advantages, but also risks.

The AI Act was developed to establish confidence in AI systems and ensure that they are used ethically and safely. Even though many AI systems can be used without hesitation from today’s perspective, some of them involve risks that need to be regulated in order to keep undesirable consequences at bay.

In order to address these risks systematically, the EU regulation takes a risk-based approach and divides AI systems into four main categories according to their risk potential. The higher the risk potential, the more regulatory requirements companies’ AI systems have to fulfil.

Categorisation of AI systems

  • Prohibited AI systems: AI systems that are considered inadmissible are generally prohibited. These include applications such as “social scoring” (evaluation of people based on their behaviour in society) or “predictive policing” (prediction of potential crimes based on data analysis), which can significantly jeopardise personal privacy and other fundamental human rights. These systems may only be used in exceptional cases and under strict conditions.
  • High-risk AI systems: these systems pose a high risk to individuals or society. Examples include AI systems that are used in critical infrastructure (e.g. energy or transport), with biometric data or in recruitment procedures. Such systems must fulfil strict requirements in terms of transparency, traceability and risk management. Companies must ensure that decision-making processes are transparent and carry out regular risk assessments and risk mitigation measures.
  • AI systems with limited risk: these include systems such as chatbots or text generators, which pose a moderate risk. Companies must ensure that users are informed when they are interacting with an AI system. In addition, warnings about possible misinformation must be issued in order to protect users from misunderstandings.
  • AI systems with minimal risk: systems such as spam filters or AI-supported video games, which pose a low risk, are subject to hardly any special regulatory requirements. Voluntary codes of conduct, which can help companies to promote good practices in dealing with AI, are recommended for these systems.
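The four-tier scheme above can be sketched as a simple data structure. This is purely illustrative: the tier names, the example mappings and the `obligations` summary are our own shorthand, not terminology from the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative shorthand for the AI Act's four risk categories."""
    PROHIBITED = "prohibited"   # e.g. social scoring, predictive policing
    HIGH = "high"               # e.g. biometric or recruitment systems
    LIMITED = "limited"         # e.g. chatbots, text generators
    MINIMAL = "minimal"         # e.g. spam filters, AI in video games

# Hypothetical mapping from use case to tier, based on the examples above
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Very rough one-line summary of the obligations per tier."""
    return {
        RiskTier.PROHIBITED: "generally banned; narrow exceptions only",
        RiskTier.HIGH: "risk management, transparency, traceability",
        RiskTier.LIMITED: "inform users they are interacting with AI",
        RiskTier.MINIMAL: "voluntary codes of conduct",
    }[tier]
```

In practice, classifying a real system requires legal analysis against the act's annexes; a lookup table like this can at most serve as a first triage step in an internal inventory.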

Staggered implementation of the regulations

To give companies time to adapt their AI systems with different risk profiles if necessary, the regulations are to be phased in. It is important to note that, under the AI Act, prohibited AI systems may no longer be used six months after the act comes into force.

Date Milestone
12 July 2024 Publication of the AI Act in the Official Journal of the European Union.
1 August 2024 Entry into force of the AI Act.
2 February 2025 Companies must implement the regulations for prohibited AI systems within six months of entry into force. This means that such systems must be adapted or withdrawn from circulation by 2 February 2025 at the latest.
2 August 2025 Companies have until this date to adapt their systems and ensure that they comply with the new regulations. The penalty mechanism of the AI Act will also apply from this date.
2 August 2026 General applicability of the regulation. This is the date by which certain high-risk AI systems (Annex III of the AI Act) must comply with the requirements of the AI Act.
2 August 2027 There is a longer implementation period for certain high-risk AI systems. This is regulated in Annex I to the AI Act.
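The staggered deadlines above lend themselves to a small lookup, e.g. for an internal compliance checklist. A minimal sketch (the milestone labels are abbreviated paraphrases of the table, and the function name is our own):

```python
from datetime import date

# Milestones from the table above
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Ban on prohibited AI systems applies"),
    (date(2025, 8, 2), "General compliance deadline; penalty mechanism applies"),
    (date(2026, 8, 2), "General applicability; Annex III high-risk systems"),
    (date(2027, 8, 2), "Extended deadline for Annex I high-risk systems"),
]

def applicable_milestones(today: date) -> list[str]:
    """Return the milestones that already apply on a given date."""
    return [label for deadline, label in MILESTONES if deadline <= today]
```

For example, on 1 March 2025 only the entry into force and the ban on prohibited systems apply; the penalty mechanism does not yet.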

Extraterritorial effect and significance for Swiss companies

The AI Act has implications beyond the borders of the EU and therefore also applies to Swiss companies that operate in the EU or whose AI products are used in the EU. This means that Swiss companies operating in the EU market must also comply with the provisions of the AI Act in addition to the requirements of the Swiss Data Protection Act (DPA) and the EU General Data Protection Regulation (GDPR). This is particularly relevant for companies that put AI systems into circulation, sell products with integrated AI systems or offer services that are based on AI and are used in the EU.

Challenges and need for adaptation

Swiss companies should carefully analyse their AI systems and adapt them to meet the new requirements if necessary. This includes implementing risk management systems and transparency measures and conducting data protection impact assessments (DPIA) in the case of high-risk applications.

  • Risk management: companies must introduce a comprehensive risk management system that includes the identification, assessment and minimisation of risks. This is particularly important for high-risk AI systems, which are subject to strict requirements.
  • Transparency and traceability: AI systems must be designed in such a way that their decision-making processes are transparent and traceable. This means that companies must document how decisions are made and what data are used in the process.
  • Data protection impact assessment (DPIA): a DPIA is required for certain AI applications, especially those that pose a high risk to the rights and freedoms of individuals. This assessment helps to identify potential data protection risks and take appropriate measures to minimise them.
  • User information: in the case of systems that pose limited risk, companies must ensure that users are informed that they are interacting with an AI system. This helps strengthen confidence in AI systems and avoid misunderstandings.

Significant sanctions and penalties

The penalties for violations of the AI Act are significant and may include administrative fines against the offending company of up to 35 million euros or 7 percent of the previous financial year's annual global turnover (whichever is higher). These high penalties underline the importance of adhering to the new regulations and the need to implement appropriate compliance measures.
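The "whichever is higher" rule can be made concrete with a short calculation (the function name is our own; the figures are the maximum administrative fines stated above):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious
    violations of the AI Act: EUR 35 million or 7 percent of the
    previous financial year's worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a company with EUR 1 billion in turnover, 7 percent is EUR 70 million, so the turnover-based cap applies; for a company with EUR 100 million in turnover, the flat EUR 35 million cap is the higher figure.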

AI regulation in Switzerland

Switzerland already has guidelines in place for artificial intelligence, which were published by the federal government on 25 November 2020. These provide a framework for the ethical and responsible use of AI in Switzerland.

In addition, on 22 November 2023, the Federal Council commissioned DETEC (Federal Department of the Environment, Transport, Energy and Communications) to carry out a comprehensive analysis of possible regulatory approaches for artificial intelligence (AI), which is expected to be available by the end of 2024. This analysis is being prepared in collaboration with various federal agencies and takes into account the regulatory approaches of the AI Act and the Council of Europe.

It is therefore likely that the use and application of artificial intelligence will also be subject to regulation in Switzerland in the foreseeable future, which means it is advisable to start addressing the challenges and adjustments now.

Recommendations for Swiss companies

Swiss companies should be proactive and review their AI systems and associated processes to ensure they are compliant with the requirements of the AI Act. They should also keep an eye on developments in the area of AI regulation in Switzerland.

  • Compliance review: organisations should conduct a thorough review of their current AI systems and processes to ensure they are compliant with the new regulations. This can be done through internal audits or external consultations.
  • Training and awareness-raising: staff training is essential to raise awareness of the new requirements and ensure that all employees understand the importance of compliance.
  • Technological adaptations: these may be necessary to ensure compliance with the regulations. They may include adaptations to existing systems or the implementation of new technologies.
  • Collaboration with experts: it is essential to work with legal experts and technology specialists in order to ensure the legally compliant implementation of AI and to minimise potential risks. Understanding your own systems and requirements at an early stage will make it easier to meet future regulations.

Conclusion

The EU AI Act represents a significant regulatory challenge, but also offers the opportunity to strengthen confidence in AI systems and to secure innovation. It is crucial for Swiss companies to prepare for these new requirements at an early stage and make the necessary adjustments in order to fulfil the regulatory requirements and maintain a competitive advantage.

BDO has broad expertise in the areas of management consulting, compliance, legal and technology. We would be happy to support you in reviewing your AI systems and implementing the necessary measures.