Artificial Intelligence can bring a wide array of economic and societal benefits but also generate new risks for individuals. The Artificial Intelligence Act presents a risk-based regulatory approach to AI across the EU without unduly constraining or hindering technological development.
Ensure legal certainty to facilitate investment and innovation in AI.
Improve governance and effective enforcement of existing legislation on fundamental rights and safety requirements for AI systems.
Facilitate the development of a single market for safe, legal, and trustworthy AI applications, and prevent market fragmentation.
Providers placing AI systems on the market or putting them into service in the European Union, whether these providers are established in the Union or in a third country (extraterritorial reach);
Deployers of AI systems who are located in the EU;
Providers and deployers of AI systems located in another country if the results generated by the system are intended for use in the EU (extraterritorial reach);
Importers and distributors of AI systems.
Up to 7% of total worldwide annual turnover or €35M, whichever is higher, depending on the violation found. Member States are responsible for designing their own sanctions regimes.
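As a rough illustration of how the penalty cap above is computed: for the most serious violations, the Act applies the higher of the fixed amount (€35M) and the turnover percentage (7%). The function below is a hypothetical sketch of that arithmetic, not legal advice, and ignores the lower caps that apply to other violation tiers and to SMEs.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for the most serious violations:
    the higher of EUR 35M or 7% of total worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 2bn turnover, 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```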
European AI Office, established within the European Commission
National authorities to be created or appointed depending on the country
The AI Regulation is part of the European data regulation package and is therefore linked to the DSA, DGA, DMA, etc. but also to the GDPR and the recent proposal for an AI Liability Directive.
13 March 2024: Parliamentary vote on the text
14 June 2024: European Council endorsement
12 July 2024: Publication in the Official Journal of the EU
1 August 2024: Entry into force
2 February 2025: Prohibitions on unacceptable AI risk (6 months after entry into force)
2 August 2025: Obligations enter into effect for providers of general-purpose AI models. Appointment of member state competent authorities. Annual Commission review of the list of prohibited AI, with potential amendments (12 months after entry into force)
2 February 2026: Commission implementing act on post-market monitoring (18 months after entry into force)
2 August 2026: Obligations go into effect for high-risk AI systems listed in Annex III. Member states to have implemented rules on penalties, including administrative fines. Member state authorities to have established at least one operational AI regulatory sandbox. Commission review, and possible amendment of, the list of high-risk AI systems (24 months after entry into force)
2 August 2027: Obligations go into effect for high-risk AI systems that are intended to be used as a safety component of a product. Obligations go into effect for high-risk AI systems in which the AI itself is a product and the product is required to undergo a third-party conformity assessment under existing specific EU laws (36 months after entry into force)
By the end of 2030: Obligations go into effect for certain AI systems that are components of the large-scale information technology systems established by EU law in the areas of freedom, security and justice, such as the Schengen Information System.
Prohibition of AI systems presenting intolerable risks (Article 5).
Increased obligations for high-risk AI systems.
Less extensive obligations for general-purpose AI systems that do not pose systemic risks and for systems interacting with humans.
Fundamental rights impact assessments to be carried out for certain high-risk systems.
Deployers are now responsible when using AI systems: they must implement human oversight to ensure the system is used responsibly and address any issues. The data used with the system must be relevant and up-to-date.
Providers must ensure their AI systems comply with the AI Act requirements, such as maintaining detailed technical documentation and offering clear information on the system's capabilities, limitations, and performance.
Creation of a European-wide AI Office and national control authorities to ensure legal certainty by verifying the effective implementation of the regulation, and by sanctioning bad practices.
Registration of high-risk AI systems in an EU database.
Obtaining a CE marking will be necessary before placing a high-risk AI system on the market.
AI systems that contravene the values of the European Union by violating fundamental rights are prohibited, such as:
Companies are subject to several obligations related to documentation, risk management systems, governance, transparency, and safety, depending on their status (provider, user, distributor, or other third party). These systems must also be declared to the EU and bear a CE mark.
These are systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). For these systems, there is an obligation to disclose whether the content is generated through automated means or not.
Voluntary creation and enforcement of a code of conduct that may include commitments to environmental sustainability, accessibility for people with disabilities, stakeholder participation in AI system design and development, and development team diversity.
General purpose AI systems are AI systems that have a wide range of possible uses, both for direct use and for integration into other AI systems. They can be applied to many different tasks in various fields, often without substantial modification and fine-tuning. Unlike narrow AI, which is specialized for specific tasks, general-purpose AI can learn, adapt, and apply knowledge to new situations, demonstrating versatility, autonomy, and the ability to generalize from past experiences.
Impact of the AI Act on General Purpose AI systems:
Codes of conduct will be established at the European Union level to guide suppliers in applying the rules regarding general-purpose artificial intelligence (GPAI) models.
Systemic risk: A GPAI model represents a systemic risk if it is found to have high-impact capabilities based on appropriate technical tools and methodologies, including indicators and benchmarks.
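One concrete indicator in the Act: a GPAI model is presumed to have high-impact capabilities when the cumulative compute used for its training exceeds 10^25 floating-point operations. A minimal sketch of that presumption check follows; the function and constant names are ours, and the compute threshold is only one indicator among those the Commission may use.

```python
# Compute-based presumption threshold for systemic risk (cumulative
# training compute, in floating-point operations).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model meets the compute-based presumption of
    high-impact capabilities (other indicators may also apply)."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```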
Specific obligations regarding systemic risks:
Providers' obligations:
Develop and maintain the model's technical documentation and share it with other providers that wish to integrate the GPAI model into their own systems.
Establish a policy to comply with EU copyright law.
Publish a detailed summary on the content used for training the GPAI model (based on the model provided by the AI Office).
This includes the safety component of a product or a product requiring a third-party conformity assessment according to existing regulations (Dir 2009/48/EC on the safety of toys, Reg 2016/424/EU on cableways, etc).
It also includes products listed in Annex III:
Continuous iterative process run throughout the entire lifecycle of a high-risk AI system (identification, evaluation of risks, and adoption and testing of risk management measures)
Implementation of measures and information in the instructions
Ensure human oversight while the AI system is in use
Transparent design & instructions for users
Design and development with capabilities enabling events to be recorded automatically
Demonstration of high-risk AI system compliance with requirements
Ensure that training, validation and testing data sets meet quality criteria
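The automatic event-recording obligation above can be sketched with standard structured logging. This is an illustrative pattern only: the event names and fields below are our assumptions, not formats prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_system")

def record_event(event_type: str, **details) -> str:
    """Automatically record a timestamped, machine-readable event
    (e.g. each use of the system, its input reference, its outcome)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    line = json.dumps(entry)
    logger.info(line)  # logs must be retained per the deployer's obligations
    return line

record_event("inference", input_ref="batch-42", outcome="approved")
```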
| | OBLIGATIONS FOR PROVIDERS | OBLIGATIONS FOR DISTRIBUTORS | OBLIGATIONS FOR USERS |
|---|---|---|---|
| GENERAL REQUIREMENTS | Ensure that the system is compliant. Take the necessary corrective actions if the high-risk AI system is not compliant. | No distribution of a non-compliant high-risk system; if the high-risk AI system is already on the market, take the necessary corrective actions. Storage or transportation conditions must not compromise the system's compliance with requirements. Verify that the high-risk AI system bears the required CE mark of conformity. | Ensure the relevance of the data entered. Stop using the system if it presents risks to health, safety or the protection of fundamental rights, or in the event of a serious incident or malfunction. |
| PROCESSES | Have a quality management system (strategy, procedures, resources, etc.). Write technical documentation. Carry out the conformity assessment, EU declaration and CE marking. Design and develop systems with automatic event-logging capabilities. Maintain logs generated automatically by the system. Establish and document a post-market surveillance system. | Third-party monitoring: verify that the provider and importer of the system have complied with the obligations set out in this regulation and that corrective action has been or is being taken. | Keep logs automatically generated by the system if they are under their control. |
| TRANSPARENCY & INSTRUCTIONS | Design transparent systems. Draft instructions for use. | Ensure that the AI system is accompanied by operating instructions and the required documentation. | Use and monitor systems following the instructions for use accompanying them. |
| INFORMATION & REGISTRATION | Inform the competent national authorities in case of risks to health, safety or the protection of fundamental rights, or in case of serious incidents and malfunctions. Register the system in the EU database. | Inform the provider/importer of a non-compliant high-risk system and the competent national authorities. | Inform the provider/distributor, or the market surveillance authority if the provider cannot be reached, when the system presents risks to the health, safety or fundamental rights of the persons concerned. |
We rely on teams of highly complementary experts to offer you robust, reliable and effective compliance, in line with your strategic objectives, your use of AI and your internal processes and governance. Our consultants are committed to helping you manage your risks and create synergies as part of your AI projects.
Compliance specialists
Data Scientists
Cybersecurity experts
Over the years we have built key enablers that will accelerate your project: a modular AI governance framework, benchmarks of market best practices, system-mapping templates, code-assessment automations, mature training modules, custom solutions tailored to the PoC, standard policies, charters and procedures, etc.
Thanks to our extensive experience supporting customers in AI governance and AI risk assessment, our team can quickly get to grips with your specific needs and save time by interacting effectively with your data scientists.