Legal Alert: The EU Commission Announces the First Ever Legal Framework of Rules on Artificial Intelligence

28 April 2021

On 21 April 2021, the European Commission (“EU Commission”) published its proposal for a regulation “Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts” (the “Proposal”). The Proposal sets out a framework of rules governing artificial intelligence. It must be approved by the European Parliament and the Member States before it can enter into force. Once adopted, the regulation will also apply to technology companies from non-EU countries operating within the EU. The Proposal aims to foster an ecosystem of trust by establishing a legal framework for trustworthy AI. It is based on EU values and fundamental rights, and seeks to give people the confidence to embrace AI-based solutions while encouraging businesses to develop them. Some of the matters regulated by the Proposal are as follows:

1- Definitions

The Proposal defines 44 terms, including “Artificial Intelligence System” and “User”. An Artificial Intelligence System is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”, and a User is defined as “any natural or legal person, public authority, agency or other body using an artificial intelligence system under its authority, except where the AI system is used in the course of a personal non-professional activity”. Technical terms such as ‘training data’, ‘testing data’, ‘validation data’ and ‘input data’ are also defined.

2- Risk groups

The Proposal divides artificial intelligence systems into three main groups: “unacceptable risk”, “high risk” and “low or minimal risk”. The applications falling within the high-risk group are listed in Annex III of the Proposal. Some of the applications in the high-risk group are as follows:

    • Biometric identification and categorisation of natural persons:
      • AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
    • Management and operation of critical infrastructure:
      • AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity;
    • Education and vocational training:
      • AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions.

In addition, artificial intelligence systems that are considered a clear threat to people’s safety, livelihoods and rights fall within the unacceptable risk group, and their use is prohibited.

Applications that manipulate human behaviour or circumvent free will, as well as systems that allow social scoring by governments, are included in the “unacceptable” risk group.

Artificial intelligence systems in the low-risk group are subject to specific transparency obligations. Users interacting with “chatbots” in this group must be made aware that they are interacting with a machine so that they can make informed decisions.

Applications such as AI-powered video games or spam filters fall within the minimal risk group.

3- High-Risk Artificial Intelligence Systems

A number of requirements are stipulated for high-risk artificial intelligence systems. For example, a risk management system must be established for such systems.

The risk management system must consist of a continuous process running throughout the entire life cycle of the high-risk artificial intelligence system, requiring regular systematic updating, and must comprise the steps set out in the regulation.

High-risk artificial intelligence systems must be designed to automatically record logs of events while they are operating. The minimum content of these records is also specified.

With regard to human oversight, high-risk artificial intelligence systems must be designed, including with appropriate human-machine interface tools, in such a way that they can be effectively overseen by natural persons while the system is in use.

High-risk artificial intelligence systems must be developed to achieve an appropriate level of accuracy, robustness and cybersecurity in light of their intended purpose, and to perform consistently in those respects throughout their life cycles.

4 – Obligations

First, the Proposal sets out the obligations of providers of high-risk artificial intelligence systems. Providers will need to establish certain policies and procedures to ensure compliance with the regulation.

Certain obligations are also imposed on importers and distributors of high-risk artificial intelligence systems.

Certain obligations are also imposed on users of high-risk artificial intelligence systems. For example, users must operate these systems in accordance with the instructions for use and must monitor the operation of the system on the basis of those instructions. If a risk arises, the provider or distributor must be informed and the use of the system must be suspended.

Before placing a high-risk artificial intelligence system on the market or putting it into service, the provider or, where applicable, the authorised representative must register the system in the designated EU database.

5- Penalties

Member States must lay down the rules on penalties applicable to infringements of the regulation, including administrative fines, in accordance with the terms and conditions laid down in the Proposal, and must take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for must be effective, proportionate and dissuasive, and must take into particular account the interests and economic viability of small-scale providers and start-ups.

If an artificial intelligence system does not comply with the requirements or obligations under the Proposal, an administrative fine of up to 30,000,000 EUR or, for a company, up to 6% of its total worldwide annual turnover, whichever is higher, may be imposed.
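By way of a purely illustrative calculation (the turnover figures below are hypothetical): for a company with a total worldwide annual turnover of 1,000,000,000 EUR, 6% corresponds to 60,000,000 EUR, which exceeds the fixed ceiling of 30,000,000 EUR, so the higher figure of 60,000,000 EUR would constitute the maximum fine; for a company with a turnover of 200,000,000 EUR, 6% is only 12,000,000 EUR, so the 30,000,000 EUR ceiling would apply instead.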

Finally, the Proposal envisages the establishment of a European Artificial Intelligence Board.

You may access the full English version of the Proposal via the link below:

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence