The definition of AI, high-risk classification, the list of high-risk use cases and the fundamental rights impact assessment will be on the Council table this week as the Spanish presidency prepares to dive headlong into the negotiations.
Spain assumed the rotating presidency of the EU’s Council of Ministers on 1 July. In addition to its digital priorities, Madrid seeks to reach a political agreement on the AI Act, a key piece of legislation to regulate artificial intelligence based on its harmful potential.
The Spanish presidency released a document, dated 29 June and seen by EURACTIV, to inform an exchange of views on four critical points of the AI regulation at the Telecom Working Party, a technical body of the Council, on Wednesday (5 July).
The discussion will inform the presidency’s position at the next negotiation session between the EU Council, Parliament and Commission, the so-called trilogue, on 18 July.
Definition of AI
The European Parliament’s definition of Artificial Intelligence aligns with that of the Organization for Economic Co-operation and Development (OECD), seeking to anticipate future adjustments under discussion within the international organization.
“Artificial Intelligence System means a machine-based system designed to operate with different levels of autonomy and which can, for explicit or implicit purposes, generate outputs such as predictions, recommendations or decisions that affect physical or virtual environments,” reads the Parliament’s text.
Conversely, while the Council also adopted some elements of the OECD definition, it further restricted it to machine learning approaches and logic- and knowledge-based approaches to avoid traditional software falling under the definition.
“This [the OECD’s] definition seems to cover software that should not be classified as AI,” reads the presidency’s note, which indicates three possible options: stick to the Council’s text, move closer to the Parliament’s, or wait for the September trilogue to assess the direction the OECD will take.
High-risk classification
The AI Act requires developers of systems with a high risk of causing harm to people’s safety and fundamental rights to comply with a stricter regime on risk management, data governance and technical documentation.
The way systems fall into this category has undergone major changes. Initially, the bill automatically classified as high-risk any AI application falling under the list of use cases in Annex III. Both co-legislators have removed this automatism and introduced an additional layer.
For the Council, this level concerns the importance of the output of the AI system in the decision-making process, with purely ancillary outputs excluded from the scope.
MEPs introduced a system whereby AI developers would have to self-assess whether an application covered by Annex III is high-risk, based on guidance provided by the European Commission. If companies believe their system is not high-risk, they should inform the competent authority, which should respond within three months if it considers the system misclassified.
Here, too, the options are to maintain the Council’s general approach or move towards Parliament’s, but various intermediate solutions are also envisaged.
One option is to adopt the MEPs’ version, but without the notification to the relevant authorities. Alternatively, this version could be refined by turning the criteria for AI providers’ self-assessment into binding rules rather than soft guidance.
The final proposal is Parliament’s system without notification and with binding criteria, together with additional options to guide providers, for example a repository of examples of Annex III AI systems that should not be considered high-risk.
List of high-risk use cases
Both co-legislators have heavily modified the Annex III list. EU countries removed deepfake detection by law enforcement authorities, crime analytics and the verification of travel documents’ authenticity, while adding critical digital infrastructure and life and health insurance.
MEPs expanded the list significantly, introducing biometrics, critical infrastructure, the recommender systems of the largest social media platforms, systems that could influence election results, AI used in dispute resolution, and border management.
“Delegations are invited to provide their views on the additions and changes described above,” the note continues.
Impact assessment on fundamental rights
Centre-left lawmakers want to oblige users of high-risk AI systems to conduct a fundamental rights impact assessment before putting the tool into service, considering the intended purpose, the temporal scope and the categories of persons and groups likely to be affected.
In addition, a six-week consultation with interested stakeholders should be launched to inform the impact assessment.
The Council’s text does not include such an obligation, and “it is important to remember that the GDPR [General Data Protection Regulation] already requires both businesses and public organizations to consider whether high risks to rights and freedoms are likely to occur when processing personal data,” the document adds.
The Spanish presidency did not even put forward the option of adopting Parliament’s text without limiting the measure to public sector uses. Other options include removing the six-week consultation period or requiring that the authorities be informed of the assessment.
The presidency also asked two additional questions. First, the European Parliament’s mandate touches on concepts such as democracy, the rule of law and sustainability; EU countries are therefore asked whether they believe the AI Act is the right place to address these issues.
Secondly, Member States are asked for their opinion on whether to introduce the term “distributor” to avoid confusion.
[Edited by Alice Taylor]