18/07/2024
FSG Igualdad y Lucha contra la Discriminación
The Official Journal of the European Union published on 12 July Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence.
The regulation, known as the Artificial Intelligence Act, is the first general, legally binding regulation of this important subject at a global level, and it will condition economic and social development in the coming years.
Fundación Secretariado Gitano would like to highlight that the use of these systems can particularly affect Roma people: for example, in predictive policing systems (which the Act does not totally prohibit), in algorithms that amplify and spread anti-Roma hoaxes and fake news, or in automated systems for granting social aid, grants or loans, as has already been seen in several European countries (see the FSG's 2022 report on Discrimination and the Roma Community). We also consider it necessary for Spain to establish a stable mechanism for the participation of civil society in the National Agency for Artificial Intelligence Oversight, including the voices of representatives of the Roma people, in order to prevent the aforementioned biases and guarantee the protection of fundamental rights.
For several years now, Fundación Secretariado Gitano has been working in the field of artificial intelligence and the use of automated algorithmic systems to prevent possible discriminatory biases that can affect Roma people. The annual report Discrimination and Roma Community 2022 addressed this issue in depth, and in 2022 we also organised a conference with experts together with the National Observatory of Technology and Society (Discriminatory bias in the use of artificial intelligence and algorithms: impact on the Roma community). In parallel, for the last three years we have been collaborating with various networks working to defend rights in this area (AI and discriminatory bias), such as IA Ciudadana in Spain or, in Europe, the EDRi network and the Justice, Equity and Technology Table of the London School of Economics.
The act states:
Who is affected: the Act regulates the use of AI systems across all public administrations and in the private sector (with exceptions for defence, military and national security systems).
Under the AI Act, AI systems are divided into four main categories depending on the potential risk they pose to society. Systems considered high risk will be subject to strict rules that apply before they can enter the EU market.
Deadlines for entry into force and implementation:
Entry into force: 1 August 2024.
Implementation: general AI rules will apply one year after entry into force, in August 2025, and obligations for high-risk systems in three years. These systems will be supervised by national authorities, supported by the AI Office within the European Commission.
The Act recognises the dangers of such systems for vulnerable groups, including ethnic minorities, and the need to monitor possible racial or gender bias. It also stresses the need to monitor that such systems do not discriminate in any way.
As regards the protection of fundamental rights, artificial intelligence systems for biometric categorisation on the basis of political, religious or philosophical beliefs, ethnic origin or sexual orientation are prohibited. Nor will it be possible to use systems that score people on the basis of their behaviour or personal characteristics, or artificial intelligence capable of manipulating human behaviour.
Likewise, systems that expand or create databases of facial data captured indiscriminately from the internet or from audiovisual recordings will also be prohibited.
In general terms, the regulation allows or prohibits the use of artificial intelligence depending on the risk it generates for people and identifies high-risk systems that can only be used if they can be shown to respect fundamental rights. For example, those that can be used to influence the outcome of an election, or those used by financial institutions to assess creditworthiness and establish credit ratings.
Fines for violators range from 7.5 million euros ($8 million) or 1.5 per cent of global turnover up to 35 million euros ($37.6 million) or 7 per cent of a company's global turnover.
However, the regulation allows for some exceptions to permit certain uses to ensure national security. This was one of the most controversial points during negotiations between the European Parliament and member states. Thus, security forces will be able to use biometric identification cameras, always with judicial authorisation, to prevent a terrorist threat. In addition, these systems can also be used to locate those responsible for crimes of terrorism, human trafficking and sexual exploitation, as well as to search for victims.
Some weaknesses of the act:
Taken together, these weaknesses mean that the Act has not achieved an adequate standard of human rights protection, and the following improvements would be necessary to protect these rights: