‘DIGITAL EU’: THE GOOD, THE BAD & THE UGLY
Stakeholder Article

Civil society calls for a rights-respecting Artificial Intelligence Act

MEPs and representatives of EU member states’ governments must deliver an Artificial Intelligence Act (AIA) that centres people impacted by this technology.

As part of a collective of 123 civil society organisations, European Digital Rights (EDRi), an association of civil and human rights organisations from across Europe, has been calling for an Artificial Intelligence Act that foregrounds fundamental rights. We specifically recognise that Artificial Intelligence systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. We therefore urge Members of the European Parliament to be bold in shaping the AI Act to safeguard people’s rights and to ensure that AI development and deployment fully respect fundamental rights and democracy.

The AI Act takes a risk-based approach to regulating the use of AI systems. The intention is to introduce appropriate safeguards and obligations on providers (developers) and users (deployers) of AI systems that pose a risk to “the health and safety or fundamental rights of persons.”

‘Dysfunctional’

However, in its current form, the AI Act is dysfunctional. It does not go far enough to prohibit the most harmful uses of AI, to ensure accountability for AI deployers, or to empower people affected by AI systems to understand and challenge them. Despite consistent documentation of the disproportionate negative impact AI systems can cause to already marginalised groups (in particular women, racialised people, migrants, LGBTIQ+ people, persons with disabilities, sex workers, children and youth, older people, and poor and working-class communities), significant changes are still required to ensure that the AIA adequately addresses these systemic harms.
Prohibited practices

The AI Act recognises that some uses of AI are ‘unacceptable’ and must be prohibited. However, the current list of prohibitions, such as the prohibition on the use of facial recognition in public spaces, contains too many loopholes and exemptions. MEPs must expand the list of ‘prohibited AI practices’ to cover all systems proven to pose an unacceptable risk to fundamental rights, including predictive policing, emotion recognition, the use of remote biometric identification in publicly accessible spaces by all actors, and AI uses in the migration context.

Accountability for users

The AIA predominantly imposes obligations on providers (developers) rather than on users (deployers) of high-risk AI. While some of the risk posed by the systems listed in the AIA comes from how they are designed, significant risks stem from how they are used. Providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore users must also have obligations to uphold fundamental rights. MEPs must include a duty to conduct and publish a fundamental rights impact assessment for all high-risk AI systems, to ensure accountability and transparency to the public.

Empowering people affected by AI systems

The AIA currently does not confer individual rights on people impacted by AI systems, nor does it include any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigation of high-risk AI systems. The AIA should include a right for affected people to seek an explanation of how they are affected by AI systems, and ways to challenge non-compliant AI systems before national authorities. The AIA must also include horizontal, mainstreamed accessibility requirements for AI systems, including for AI-related information and instruction manuals, consistent with the European Accessibility Act.
To guarantee that the AIA works for everyone, MEPs must widen the list of prohibitions, include accountability measures, and provide routes to challenge harmful AI systems and seek remedies. The AIA negotiations are a key moment to ask ourselves what society we want to live in: technological tools should support inclusive, rights-respecting futures, not dystopian and discriminatory ones.

European Digital Rights is a dynamic collective of 47+ NGOs, experts, advocates and academics working to defend and advance digital rights across Europe. We advocate for robust and enforced laws, inform and mobilise people, promote a healthy and accountable technology market, and build a movement committed to digital rights in a connected world.

www.edri.org @edri