Policy

Recommendations for incorporating human rights into AI impact assessments

Data & Society and the European Center for Not-for-Profit Law (ECNL)

Data & Society and the European Center for Not-for-Profit Law have collaborated to publish two papers with recommendations on human rights and algorithmic impact assessments.

In the paper “Recommendations for Assessing AI Impacts to Human Rights, Democracy, and the Rule of Law,” our organizations emphasize that we are at a turning point for the future of algorithmic accountability. Numerous jurisdictions have already proposed legislation that would implement algorithmic impact assessments as a tool for bringing accountability to the algorithmic systems increasingly used across everyday life. Despite this heightened focus on impact assessments as an algorithmic governance mechanism, there is still no standardized process for conducting such assessments that can be considered truly accountable.

This paper, written to provide recommendations to the Council of Europe’s Ad Hoc Committee on AI (CAHAI) as they seek to develop a Human Rights, Democracy, and Rule of Law Impact Assessment (HUDERIA), explores both the opportunities and limitations of impact assessments as a mechanism for holding AI systems accountable. Building on Data & Society’s Assembling Accountability: Algorithmic Impact Assessment for the Public Interest report, the paper also provides a framework for evaluating potential HUDERIA tools. 

In the paper “Mandating Human Rights Impact Assessments in the AI Act,” our organizations emphasize the importance of mandating human rights impact assessments (HRIAs) in the upcoming EU AI Act, the first legally binding framework on AI based on the EU’s standards on fundamental rights. This is an essential step toward achieving the EU’s stated goals for the development and deployment of trustworthy AI. It is also central to understanding and determining the risk levels of AI systems: without understanding an AI system’s impact on human rights, there is little evidence on which to base a determination of its risk level. Our recommendations are focused on supporting the EU in developing HRIA requirements, as well as deepening engagement across sectors on impact assessments as a mechanism for algorithmic governance and accountability.