Governing Artificial Intelligence

Upholding Human Rights & Dignity

Mark Latonero

Published 10.10.18

“Can international human rights help guide and govern artificial intelligence (AI)?”

In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.

The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.

This report is intended for stakeholders, such as technology companies, governments, intergovernmental organizations, civil society groups, academia, and the United Nations (UN) system, looking to incorporate human rights into social and organizational contexts related to the development and governance of AI.

“A human rights-based frame could provide those developing AI with the aspirational, normative, and legal guidance to uphold human dignity and the inherent worth of every individual regardless of country or jurisdiction.”

–Mark Latonero

Recommendations include:

  • Technology companies should find effective channels of communication with local civil society groups and researchers, particularly in geographic areas where human rights concerns are high, in order to identify and respond to risks related to AI deployments.
  • Technology companies and researchers should conduct Human Rights Impact Assessments (HRIAs) throughout the life cycle of their AI systems. Researchers should reevaluate HRIA methodology for AI, particularly in light of new developments in algorithmic impact assessments. Toolkits should be developed to assess specific industry needs.
  • Governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights in national AI policies, guidelines, and possible regulations. Governments can play a more active role in multilateral institutions, like the UN, to advocate for AI development that respects human rights.
  • Since human rights principles were not written as technical specifications, human rights lawyers, policymakers, social scientists, computer scientists, and engineers should work together to operationalize human rights into business models, workflows, and product design.
  • Academics should further examine the value, limitations, and interactions between human rights law and human dignity approaches, humanitarian law, and ethics in relation to emerging AI technologies. Human rights and legal scholars should work with other stakeholders on the tradeoffs between rights when faced with specific AI risks and harms. Social science researchers should empirically investigate the on-the-ground impact of AI on human rights.
  • UN human rights investigators and special rapporteurs should continue researching and publicizing the human rights impacts resulting from AI systems. UN officials and participating governments should evaluate whether existing UN mechanisms for international rights monitoring, accountability, and redress are adequate to respond to AI and other rapidly emerging technologies. UN leadership should also assume a central role in international technology debates by promoting shared global values based on fundamental rights and human dignity.

This report was animated by the ideas generated at the workshop, Artificial Intelligence & Human Rights, held at Data & Society in April 2018.

Following the workshop, Data & Society published a collection of work from Aubra Anthony, Corinne Cath & Christiaan van Veen, Elizabeth Eagan, Sherif Elsayed-Ali, Jason Pielemeier, Enrique Piracés, and Ben Zevenbergen. These essays informed this report.
