A new set of principles—the Toronto Declaration—aims to put human rights front and centre in the development and application of machine learning technologies.
In May 2018, Amnesty International, Access Now, and a handful of partner organizations launched the Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems.
The Declaration is a landmark document that seeks to apply existing international human rights standards to the development and use of machine learning systems (or “artificial intelligence”).
Machine learning (ML) is a subset of artificial intelligence. It can be defined as “provid[ing] systems the ability to automatically learn and improve from experience without being explicitly programmed.”
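What “learning from experience” means in practice is easiest to see in code. The sketch below is a minimal, purely hypothetical illustration (invented loan data, using the open-source scikit-learn library): instead of hand-coding a decision rule, the system infers one from labelled examples.

```python
# A toy illustration of machine learning: instead of hand-coding a rule
# such as "approve if income > 50", the system infers a decision rule
# from labelled examples, i.e. its "experience". All data is invented.
from sklearn.linear_model import LogisticRegression

# Each example: [income in thousands]; label: 1 = loan repaid, 0 = defaulted
X = [[20], [30], [40], [60], [70], [80]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# The learned rule generalises to applicants the system has never seen.
print(model.predict([[35], [65]]))  # -> [0 1]
```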
How is this technology relevant to human rights? AI is a powerful technology with a potentially transformative effect on many aspects of life, from transportation and manufacturing to healthcare and education.
Its use is increasing in all these sectors, as well as in the justice system, policing, and the military. AI can increase efficiency, yield new insights into diseases, and accelerate the discovery of novel drugs. But if misused, intentionally or otherwise, it can also harm people’s rights.
One of the most significant risks with machine learning is the danger of amplifying existing bias and discrimination against certain groups—often marginalized and vulnerable communities, who already struggle to be treated with dignity and respect.
When historical data is used to train machine learning systems without safeguards, ML systems can reinforce and even amplify existing structural bias. Discriminatory harms can also occur when design decisions in AI systems lead to biased outcomes, whether those decisions are deliberate or not.
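To make that mechanism concrete, here is a deliberately stylised sketch (again with invented data and scikit-learn; the hiring scenario is hypothetical) of how a model can reproduce discrimination present in historical decisions even when the protected attribute is never given to it:

```python
# Stylised sketch of bias reproduction. Historical hiring decisions were
# biased against group B, and a neighbourhood code acts as a proxy for
# group membership. The model never sees "group", yet it learns the
# proxy and reproduces the historical pattern. All data is invented.
from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, neighbourhood_code]
# Neighbourhood 0 is mostly group A; neighbourhood 1 is mostly group B.
X = [
    [7, 0], [8, 0], [6, 0], [9, 0],  # group A applicants
    [7, 1], [8, 1], [6, 1], [9, 1],  # group B applicants, same scores
]
# Historical (biased) decisions: group B was hired far less often.
y = [1, 1, 1, 1, 0, 0, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two equally qualified applicants who differ only in neighbourhood:
print(model.predict([[8, 0], [8, 1]]))  # -> [1 0]: the bias is learned
```

Nothing in this code is malicious; the discrimination lives entirely in the training labels, which is why safeguards such as risk assessment and independent auditing matter before such systems are deployed.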
When Amnesty started examining the nexus of artificial intelligence and human rights, we were quickly struck by two things: the first was that there appeared to be a widespread and genuine interest in the ethical issues around AI, not only among academics, but also among many businesses.
This was encouraging: it seemed that lessons had been learned from the successive scandals that hit social media companies, and that there was a movement to proactively address the risks associated with AI.
The second observation was that human rights standards were largely missing from the debate on the ethics of AI; often there would be mention of the importance of human rights, but usually nothing more than a passing reference.
And in some places, discussion of the ethics of AI was starting to take a well-known turn: the argument that AI ethics should be culturally dependent. This opens the door to differing standards and different levels of protection for people’s rights, which makes little sense for a digital technology like AI that does not care about borders.
It was clear that asserting the central role of the human rights framework in AI ethics was a priority. We chose equality and non-discrimination in machine learning as a focus because it was already a pressing issue with an increasing number of real-life problems.
The Toronto Declaration was drafted after discussions and interviews with dozens of experts in AI, human rights and business, among others. Extensive consultations on the draft were held, and it was adopted on 16 May at the start of RightsCon 2018. The Declaration has three main sections.
First, it sets out the duties of states to prevent discrimination in the context of designing or implementing machine learning systems in public contexts or through public-private partnerships.
This section includes principles to identify risks in the use of machine learning systems, to ensure transparency and accountability (including by publicly disclosing where machine learning systems are used), to enforce oversight, including mechanisms for independent oversight, and to promote equality.
Second, the Declaration outlines the responsibilities of private actors in the context of the development and deployment of ML systems. It is based on the human rights due diligence framework (originally defined in the UN Guiding Principles on Business and Human Rights).
These responsibilities include identifying potential discriminatory outcomes by mapping and assessing risks, taking effective action to prevent and mitigate discrimination (including by submitting systems to independent third-party audits where there is a significant risk of human rights abuses), and being transparent, including by publishing technical specifications, samples of the training data used, and the sources of that data.
Third, the Declaration asserts the right to an effective remedy and the need to hold those responsible for abuses to account.
The Declaration calls on governments to ensure standards of due process for the use of machine learning in the public sector, to act cautiously on the use of machine learning systems in the justice system, to outline clear lines of accountability for the development and implementation of ML applications, and to clarify which bodies or individuals are legally responsible for decisions made through the use of such systems.
The development and launch of the Declaration is just a first step towards making the human rights framework a foundational component of the fast-developing field of AI and data ethics.
The Declaration sets out principles for policy and practice, but to make them effective we need practical implementation guidelines that help engineers and product managers apply the principles in their work. Amnesty is starting to work with engineers and researchers to do this.
The widest possible endorsement of the Toronto Declaration from civil society is also important for asserting the centrality of human rights in the debate on AI ethics. It will signal a strong expectation that tech companies, too, endorse the Declaration and commit to ensuring that existing human rights are not diluted by new technologies.
More outreach to the technology community, establishing a dialogue between human rights practitioners and engineers, will help to embed human rights in the development and use of AI.
Finally, we need to promote balanced communication about AI: it is a technology that can be used for good, but it can also be abused.
Presenting AI through either dystopian or utopian visions is not conducive to sound debate, and it distracts us from addressing the real risks and from taking advantage of opportunities to use AI beneficially.