Ethically aligned design

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE, 2016. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.

Something a little different for today… the IEEE recently put out a first version of their “Ethically Aligned Design” report for public discussion. It runs to 136 pages (!) but touches on a number of very relevant issues.

This document represents the collective input of over one hundred global thought leaders in the fields of Artificial Intelligence, law and ethics, philosophy, and policy from the realms of academia, science, and the government and corporate sectors.

The report itself is divided into eight sections, each of which seems to be the result of the deliberations of a different sub-committee. The eight areas are:

  1. General Principles
  2. Embedding values into autonomous intelligent systems
  3. Methodologies to guide ethical research and design
  4. Safety and beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
  5. Personal data and individual access control
  6. Reframing autonomous weapons systems
  7. Economics/humanitarian issues
  8. Law

I’m going to focus on the first five of these areas today, and of necessity in reducing 136 pages to one blog post, I’ll be skipping over a lot of details and just choosing the parts that stand out to me on this initial reading.

General Principles

Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.

This section opens with a broad question, “How can we ensure that AI/AS do not infringe human rights?” (where AI/AS stands for Artificial Intelligence / Autonomous Systems throughout the report). The first component of the answer connects back to documents such as the Universal Declaration of Human Rights and makes a statement that I’m sure very few would disagree with, although it offers little help in the way of implementation:

AI/AS should be designed and operated in a way that respects human rights, freedoms, human dignity, and cultural diversity.

The other two components of the answer, though, immediately raise interesting technical considerations:

  • AI/AS must be verifiably safe and secure throughout their operational lifetime.
  • If an AI/AS causes harm it must always be possible to discover the root cause (traceability) for said harm.

The second of these in particular is very reminiscent of the GDPR ‘right to an explanation,’ and we looked at some of the challenges with provenance and explanation in previous editions of The Morning Paper.

A key concern over autonomous systems is that their operation must be transparent to a wide range of stakeholders for different reasons (noting that the level of transparency will necessarily be different for each stakeholder). Stated simply, a transparent AI/AS is one in which it is possible to discover how and why the system made a particular decision, or in the case of a robot, acted the way it did.

The report calls for new standards describing “measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined.”

Embedding values

This is an interesting section. The overall argument / intention seems to be that we want to build systems that make decisions which align with the way the impacted communities would like decisions to be made. The actual wording raises a few questions though…

Society does not have universal standards or guidelines to help embed human norms or moral values into autonomous intelligent systems (AIS) today. But as these systems grow to have increasing autonomy to make decisions and manipulate their environment, it is essential they be designed to adopt, learn, and follow the norms and values of the community they serve, and to communicate and explain their actions in as transparent and trustworthy manner possible, given the scenarios in which they function and the humans who use them.

What if the norms and values of the community they serve aren’t desirable? For example, based on all the horrific stories that are increasingly being shared, the ‘norm’ of how women are treated in IT is not something we would ever want to propagate into an AIS. There are many examples in history of things that were once accepted norms which we now find very unacceptable. Could we not embed norms and values (e.g., non-discrimination) of a better, more noble version of ourselves and our communities? Presuming of course we can all agree on what ‘better’ looks like…

Values to be embedded in AIS are not universal, but rather largely specific to user communities and tasks.

This opens the door to ‘moral overload’, in which an AIS is subject to many possibly conflicting norms and values. What should we do in these situations? The recommended best practice seems guaranteed to produce discrimination against minorities (but then again, so does democracy when viewed through the same lens; this stuff is tricky!):

Our recommended best practice is to prioritize the values that reflect the shared set of values of the larger stakeholder groups. For example, a self-driving vehicle’s prioritization of one factor over another in its decision making will need to reflect the priority order of values of its target user population, even if this order is in conflict with that of an individual designer, manufacturer, or client.

In the same section though, we also get:

Moreover, while deciding which values and norms to prioritize, we call for special attention to the interests of vulnerable and under-represented populations, such that these user groups are not exploited or disadvantaged by (possibly unintended) unethical design.

The book “Moral Machines: Teaching robots right from wrong” is recommended as further reading in this area.
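
To make the “prioritize the values of the larger stakeholder groups” recommendation a little more concrete, here is a minimal sketch of one naive way to derive a shared priority order: each stakeholder group ranks the values it cares about, and we aggregate with a population-weighted Borda count. The group names, values, and weights are all invented for illustration, and a naive majority aggregation like this is exactly where the “minorities lose out” worry above comes from.

```python
from collections import defaultdict

def aggregate_value_priorities(group_rankings, group_sizes):
    """group_rankings: {group: [value, ...]} ordered from highest to lowest priority.
    group_sizes: {group: number of people in that stakeholder group}."""
    scores = defaultdict(float)
    for group, ranking in group_rankings.items():
        n = len(ranking)
        for position, value in enumerate(ranking):
            # Borda score: the highest-ranked value gets n-1 points, weighted by group size.
            scores[value] += (n - 1 - position) * group_sizes[group]
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    rankings = {
        "passengers":  ["occupant safety", "journey time", "pedestrian safety"],
        "pedestrians": ["pedestrian safety", "occupant safety", "journey time"],
    }
    sizes = {"passengers": 800, "pedestrians": 200}
    print(aggregate_value_priorities(rankings, sizes))
    # ['occupant safety', 'journey time', 'pedestrian safety'] -- the larger group's
    # ordering dominates, illustrating both the recommendation and the concern above.
```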

Understanding whether, and ensuring that, systems actually implement the intended norms requires transparency. Two levels of transparency are envisaged: firstly, the information conveyed to the user while an autonomous system is interacting with them; and secondly, enabling the system as a whole to be evaluated by a third party.

A system with the highest level of traceability would contain a black-box like module such as those used in the airline industry, that logs and helps diagnose all changes and behaviors of the system.
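
As a rough illustration of that “black box” idea (my own sketch, not something from the report): wrap the system’s decision function so that every input, output, model version, and timestamp is appended to a hash-chained log that a third party could later inspect. The class and field names below are invented for illustration.

```python
import hashlib
import json
import time

class DecisionRecorder:
    """Append-only, tamper-evident record of every decision the system makes."""

    def __init__(self, model_version):
        self.model_version = model_version
        self.records = []
        self._prev_hash = "0" * 64

    def record(self, inputs, decision, rationale=None):
        entry = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,        # e.g. feature attributions, rule fired
            "prev_hash": self._prev_hash,  # chain entries so tampering is detectable
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.records.append(entry)
        return decision

recorder = DecisionRecorder(model_version="credit-scorer-1.3")
recorder.record(inputs={"income": 42000, "tenure_months": 18},
                decision="declined",
                rationale="income below learned threshold")
```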

Methodologies to guide ethical research and design

The report highlights two key issues relating to business practices involving AI:

  • a lack of value-based ethical culture and practices, and
  • a lack of values-aware leadership

Businesses are eager to develop and monetize AI/AS but there is little supportive structure in place for creating ethical systems and practices around its development or use… Engineers and design teams are neither socialized nor empowered to raise ethical concerns regarding their designs, or design specifications, within their organizations. Considering the widespread use of AI/AS and the unique ethical questions it raises, these need to be identified and addressed from their inception

Companies should implement ‘ethically aligned design’ programs (from which the entire report derives its title). Professional codes of conduct can support this (there’s a great example from the British Computer Society in this section of the report).

The lack of transparency about the AI/AS manufacturing process presents a challenge to ethical implementation and oversight. Regulators and policymakers have an important role to play here, the report argues. For example:

…when a companion robot like Jibo promises to watch your children, there is no organization that can issue an independent seal of approval or limitation on these devices. We need a ratings and approval system ready to serve social/automation technologies that will come online as soon as possible.

CloudPets anyone? What a disgrace.

For further reading, “An FDA for Algorithms,” and “The Black Box Society” are recommended.

There’s a well-made point by Frank Pasquale, Professor of Law at the University of Maryland, about the importance (and understandability) of the training data vs. the algorithm too:

…even if machine learning processes are highly complex, we may still want to know what data was fed into the computational process. Presume as complex a credit scoring system as you want. I still want to know the data sets fed into it, and I don’t want health data in that set…
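
Pasquale’s point is about the data rather than the algorithm, and that part at least is something we can make explicit in code. Here is a hedged sketch (column names invented) of auditing and filtering the input schema before a training run; a real system would of course also need to worry about proxies for health status.

```python
import pandas as pd

EXCLUDED_CATEGORIES = {
    "health": ["diagnosis_code", "prescription_count", "hospital_visits"],
}

def audit_and_filter(df: pd.DataFrame) -> pd.DataFrame:
    """Log which columns go into the scoring model and drop excluded ones."""
    dropped = [c for cols in EXCLUDED_CATEGORIES.values() for c in cols if c in df.columns]
    kept = [c for c in df.columns if c not in dropped]
    print(f"Training columns: {kept}")
    print(f"Excluded columns: {dropped}")
    return df[kept]

if __name__ == "__main__":
    df = pd.DataFrame({
        "income": [42000, 55000],
        "tenure_months": [18, 60],
        "diagnosis_code": ["E11", "I10"],   # health data we never want in scope
    })
    filtered = audit_and_filter(df)
```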

Safety and beneficence of AGI and ASI

This section stresses the importance of a ‘safety mindset’ at all stages.

As AI systems become more capable, unanticipated or unintended behavior becomes increasingly dangerous, and retrofitting safety into these more generally capable and autonomous AI systems may be difficult. Small defects in AI architecture, training, or implementation, as well as mistaken assumptions, could have a very large impact when such systems are sufficiently capable.

The paper “Concrete problems in AI safety” (on The Morning Paper backlog) describes a range of possible failure modes.

Any AI system that is intended to ultimately have capabilities with the potential to do harm should be designed to avoid these issues pre-emptively. Retrofitting safety into future, more generally capable AI systems may be difficult:

As an example, consider the case of natural selection, which developed an intelligent “artifact” (brains) by simple hill-climbing search. Brains are quite difficult to understand, and “refactoring” a brain to be trustworthy when given large amounts of resources and unchecked power would be quite an engineering feat. Similarly, AI systems developed by pure brute force might be quite difficult to align.

Personal data and individual access control

This is the section most closely aligned with the GDPR, and at its heart is the problem of the asymmetry of data:

Our personal information fundamentally informs the systems driving modern society but our data is more of an asset to others than it is to us. The artificial intelligence and autonomous systems (AI/AS) driving the algorithmic economy have widespread access to our data, yet we remain isolated from gains we could obtain from the insights derived from our lives.

The call is for tools allowing every individual citizen control over their own data and how it is shared. There’s also a very interesting reminder about Western cultural norms here:

We realize the first version of The IEEE Global Initiative’s insights reflect largely Western views regarding personal data where prioritizing an individual may seem to overshadow the use of information as a communal resource. This issue is complex, as identity and personal information may pertain to single individuals, groups, or large societal data sets.

What is personal data? Any data that can be reasonably linked to an individual based on their unique physical, digital, or virtual identity. That includes device identifiers, MAC addresses, IP addresses, and cookies. Guidance on determining what constitutes personal data can be found in the U.K. Information Commissioner’s Office paper, “Determining what is personal data.”
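
Purely as an illustration (and certainly not a substitute for the ICO guidance), here is a tiny sketch that flags values matching the identifier types listed above: MAC addresses, IPv4 addresses, and cookie-like opaque tokens.

```python
import re

PATTERNS = {
    "mac_address": re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$"),
    "ipv4_address": re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"),
    "cookie_id": re.compile(r"^[A-Za-z0-9+/=_-]{16,}$"),   # crude: any long opaque token
}

def looks_like_personal_data(value: str):
    """Return the identifier categories (if any) that this value appears to match."""
    return [name for name, pattern in PATTERNS.items() if pattern.match(value)]

print(looks_like_personal_data("3D:F2:C9:A6:B3:4F"))  # ['mac_address']
print(looks_like_personal_data("192.168.0.12"))       # ['ipv4_address']
```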

As a tool for any organization regarding these issues, a good starting point is to apply the who, what, why and when test to the collection and storage of personal information:

  • Who requires access and for what duration?
  • What is the purpose of the access? Is it read, use and discard, or collect, use and store?
  • Why is the data required? To fulfil compliance? Lower risk? Because it is monetized? In order to provide a better service/experience?
  • When will it be collected, for how long will it be kept, and when will it be discarded, updated, re-authenticated…
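
One way an organization could operationalize this test is to require every collection of personal data to be declared up front as a structured record answering the who/what/why/when questions. A minimal sketch, with field names of my own invention:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class PersonalDataCollection:
    data_item: str                      # e.g. "email address"
    who_has_access: list                # Who requires access?
    access_duration: timedelta          # ...and for what duration?
    purpose: str                        # read, use and discard vs. collect, use and store
    why_required: str                   # compliance, risk, monetization, better service...
    retention_period: timedelta         # how long it will be kept
    reauthentication_period: timedelta  # when it must be re-authenticated or discarded

signup_email = PersonalDataCollection(
    data_item="email address",
    who_has_access=["account service", "support team"],
    access_duration=timedelta(days=365),
    purpose="collect, use and store (account recovery)",
    why_required="provide a better service/experience",
    retention_period=timedelta(days=730),
    reauthentication_period=timedelta(days=365),
)
```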

The report also points out how difficult informed consent can be. For example, “Data that appears trivial to share can be used to make inferences that an individual would not wish to share…”

Afterword

I’ve barely scratched the surface, but this post is getting too long already. One of the key takeaways is that this is a very complex area! I personally hold a fairly pessimistic view when it comes to hoping that the unrestrained forces of capitalism will lead to outcomes we desire. Therefore even though it may seem painful, some kind of stick (aka laws and regulations) does ultimately seem to be required. Will our children (or our children’s children) one day look back in horror at the wild west of personal data exploitation when everything that could be mined about a person was mined, exploited, packaged and sold with barely any restriction?

Let’s finish on a positive note though. It’s popular to worry about AI and Autonomous Systems without also remembering that they can be a force for tremendous good. As well as introducing unintended bias and discrimination, they can also be used to eliminate it in a way we could never achieve with human decision makers. An example I’ve been talking about here draws inspiration from the Adversarial Neural Cryptography paper of all things. There we get a strong hint that the adversarial network structure introduced with GANs can also be applied in other ways. Consider a network that learns an encoding of information about a person (but explicitly excluding, say, information about race and gender). Train it in conjunction with two other networks: one that learns to make the desired business predictions based on the learned representation, and one (the adversarial net) that attempts to predict race and gender from that same representation. When the adversarial net cannot do better than random chance, we have a pretty good idea that we’ve eliminated unintended bias from the system…
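
Here is a minimal sketch of that adversarial debiasing idea (my own illustration in PyTorch, not code from any of the papers mentioned): an encoder learns the representation, a task head makes the business prediction from it, and an adversary tries to recover a protected attribute; the encoder is trained to help the task head while making the adversary’s job as hard as possible. All shapes, losses, and hyperparameters are invented.

```python
import torch
import torch.nn as nn

encoder   = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
task_head = nn.Sequential(nn.Linear(8, 1))   # business prediction from the representation
adversary = nn.Sequential(nn.Linear(8, 1))   # tries to predict the protected attribute

bce = nn.BCEWithLogitsLoss()
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv  = torch.optim.Adam(adversary.parameters(), lr=1e-3)

lam = 1.0  # strength of the debiasing pressure

for step in range(1000):
    x = torch.randn(64, 32)                   # stand-in for per-person features
    y = torch.randint(0, 2, (64, 1)).float()  # business label
    a = torch.randint(0, 2, (64, 1)).float()  # protected attribute (e.g. a binary proxy)

    # 1) Train the adversary to predict the protected attribute from the representation.
    z = encoder(x).detach()
    adv_loss = bce(adversary(z), a)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train encoder + task head: do well on the task while maximizing the
    #    adversary's loss (i.e. minimize task loss minus the adversary's loss).
    z = encoder(x)
    main_loss = bce(task_head(z), y) - lam * bce(adversary(z), a)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```

Of course, on random data like this nothing meaningful is learned; the point is the training structure. When the adversary’s held-out accuracy stays near chance, the shared representation carries little usable information about the protected attribute, which is exactly the property we were after.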