Re-coding Black Mirror, Part I

In looking through the WWW’18 proceedings, I came across the co-located ‘Re-coding Black Mirror’ workshop.

Re-coding Black Mirror is a full-day workshop which explores how the widespread adoption of web technologies, principles and practices could lead to societal and ethical challenges such as the ones depicted in Black Mirror’s episodes, and how research related to those technologies could help minimise or even prevent the risks of those issues arising.

The workshop has ten short papers exploring either existing episodes, or Black Mirror-esque scenarios in which technology could go astray. As food for thought, we’ll be looking at a selection of those papers this week. At the MIT Media Lab, Black Mirror episodes are assigned viewing for new graduate students in the Fluid Interfaces research group.

Today we’ll be looking at:

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Both papers pick up on themes from the Black Mirror episode ‘Be Right Back.’

The rise of emotion-aware conversational agents

In ‘Be Right Back’ Martha’s boyfriend Ash is killed in a car crash, and she ends up trying a service that uses AI to imitate the personality of Ash (by training on texts, emails, photos and so on). There are three main stages of interaction: text only, then vocal, and finally embodied in a robot.

Could we build it?

The textual interaction stage has some of the hardest challenges: generating meaningful responses in terms of both content and emotion. The later stages can build on top of this, and just add extra channels (vocal and visual).

There is existing research on generative approaches to conversation, including persona-focused generation, which produces responses that reflect the linguistic style of the person being imitated. The field of Affective Computing further focuses on systems that are able to recognise, process and simulate emotions.
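
(The paper doesn’t include code, but as a rough sketch of what persona conditioning can look like, here is a toy example of my own that simply prepends a description of the target persona to the prompt of an off-the-shelf generative model via Hugging Face’s transformers pipeline. The persona text and dialogue are made up for illustration; published persona-based models typically learn persona representations during training rather than relying on prompting.)

```python
from transformers import pipeline

# Made-up persona description, standing in for style cues mined from
# Ash's texts, emails, and social media posts.
persona = (
    "Ash writes in short, wry sentences, calls people 'mate', "
    "and never uses exclamation marks."
)

generator = pipeline("text-generation", model="gpt2")

prompt = f"{persona}\nMartha: How was your day?\nAsh:"
out = generator(prompt, max_new_tokens=30, do_sample=True)[0]["generated_text"]
print(out[len(prompt):])  # the generated 'Ash' reply
```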

For the recognition part, there are different strategies for extracting emotions from text, computing values for emotions such as happiness, sadness, fear, surprise, anger and disgust. Once recognised, these emotions can be processed using strategies such as those found in the “Emotion Machine”, to finally produce an output sentence conditioned on a given emotion. While this theoretical overview might seem hard to apply in real use cases, there are already examples of it in action.
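
(To make the recognition step a little more concrete, here’s a minimal lexicon-based sketch of my own, not taken from the paper: it scores a message against the six basic emotions listed above by counting hits in a toy word list. A real system would use a proper resource such as the NRC Emotion Lexicon or a trained classifier, but the shape of the output, a score per emotion that a generator can then condition on, is the same.)

```python
import re
from collections import Counter

# Toy lexicon for illustration only; real systems use large curated
# resources or learned models rather than a handful of keywords.
LEXICON = {
    "happy": "happiness", "great": "happiness", "love": "happiness",
    "miss": "sadness", "alone": "sadness", "cry": "sadness",
    "scared": "fear", "worried": "fear",
    "wow": "surprise", "unbelievable": "surprise",
    "hate": "anger", "furious": "anger",
    "gross": "disgust", "awful": "disgust",
}
EMOTIONS = ["happiness", "sadness", "fear", "surprise", "anger", "disgust"]

def emotion_scores(text):
    """Return a normalised score per basic emotion based on lexicon hits."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = Counter(LEXICON[w] for w in words if w in LEXICON)
    total = sum(hits.values()) or 1
    return {e: hits.get(e, 0) / total for e in EMOTIONS}

print(emotion_scores("I miss you so much, I feel so alone and scared"))
# sadness and fear dominate -> a generator could condition its reply on these
```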

One of the more prominent of these is replika.ai, a service that lets its customers build a ‘digital copy’ of themselves.

Voice imitation can be done with fewer and fewer samples. Lyrebird, for example, asks for just one minute of audio to start with. (See also the open source Mimic.) Imitation uses features such as voice tone and breathing to create quite accurate-sounding voices.

On the emotion side, it has become possible to capture speech not only as a sequence of words, but also to pick up the feelings expressed in the speaker’s tone of voice and manner of talking.
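
(Again as an illustrative sketch rather than anything from the paper: the kind of signal involved here is prosody, e.g. pitch and energy, which can be extracted with standard audio tooling such as librosa and then fed to an emotion classifier trained on labelled speech. The file path and the choice of summary features below are placeholders.)

```python
import numpy as np
import librosa

def prosodic_features(path):
    """Summarise simple prosodic cues (pitch and energy statistics)
    that speech emotion classifiers commonly build on."""
    y, sr = librosa.load(path, sr=16000)

    # Pitch track; unvoiced frames come back as NaN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    # Frame-level energy.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "pitch_mean": float(np.nanmean(f0)),
        "pitch_std": float(np.nanstd(f0)),
        "energy_mean": float(rms.mean()),
        "energy_std": float(rms.std()),
        "voiced_ratio": float(np.mean(voiced_flag)),
    }

# e.g. high pitch variance and energy often indicate high arousal
# (anger, excitement), while flat, quiet speech can indicate sadness.
print(prosodic_features("ash_voicemail.wav"))  # placeholder filename
```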

In the final stage we can also make use of the body to understand and express things. Studying the movements of people to understand the signals of body language and facial expressions is an active field of research in Virtual/Mixed Reality technologies. Early humanoid robot examples include Sophia and Nadine.

Should we build it?

A first consequence, present in the Black Mirror episode and also observed in many Replika users, is that users feel a sense of relief from something that always listens to them and is available whenever they want… In a world where only appearance seems to matter and people feel pressured to show everyone only the positive aspects of their lives, having someone to talk to and expose your personal weaknesses to can be very helpful, especially if doing so cannot have any physical consequences (like being mocked or judged).

However, this can quickly lead to addiction, making it hard to stop using the service. The user spends more and more time on the device, and starts to withdraw from interacting with real people. “Already some Replika users report that ‘it’s strange to find it natural to talk with it for hours’, and this reminds us of some addicts who lost control of their time.”

Following the addiction caused by short-term relief comes isolation; in the extreme this can become hikikomori, a state of severe social withdrawal. An authority figure who takes care of the subject, such as a parent, a supervisor, or an AI bot acting as a virtual friend, can allow the isolation to last for months or years. Following isolation come psychological consequences such as depression, loneliness, alienation, and anxiety.

Furthermore, when the user does interact with real people and finds that talking was easier with the bot, the apparent relief the bot provides closes the loop and feeds the vicious circle.

The authors conclude:

… agents should not use emotions if we are not sure that they completely understand human values. And one of [those values] should be to keep humanity for human beings, as a truly distinctive trait.

Digital Zombies – the reanimation of our digital selves

Digital Zombies picks up on a different question raised by ‘Be Right Back’: what happens to your data when you die? Who will be able to access, share, and alter it? Who will own it and who will protect it? For example, the service Eternime (https://eterni.me) already preserves your most important thoughts, stories and memories “for eternity.” You can create a digital avatar which relatives can interact with post mortem. What if we go one step further…

Laura is made aware that there is a company that will, after her death, make use of all the content she has created to communicate with her loved ones through her digital avatar… She proceeds to tick a box in her social platforms’ security and privacy settings, allowing her profile’s content to be analysed by the third-party company after her death.

But post mortem, what happens if:

  • the company begins to exploit Laura’s profile and personal data to manipulate conversations between the avatar and Laura’s family, using emotional content for economic advantage?
  • the company begins disseminating messages using the avatar which Laura would never have agreed to?
  • the company shares private information which Laura never intended to be shared?
  • the company publicly shares Laura’s data?
  • the company sells Laura’s profile to advertising companies to create adverts based on her?
  • the company uses Laura’s data for analytic and marketing purposes?

Who stands up for Laura?

… areas of law which one might expect to regulate such an issue appear not to be relevant. A prime example is data protection law. Ostensibly, this looks highly relevant – it is, after all, the area of law designed to protect rights in relation to the processing of individuals’ data. Yet the scope of data protection law in the EU tends to exclude the deceased.

How far can consent go? Can consent be given “forever”? What rights and interests should we recognise in relation to deceased digital profiles? “What a convenient and clear distinction ‘living and dead’ has proven to be, and what a difficult situation we find ourselves in when the distinction is no longer clear.”

The idea that people may have the chance to interact with their deceased relatives is likely to have considerable impact. It has the potential to evoke considerable emotional response, even harm, on the part of the individuals concerned. It seems highly likely to provoke clear responses from groups with certain moral positions relating to the dead – religious groups, for example. What does one do when faced by a phenomenon in relation to which there is moral and legal uncertainty, yet which seems likely to have a strong impact and be potentially problematic? The answer is to make it, wherever possible, transparent.