Will disdain for ‘friendly’ artificial intelligence affect how we interact with people?
Fri 6 Nov 2015
In The Hitch-Hiker’s Guide To The Galaxy the late comedic science-fiction author Douglas Adams presaged by many decades the problems that people might have relating to machines that try to ‘like’ them, or otherwise engage with people on a social level. In that work, which was adapted to every form of media except Sellotape, the marketing division of the Sirius Cybernetics Corporation envisaged a robot as ‘Your plastic pal who’s fun to be with’, and went on to manufacture mechanical social nightmares with super-Prozacked personalities, determined to share joy, happiness and optimism in the face of consumers who wanted to reserve their psyches and emotional responses for their meatware chums.
Where Adams’ prediction fell down was in failing to understand how deeply we respond to social cues, even when we don’t really care. You don’t need to be talking to a machine in order to be expressing more interest and politeness than you genuinely feel, and often enough two actual people will engage at length in behaviour which doesn’t necessarily reflect their inner feelings. We’re raised into a spirit of forbearance, more or less, and it would be awfully hard to get along without it when there is so little about the world, and the people in it, that is within our direct control.
Hitch-Hiker’s 1970s stablemate Star Wars made the same presumption about how we are likely to feel towards anthropomorphic machines. While the cast of the original trilogy were getting ‘all broke up’ about the tribulations and fate of the voiceless and magnificent Millennium Falcon, the robotic ‘fools’ – who would eventually thread through all nine movies – never got anywhere near the same level of attention: R2-D2, whose smashed-up condition was mourned by C-3PO after the attack on the Death Star in A New Hope (1977), barely got a ‘He’ll be all right’ from Luke Skywalker, eager to get to the party and chat up his sister; C-3PO’s dismemberment was likewise a source of little angst for our heroes in The Empire Strikes Back (1980), except for the similarly marginalised Chewbacca; and R2’s near-complete destruction while trying to unlock a door in Return of the Jedi (1983) was seen by Han Solo only as a hindrance to his mission.
Perhaps if the Falcon’s on-board computer had articulated itself in the same irritating way that the Hitch-Hiker’s equivalent Eddie did, the reverence wouldn’t have been so deep.
Stanley Kubrick and Arthur C. Clarke presented a far more accurate portrayal of how people would eventually interact with an articulate artificial intelligence. In 2001: A Space Odyssey (1968) we see William Sylvester thanking an automated security system early in the movie, and astronauts Keir Dullea and Gary Lockwood later being polite to the high-level artificial intelligence in charge of their spaceship – well before it exhibited homicidal tendencies.
Rage against the machine
My two-stage commute home deposits me outside three different branches of Sainsbury’s Local, so inevitably I end up going in there more often than I would like. It’s not my favourite food-store chain, but the real reason I try to avoid it is the need to engage with scripted responses – from both the meatware and the hardware. The sales assistants are obliged by their job to ask me, every single time I make a purchase, whether I have a Nectar loyalty card – even if they have sold me items hundreds of times and know that I don’t. Perhaps today will be the special day that I do have one? It is annoying to have a human being ask you the same question over and over again, year in and year out, though they know the answer – even if you know that they’re compelled to, and can abstractly pity them that thankless task.
When possible I check my items out at the auto-tills, which seem to have been designed and programmed by the Sirius Cybernetics Corporation after a particularly jubilant scrum, but at least have the innocence of their programming to excuse them. With quick actions I can keep the machine’s verbosity to a minimum; but if I want to avoid being thanked by it, I am going to need to shop at branch #2 of my journey, the only one where the machines are really near the exit.
I agree with George Carlin that one should question whether there is any such thing as ‘corporate sentiment’, and like most people I find it difficult to respond positively to insincere solicitations, flattery or hollow platitudes. So whenever the auto-till attempts to thank me for Shopping At Sainsbury’s!, I really do feel like telling it ‘Go stick your head in a pig’.
And it’s a machine, so what does it matter? It’s not like I am even saying it out loud. Even if I did, we’re talking about a scripted processing robot, not a genuine artificial intelligence. And even if it were the latter, would it have any ‘feelings’ to hurt? None of this matters, because the machines are not the issue; it’s us, and how interacting with them might affect us in the long-term.
Empathy for the artificial
If you head down to southern Italy, as I did for some years, or even just watch a few Godfather movies, you’ll see the problem of selective empathy within the social conventions of the Camorra; killers who kiss their children yet shoot their enemies’ children with scant compunction. Or marvel that Heinrich Himmler’s wife considered the architect of the Final Solution as ‘the decent one’ among the leaders of the National Socialist Party, and at the heartfelt love letters he wrote her between his administrative duties in the death camps.
These are extreme examples of a universal, fractal principle: some consistency in how we behave towards those in whom we are invested must extend a reasonable way into the social spheres we pass through – must be a base thread in our relations in the world.
The reverse can happen as well, when our social concern is directed towards the distant at the expense of those near to us – typified by the character of Mrs Jellyby in Dickens’ Bleak House, campaigning for African ‘misfortunates’ while neglecting her own huge brood.
But mostly the problem of empathy-failure, or sociopathy, is one of failure to relate to another person’s point of view, because that person is unimportant to you for any number of reasons – outside of your immediate circle of care.
I wonder whether, if I do not begin to engage emotionally with machines that are becoming increasingly intelligent and articulate – Cortana, Siri, and the growing number of forthcoming AIs with which I will have to deal – I might slowly develop a kind of institutional disconnect from the process of communication per se. The guiding intelligence with whom I am in colloquy may not be a person, but it is an ‘entity’, and in that sense falls into the same class as many people I might meet, about whom I might be just as ignorant and ill-informed. Can I really afford not to treat machines as people, since talking to them may influence how I relate socially in general?
Should the machine rage back?
So these days machine-love manifests, in the media, as a complete reversal of the indifference to AI foreseen by Douglas Adams and George Lucas. Spike Jonze imagined a very deep love between a lonely Joaquin Phoenix and an intellectually unfaithful AI in Her (2013), while Domhnall Gleeson likewise fell to the sleek and delicate sensibilities of a machine in Ex Machina (2015). In Robot & Frank (2012) Frank Langella developed a genuine connection with a ‘care-droid’, despite its sensible insistence that his attachment was mere ‘projection’ on his part. We are a society undergoing a new and deep attachment to technology because of the meaningful areas of our life which it facilitates and enables. When artificial voices speak out from it, it is already ‘personal’.
Notable in the three movies mentioned above was the unusual tendency of the AIs featured to respond negatively or ‘become uncomfortable’ in response to behaviour or conversation which would have elicited the same response in most real people, something which is only currently being experimented with in various competitions related to the Turing test. But rather than being just another psychological parlour trick in computer science labs, negative machine feedback could be important to us psychologically.
We treat other people with base consideration and at least notional interest partly because we were raised that way, but most deeply because in our formative years we had some very negative feedback to the inconsiderate or anti-social behaviour which is pretty much the native soil of any teenager. We learned to behave better because there were undesirable consequences to behaving badly.
Probably the very last thing we need in the forthcoming generation of artificial intelligences is endless forgiveness – our empathic responses were hard work to build up, particularly those that apply to people beyond our own immediate circle of care or sphere of interest. Maybe the next time I throw a dirty look at the auto-till in Sainsbury’s, it should tell me to fuck off, or burst into tears. Another twenty years of disdain for the scripted responses of commercial technologies, and I fear I might fail a Voight-Kampff test myself. Maybe if I can’t love the machine, I’ll forget how to love at all.