In this wonderful world of technology, we’ve made up lots of words. That’s natural, since we’ve made up lots of new things as well. But sometimes we make up a word and we use it over and over again, even though the word doesn’t really describe the thing that we’re talking about. It reminds me of one of my favourite movie moments, from The Princess Bride: “You keep using that word. I do not think it means what you think it means.”
A few of the worst offenders are ‘intelligence’ and its companion prefix, ‘self-’. Also ‘smart,’ as Simon wrote the other day. Some of you are probably feeling rather indignant right now: what’s wrong with these words? They’re just words, and this stuff is pretty f*cking clever after all.
Well, yes. But.
The problem with these words is that they set an expectation that technology cannot yet hope to meet (and when it can meet it, there will be a whole new set of problems): when we hear terms like ‘artificial intelligence’ or ‘self-driving car,’ we can’t help but think that the thing, whatever it is, is comparable to human intelligence. Which it is not. I’m not saying that there aren’t moments when the stuff Siri or Google Now comes up with seems absolutely magical, but there’s a very big gap between deducing that the place you go to every day around 9am and leave every day around 6pm is work, and negotiating the chaos of rush-hour traffic.
Our brains are not software. Our lives may generate data, but they are not made up of it. Our lives are made up of thousands of little decisions we make every day, many of which we make in ways that neither we ourselves nor science can fully describe. So, unsurprisingly, we have not yet managed to make software that thinks like humans do – Douglas Hofstadter has been working on this for more than 40 years. Turns out it’s a lot more complicated than expected.
What we call AI these days is brute force pattern recognition: we program a bit of software to look for certain behaviours and patterns, and respond in a certain way. The cleverer bits of software ‘learn’ based on response. Pretty much all of them get things wrong, even simple things: a friend of mine asked Siri to make a measurement conversion for a big Sunday afternoon waffle party she was throwing, and wound up with dozens of kilos of leftover flour because the conversion was wildly wrong. Another friend’s Android phone thinks he lives at the grocery store because his routine is to go there every day when he finishes work at his home office. Yes, these things are improving all the time, but they’re still nowhere near being human.
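To make that concrete, here’s a minimal sketch in Python of what rule-based pattern matching plus a dash of feedback ‘learning’ looks like. The rules, thresholds and labels are all invented for illustration; this is emphatically not how Siri or Google Now work under the hood.

```python
# A toy illustration of rule-based "AI": look for a pattern, respond,
# and nudge a confidence score when the user corrects the guess.
# The rules, thresholds and labels are invented for this sketch.

def guess_place_label(visits):
    """visits: list of (hour_arrived, hour_left) tuples for one location."""
    nine_to_six = sum(1 for a, b in visits if 8 <= a <= 10 and 17 <= b <= 19)
    if nine_to_six >= 0.6 * len(visits):
        return "work"      # looks like an office-hours pattern
    if all(b - a <= 1 for a, b in visits):
        return "errand"    # only ever short visits
    return "unknown"       # the script has run out of rules

confidence = {"work": 0.5, "errand": 0.5}

def learn_from_feedback(label, was_correct):
    """The 'cleverer' part: adjust confidence when the user corrects us."""
    step = 0.1 if was_correct else -0.1
    confidence[label] = min(1.0, max(0.0, confidence[label] + step))

# Any routine that happens to fit a rule's window gets that label,
# which is roughly how a phone decides you live at the grocery store.
print(guess_place_label([(9, 18), (9, 17), (10, 18)]))   # -> work
learn_from_feedback("work", was_correct=False)            # user says: wrong
```

The point of the toy is the failure mode: the rule fires on anything that fits its window, and the ‘learning’ is just a number being nudged up or down. There’s no understanding anywhere in it.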
That’s why terms like ‘self-driving cars’ are so problematic. When you read that, the automatic expectation is that the car will drive itself the way a person would. The truth isn’t quite so straightforward. Mark Rigley has given talks about this, and we’ve discussed it at length: the self-driving car, at least for the next several years, is going to be more like a horse than like a chauffeur. Mostly it will know what it’s doing, but you’ve got to stay alert, because chances are good that you’ll have to intervene at some point. It may be true that if every car on the road, or even most of them, were self-driving, things would work out a lot better, but there’s still far too much margin for error to think that a car could drive itself without any attention from the driver. Think about how often Siri has gotten something really simple really wrong for you, and then think about that level of wrongness happening with a two-ton moving vehicle. That doesn’t mean we shouldn’t be exploring self-driving vehicles; it just means we need to be careful about how we design them so as not to set false expectations.
So you can imagine my dismay when I read about Molly, Microsoft’s new telenurse. I’m not saying technology can’t help in a medical context – it can certainly help to cut costs and bring a level of confidence to people who live far away from their primary care providers. But here again, managing expectations is critical.
The NHS has had an online symptom-checker for years, and while it’s not the most beautiful thing in the world, it works. When I use it, I do not expect a few lines of text on a page to be able to perceive what’s wrong with me. I am the one making judgments about my condition and simply giving information to the machine. When the machine no longer has an answer it’s certain of, it advises me to call a doctor. I’m going to go ahead and assume that Molly, too, will have a relatively fixed script, but even then, she’s a whole other matter.
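As a point of comparison, the fixed-script, escalate-when-unsure pattern I’m describing could look something like the Python sketch below. The questions and thresholds are hypothetical, not taken from the NHS checker or from Molly; the design point is the explicit fall-through to ‘call a doctor’ once the script runs out of answers it’s certain of.

```python
# A hypothetical fixed-script symptom checker. The person supplies the
# facts; the script walks a short list of rules; anything it can't
# answer with certainty is handed back to a human.

def triage(answers):
    """answers: dict of responses keyed by (made-up) question id."""
    if answers.get("difficulty_breathing"):
        return "Call 999 now."
    if answers.get("fever") and answers.get("fever_days", 0) >= 3:
        return "Book an appointment with your GP."
    if answers.get("fever"):
        return "Rest, drink fluids and check your symptoms again tomorrow."
    # No rule matched: don't guess, escalate.
    return "We can't assess this online. Please call your doctor."

print(triage({"fever": True, "fever_days": 4}))  # -> see your GP
print(triage({"headache": True}))                # -> call your doctor
```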
As soon as I’m confronted with a humanoid face, something happens deep inside my brain. The face elicits a set of subconscious responses that leads to all kinds of expectations – largely that the thing I’m looking at is going to be like me in some deeper way. The same cognitive response that causes the Uncanny Valley phenomenon could well lead many people to expect that Molly is able to see things the way that a real person would. The human face sets an expectation that text on a page does not.
Coming full circle, the issue here isn’t the technology (because really, I love technology) but rather the expectations we have of it. We seem to be spending a whole lot of time and resources outsourcing decision-making to algorithms. Mostly at a small scale (SatNav, Gmail deciding which of your emails are most important), but increasingly at a larger and larger one. This is (part of) what Simon was on about, and I agree with him: when we trust our technology to be smarter and smarter, we get dumber and dumber. But when we place expectations on our technology that are beyond its capacity to deliver, we could be putting ourselves in real physical danger.
Unsurprisingly, I think this all comes down to design – keeping an open dialogue, being honest with ourselves and our audiences about what technology really can do and how far it can go, signalling appropriately and learning as we go.