We Will Not Be Replaced By Machines
In previous articles, we’ve discussed the importance of collaboration between the experts working in different fields of study. Real world problems don’t tend to fall neatly into the scope of one particular area of research — we need to learn to face problems together by not only sharing what we know, but also by being willing to listen — carefully — to the ideas of others. We need to remain open enough to admit that we don’t have all the answers and that someone else may have knowledge and perspectives that we don’t.
I think we can all agree that collaboration is a good thing and that our combined thinking is greater than the sum of its parts. Why, then, do we feel the need to take a different view when it comes to working with artificial intelligence?
Us versus them
Scientific papers constantly compare the performance of AI algorithms at solving some particular problem to that of a human expert. It is seen as a trophy to outperform the human, even if only in some narrow sense of the term. While this may be a useful benchmark for putting the achievement and significance of the research into context, I think it also insidiously sets up a kind of antagonistic relationship. It’s suddenly a competition. Us versus the machines.
Competition and eventual conflict between humans and their AI creations has successfully captured our society’s collective imagination. We’ve dramatised the idea to great effect in literature and film. It’s certainly thought-provoking material and poses a number of very good (and very difficult) ethical questions that we should most definitely be thinking about sooner rather than later.
But the other side of the coin is that this fascination has also contributed to a widespread view that AI is somehow against us: careless scientists blindly pursuing the progression of their research at the expense of our livelihoods. We are terrified of the day we are replaced by a robot at the very thing we have spent the best part of our lives learning how to do.
It’s easy to see why this horrifies us so. While we may complain when we feel overworked and undercompensated, there is some fundamental part of us that likes to work. We like having jobs and we like contributing. We want to feel valued by others. We derive self-worth from work. We derive identity from our work. In many ways, we are our work. If we feel that our work — and hence our selves — are being threatened, we dig in. We get defensive. It’s an assault on something we hold dear.
The prospect of being made redundant by an algorithm seems to come with the implication that what we do is no longer of worth, and this rightly disturbs us. We should not, however, let our imaginations get the better of us yet. Let’s check in with reality before we decide that the relentless march of AI research must be stopped at all costs.
The case for a human element
Humans will always retain a flexibility that a machine does not have. Admittedly, we are vastly outperformed when it comes to simple and repetitive decision-making — but the human ability to learn, adapt, and, more generally, react properly to novelty cannot be replicated with lines of code. Our successful exploratory behaviour in the presence of the unknown has, over the course of history, rendered us humans more powerful and more successful than any other creature on Earth.
In the presence of anomalous information, we can (provided we’re willing to face it) work out what it means and integrate what we need to know into our existing decision-making processes. While algorithms do “learn” in a sense, we’re nowhere close to developing anything like a general intelligence that can emulate what humans are able to do.
Essentially: for a long time to come, we’re going to be working alongside AI rather than in competition with it.
Our work will be enhanced by AI. With the boring, repetitive parts of our jobs handled for us, we’ll be able to focus our attention on the more interesting — and, more importantly, the more impactful — aspects of our work. Everyone has those things they know they should be working on: those important-but-not-urgent tasks that would be good for us and for our organisations if we did them, but that we just can’t seem to get around to. We get bogged down in all the less important — but ultimately more pressing — things.
Imagine if everyone in your organisation only worked on the important stuff. For, like, a year straight. How would things change in that time? How much more efficient would processes be? By how much would the quality of the work improve? And how much happier would your clients be? So much more would get done.
Or maybe AI does outperform you at a certain key aspect of your job. The next question is: what could you achieve if you worked in tandem with the machine? How could you compensate for each other’s weaknesses? Could the AI help you with particularly ambiguous situations? Could you cover off the machine’s failure modes?
Choosing not to be afraid
The point is this. Replace the fearful question:
“Will this technology replace us?”
…with the collaborative question:
“What could we achieve if we worked together?”
It’s not robots versus humans. It’s robots and humans versus problems. Big, important, head-scratching problems.
And the sooner we start thinking that way — from the perspective of collaboration with AI rather than that of confrontation — the more we are going to be able to achieve, and sooner.