A new Deloitte University Press piece on Artificial Intelligence technologies is being widely read online. Like many other articles in recent months, it takes a somewhat anodyne approach to the possibility of major disruption from AI tech, citing ‘history’.
The article caused a few thoughts to coalesce around the issue of AI (and related technologies), which is insistently demanding attention everywhere.
Stepping away, for a moment, from people some may dismiss as bloviators on either side of the debate (although describing individuals as clued-in as Stephen Hawking or Elon Musk that way poses some difficulty), here are a few brief questions of concern.
- Why is a management thought culture otherwise suffused with the zephyrs of ‘disruption’, from Clayton Christensen down (if you opt to ignore Jill Lepore and place Prof. Clay in that lofty position), so averse to considering the possibility of unprecedented disruption from AI-driven automation, not merely in employment, but in the larger social context in which business is nourished? Joseph Heller might have reminded us to stop being like those modernist portraits with eyes stuck on only one side of the face.
- Is it unreasonable that the metronomic citing of the purportedly benign history of ‘similar’ automation, by many business professionals otherwise devoted to breaking free of historical shackles, stirs a mote of cynicism?
- The other bromide that gets bandied about is the phrase ‘in the near future’, as in ‘automation of many complex jobs is unlikely to happen in the near future’. But how far away is the ‘medium-term’ or ‘long-term’ future? About 15 to 20 years, if one averages across the various future-gazing articles. That seems to leave the people we vaguely imagine as ‘running the world’ only about three or four electoral cycles to start giving serious thought to the potential social problems, and to how they might be mitigated. Would it be circumspect to set any limits on AI-driven automation? If joblessness is likely to be rife, is an assured basic income for everyone up for discussion? None of this will be easy, judging by the presently tangled climate-policy postures of most politicians. The vast majority of them are vote-world professionals qualified in law or PPE in a tech-saturated world, and, as in the lead-up to the World Wars, they may be failing to comprehend the vast, latent noxiousness of certain areas of human endeavour.
- So what sets Artificial Intelligence technologies apart from the other automation revolutions of the past 200-odd years? Unambiguously: plasticity. AI systems are evolving towards the wonderful taffy-like plasticity of the human mind itself, which, with adequate training, can do any job in the world. This means that no area of human endeavour will remain free of supra-human achievement by AI-like systems; only the time-frame is in dispute. And even a lesser, sub-AI plasticity would still almost certainly have an enormous dislocating impact on society.
- Yes, I can already hear the familiar refrain: “But automation will free up our time for indulging in creative pursuits!” But what satisfaction, Sir/Madam, would those ‘creative pursuits’ provide when we see AI algorithms producing artistic or literary output far more beautiful than anything we could ever manage, and tailored to our ‘individual tastes’? Unlikely to do our egos much good, and a possible recipe for a slow, collective pickling of the human brain?
Perhaps not, but we should at least take a good, hard look at the chances of such an unhinging scenario and explore ways to deflect it.
As Jacob Bronowski once memorably said, paraphrasing Oliver Cromwell: when the stakes are so high, we should at least consider the possibility that we may be wrong.