If you've read the many predictions about the future of AI, you've likely found them to be wildly different. They range from AI spelling doom for humanity, to AI ushering in a Golden Age of peace, harmony and culture, to AI producing barely a blip on society's path toward ever-greater technological achievement.
Those three views (dystopian, utopian and organic) present issues we need to consider as we move toward an AI-integrated future. Yet they also contain exaggerations and false assumptions that we need to separate from reality.
The Dystopian View of the AI Future
Those with a dystopian view of emerging technologies point to studies such as the often-quoted 2013 Oxford report[i] on the susceptibility of more than 700 job categories to automation. This report predicts that 47% of jobs are under threat of automation.
Other predictions are even more dire, projecting up to 97% future unemployment as a result of AI. All these studies focus on the tasks within jobs that AI could do. Only by assuming that any job containing any task AI could perform will be eliminated entirely do those with dystopian views arrive at such frightening job-loss numbers.
The world that those with dystopian views of AI envisage features all power consolidated into the hands of a minuscule class of the super-rich, who have seized control of AI and forced the remainder of society into impoverished servitude. It pictures this elite as enjoying untold riches and lives of ease.
A second form of the dystopian view advances the scenario to positively apocalyptic status. It suggests that AI will eventually evolve to surpass humankind's ability in every way, itself becoming the ruling elite that either enslaves or exterminates all humans as inferior and obsolete. Aside from the obvious sci-fi overtones of this view, the idea of such an evolution of AI relies on assumptions about AI's capabilities that we will examine more closely later in this chapter.
False assumptions in dystopian views
For now, let's focus on the idea that massive job losses will create a super-rich elite that forces the vast majority of humanity into poverty. The problem with this view is that it ignores the fact that such an insulated elite is unsustainable. Without a viable market to which it could sell its goods or services, such a minuscule upper class would have no source of income to fuel its ongoing wealth. It would ultimately collapse upon itself.
While it might be possible to counter this view by pointing out that the behavior of both individuals and groups is rarely as rational as economists have traditionally believed it to be, it is also true that a society in which most people have no power to buy goods cannot sustain itself. Dystopian sci-fi novelists may enjoy portraying a world in which a small ruling elite hoards all the value and all the power while 99% of society lives in poverty and servitude, but the fact is that if the non-elite has no purchasing power, then the elite's advantage disappears.
As for the idea of near-universal job loss, AI professor Toby Walsh tempers such predictions with two examples:
[W]e can pretty much automate the job of an airline pilot today. Indeed, most of the time, a computer is flying your plane. But society is likely to continue to demand the reassurance of having a pilot on board even if they are just reading their iPad most of the time.
As a second example, the Oxford report gives a 94% chance for bicycle repairer to be automated. But it is likely to be very expensive and difficult to automate this job, and therefore uneconomic to do so.[ii]
In other words, Walsh suggests in the first example that humans will, for the foreseeable future, feel more comfortable knowing that some jobs are being done by other humans, even if those doing them merely oversee the automated systems to ensure that they operate properly. And in the second example, he suggests that the fact that a job could be automated does not mean that it always will be.
Walsh also mentions that the Oxford report gives a 63% chance of the jobs of geoscientists being automated, but he claims that any such automation would only offer geoscientists the opportunity to do more geoscience and less administrative work. He supports his statement with predictions from the U.S. Department of Labor that the number of geoscientists will increase by 10% over the next decade due to increased demand for people with the skills to find more of the earth's resources as known deposits diminish.
A similar trend can be observed from the introduction in 1985 of Aldus PageMaker (later Adobe PageMaker). The program was forecast at the time to put layout designers and typesetters out of work and, indeed, it did so to a considerable extent, but it also had two counterbalancing effects:
- People in both of those jobs had much more time to try different layouts, and so the quality and appearance of printed matter improved; and
- It became possible for people whose primary occupation was in some other field to produce high quality printed material of their own without having to subcontract it. Costs fell at the same time as quality improved.
The main weakness in the Oxford report and other similar predictions of massive job losses is the methodology behind them. The approach has been to assume that if a job contains any tasks that could be automated, that whole job would cease to exist. Only by using that assumption can you arrive at such massive job loss figures.
A 2017 McKinsey report,[iii] on the other hand, suggests that fewer than 5% of current occupations are candidates for full automation. That figure is far more realistic than 47% to 97%. It does not follow, though, that disruption of current occupations will be that limited.
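To see how much that single counting assumption matters, consider a minimal sketch in Python with invented task-level probabilities (illustrative numbers only, not figures from the Oxford or McKinsey reports). Counting a job as doomed whenever any one of its tasks clears an automation threshold flags nearly every job; counting it only when all of its tasks clear the threshold flags almost none.

    # Toy illustration with invented numbers: the counting rule, not the
    # underlying data, drives the headline job-loss figure.
    jobs = {
        "loan officer":     [0.9, 0.8, 0.3, 0.2],  # task-level automation probabilities
        "bicycle repairer": [0.9, 0.4, 0.2, 0.1],
        "geoscientist":     [0.8, 0.6, 0.2, 0.1],
        "teacher":          [0.8, 0.2, 0.1, 0.1],
    }
    THRESHOLD = 0.7  # treat a task as "automatable" above this probability

    def share_at_risk(rule):
        flagged = [name for name, tasks in jobs.items() if rule(tasks)]
        return len(flagged) / len(jobs), flagged

    # Dystopian counting: the whole job is "lost" if ANY task is automatable.
    print(share_at_risk(lambda tasks: any(p > THRESHOLD for p in tasks)))  # flags 100% of these jobs
    # Full-automation counting: the job is "lost" only if ALL tasks are automatable.
    print(share_at_risk(lambda tasks: all(p > THRESHOLD for p in tasks)))  # flags none of them

The real reports use far more careful task weightings, of course; the point is only that the aggregation rule can swing the result from almost no jobs at risk to almost all of them.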
What bears consideration in dystopian views
Despite the evidence that the dire conclusions of the dystopian view are overblown, it would be irresponsible to dismiss the issues it raises. Some of its points, although taken to extremes, remain valid.
There will be job losses, even if they are not as extreme as those with a dystopian view claim. We'll examine that in more detail later in this chapter. Also valid is the warning against rushing into the new technologies without adequate forethought about their possible side effects.
AI will produce a significant disruption to society, one that must be thoughtfully planned for to reduce the negative effects it will inevitably produce. The more care we put into planning the direction of AI in both our industry's future and our personal futures, the better we will be able to limit the disruption and keep it from coming anywhere near the doom and gloom predicted in the dystopian view.
The Utopian View of the AI Future
The second popular view of AI has it leading humanity into a utopian future. Those who take this view accept the figures of near-universal job loss as not only true but also a cause for celebration. They picture a society in which AI frees humankind from the need to work for a living, permitting humanity to pursue the advancement of altruism and culture.
The world they envisage is one in which all work is done by AI-controlled automation. Rather than this leading to poverty for those who no longer have jobs, the utopian view sees it as a boon. With no one needing to be paid to produce the world's goods, the profits from goods produced without human input could be distributed equally to all people as a Universal Basic Income (UBI).
This UBI would provide for everyone's basic needs and free them to devote their lives to the betterment of society. The idea assumes that those freed from working for a living would use their time to volunteer to help others or to pursue artistic excellence, thus enhancing civilization. A cooler assessment of these ideas is that those things may happen, and people may be freer than they are today to be altruistic and creative. Whether they turn those opportunities into reality will be a matter of personal choice.
Be that as it may, UBI would, in the short term, eliminate extreme poverty. People who had been poor would enter the middle class, gain access to education, be able to start businesses and find the doors to creativity opened to them. That is not the same as eliminating class. There would still be rich people, and people would still be motivated to accumulate riches as a means of self-validation, but the vast majority, freed from the specter of poverty, could focus on self-improvement, which would itself indirectly improve society.
False assumptions in utopian views
The utopian view of AI bringing worldwide prosperity, peace and harmony rehashes the age-old fantasy that each new form of technology will be the catalyst that enables humankind to overcome its baser nature and evolve into fully actualized human beings. At their inception, radio, television, computers, cable TV and the internet were each trumpeted as technologies that would bring enhanced communication and greater understanding between people, or increased love of the arts and culture. Yet, somewhere along the way, each of them failed to deliver those lofty promises. Humankind's baser nature has always co-opted those technologies to serve the lowest common denominator.
Rather than leading to greater understanding of others, they have often become vehicles that help people isolate themselves even further and reaffirm their tendency toward self-absorption, insensitivity, anger and even violence. The question that needs to be asked is: Given how we know people to be, how many of them, if released from the need to work for a living, would respond by seeking ways to better society? Even those who are convinced that they would seek society's greater good would likely find it hard to agree that the masses would spontaneously do the same.
Nor is it easy to express great confidence that they would. It may seem out of place in a book about leading-edge technology to look back at the first emergence of the animal we call the human being, but it is necessary. What that look back into history tells us is not that we humans are out for ourselves alone, nor that we seek the greater benefit of all mankind. What it tells us is that tribalism is at the very center of what it means to be human.
Back in those far-off days, when humans took shelter in caves, they did so in small groups. And while it would be nice to believe that they engaged in collaborative berry-gathering and root-digging while chatting about young Cecil's prowess at painting the walls of the cave, it almost certainly wasn't like that. Charles Darwin told us that all of nature progresses by natural selection, the survival of the fittest, and there is hard evidence to suggest that meat consumption was behind the growth of the human brain. To eat meat, those cavemen had to kill animals. And killing an animal (while avoiding being killed by the animal) was a group activity. One person alone facing a sabre-toothed tiger unarmed would be more likely to become the tiger's dinner than the other way round. They had to hunt in groups. Family groups, kinship groups, but always tight groups.
And when they'd killed their animal, they had to get it home. Back to the cave so that the whole group could eat it. If they saw another group while they were dragging their prey home, the assumption would be that that group was out to steal their dinner. And they would resist, with as much violence as was required.
We can be fairly sure that that is where humanity's tribalism first emerged. The need to cooperate with your own group and combat every other group, just in order to survive.
In Africa, where all this started, that business of kinship continued to hold sway right up until the arrival of slavers and colonists. Of all the damaging things that have been done to Africa, the worst is probably imposing territorialism in the form of countries on what for millennia had been arrangements of extended families.
But that is, perhaps, to stray too far from the subject at hand. All we really need to tell ourselves is that humans are not well equipped to deal in an even-handed and fair way with other humans. We can't change that, but we can take it into account as we implement the Fourth Industrial Revolution.
Can AI really surpass human capabilities in the future?
In part for the reasons set out above, AI is not likely to push humankind to a more highly evolved level any more than any of those other technologies did. Not only that, but AI, contrary to the claims of many proponents of both the dystopian and utopian views, remains far from showing the ability to fully match humankind's capabilities that both views presuppose.
Those who believe that AI will eventually surpass human intellectual capability look only at AI's ability to speedily process and analyze data. They picture AI's ability to learn from the data it processes as the only element involved in human intelligence. In doing so, they overlook the essential distinction between AI and the human brain.
Any AI system is essentially what we would call, among humans, a savant: someone who possesses far more advanced mental ability in a tightly limited sphere of expertise at the expense of diminished ability in all other areas. Like a savant, an AI system is designed for a single purpose or a limited set of purposes.
AI systems can retrieve and use the information stored in them more quickly than human brains can, enabling them to surpass the ability of grand masters in games like chess or Go that are based on structured rules and probabilities. They fall woefully short of human capability, though, when it comes to applying knowledge of one task to a task that lies outside the scope of their programming.
The human brain, on the other hand, is capable of successfully using its experiences and understanding across an almost unlimited set of situations. By virtue of its multi-use capability, the brain is far more capable of connecting unrelated ideas into a new creation (intuitive leaps of understanding) than an AI system is.
A 150-ton supercomputer can process 93 trillion operations per second; the human brain can process 1 million trillion, staggeringly more. An AI system can be programmed to process and learn from a defined set of data; the human brain not only processes and learns from whatever limited data it is given but intuitively incorporates all the data to which it is exposed, with no limits on the kind or variety of data that enters a person's sensory range.
Even in storage capacity, an area that AI proponents frequently cite as proof of AI's superiority to the brain, the comparison is not as clear-cut as they suggest. Estimates of how much data the brain can store run to the equivalent of 2.5 million billion gigabytes. Granted, an AI system is far quicker at retrieving data than the brain is, but that speed advantage is offset by two other significant advantages that the brain has:
- The data that the brain stores is far richer than what a digital system stores. It can include any sights, sounds, sensations, smells or emotions related to a piece of data, and the tools to creatively reshape and connect them in different forms.
- The brain, having access to such an enormous and rich store of memories, data and current sensory input, and the ability to manipulate those elements creatively and intuitively, has an auto-focus feature that locks onto the information most relevant to the current situation and limits the conscious mind's focus to what matters at the moment. It pushes data irrelevant to the current situation into the background so it can deal more efficiently with present needs.
When you look at all the ways the brain is superior to AI, it's clear that AI's computational and even machine-learning capabilities, while impressive, leave it far from surpassing humanity's.
The risks of overconfidence in AI
Even some at the forefront of AI, like Elon Musk, founder and CEO of Tesla and SpaceX, have found AI to be less capable than they gave it credit for. Musk, confident that his assembly line, the most robot-intensive in the auto industry,[iv] would be able to produce 5,000 of his latest model per week, set delivery dates for preordered vehicles accordingly. Despite his most strenuous efforts, however, he could not get the line to produce more than 2,000 per week, and customers were predictably dissatisfied. In response to the delays, he tweeted, "Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated." Although he continues to approach his problems by trying to improve the automation, his admission is spot-on.
Another reason we should not expect AI to displace humans is the old "garbage in, garbage out" maxim. The judgments that AI systems make are only as accurate as the data fed into them. People need to remain involved to ensure that conclusions reached by AI systems are not based on bad data.
One AI system designed to decide which patients should be hospitalized for pneumonia delivered a startling recommendation. It determined that patients diagnosed as asthmatic were less likely to die from pneumonia than those who were not, and therefore should not be prioritized for hospitalization. This shocked the medical professionals who received the recommendation, because it directly contradicted common medical wisdom about the danger of pneumonia to asthmatic patients.
Statistically, the AI system's recommendation was entirely accurate based on the data fed into it; a smaller percentage of asthma patients died than of their non-asthmatic counterparts. But the reason lay in a piece of data that had not been fed into the system: fewer asthmatic patients died because doctors were much quicker to hospitalize them than non-asthmatic patients. Had the recommendation not been checked by doctors with real-life experience of the issue, a deadly policy of not prioritizing asthmatic pneumonia patients for hospitalization would have been adopted.[v]
Again, the superiority of the human brain reveals itself here. The doctors who determined what data should be fed into the AI system possessed such a wide body of knowledge that they did not think to include details so basic that they took them for granted as common knowledge. They overlooked a crucial piece of data, and the AI system came back with a recommendation that would have been tragic had people not caught it in time.
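To make the statistical trap concrete, here is a minimal sketch in Python with invented patient counts (the actual study's data and model are not reproduced here). Because the historical outcomes already reflect the doctors' aggressive treatment of asthmatic patients, a naive risk ranking built from those outcomes treats asthma as protective.

    # Toy illustration with invented numbers: outcome data that already
    # contains the doctors' interventions misleads a naive risk model.
    records = []
    # 1,000 asthmatic pneumonia patients, hospitalized quickly: 2% died.
    records += [{"asthma": 1, "died": 1}] * 20 + [{"asthma": 1, "died": 0}] * 980
    # 1,000 non-asthmatic patients, many treated at home: 8% died.
    records += [{"asthma": 0, "died": 1}] * 80 + [{"asthma": 0, "died": 0}] * 920

    def death_rate(group):
        return sum(r["died"] for r in group) / len(group)

    asthmatic = [r for r in records if r["asthma"] == 1]
    non_asthmatic = [r for r in records if r["asthma"] == 0]

    print(f"Death rate, asthmatic:     {death_rate(asthmatic):.1%}")      # 2.0%
    print(f"Death rate, non-asthmatic: {death_rate(non_asthmatic):.1%}")  # 8.0%

    # A model trained only on (asthma, died) pairs concludes that asthmatic
    # patients are lower risk and deprioritizes them for hospitalization.
    # The missing variable is the treatment they already received.

The remedy is exactly what the example above describes: people with domain knowledge reviewing both the inputs and the outputs before a recommendation becomes policy.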
We dare not overestimate the capabilities of AI. It will remain a tool that requires human input and guidance if it is to benefit humanity.
The Organic View of the AI Future
That brings us to the view of AI that is perhaps the most tempting to adopt: the organic view, which holds that jobs lost to AI will be offset by jobs that AI creates. The assumptions that underlie this view, too, are dangerous and must be tempered with reality if we are to face AI's growth with minimal disruption.
Those who advocate the organic view point to past industrial revolutions to support their belief that the effects of AI's disruption will be minimal. They relate how, for each occupation minimized or rendered obsolete by past disruptions, new occupations developed to fill the needs generated by whatever new technology caused the disruption. Makers of handcrafted goods were displaced by the First Industrial Revolution, but the rapid growth of factories provided new jobs, and so on through each successive revolution.
Granted, many occupations available today had not even been imagined only one or two industrial revolutions ago. Who would have envisaged such occupations as video game designers or cybersecurity specialists before the technology behind them existed? Thus, holders of this organic view suggest that everything will work itself out as new occupations arise to provide jobs for those displaced from jobs that AI renders obsolete.
False assumptions in the organic view
That assumption, however, ignores the rough and sometimes violent transitions that past industrial revolutions spawned before the labor force could adapt to them. It took time, and sometimes bloodshed, before the transitions to new job categories in some of those revolutions worked themselves out.
The move from goods produced by craftsmen to goods produced by machine led to riots as displaced craftsmen sought to preserve their familiar way of life. The rise of the assembly line led to widespread exploitation of workers under inhumane conditions, which, in turn, led again to labor riots. In both cases it took governments decades before legal protections finally afforded displaced workers the basic safeguards that made the newly created jobs desirable.
And although the digital revolution of the late 20th century did not provoke a violent response from those who were displaced, entire job categories were wiped out. Workers found themselves scrambling to obtain new skills that would qualify them for jobs in an increasingly digital marketplace. The suffering caused by that disruption to their lives is incalculable.
The danger of overconfidence in the organic view
Taking a laissez-faire approach to the growing AI disruption would be, at best, ill-advised and, at worst, callous. A real threat to jobs exists. In some places, labor statistics already show as many job openings as there are unemployed workers.
In other words, people in those locations are failing to find jobs even though plenty are available, because the available jobs require different skills than the job hunters have. Such conditions are likely only to accelerate as AI replaces workers in lower- and middle-skill jobs while creating jobs that require skills our current education and training systems are not preparing workers to fill.
For example, the previously quoted prediction of the need for 10% more geoscientists over the next decade presupposes that 10% more people trained in this specialty will be available. That increase will not come from AI-displaced insurance underwriters, loan officers, cashiers and data analysts effortlessly shifting into jobs as geoscientists. Future geoscientists will need specialized training, and most displaced workers will not have the skills that AI-created jobs require.
Consider also that AI will disrupt jobs all the way up to the C-level of management as it becomes more commonly employed in data analysis and process management. Companies will turn to AI to perform many tasks currently associated with upper-level management positions. If leaders do not prepare themselves for the encroachment of AI on their positions, many will find themselves as much at risk as those workers mentioned in the previous paragraph.
Takeaways on the Future of AI
The three common views of AI's future picture wildly different scenarios. But they agree on one key point: AI will cause massive disruption in today's workforce. Many tasks that we are used to seeing done by people today will be done by AI.
The history of past industrial revolutions suggests that this transition will follow a path similar to what the organic view foresees. But that same history suggests that the transition will not be without pain and disruption for many people. The nature of what AI can do, in fact, suggests that this pain and disruption will likely extend much farther up the ladder of skill levels than it did in past industrial revolutions.
As we'll see in future chapters, AI is poised to have an unprecedented effect on society and commerce. We'll also look at specific ways in which it will likely shift needed job skills, and we'll focus on how today's leaders can best position themselves for the expansion of AI.
[i] Carl Benedikt Frey and Michael A. Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation?, Oxford University, 2013, Available: http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf
[ii] Toby Walsh, Don't be alarmed: AI won't leave half the world unemployed, The Conversation, February 18, 2018, Available: http://theconversation.com/dont-be-alarmed-ai-wont-leave-half-the-world-unemployed-54958
[iii] James Manyika, Michael Chui, Mehdi Miremadi, Jacques Bughin, Katy George, Paul Willmott and Martin Dewhurst, Harnessing Automation for a Future that Works, McKinsey Global Institute, 2017, Available: https://www.mckinsey.com/global-themes/digital-disruption/harnessing-automation-for-a-future-that-works
[iv] Laura Geggel, Elon Musk Says "Humans Are Underrated", Live Science, April 17, 2018, Available: https://www.livescience.com/62331-elon-musk-humans-underrated.html
[v] Brad Smith and Harry Shum, The Future Computed, Foreword, p. 9, Microsoft, 2018, Available: https://news.microsoft.com/futurecomputed/