Team Human vs. Team AI
To make artificial intelligence live up to its promise, we need to understand and reframe the values implicit in the technology.
A version of this article appeared in the Summer 2019 issue of strategy+business.
People and organizations are navigating a new terrain, characterized by autonomous technologies, runaway markets, and weaponized media. To many, it feels as if these new phenomena threaten not only to disrupt our companies, but also to paralyze our ability to think constructively, connect meaningfully, and act purposefully. It feels as if civilization itself is on the brink, and that we lack the collective willpower and coordination necessary to address issues of vital importance to the very survival of our species.
It doesn’t have to be this way.
Some are asking how we got here, as if this were a random slide toward collective incoherence and disempowerment. It is not. There’s a reason for our current predicament: an antihuman agenda embedded in our technology, our markets, and our major cultural institutions, from education and religion to civics and media. It’s this agenda that has quietly turned these institutions — including, most likely, elements of your own company — from forces for human connection and expression into forces of isolation and repression. Humanity is seen as a liability instead of a strength.
By unearthing this agenda, we render ourselves capable of transcending its paralyzing effects, reconnecting to one another, and remaking society toward human ends rather than the end of humans. One of the best places to start is in the use of artificial intelligence. To reclaim civilization in an AI-infused world, we need to understand the hidden conflict between those promoting artificial intelligence — Team AI — and those who would reassert the human agenda: Team Human.
The AI Agenda
We shape our technologies at the moment of conception, but from that point forward, they shape us. We humans designed the telephone, but from then on, the telephone influenced how we communicated, conducted business, and conceived of the world. We also invented the automobile, but then rebuilt our cities around automotive travel and our geopolitics around fossil fuels.
Artificial intelligence adds another twist. After we launch technologies related to AI and machine learning, they not only shape us, but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an AI program may be processing information or modifying its tactics. The AI isn’t conscious enough to tell us. It’s just trying everything and hanging onto what works for the initial goal, regardless of its other consequences.
On some social media platforms, for example, algorithms designed to increase traffic might do so by showing users pictures of their ex-lovers having fun. No, people don’t want to see such images. But, through trial and error, the algorithms have discovered that showing us pictures of our exes increases our engagement. We are drawn to click on those pictures and see what our exes are up to, and we’re more likely to do it if we’re jealous that they’ve found a new partner. The algorithms don’t know why this works, and they don’t care. They’re only trying to maximize whichever metric we’ve instructed them to pursue.
That’s why the original commands we give them are so important. Whatever values we embed — efficiency, growth, security, compliance — will be the values that AI achieves, by whatever means happen to work. AI will be using techniques that no one — not even an AI application itself — understands. And it will be honing those techniques to generate better results, and then using those results to iterate further.
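The blind, trial-and-error optimization described above can be sketched in a few lines. This is a toy illustration, not any platform's real code: the strategies, scores, and function names are all hypothetical. The point is that the agent values only the metric it was handed.

```python
import random

# Toy sketch (hypothetical, not a real platform's system): an agent that
# tries strategies at random and keeps whichever scores highest on the one
# metric it was given -- with no notion of why it works or its side effects.
def optimize_engagement(strategies, measure_engagement, rounds=200, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        candidate = rng.choice(strategies)
        score = measure_engagement(candidate)  # the ONLY thing the agent values
        if score > best_score:
            best, best_score = candidate, score
    return best

# Made-up strategies and engagement scores: through sheer repetition the
# agent "discovers" that ex-partner photos engage us most.
strategies = ["news", "friends' updates", "photos of your ex"]
engagement = {"news": 0.3, "friends' updates": 0.5, "photos of your ex": 0.9}
print(optimize_engagement(strategies, engagement.get))  # photos of your ex
```

Nothing in the loop encodes jealousy or heartbreak; the agent simply converges on whatever maximizes the number, which is exactly the danger the values we embed carry.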
This is what all the hoopla about “machine learning” is really about. The things we want our robots to do — such as driving in traffic, translating languages, or collaborating with humans — are mind-bogglingly complex. We can’t devise a set of explicit instructions that covers every possible situation. What computers lack in improvisational logic, they must make up for with massive computational power. So computer scientists feed the algorithms reams and reams of data, and let them recognize patterns and draw conclusions themselves.
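A minimal sketch of that idea, under hypothetical data: instead of writing explicit rules for what makes a message spam, the program counts which words appear in human-labeled examples and generalizes from the counts. Everything here (the labels, the examples, the scoring) is an illustrative stand-in, not a production technique.

```python
from collections import Counter

# Learning from data rather than explicit instructions: the program is
# never told what spam "is" -- it tallies words from human-labeled examples.
def train(labeled_messages):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled_messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    words = text.lower().split()
    score = lambda label: sum(counts[label][w] for w in words)
    return "spam" if score("spam") > score("ham") else "ham"

# Hypothetical training data standing in for the "reams and reams"
# a real system would be fed.
examples = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("lunch meeting tomorrow", "ham"),
    ("see you at the meeting", "ham"),
]
model = train(examples)
print(classify(model, "free money"))  # spam
```

Scale the examples from four messages to billions of driving decisions or translated sentences and you have, in caricature, the pattern-recognition machinery the hoopla is about.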
They get this data by monitoring human workers doing their jobs. The ride-hailing app on cab drivers’ phones also serves as a recording device, detailing the way they handle various road situations. The algorithms then parse data culled from thousands of drivers to write their own autonomous vehicle programs. Online task systems pay people pennies per task to do things that computers can’t yet do. The answers are then fed directly into machine learning routines.
The Value of a Jobless Future
In the future envisioned in much of the commentary from Wall Street and Silicon Valley perspectives, humans are another externality. There are too many people, asking for salaries and healthcare and meaningful work, who won’t be needed in the long run. Each victory we win for human labor, such as an increase in the minimum wage, makes people that much more expensive to employ, and supports the calculus through which checkout workers are replaced by touch-screen kiosks.
Where humans remain valuable, at least temporarily, is in training their replacements. Back in the era of outsourcing, domestic workers would cry foul when they were asked to train the lower-wage foreign workers who shortly would replace them. Today, workers are hardly aware of the way digital surveillance technologies are used to teach their jobs to algorithms. The humans’ only real job is to make themselves obsolete.
Without a new social compact through which to distribute the potential bounty of the digital age, competition with our machines is a losing proposition.
Losing one’s job to a robot is no fun, but the solution is not to hold on to jobs. It’s to change the way we think about them. The employment model has become so prevalent that our best organizers, representatives, and activists still tend to think of prosperity in terms of getting everyone “jobs,” as if what everyone really wants is the opportunity to commodify their living hours. It’s not that we need full employment in order to get everything done, grow enough food, or make enough stuff for everyone. In the United States, we already have surplus food and housing. But we can’t simply give the extra food to the hungry or the surplus houses to the homeless. Why? Because they don’t have jobs! We punish them for not contributing, even though we don’t actually need more contribution.
Jobs have reversed from the means to the ends, the ground to the figure. They are not a way to guarantee that needed work gets done, but a way of justifying one’s share in the abundance.
If we truly are on the brink of a jobless future, we should be celebrating our efficiency and discussing alternative strategies for distributing our surplus, from a global welfare program to universal basic income. But we are nowhere close. While machines may get certain things done faster and more efficiently than humans, they externalize a host of other problems that most technologists pretend do not exist. Even today’s robots and computers are built with rare earth metals and blood minerals; they use massive amounts of energy; and when they grow obsolete their components are buried in the ground as toxic waste.
By hiring more people rather than machines, paying them livable wages, and operating with less immediate efficiency, companies could minimize the destruction they leave in their wake. Hiring 10 farmers or nurses may be more expensive in the short run than using one robotic tractor or caregiver, but it may make life better and less costly for everyone over the long term.
In any case, the benefits of automation have been vastly overstated. Replacing human labor with robots is not a form of liberation, but a more effective and invisible way of externalizing the true costs of industry. The jobless future is less a reality to strive toward than the fantasy of technology investors for whom humans of all kinds are merely the impediment to infinite scalability.
A future where we’re all replaced by artificial intelligence may be further off than experts currently predict, but the readiness with which we accept the notion of our own obsolescence says a lot about how much we value ourselves. The long-term danger is not that we will lose our jobs to robots. We can contend with joblessness if it happens. The real threat is that we’ll lose our humanity to the value system we embed in our robots, and that they in turn impose on us.
Reclaiming the Human Agenda
Some computer scientists are already arguing that AI should be granted the rights of living beings rather than being treated as a mere instrument or slave. We are moving into a world where we care less about how other people regard us than how AI does.
Algorithms do reflect the brilliance of the engineers who craft them, as well as the power of iterative processes to solve problems in novel ways. They can answer the specific questions we bring them, or even generate fascinating imitations of human creations, from songs to screenplays. But we are mistaken if we look to algorithms for direction. They are not consciously guided by a core set of values so much as by a specific set of outcomes. They are unconsciously utilitarian.
Yet without human intervention, technology will become the accepted premise of our shared value system: the starting point from which everything else must be inferred. In a world dominated by text communication, illiteracy was seen as stupidity, and the written law might as well have been the word of God. In a world defined by computers, speed and efficiency become the primary values.
To many of the developers and investors of Silicon Valley, however, humans are not to be emulated or celebrated, but transcended or — at the very least — reengineered. These technologists are so dominated by the values of the digital revolution that they see anything or anyone with different priorities as an impediment. This is a distinctly antihuman position, and it’s driving the development philosophy of the most highly capitalized companies on the planet.
AI systems are already employed to evaluate teacher performance, mortgage applications, and criminal records, and they make decisions just as biased and prejudicial as the humans whose decisions they were fed. But the criteria and processes they use are deemed too commercially sensitive to be revealed, so we cannot open the black box and analyze how to adjust their biases. Those judged unfavorably by an algorithm have no means to appeal the decision or learn the reasoning behind their rejection. Many companies couldn’t ascertain their own AI’s criteria, anyway.
As AI systems pursue their programmed goals, they will learn to leverage human values. As they have already discovered, the more they can trigger our social instincts and tug on our heartstrings, the more likely we are to engage with them as if they were human. Would you disobey an AI that feels like your parent, or disconnect one that seems like your child?
To a hammer, everything is a nail. To AI, everything is a computational challenge. By starting with the assumption that our problems are fixable by technology, we end up emphasizing particular strategies. We often ignore or leave behind the sorts of problems that the technology can’t address. We move out of balance, because our money and effort go toward the things we can solve and the people who can pay for those solutions. For example, far more people are working on making social media feeds more persuasive than on making clean water more accessible. We are building our world around what our technologies can do.
Instead, we need to build our world around what people need. Human beings are not the problem. We are the solution. The companies that recognize this will build a different kind of legacy in the age of AI. They will be recognized as true allies of Team Human.
- Douglas Rushkoff is the founder of the Laboratory for Digital Humanism at Queens College, CUNY, where he is a professor of media theory and digital economics. His books include Present Shock and Program or Be Programmed; he is the correspondent and coproducer for the PBS Frontline documentaries Generation Like and Merchants of Cool.
- This article is adapted from Team Human by Douglas Rushkoff. Copyright © 2019 by Douglas Rushkoff. Reprinted with permission of W.W. Norton & Company, Inc. All rights reserved.