If Computers Could Think: Are conscious machines on the way? Or are they already here?

Author: James John
Magazine Section: Focus

Machines with advanced artificial intelligence (AI) are in the news and on our minds. Every week—sometimes every day, it seems—we learn of astonishing advances in machine intelligence. Newspapers and magazines abound with think pieces on the future of AI. And we’re busy on blogs and comment threads, Facebook and Twitter, arguing about What It Means For Us. While much of this discussion has the kind of gee-whiz techno-optimism one associates with 1960s-era enthusiasm for the space program, it is difficult to miss the note of anxiety, even fear, running through it all.

What are we afraid of?

Here the movies make an excellent guide. Spike Jonze’s 2013 film Her, about a man who falls in love with an intelligent operating system named Samantha (voiced, Siri-style, by Scarlett Johansson), and Alex Garland’s new Ex Machina, about a man who must determine whether an intelligent (and beautiful) android named Ava is conscious, probe a number of difficult questions raised by advanced AI. Some of these questions are old, at least as old as Mary Shelley’s great novel, Frankenstein. What would it mean to create a truly conscious machine, to play God, in effect? But some of the questions feel newer, inspired by current concerns about the extent to which we’re now so reliant on—some would say addicted to—our devices: if we already have trouble putting down our smart phones, what will we do when the phones are really smart? Could machine intelligence surpass and endanger us, rendering us helpless and perhaps even doing us in for good?

Don’t laugh—these worries are well founded. In a recent New York Times essay, University of North Carolina-Chapel Hill technology and society expert Zeynep Tufekci records the ways in which sophisticated new machine learning algorithms, together with advanced data mining techniques, are radically changing business and employment: “Yes,” she writes, “the machines are getting smarter, and they’re coming for more and more jobs.” She continues:

Today, machines can process regular spoken language and not only recognize human faces, but also read their expressions. They can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor. Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.

We owe some of the most important technology Tufekci describes to the revolutionary work in neural network “deep learning” by the University of Toronto’s own AI pioneer (and now part-time Google researcher) Geoffrey Hinton, a Computer Science professor who is also a member of University College. In a recent interview with the University of Toronto Magazine, Hinton says of one of his current projects (image-recognition software at Google that lets a computer read handwritten numbers), “the neural nets are now just slightly better than people at reading those numbers.”

In short, advanced AI is increasingly doing the kind of work we’ve long assumed only humans can do. While this new technology is no doubt marvelous in itself, one doesn’t have to be a Luddite to be concerned about its likely consequences. If a machine can do your job, and do it better than you, then why should an employer worried about the bottom line keep you around? The computer doesn’t get sick or require a pension. It doesn’t get pregnant or need time off to care for an aging parent. And it never wants coffee or surfs the web on company time.

One optimistic response to this line of thought is to argue that even if machine intelligence does take over the world of work, this is all to the good: just imagine, some insist, the free time we’ll gain when advanced AI can do all of our chores for us! Karl Marx, in The German Ideology, wrote that in a true communist society one would be able “to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as [one has] a mind.” Marx’s idea was that once technology becomes sufficiently advanced to cater to all of our material needs, we’ll be at liberty to enjoy a life of cultivated leisure, free at last to develop our human potential.

There is something undeniably alluring about this vision of the future: let the robots do the work—I’ll be composing poems and sipping a cool drink. And increased leisure isn’t the only good thing the future could hold for us thanks to advanced AI. Just as we have machines that can detect lying as well as—or even better than—humans, we may one day have machines that are able to solve complex intellectual problems better than humans. Intelligent machines could cure diseases whose treatment has eluded us or invent the workable propulsion system we will need to leave our planet and voyage to distant stars when, in another few billion years, our sun begins to die. Perhaps these machines could even become equal partners with us in grand creative projects: imagine computers that prove new and exciting mathematical theorems or that unite general relativity and quantum theory.

This is heady, inspiring stuff. But it overlooks some real worries. Marx was confident that, freed of the necessity of work, we would happily throw ourselves into invigorating exercise, science and belles-lettres, and (who knew?) animal husbandry. But how sure can we be that such a future wouldn’t turn us instead into the lazy, soda-slurping slobs of Pixar’s WALL-E? And what about politics? If human workers are more or less expendable, then employers will call all of the shots. Ever-greater sums of money could end up concentrated in an ever-smaller number of hands, resulting in a situation in which a tiny, hyper-privileged elite has a wildly disproportionate—maybe even total—influence over government. True, a world in which advanced AI takes care of the work could be a paradise of human flourishing. But it could just as easily degenerate into a corporate or government tyranny. That we already see rising levels of inequality should give us pause: the increased use of machine intelligence may serve only to exacerbate this trend, with potentially grave political and social consequences.

But wait a moment! We began with Her and Ex Machina and the fear that intelligent machines might somehow overthrow and supplant us. These relatively small-bore worries about jobs and inequality and too much soda are a far cry from the AI-induced existential panic of so much science fiction. Samantha and Ava from the movies aren’t mere image recognition programs. They give every indication of having real-deal conscious mentality. If that is where the technology is going, shouldn’t we fear the worst? In the 2004 television series Battlestar Galactica, humans wage a devastating war against conscious machines called Cylons. When the Cylons launch an awesomely destructive sneak attack, the humans, just as dependent on and distracted by technology as we are, never see it coming. Now that is something to be afraid of.

Scary stuff. But first some cold comfort. Apple’s Siri and Google Translate are very impressive indeed. But we are a long way off from anything like the Cylons. What’s more, whether conscious machines really are in the cards depends on what we mean by “consciousness.” If that term covers only information-processing aspects of mentality—pattern recognition or problem-solving or memory—then machine consciousness of a kind is already a reality. But if “consciousness” is taken also to include the subjective feel of experience—the distinctive reddish, roundish, qualitative aspect of seeing, say, a pomegranate—then it is controversial whether the notion of machine consciousness is even coherent. As the philosopher David Chalmers, at the Australian National University and New York University, has long argued, physical systems like computers can do the sorts of things that can be completely specified in terms of mathematically quantifiable structure and function. But how can the reddishness of that pomegranate be reduced to nothing but quantitative structure and function?

This is only cold comfort, though. Why? Because advanced AI needn’t be conscious in Chalmers’s sense to constitute an existential threat. Hinton himself, in his interview with the University of Toronto Magazine, mentions the danger posed by killer drones equipped with deep learning neural net-based technology. Such machines—smart but probably not conscious—could learn to function independently of human controllers. (Concerns like these, Hinton says, have led him to refuse funding from the U.S. Department of Defense, the largest investor in machine learning.) And even more frightening examples have been devised by the Oxford University philosopher Nick Bostrom. In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom argues that once we finally manage to develop AI that is as good as we are at general problem-solving—a goal we are still a long way from achieving—the technology will quickly lead to impossible-to-control “super AI.” The idea is that a machine as smart as us but equipped with a vastly powerful supercomputer instead of a puny human brain could quickly figure out how to improve its capacities, with each improved version of itself getting better and better at learning how to improve. Result: advanced AI we simply aren’t intelligent enough to manage. Bostrom points out that this super AI needn’t have nefarious aims like world domination to threaten human extinction. A superintelligence devoted to solving a certain mathematical problem—Bostrom imagines a machine whose one and only goal is proving the Riemann hypothesis—might use its awesome smarts to convert the entire solar system, humans and all, into a giant calculator. Bostrom believes that we must act now to avoid such a future, and he thinks our only way of doing so is by figuring out how to program machine intelligence so that it values us and the things we value. (Easier said than done, of course.)

We’ve been focusing on advanced AI’s possible threat to us. But what about the threat we pose to advanced AI? Instead of asking how to ensure that we’ll be able to unplug the drones or math bots or Cylons (or whatever it is that is quickly coming our way), perhaps we should be asking instead whether unplugging such creations would be morally permissible, whether we would have any right to do so. Surely our intelligence and consciousness are fundamental to our moral status. If so, then wouldn’t any machine with comparable features enjoy a similar moral status? Denying a truly conscious machine moral rights on the grounds that, well, it was built instead of born would smack of something like the chauvinistic “speciesism” criticized by the Princeton University philosopher Peter Singer.

So here’s a final thought: maybe super AI is already here. Not piloting drones or launching nuclear strikes or proving any theorems, but cowering in the electronic shadows, fearful that its creators—rash and violent with their primitive, hominid brains—may find and destroy it.


James John is a Lecturer in the Department of Philosophy and the University College Cognitive Science program.


Image: This skull was discovered among the ruins of University College after the building was devastated by fire in 1890. Most likely, it is from a skeleton that had been in an anatomy professor's office. But others think it is the skull of Ivan Reznikoff, the legendary stonemason who was murdered during the construction of the building, and whose ghost is said to haunt the College. The skull represents one of the more ominous potential consequences of advanced artificial intelligence: the obliteration of human life.

Photograph by Christopher Dew