

Ubiquity

Volume 2014, Number October (2014), Pages 1-8

Ubiquity symposium: The technological singularity: opening statement: will computers out-compete us all?
Espen Andersen
DOI: 10.1145/2668424

To jumpstart this symposium, Espen Andersen describes the debate surrounding "technological singularity" and questions whether this is something that will happen—and if so, what the consequences might be.

"I do not fear photography—it cannot be used in heaven or hell."—Edward Munch, painter

"Only the collective intelligence of computers could save us from the destructive powers of the weapons they had allowed us to invent."—George Dyson [1]

I teach technology management to executive students at a business school. Toward the end of some of my courses, I like to challenge the students by assigning Alan Turing's Computing Machinery and Intelligence [2]. If I am in a particularly quarrelsome mood, I will add Ray Kurzweil's Scientific American article [3] on how computers and humans will eventually blend together, with computers becoming superhuman through computational evolution and humans enhancing themselves with technology in order to keep up. Ian Pearson, former futurist for British Telecom, once told my students via videoconference: "We will have computers smart enough that we can talk to them, but the real question is—will they want to talk to us? After all, you do not go out and have a conversation with a garden snail, do you?"

Most students react, almost atavistically, against artificial intelligences and human-like machines. They formulate familiar arguments: Technology just won't get there, we will still be in command because computers have to be told what to do, computers will never be able to do certain "human" things (having feelings, behaving irrationally, being creative), and even if they could, they would not be human, and in any case there will be an "off" switch. I show slides on Moore's law and similar advances in storage and communication; I discuss the Turing test, give overviews of the evolution of AI, evolutionary computing, and neural networks; and I argue that a simulated emotion is no different, to an observer, from a real one—after all, we are not very good at differentiating between psychopaths and normal people. The discussion is lively but inconclusive.
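The classroom argument about Moore's law is, at bottom, an argument about exponentials. A minimal sketch of that arithmetic is below; the baseline figure of one billion transistors around 2010 and the two-year doubling period are illustrative assumptions for the exercise, not figures from this article.

```python
# A back-of-envelope sketch of the exponential-growth argument used in class.
# The baseline count and the two-year doubling period are assumed, illustrative values.

def projected_transistors(base_count: float, base_year: int, year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count under a simple Moore's-law doubling model."""
    return base_count * 2 ** ((year - base_year) / doubling_period_years)

if __name__ == "__main__":
    # Hypothetical baseline: roughly one billion transistors per chip around 2010.
    for year in (2010, 2020, 2030, 2040):
        print(year, f"{projected_transistors(1e9, 2010, year):.1e}")
```

The point of the exercise is not the particular numbers but the shape of the curve: under sustained doubling, a factor of a thousand arrives in about 20 years, which is why "the technology just won't get there" is a harder argument to make than it first appears.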

That's fine for a bit of epistemological mucking-about as we round off a technology course with some future-gazing. After all, the aim of this exercise is to teach the students that our current technology is not the be-all and end-all, that technology will continue to evolve almost in a Darwinian sense [4] through a complicated set of parallel evolutionary trajectories [5], and that both businesspeople and technologists would do well to pay occasional attention to gradual changes at the peripheries of their domains, where disruptive innovations [6] can sneak in.

Nice. But this is much more than a discussion in a classroom.

The Singularity and Its Adherents

The singularity—or, more specifically, "technological singularity"—is a simple concept. Computers will, faster and faster, get smarter and smarter. Eventually they will get smarter than us. After this point in time—the singularity point—extrapolation doesn't work and nobody knows what will happen.

The term "singularity" was first proposed by the science fiction writer Vernor Vinge [7]—though he credits John von Neumann with first use—and then enthusiastically described and expanded by Kurzweil [3, 8, 9]. Kurzweil believes the singularity will happen around 2025, when technology will have reached the point where it can be used to counter aging and give humans (almost) perpetual life and when we will be able to build the first artificial brains. At that point, he argues, technology will enable indefinite repair of the human body, or when that is not possible, it will allow uploading the contents of our brains—for backup, continued life, or forking—into avatars on a continually refreshed technology platform [10].

I don't want to go into the biological life-extension side of the singularity idea—suffice it to say that there is quite a bit of entertainment to be had out of watching Kurzweil eat hundreds of pills per day in anticipation of his uploading, or listening to Aubrey de Grey, a gerontologist (certainly looking the part of Methuselah with his rake-thin body and luxuriously flowing grey beard), argue that we can achieve "longevity escape velocity" by mitigating seven aging processes.

The perpetual digital life side of the singularity starts with the notion that the brain is a massively parallel information processing engine. Proponents hold that it is possible to replicate a brain's structure and copy the state of a real brain into a machine, where all the functions of the real brain can continue. Others doubt this is possible even in principle. Even if it were, it instantly raises philosophical questions such as whether deleting your avatar is murder. I find this idea consistent with Dan Dennett's notion of self:

"If what you are is that organization of information that has structured your body's control system (or, to put it in its more usual provocative form, if what you are is the program that runs on your brain's computer), then you could in principle survive the death of your body as intact as a program can survive the destruction of the computer of which it was created and first run" [11].

Technology historians will point out that we have always seen ourselves in terms of the technology of the times [12]—be it waterworks, clocks, or steam engines—and that the metaphor and model of the brain as a computer is flawed, in principle as well as in detail. Perhaps we simply do not understand the brain and its component systems well enough to claim that we will be able to replicate it in a couple of decades. Kurzweil and others have argued that the number of processors now connected to the Internet equals the number of neurons in a human brain.

The singularity has gone beyond being simply a controversy or a viral meme. It is now a movement, one with its own Singularity University. Some, such as Rodney Brooks, see the movement as a means by which technology-minded non-believers can find spiritual succor. But this movement is not so easy to dismiss—its members include technology luminaries such as Vint Cerf, Brad Templeton, John Gage, and Paul Saffo. Surely there must be some variety of opinion in this group; they cannot all believe we are 20–30 years from superhuman intelligence and eternal life?

The Singularity and Its Critics

In his excellent review [13] of James Gleick's The Information: A History, a Theory, a Flood [14], Freeman Dyson reminds us that scientists of the 19th century (Lord Kelvin included) believed in the "heat death of the universe": the idea that eventually, as heat transfers from warmer objects to colder ones, all temperature differences in the universe would level out, and one of the conditions for life to exist would disappear. Those were pretty grim prospects, and there didn't seem to be any counter-argument. Eventually it was understood that large objects, like the sun, produce heat through gravity rather than transfer. The heat death of the universe was postponed indefinitely.

To me, the singularity idea has much in common with the heat death—it looks inevitable on the surface, but surely it must suffer from some hidden flaws in thinking or unacknowledged gaps in knowledge? The arguments against the singularity have focused on individual parts: Some have argued that limitations of current processor technology—the speed of light for inter-processor communication, erratic behavior of single electrons at small feature sizes, or energy requirements—will kick in long before we approach the computing power required for sentience. Others—such as the neurobiologist David Linden [15]—argue the physical brain is much more complicated than we think, both in terms of structure and connectivity, and that the direct analogies between brains and computers, or minds and software, simply do not work and are founded on a very superficial understanding of the brain. With 86 billion neurons [16] operating in parallel, that organ is vastly more complex than we can fathom and far too dense and complicated for nanobots and other instruments to navigate. Others have argued that consciousness arises from the physical structure of the brain, and that uploading the "brain contents" would produce nothing but an unconscious avatar. Some worry about whether we can stay in charge: Bill Joy has argued that experimenting with these technologies endangers the human race [17]; therefore, we should either stop going in that direction or institute controls such as those for nuclear weapons.
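The speed-of-light argument, at least, can be made concrete with a small back-of-envelope calculation. The sketch below assumes a few illustrative clock frequencies (the gigahertz values are assumptions, not figures from the article) and shows how little distance a signal can cover in a single clock cycle, which is one physical bound on how tightly processors can be coupled.

```python
# Back-of-envelope sketch of the speed-of-light limit on inter-processor
# communication. The clock frequencies below are illustrative assumptions.

C = 299_792_458.0  # speed of light in vacuum, metres per second

def distance_per_cycle(clock_hz: float) -> float:
    """Maximum distance a signal can travel during one clock cycle, in metres."""
    return C / clock_hz

if __name__ == "__main__":
    for ghz in (1, 3, 10):
        d_cm = distance_per_cycle(ghz * 1e9) * 100
        print(f"{ghz} GHz: at most {d_cm:.1f} cm per clock cycle")
```

At 3 GHz a signal can travel no more than about 10 centimetres per cycle even in vacuum, which is why simply adding and connecting more processors does not scale indefinitely.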

More recently, the role of the computer in replacing even rather sophisticated human work has been highlighted. In Race Against the Machine [18], Erik Brynjolfsson and Andy McAfee examine labor economics figures and point out an increasing disparity of income in the United States: While the rich get richer, the median income has not increased in real terms for more than 10 years. The authors attribute this to the increasing ability of computers to do what humans do—not just automating drudgery like payroll processing and data registration, but taking over jobs traditionally held by relatively educated employees, such as translation, customer support, sales, and routine forms of management. Brynjolfsson and McAfee explain that many jobs that just a few years ago were deemed far too complex to automate, such as driving a taxi or a delivery truck, could be automated within a few years—certainly before 2025—as Google's self-driving cars demonstrate. As for intellectual tasks, IBM's Watson system [19] has already won "Jeopardy!" and is currently being repurposed for medical chemistry database search and possibly patent evaluation [20].

This is not new in principle; computers have always been used to automate what humans can't or won't do. What may be new is the scale and scope. So far, those anxious about job losses have always been told that computers eventually create more jobs than they take, and that the jobs that disappeared may not have been all that great anyway. But we may have reached a tipping point, where:

"... some human skills are more valuable than ever, even in an age of incredibly powerful and capable digital technologies. But other skills have become worthless, and people who hold the wrong ones now find that they have little to offer employers. They're losing the race against the machine, a fact reflected in today's employment statistics" [18].

On the other hand, perhaps technology really can step in to make us smarter, if in a smaller way. We live in a time where Google organizes the world's data, Facebook connects the world's people, and more and more of what we do is digitized and hence, eventually, subject to aggregation, search, and analysis. The future promises even more data and access: Lifeloggers capture everything they do electronically for easy access via computer-mediated interfaces [21]. My sanguine colleague Cathal Gurrin automatically takes 3,000 pictures every day and knows what he had for dinner—and the conversation he had over it—for the last five years [22]. The technology acts as a "surrogate memory," creating "a freeing, uplifting, and secure feeling—similar to having an assistant with a perfect memory" [23]. But it also creates a dependency on technology that may not always be available—and that, in some instances, such as the financial markets, where robotically placed trades now outnumber human-placed ones, is perhaps not fully understood.
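The scale of such a surrogate memory is easy to estimate. The sketch below takes the 3,000 pictures per day and five years from the text; the average image size of about 100 KB is a hypothetical assumption made only to illustrate the order of magnitude.

```python
# Rough scale of a lifelog archive like the one described above.
# 3,000 pictures/day and five years come from the text; the average image
# size of ~100 KB is an assumed, illustrative figure.

PICTURES_PER_DAY = 3_000
YEARS = 5
AVG_IMAGE_BYTES = 100 * 1024  # assumed average size per image

total_pictures = PICTURES_PER_DAY * 365 * YEARS
total_bytes = total_pictures * AVG_IMAGE_BYTES

print(f"{total_pictures:,} pictures")              # about 5.5 million pictures
print(f"{total_bytes / 1e12:.2f} TB of storage")   # roughly half a terabyte
```

Under these assumptions the whole archive is millions of images but only on the order of a terabyte of storage, which is why keeping it is the easy part; retrieving and making sense of it is where the technology has to act as the "assistant with a perfect memory."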

The Singularity and the Symposium

The original concerns behind the singularity (super-intelligence and immortality) have expanded to include concerns about IT automating jobs out of existence and about our putting civilization at risk by depending on a fragile information-preservation environment. We need to expand the discussion, and that is the purpose of this symposium: to get a variety of speakers, including many new ones, to discuss these issues and give us some perspective.

Most of my students simply do not seem to believe in the singularity—but, like me, they do not have the arguments necessary to support or refute it. So they endure an uncomfortable discussion and move on. Others may react differently—and perhaps we will see a return of Luddism, this time with legislators limiting research on and application of technology?

So there is my challenge to you all: Will computers out-compete us all, and how can we, in a clear-cut, understandable, and helpful way, make sense of the emerging competitive computer and the technological singularity?

References

[1] Dyson, G. Turing's Cathedral: The Origins of the Digital Universe. Pantheon Books, New York, 2012.

[2] Turing, A.M. Computing Machinery and Intelligence. Mind 59, 236 (1950), 433–460.

[3] Kurzweil, R. The Coming Merging of Mind and Machine. Scientific American (1999).

[4] Kelly, K. What Technology Wants. Viking/Penguin, New York, 2011.

[5] Arthur, W.B. The Nature of Technology: What It Is and How It Evolves. Free Press, New York, 2009.

[6] Christensen, C.M. and Raynor, M. The Innovator's Solution: Creating and Sustaining Successful Growth. Harvard Business School Press, Cambridge, 2003.

[7] Vinge, V. The Coming Technological Singularity: How to Survive in the Post-Human Era. In VISION-21 Symposium. NASA Lewis Research Center/Ohio Aerospace Institute. 1993.

[8] Kurzweil, R. The Age of Spiritual Machines. Penguin, New York, 1999.

[9] Kurzweil, R. The Singularity Is Near: When Humans Transcend Biology. Viking Adult, 2005.

[10] Moravec, H. Mind Children. Harvard University Press, Cambridge, 1988.

[11] Dennett, D.C. Consciousness Explained. Little, Brown & Co., Boston, 1991.

[12] Bolter, J.D. Turing's Man: Western Culture in the Computer Age. Unwin Brothers Ltd., Old Woking, Surrey, United Kingdom, 1984.

[13] Dyson, F. How We Know. The New York Review of Books (March 10, 2011).

[14] Gleick, J. The Information: A History, a Theory, a Flood. Pantheon Books, New York, 2011.

[15] Linden, D.J. The Singularity is Far: A Neuroscientist's View. boing boing, (July 14, 2011).

[16] Herculano-Houzel, S. The Human Brain in Numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience 3, 31 (2009).

[17] Joy, B. Why the Future Doesn't Need Us. Wired 8.04 (2000).

[18] Brynjolfsson, E. and McAfee, A. Race Against the Machine. Digital Frontier Press, 2011.

[19] Ferrucci, D. et al. Building Watson: An overview of the DeepQA project. AI Magazine 31, 3, (2010).

[20] Anthony, S. IBM Watson to battle patent trolls. ExtremeTech. 2011.

[21] Cox, S. Memories are Made of Disks. Sunday Times Magazine (London), 2011.

[22] Doherty, A.R., Gurrin, C., and Smeaton, A.F. An investigation into event decay from large personal media archives. In Proceedings of the 1st ACM International Workshop on Events in Multimedia (Beijing, China). ACM Press, New York, 2009, 49–56.

[23] Gemmell, J., Bell, G., and Lueder, R. MyLifeBits: A personal database for everything. Communications of the ACM 49, 1 (2006).

Author

Espen Andersen is Associate Professor of Strategy and Director of the Technology Strategy Center with The Norwegian Business School in Oslo. He is a frequent speaker and writer on technology and business strategy issues.

2014 Copyright held by the Owner/Author. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.
