
Ubiquity

Volume 2014, Number November (2014), Pages 1-9

Ubiquity symposium: The technological singularity: exponential technology and the singularity
Peter Cochrane
DOI: 10.1145/2667644

The Priesthood of the Singularity posits a fast-approaching prospect of machines overtaking human abilities (Ray Kurzweil's The Singularity Is Near, Viking Press, 2005) on the basis of the exponential rate of electronic integration—memory and processing power. In effect, they directly correlate the growth of computing technology with that of machine intelligence, as if the two were connected in some simple-to-understand and predictable way. Here we present a different view based upon the fundamentals of intelligence and a more likely relationship. We conclude that machine intelligence is growing in a logarithmic (or at best linear) fashion rather than at the assumed exponential rate.

At a modest estimate there are more than 120 published definitions of intelligence penned by philosophers and theorists. Unfortunately, none provides an iota of understanding, enlightenment, or quantification. Worse, the established IQ measure [1] is both flawed and unhelpful. It is therefore difficult to have a sensible conversation about "machine intelligence" when we have no adequate definition, description, quantification, or measure to refer to.

In addition, the commonly used metric of counting the number of processors and interconnections, and multiplying the two to create a single figure of merit, is too simplistic to be meaningful and does not reflect any real notion of intelligence. Estimates of machine intelligence growth on this basis would suggest HAL 9000 [2] should be alive and well by now, but clearly he is not! Moreover, a direct linear relationship between machine size (processing and storage) and "intelligence" seems unlikely on an intuitive basis: Large complex systems rarely, if ever, exhibit simple linear relationships, as complexity begets complexity.

Back To Basics

Real-life experiences, experimentation, signal theory, and thermodynamics present us with clues and chart a clear direction for a more realistic understanding of intelligence. In brief, we can formulate a more general argument along the following lines:

  1. Slime molds and jellyfish (et al.) exhibit intelligent behavior without distinct memory or processing. They have only "directly wired" input sensors and output actuators. (A proviso here is that we exclude the delay between sensing and reacting as a distinct memory function, and the sensor-actuator pair as a one-bit processor, on the basis that both are so minimal as to be insignificant in the context of this paper. This also turns out to be a good engineering approximation based on the capabilities of far bigger and more complex organic and silicon systems.)
  2. In the main our machines keep memory and processing as distinctly separate entities, but this is seldom the case in organic systems, where they overlap and share functionality and interconnection. (Again, this assumption of separation will suffice for the class of machines being considered. The distinction is made because the immediate temptation is to apply the theory that follows as a general model for organic life; whilst it is not entirely inapplicable there, this limitation should be borne in mind for any later expansion, extension, and/or application.)
  3. Intelligent behavior is possible without memory or processors—simple sensors and actuators alone can furnish that facility. Conversely, systems cannot exhibit intelligence without an input sensory system or output actuators. (This was recently writ large by experiments on human subjects. Place a fully able human in an MRI scanner, ask them to close their eyes and imagine they are playing tennis, and their brain lights up. Repeat that experiment with patients who have been comatose for many years and the same result is often evident. To date, their inability to communicate has seen them classified as brain dead, without feeling, response, or intelligence [3].)
  4. All forms of natural and artificial intelligence encountered so far invoke state changes in their environment. Taking an "information theoretic" stance, we can therefore assume that all natural and artificial intelligences are inherently entropic, in that they produce some form of state change or re-ordering of bits and/or atoms in the overall environment. For the most part the entities that embody intelligence are themselves impacted/changed, along with the nature of the originating or prime-mover intelligence. In this respect "feedback" is a powerful non-linear mechanism of change that can be translated into "learning or experience." (A comparison of such change sees an expansion or compression of the quantity of information derived from the original input. For example, the answer to the question "Why is the sky blue?" would contain far more words than the reply to "Do we know why the sky is blue?") This implies that entropic change may be the key characteristic of an intelligent entity.

Analysis

Previously published intelligence measures have assumed a direct correspondence with the exponential growth in computational power and storage capacity, with no account taken of any input or output mechanisms. But, as argued above, this is patently not the case. More reasonably, and rationally, it is likely that intelligence is related to the information contained in the complete system, the sensory system, and some measure of reaction/response time. If we follow this line we discover that it leads to a different outcome from the "singularity conclusion."

As we are now looking at a "state change or information transforming machine," it is entirely appropriate (and experimentally confirmed) to employ an entropic measure to account for any reduction or increase in the system's information (its state change), and to take this as a measure of applied or relative intelligence. We therefore define a measure of comparative intelligence as:

I = | E_before − E_after |   (1)

We take the modulus value because we are using the "state change" as our measure. Entropy E = the amount of information required to exactly define the system's state.
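As a toy illustration of (1), consider an entity whose action drives a 16-symbol environment from a mixed state to a nearly uniform one. The Python sketch below, using entirely hypothetical states, computes the Shannon entropy of each state and the modulus of the change; it illustrates the measure only, not any real organism or machine.

import math
from collections import Counter

def entropy_bits_per_symbol(state):
    # Average bits needed to specify one symbol of the state (Shannon entropy).
    counts = Counter(state)
    n = len(state)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

before = "0110100110010110"  # hypothetical mixed environment state
after = "0000000000000001"   # hypothetical near-uniform state after the entity acts

# Comparative intelligence per Equation 1: the modulus of the entropy change.
delta = abs(entropy_bits_per_symbol(before) - entropy_bits_per_symbol(after))
print(f"|E_before - E_after| = {delta:.2f} bits per symbol")  # ~0.66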

For the purpose of formulation and demonstration we assume a simple intelligent entity consisting of a single sensor (S), actuator (A), processor (P), and memory (M), as depicted in Figure 1. We also skirt the complex and precise functional operations of the S, A, P, and M functions, as in the general case we have no method or mathematical framework that allows us to adequately describe these temporal processes. Fortunately, it turns out to be sufficient to apply "weighting values" to each of the functional elements, so that each of S, A, P, and M is represented by a weight rather than by its precise function.

Here S, A, P, and M denote the weighted values only and do not imply any defined functional operation.

Only in very simple and limited cases can we define the above functions (S, A, P, and M) accurately and work through the processes of a specific intelligent entity a case at a time. We should also note that our simplified diagram (see Figure 1) is but a shadow of the true complexity we encounter in nature and in many man-made systems. Thousands of inputs and outputs, plus Hebbian storage compounded by hundreds of feedback and feedforward loops, are very common in natural and engineered systems [4]. A key limiter is thus our mathematical framework, which does not extend beyond order 5; even at this level it can deal with only a very narrow range of options. However, testing the Figure 1 model with more loops and elements has been attempted using conventional system analysis, justifiable engineering license, and approximations; the results always reduce to the approximate form depicted in Equation 5. The extent of "real system" complexity has also been modeled with numerous parallel sensors, processors, memories, and actuators included, along with more feedback/feedforward loops.

Biology and natural-systems groups at UC San Diego have recently applied this approach, its assumptions, and the formula to amoebae and other simple organisms, where "intelligence" has long been suspected but has previously escaped analysis or quantification. Their results have yet to be published and more widely discussed.

Here we denote the temporal functions associated with the input/output of each functional component S, A, P, and M as s(t), a(t), p(t), and m(t), but for ease and clarity of annotation in Figure 1 we omit the temporal element (t) in each case. From here on we take the weighted value of each of these elements to be s, a, p, and m.

Analysis therefore gives a transfer function of the form:

[Equation (2): the transfer function of the Figure 1 model, relating actuator output a(t) to sensor input s(t) through the individual weights, including the processor terms p1 and p2 and the memory term m.]

At this point we apply an aggregated weighting per group of like values for the general operators, with p1 p2 => P, which reduces (2) to:

[Equations (3) and (4): stepwise reductions of (2) after the aggregation of like terms into the single weights S, A, P, and M.]

In bigger and far more complex arrangements, with multiple S, A, P, and M elements interlinked, the necessity for the notational moves and aggregation above becomes more obvious, with s1, s2, ... sn; a1, a2, ... an; p1, p2, ... pn; and m1, m2, ... mn aggregating to S, A, P, and M respectively.
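To see why the aggregation step is harmless for the class of machines considered here, the sketch below treats each block as a simple scalar linear gain (an assumption made for this illustration only, not a claim about the real S, A, P, and M functions): a cascade of processor gains p1 ... pn is then exactly equivalent to one aggregated gain P equal to their product.

import math
from functools import reduce
from operator import mul

# Hypothetical per-stage processor gains p1..pn (illustrative values only).
p = [2.0, 0.5, 3.0, 1.5]

# Pass a unit signal through the cascade one stage at a time...
signal = 1.0
for gain in p:
    signal *= gain

# ...and confirm it matches a single aggregated weight P = p1 * p2 * ... * pn.
P = reduce(mul, p, 1.0)
assert math.isclose(signal, P)
print(f"cascade output = {signal}, aggregated P = {P}")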

Using entropic change (1) as the defining property of intelligence, along with the weighted values, leads to a reasonably general formula for a machine's comparative intelligence:

I ≈ | S·A·(1 + log2(1 + P·M)) |   (5)

To obtain a relative measure of intelligence between systems/machines we now include a "time to complete a task" component, N. Hence, our final formulation for comparing systems is:

I ≈ | S·A·(1 + log2(1 + P·M)) | / N   (6)

Where N = the number of computational cycles or FLOPs needed to complete the task.
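A minimal sketch of (6) as reconstructed above, comparing two entirely hypothetical machines; the weights and cycle counts are illustrative assumptions rather than measurements, and the equation itself is inferred from the surrounding text rather than taken from the original images.

import math

def comparative_intelligence(S, A, P, M, N):
    # Equation 6 (as reconstructed): sensor/actuator weights gate the
    # logarithmic contribution of processing and memory, normalized by
    # the cycles N needed to complete the task.
    return abs(S * A * (1 + math.log2(1 + P * M))) / N

# Two hypothetical machines: B has 1,000x machine A's processing-memory
# product but identical sensors, actuators, and task time.
machine_a = comparative_intelligence(S=1.0, A=1.0, P=1.0, M=1.0, N=1.0)
machine_b = comparative_intelligence(S=1.0, A=1.0, P=1000.0, M=1.0, N=1.0)
print(f"B/A intelligence ratio: {machine_b / machine_a:.1f}")  # ~5.5x, not 1000x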

Observations

This relationship (5) agrees with two essential properties observed in natural systems, both consistent with experimental findings:

  1. With zero processor and/or memory power, intelligence is still possible.
  2. With zero sensor and/or actuator power, intelligence is impossible.

We can also see the derived relationship (5) flies in the face of the established wisdom of singularity believers, as the speed of intelligence growth is logarithmic, not exponential, in memory and processing power. So, if we see a 1,000-fold increase in the product of processing and memory (PM) power, then by virtue of the log2 function (log2 1,000 ≈ 10) intelligence increases only some 10-fold. Hence a full 1,000,000-fold increase in PM sees intelligence grow by just 20-fold. This is far slower than previously assumed and goes some way to explaining the widening gap between prediction, experimentation, expectation, and reality.

At this point a further important observation is that sensors and actuators have largely been neglected as components of intelligence. But from (5) it is clear they play a fundamental part in the intelligence of anything. Interestingly, this fact has been observed in other disciplines but not directly linked to the whole; perhaps more critically, sophisticated sensors have only recently emerged as "key capability" components in robotics, artificial intelligence, and control systems.

Extending The Analysis

Suppose we now include the growth of memory and processing power as exponential components—so that:

PM ∝ 2^(kt), and hence log2(PM) ∝ kt

Now suppose further that our actuator and sensor technology is also improving at an exponential rate—which it is, but far more slowly than processing and memory; we can then show that:

I ∝ 2^(εt)·(1 + kt) ≈ kt for ε << k

And so the growth of intelligence as a function of time (or technology growth) is at best linear with time, and not exponential.
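The sketch below tabulates this behavior under the reconstructed model; the growth constants are illustrative assumptions chosen only to show the shape of the curve (an exponentially exploding PM product against near-linear intelligence growth).

import math

K = 1.0     # hypothetical exponential growth rate of the PM product
EPS = 0.02  # hypothetical (much slower) growth rate of sensors/actuators

for t in range(0, 31, 5):
    pm = 2 ** (K * t)    # processing-memory product explodes...
    sa = 2 ** (EPS * t)  # ...while sensors/actuators creep upward
    i = sa * (1 + math.log2(1 + pm))
    print(f"t={t:2d}  PM={pm:12.0f}  I={i:6.1f}")
# PM grows a billion-fold by t=30 while I rises roughly linearly (~2 to ~47).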

More Speed To Come

With the arrival of a myriad of new sensor technologies and components—and their rapid deployment on the periphery of networks, the Internet, robotics, and systems large and small—we are indeed moving toward achieving truly intelligent entities. But it is not entirely clear just how quickly this is happening. For sure, machines now play a better game of chess than we do; IBM Watson can answer detailed questions faster than we can; machines manufacture most of what we consume and use; and they provide many of our life-support and high-risk support functions. But they are still some way off becoming generally intelligent and utilitarian.

Sensors 'R' Us

Our intelligence is founded on our interactive capabilities—sensors and actuators—and we learn through experience, thinking, and reasoning. For machines to match and/or overtake us they will need a sensory capability par excellence. Where will it come from? How about the nano-devices embedded in our mobile devices, providing sight, sound, touch and vibration, position, and direction data, coupled into a global network that increasingly mimics biological life? In the longer term we will most likely see chemical sensors and biosensors embedded in mobiles for medical and care applications. This implies a default future of mobile intelligences networked across the planet, akin to a giant ant colony.

Sleep Easy!

Personally, I am not worried about the machines taking over anytime soon, and certainly not by the much-celebrated 2035. However, I am keen to see them move into my life and help me achieve more in the time I have left. What really concerns me is one vital question: Will we be smart enough to recognize a new intelligence when it erupts on the Internet or within some other complex system? So limited and limiting is our appreciation of what intelligence is and could be that we are almost certain to see some surprises emerging from the multi-dimensional, non-linear, networked data space we are now constructing. In the meantime, I don't feel threatened by my laptop, Watson, robotics, or the Internet, and I find I can sleep at night!

Originally Submitted March 2013.

References

[1] Binet, A. New methods for the diagnosis of the intellectual level of subnormals. In E. S. Kite (Trans.), The Development of Intelligence in Children. Publications of the Training School at Vineland, Vineland, N.J., 1916. (Originally published in L'Année Psychologique 12 (1905), 191–244.)

[2] Clarke, A. C., and Kubrick, S. 2001: A Space Odyssey [Motion picture]. Metro-Goldwyn-Mayer, United States, 1968.

[3] Owen, A. M. et al. Detecting awareness in the vegetative state. Science 313, 5792 (2006), 1402.

[4] Hebb, D.O. The Organization of Behavior. Wiley & Sons, New York, 1949.

Author

As a scientist, engineer, entrepreneur, advisor, and consultant to governments and companies, Peter Cochrane (OBE, BSc, MSc, PhD, DSc, CGIA, FREng, FRSA, FIEE, FIEEE) has worked across circuit, system, and network design; software, human interfaces, and programming; adaptive systems; AI and AL; and company transformation and management. He was formerly CTO of BT, has held the Collier Chair for the Public Understanding of Science & Technology at Bristol, and has been a visiting professor to CNET, Southampton, Nottingham Trent, Robert Gordon's, Kent and Essex Universities, and University College London. He has received numerous awards, including the C&G Prince Philip Medal, the IEEE Millennium Medal, an OBE, the Queen's Award for Innovation, and the Martlesham Medal.

Figures

Figure 1. The assumed "simple" model.

Figure 2. A comparison of entropic and simple linear projections of AI.

2014 Copyright held by the Owner/Author. Publication rights licensed to ACM.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.

COMMENTS

Agreed, great article, I would however have liked to see a more systemic approach taken on the potential damage that can be caused by the relentless drive towards a future of machines. It strikes me that the more we innovate, the less we consider humankind and the need to keep people occupied and fruitful, in order to avoid other concerns. Many of today's problems are associated with machines, yet we continue to use reductionist thinking in our pursuit to find the answer to problems through machines. Maybe there needs to be some guidance on what makes a machine beneficial, and how they can quickly become a burden ...

– John Frieslaar, Fri, 09 Jan 2015 12:22:11 UTC
