Volume 2014, Number November (2014), Pages 1-7
Liah Greenfeld and Mark Simes have long worked together, integrating the perspectives of two very different disciplinary traditions: cultural history/historical sociology and human neuroscience. The combination of their areas of expertise in the empirical investigation of mental disorders, which severely affect intelligence—among other things—has led them to certain conclusions that may throw a special light on the question of this symposium: Will computers outcompete us all?
From our point of view, the "us all" object of the question—"Will computers outcompete us all?"—refers to human beings, and presumes that the individual and collective human capacities—particularly, the capacities of the mind, or intelligence—are essentially comparable to the capacities of computers. Only on the condition of the essential comparability of human intelligence and the cognitive capacities of computers does the question of this symposium make sense. The answer, therefore, entirely depends on whether these capacities are indeed so comparable, consequently bringing into question the nature of human intelligence and thus of humanity.
Admittedly, the biological, or neuroscientific, response to this question is unclear. The prevailing approach in the field of human neuroscience emphasizes the size and complexity of the human brain vis-à-vis other nervous systems in an attempt to explain the unique qualities of human intelligence. The logic that supports this approach is based on the assumption that an increase in neuronal density and network complexity necessarily results in the appearance of qualitatively new cognitive capacities. The perceived task of neuroscience, therefore, is to unpack the complexity of the human brain to find the "missing link"—or links, for the sake of complexity—that result in something akin to the cogito of Descartes.
The concept of technological singularity is based on a similar logic and imagines a process that travels from the original point of human intelligence in the opposite direction of biological reductionism, though its principles are fundamentally the same. Futurists predict that, whatever the technological medium may be, engineering a sufficient increase in computational complexity will result in machine intelligence that replicates and perhaps surpasses the cognitive capacities of the human brain. The futurist-technological position, therefore, seeks to re-pack the processing complexity of the human brain to arrive at virtual human minds.
Gerald Edelman cites the incredible complexity of the human brain in his book Bright Air, Brilliant Fire. In the cortex alone, he writes, there are about 10 billion (10^10) neurons. The actual connections between these neurons may number one million billion (10^15). As for possible connections in this matrix, Edelman writes that this number is "hyper-astronomical"; he must mean this in a very literal sense, because he then goes on to indicate that it exceeds the number of positively charged particles in the known universe. Edelman's preliminary conclusion from these incredible facts is that the size and complexity of the human brain make it "so special that we could reasonably expect it to give rise to mental properties."
What is overlooked in such paeans to quantity and complexity, however, is the astounding regularity with which these connections/networks/brains seem to form in the billions of individual humans who span distances and generations. The essential question may not be how this complexity gives rise to human intelligence or consciousness, but how it becomes systematically ordered, so that any process that an individual brain supports (that is, any individual mental process) becomes an organized, patterned process—to say nothing of its self-intelligibility or its intra-species communicability.
The theory of evolution provides us with an explanation of how complex nervous systems evolved in multicellular organisms, allowing animal bodies to interact with a dynamic and unpredictable external environment. This dynamism and indeterminism of stimuli in the environment are correlated with the nervous system's unique physiological characteristic: the capacity of neurons and networks to organize learning and memory. Interactions with external stimuli effect changes in the nervous system, which organize and solidify networks of neurons to respond and combine in ways that reflect the influence and challenges of the species' environment. In every case, therefore, it is a combination of the genetic information of a species and its interactions with the environment that organizes the networks of its nervous system.
In the biological world, stimuli occur as signs, directly conveying to an organism information derived from a physical-chemical aspect of their referent in the environment. Empirical investigation (that is, investigation of actually existing characteristics) of human cognitive processes, however, shows that humanity is essentially unlike any other animal species in one crucial respect, from which numerous characteristic features derive. Unlike the rigid, determined relationship that animal nervous systems and societies have with signs in their environment, the defining feature of human mental stimuli is that they are laden with meaning that cannot be traced to the physical-chemical constituents of the medium in which they are delivered. Instead, the primary stimuli in human mental life are symbolic. While all other animal species process signs in their environment and transmit their ways of life (including their social organization) genetically, humans are constantly interacting with, and transmit their ways of life by means of, symbols.
Symbols are intentionally articulated signs and, in sharp contrast to signs, they represent phenomena of which they are not a part. In this sense they are arbitrary, dependent on choice. The meaning (the significance) of a symbol is given to it by the context in which it is used, and this context is constituted by associative relationships to other symbols. Language is the clearest example of this feature; words are not definite, and linguistic communication is both a creative act on the part of the producer and an interpretative act on the part of the receiver. As a result of the dynamic, ever-changing meaning of symbols and their contextual dependence upon an equally dynamic matrix of other symbols, the significance of any instance or set of symbols is both constantly changing and endlessly proliferating. It is this dynamic change and self-proliferation of symbols that creates the innumerable variability among human minds and human societies. We call this symbolic process of transmission of human ways of life culture, and assert that it is the symbolic nature of culture that constitutes the causal force in human history.
In the words of the great historian and philosopher of history Marc Bloch, historical science (which focuses on human history, whose subject matter and data all the social sciences and humanities share) is the science of the mind. It is focused on the qualities and permutations of human consciousness. Indeed, it is one such permutation—claimed to be singular and unprecedented in its dimensions and importance—that the concept of technological singularity predicts. The verdict regarding technological singularity depends on whether history allows for such a singular and absolutely unprecedented change, or whether all great historical transformations, of which there have been many, are fundamentally the same. This leads us to consider the nature of human consciousness itself—the mind.
For those who, while perhaps experts in other areas, consider humanity only from the perspective of laymen, the mind is just another name for the brain. Thus Dan Dennett without much ado equates the human person with "the program that runs on your brain's computer." This lay perspective reduces humanity to a biological species, qualitatively (that is, essentially) equating it with all other biological species, from which it may then be distinguished only quantitatively; it is a necessary background for the concept of technological singularity. Only within its framework does the question "Will computers outcompete us all?" make any sense, and only within its framework can it be raised and answered.
In contrast, we argue that culture makes humanity, and therefore human intelligence, a reality sui generis—a reality of its own kind. It is this process of transmission, unique in the animal kingdom, that explains why only humans have history and why, in distinction to even the most remarkably sophisticated, minutely stratified, and rigidly structured animal societies—such as those of bees, of wolves and lions, or of our closest primate cousins—human societies are almost infinitely variable across distances and generations. Culture constitutes a world of its own: an autonomous, self-creative world that functions according to historical laws of causation that do not apply anywhere in non-symbolic reality.
Of course, the symbolic, historical world of culture is supported by the mechanisms of the human brain, without which, it is certain, it could not have emerged in the first place. The use of every symbol, the perception of its significance, its maintenance and transformation, is supported by the mechanisms of the individual brain and reflected in some, not necessarily specific, physical-chemical neuronal activity. Therefore, the symbolic and historical cultural process is also a mental process. But culture does not endure by originating repeatedly in newborn individual brains; rather, every new human brain is born into a ready-made cultural environment, rich with symbolic stimuli. Culture is the symbolic process by which humans transmit their ways of life on the collective level; on the individual level—the level of the individual human being with his or her brain in which this process is active—this process is called the mind. On both the collective and the individual level it is at every moment the same process, separated only by the focus of analysis (i.e., whether it is psychological or sociological). Thus, we can accurately call the mind "culture in the brain."
In certain respects the brain can be compared to a computer. However complex the former is in comparison to the latter, the difference between them is quantitative, pertaining to how much information from the outside each can process and how fast and accurately it can do so. But the mind is an altogether different matter: it is not a more powerful brain than any other we know, because it is not a brain at all, and for this reason it cannot be compared to even the most powerful computer imaginable. The mind, as its definition as "culture in the brain" suggests, is instead a symbolic process representing an individualization of the collective symbolic environment. While the mind is by no means equivalent to the brain, it is certainly supported by the brain at every moment in the process, and it may be, in fact, the symbolic processes of the mind/culture that organize the connective complexity of the individual brain.
Thus, in distinction to both the current neuroscientific paradigm and the approach of futurists who equate complex structure with emergent, intelligent capacities—remember, the foundations of these two schools are fundamentally identical—we hypothesize that the symbolic, cultural environment is causally responsible for reining in the hyper-astronomical complexity of connective possibilities in the human brain. Furthermore, we argue that the mapping and explanation of the organization and biological processes of the human brain will only be complete when this symbolic, and therefore non-material, environment is taken into account.
This approach, although most directly relevant to human neuroscience, has important implications for any project in artificial intelligence. First, it places primary emphasis on the significance of symbolic processes rather than on the configuration or capacities of hardware, assuming no transformation of quantity into quality (a transformation the concept of technological singularity takes for granted). Second, it implies that the symbolic nature of human mental processes must be the central focus of any effort to replicate human intelligence artificially. In distinction to previous analogies in the philosophy of mind, it also does not liken the mind/brain relationship to a software/hardware system. This is because the mind, the symbolic cultural process, is a self-generating and endlessly creative process—a feature that no dynamic code structure begins to approximate.
In neuroscience it is illogical to dig into the minutiae of the structure and function of the brain with the expectation of explaining how our biological nature may have, at one original point, given rise to symbols. This activity is retro-speculative in an unscientific sense—even Darwin fervently highlighted the inability of science to explain origins.1 What we do have empirical access to is evidence of the human symbolic process all around us: the mind, though symbolic and therefore non-material, constantly creates material by-products and leaves material side effects (such as buildings, roads, domesticated animals, pollution, and computers) outside of us. As scientists we have the possibility of taking this unique type of data into account while analyzing the incredible organ that is constantly involved in interpreting and generating symbolic stimuli, and perhaps of applying our understanding to virtual models that more accurately represent the unique nature of human intelligence.
In the present paradigm, however, computers no more compete with minds than high-speed trains or fast-running cheetahs compete with Shakespeare (a comparison which, however lame, is possible). A core quality of the symbolic and historical process of human life, which distinguishes humanity from all other forms of life, making it a reality sui generis on both the collective level (as culture) and the level of the individual (as the mind), is its endless, unpredictable creativity. It does not process information: it creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things; ergo, computers cannot outcompete us all.
Originally Submitted May 2012.
Liah Greenfeld is best known as the author of the trilogy on modern culture (Nationalism: Five Roads to Modernity; The Spirit of Capitalism: Nationalism and Economic Growth; and Mind, Modernity, Madness: The Impact of Culture on Human Experience; Harvard University Press, 1992, 2001, and 2013).
Mark Simes holds an interdisciplinary Ph.D. from Boston University where he has studied the overlap between human cognitive neuroscience, philosophy, and social theory; previously, Mark worked in user interface design and software development. His current academic research investigates the role of time in mental and neural activity and analyzes the functional relationship between the human mind and the human brain by focusing on the processual nature of these coincident realities.
1In a letter to J. D. Hooker dated March 29, 1863, Darwin wrote, "But I have long regretted that I truckled to public opinion, and used the Pentateuchal term of creation, by which I really meant 'appeared' by some wholly unknown process. It is mere rubbish, thinking at present of the origin of life; one might as well think of the origin of matter."
©2014 ACM $15.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.