

Ubiquity

Volume 2022, Number April (2022), Pages 1-18

Ubiquity Symposium: Workings of science: AI in 2156: the science of intelligence
Kemal A. Delic, Jeff A. Riley
DOI: 10.1145/3512335

While most people are not familiar with the details of how artificial intelligence (AI) works, the term itself is becoming more familiar to the non-scientific community, to the point that "AI" has almost become part of the everyday vernacular of the ordinary person. AI has been with us for a long time: from its first beginnings in ancient times in the form of automatons and other devices mimicking humans or other animals, through the middle of the last century when the term "artificial intelligence" was actually coined, to the present day, when it (the label rather than the actual technology) is entering the psyche of the general public.

This article explores the notion that the technology we call artificial intelligence is not yet ripe, but is establishing itself as a science in its own right, and that by 2156, the 200th anniversary of the coining of the term, the technology should be in a position to deliver on its promises.

There is a long history of works on the imitation of human behavior, from ancient stories and Greek myths, through medieval devices built for amusement, to modern approaches to mimicking human thinking (see Appendix, "A Brief History of AI"). The field of artificial intelligence (AI) began at a summer workshop in 1956, where this unlucky term was coined and research was set in motion. Huge interest in AI followed, supported by generous funding, and the field subsequently survived (at least) two winters during which interest waned and funds dried up.

These days, AI sits somewhere between high hopes and strong fears, but it is certainly omnipresent in all kinds of media. AI has passed through an initial period of theoretical exploration, often overpromising and later disappointing, and has advanced to the point that many narrow applications have appeared as a result of experimentation, fine-tuning, and engineering adaptation. We can say that AI is a very successful marketing tool if you do not ask for details, and reasonably successful in deployment on mobile phones and in home gadgets, but as a general theory of intelligence it is largely seen as a failure.

In this article, we submit that AI should become an established science in its own right, but that this must be projected on a much longer time scale, should involve several neighboring disciplines, and must rest on sound scientific foundations in the established sciences of biology, chemistry, physics, and mathematics. We project the year 2156 as a possible horizon for this to happen, marking 200 years since the coining of the term at the Dartmouth workshop.

ON INTELLIGENCE: SEVEN AI TECHNOLOGIES

The word "intelligence" derives from the Latin nouns intelligentia or intellēctus, which in turn stem from the verb intelligere, to comprehend or perceive. In the Middle Ages, the word "intellectus" became the scholarly technical term for understanding, and a translation for the Greek philosophical term nous.

We discuss manifestations of intelligence, starting from the presumed supreme intelligence of humans all the way down to the supposed intelligence of plants and fungi. We hypothesize about the forces and factors shaping each form of intelligence, while suggesting that the ultimate purpose and mission of each can be captured by a single, well-chosen word. For example, coordination is the word that best describes the working mechanism of insect intelligence. Table 1 depicts five layers of intelligence observed on Earth, starting with fungi and ending with human intelligence, all somehow intertwined in the mystery of life on the planet.

"Intelligence" is a loaded word, indeed. It may have different meanings in different contexts, it casts a long shadow on philosophy, and causes significant controversies when associated with engineered machines, devices, and instruments. It becomes very challenging when confronted with the biology of life. Huge effort and literature was/is dedicated to the subject of intelligence and life on Earth, with notable examples of Schrodinger, Turing, Von Neumann. [1, 2, 3]

Over time, the AI field has developed a significant number of technologies based on emulating human intelligence and mimicking human behaviors in a wide variety of activities: from game playing, through problem solving, to highly qualified expert behavior in several fields (see Table 2).

Many of these technologies produce results that approach, if not match, human performance, and while these results are typically achieved in well-defined, bounded application domains, many of the technologies are likely to be generalizable, in time, to other domains. However, despite the much-hyped AI renaissance of the past decade or so, AI is still in its infancy, and matching the ensemble performance of the human brain is not likely to be achievable in the foreseeable future. The best we can hope for at the current stage of development is to mimic and match the parts of the human brain that are responsible for specific activities, and to apply what we learn to specific domains. The hope is that, over time, our ensemble of technologies will converge on the ensemble performance of the human brain. Not all of these technologies have reached maturity or very wide applicability, but they have seeded augmentations of computing technology, as was most recently the case with deep neural networks. Whether AI will ever match, or even come close to matching, the emergent abilities and functionality of the human brain (e.g. intuition, consciousness), whether AI could ever "think like humans," is not clear and is much debated [5, 6].

It is our belief that future advances will be based on observing intelligent behaviors in nature and trying to understand the deep principles involved, not just the external manifestations. Furthermore, aiming to mimic human intelligence is a noble goal, but we can, and should, learn more from the behaviors and intelligence of insects, animals, fish, and plants. Though their intelligence is not comparable to human intelligence, these organisms have survived for millions of years, and have evolved behaviors that we can learn from.

THIRD WAVE OR THIRD WINTER?

As we look back over the past several decades, and consider the immediate future, we clearly observe three distinct ages of AI research transforming high aims into practical systems and technologies:

The embryonic age is filled with enthusiasm, wildly optimistic promises, and ultimately missteps, while research struggles to understand the basics of AI (e.g. problem solving, state-space search, and gaming). The focus of the field is more on mechanization and automation as first steps than on developing a deep understanding of the biological processes that enable living systems to survive in a constantly evolving environment.

Much of the early hype emanated from uninformed commentators who equated the electronic computer with the human brain, and electronic computing and computation with human thought. Media perpetuation of the hype created quite unrealistic, sometimes bizarre, expectations, and even some fear, in the general public, while much of the AI research community went along for the ride out of self-interest: the publicity helped attract research funds.

The embedded age is facilitated by advances in computing technology, power, and speed. It is now possible to create embedded systems better able to emulate intelligent behavior. The embryonic search of earlier times is rescaled into an omnipresent global hyper-structure acting as an enormous knowledge base accessible to anyone, at any time, from anywhere. Mobile devices are now apparently able to understand speech and comprehend language. Machines outperform human world champions in chess, Go, and natural-language knowledge quizzes. Robotics research advances from impressive demonstration devices to consumer and industrial robots doing real work in real environments. Autonomous vehicles, already prevalent in industrial and mining environments, are beginning to make appearances on our roads.

The embodied age is in its infancy. We expect social robots, some humanoid, to move from novelty to mainstream in the foreseeable future. Some social robots will exist purely to serve us as they interact with us: as waiters, bartenders, cooks, drivers, guides, caregivers, and so on. Others will exist simply as companions, some humanoid, but others constructed to resemble beloved pets.

We expect to see many more intelligent drones of varying capabilities and forms deployed in many more applications, all learning from their interactions with their environments. Swarms of intelligent drones, each capable of communicating with and learning from its peers, will be capable of carrying out tasks that in the past required large machines and many human operators.

We already see the occasional intrepid early adopter with microchips or other miniature electronic devices embedded in their bodies. These devices allow their hosts to monitor their own bodies and upload the data to a remote database for analysis, communicate and interact with nearby devices, and be monitored and tracked externally. We expect more integrated cyborgs—humans augmented with advanced mechanical, electronic, and AI devices—to become more prevalent in the not-too-distant future.

THE NEXT 100-PLUS YEARS: TIME TO DELIVER

There are many lessons for the AI community to learn from the past two winters, but probably the most important is that we need to set realistic expectations around the size, scope, and speed of progress to be made. After all the enthusiasm, hype, and promises of the embryonic age, what we have learned best is that AI is hard. Nature may make it look easy, but nature has had millions of years to find and refine solutions through trial and error; the AI research community has no such luxury. AI research, like all hard research, is complex, time-consuming work, characterized by many blind alleys, missteps, and wrong turns. It will take time: progress has been slow and will continue to be slow, but progress is being made and will continue to be made, incrementally, and the field will deliver if we can ignore the uninformed hype and focus instead on what must be done to properly address the research issues.

Artificial intelligence research is a broad, complex field, combining cognitive science, computer science, and robotics, but also reaching into quite diverse areas such as biomechanics and biometrics, law, ethics, and sociology, to name just a few. Artificial intelligence can no longer be considered a sub-branch of computer science: with AI producing potentially transformative applications across all areas of society, including commerce, industry, health, entertainment, the arts, sport, and leisure activities, we must take a more multidisciplinary approach to AI research.

Moreover, many questions resulting from AI research cannot be answered strictly from a science or engineering perspective alone but require intimate involvement from a broad range of disciplines and participants. These questions are not limited to science. One of the goals of AI research is to develop intelligent systems that will safely interact with humans and the physical world across all of society, so questions of ethics, governance, impact, and accountability need to be addressed.

Because the products of AI research will become so pervasive, and so important, across all of society worldwide, it is imperative that there be global coordination of research, and of funding for research, to provide the right balance of academic, sociological, ethical, legal, scientific, industrial-innovation, and technology-development skills and expertise to guide the field far into the future. The Agricultural Age lasted for some ten millennia, while the Industrial Revolution ushered in the new Industrial Age some two hundred years ago. The new age of information is now upon us, and just as the Industrial Revolution changed agricultural society beyond recognition, we believe the Information Age, which may be driven largely by AI, will cause a profound change in society, the economy, and ecology.

We will see the emergence of a hybrid society consisting of humans assisted by a wide variety of embodied, autonomic devices, working together in a smooth partnership in the new digital economy. This partnership will become essential to managing a balanced exploitation of limited resources as the Earth's population expands at an increasingly high rate. Humans will likely be enhanced (augmented) and more powerful (we are destined to become a race of cyborgs), and will live very different lifestyles. Augmentation of humans will take different forms, and may, as a result, spawn different cyborg castes.

Augmentation will come in two forms:

External, wearable devices and prostheses, such as exoskeletons and artificial limbs that will enhance or replace the biomechanical functions of the human body, with embedded intelligence that will allow these "enhancers" to learn and adapt, to predict and anticipate. These devices will both enhance and protect humans—protect from the stresses and injuries caused by overtaxing the human body, as well as from external threats such as falling objects, projectiles, and predators.

Internal, or embodied, devices that will interface directly with the human brain and nervous system. These augmentations will range from enhanced sensors and intelligent bio-monitoring devices capable of regulating human functions (heart rate, chemical levels, etc.), through auxiliary memory and computation devices (co-processors, long-term storage arrays), to devices capable of wireless interfacing with external computing and information systems (e.g. GPS, weather information systems, emergency services and alerting systems).

The science of intelligence (previously known as AI) will infuse new technologies into all aspects of social, economic, and cultural life. The composition of the world's population will change: Cyborgs and robots will join humans as functioning members of the population. Political systems will be changed beyond recognition. The role of humans vis-à-vis cyborgs vis-à-vis full robots will need to be considered. Laws to deal with the reality of different lifeforms, even to define what a lifeform is, will need to be debated and implemented.

We have navigated the embryonic and embedded ages of AI, and we are just embarking upon the embodied age (Figure 1). The embryonic and embedded ages of AI have built the foundations for the embodied age, the age of augmented humans and autonomic devices. The stage is now set for AI to come into its own. AI has the potential to enhance and improve the lives of humans enormously, and so it must. The next 100 years, the embodied age of AI, is the time for AI to deliver on its promises, after surviving the third winter [6].

References

[1] Schrödinger, E. What is Life? With Mind and Matter and Autobiographical Sketches. Cambridge University Press, 1992.

[2] Von Neumann, J. The Computer and the Brain. Yale University Press, New Haven 2012.

[3] Turing, A. Computing machinery and intelligence. Mind, 1950.

[4] Riley, J. The elusive promise of AI: A second look. Ubiquity 2021, April (2021).

[5] Riley, J. Will machines ever think like humans? Ubiquity 2021, June (2021).

[6] IEEE Spectrum. The Great AI Reckoning. Special Report, October 2021.

Authors

Kemal A. Delic is a Senior Visiting Research Fellow working in The Center for Complexity and Design at The Open University. Previously he was a Senior Enterprise Architect and Senior Technologist with Hewlett-Packard for many years.

Jeff A. Riley is the founder of Praescientem, a company specializing in AI and IT consulting and education services. Previously he was a Master Technologist and Scientist with Hewlett-Packard for many years. Jeff holds a master's degree in IT, a Ph.D. in AI, and is a former Adjunct Principal Research Fellow of RMIT University in Melbourne, Australia.

Figures

Figure 1. The three ages of artificial intelligence.

Tables

Table 1. Five Layers of Intelligence

Table 2. Seven AI Technologies

©2022 Copyright held by the Owner/Author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2022 ACM, Inc.
