In his article "Artificial and Biological Intelligence," Subhash Kak of Louisiana State University asks whether "humans will eventually create silicon machines with minds that will slowly spread all over the world, and the entire universe will eventually become a conscious machine." See http://www.acm.org/ubiquity/views/v6i42_kak.html. Below are some comments on his paper.
Date Posted: Monday November 21, 2005 08:13:01 PM
The article was very interesting. The author concluded with the words "the entire universe will eventually become a conscious machine". I have some questions about this. What is consciousness? Is the definition of consciousness restricted to how we humans and other living organisms behave? Why isn't the universe conscious? Is it because it does not conform to our version of consciousness? Or is it because we are not able to recognize other forms of consciousness? I would like to know the author's and others' opinions on the definition of consciousness. As our knowledge of science is incomplete, maybe we are not able to recognize other forms of consciousness. Maybe AI can benefit from this knowledge, as it probably can from quantum theory.
Date Posted: Tuesday November 22, 2005 06:03:10 PM
It is thought that science and God (the metaphysical) cannot go hand in hand. But none of the holy books, such as the Bible and the Koran (Hinduism being an exception), gives God any physical form. So it can be argued that God is nothing but a thought. The Bible says that all has one and only one origin. So what is that energy source from which all this creation sprouts?
Einstein showed that E = mc^2. Thus, all matter in this universe is a form of energy, and the two are interchangeable. Planck showed that E = hv, where v is the frequency of a wave and h is a constant; equating the two gives hv = mc^2. Thus, we are all in fact forms of energy and can be converted into waves. All the matter that sprang from the Big Bang, leading to the formation of atoms, is in fact one. Thus, science has indeed shown us that we all come from one origin, thereby uniting physics with metaphysics.
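The two relations combined in the post can be written out explicitly; setting the rest energy equal to the photon energy gives what physicists call the Compton frequency:

```latex
E = mc^2, \qquad E = h\nu
\quad\Longrightarrow\quad \nu = \frac{mc^2}{h}
```

For an electron (m ≈ 9.11 × 10⁻³¹ kg) this works out to ν ≈ 1.24 × 10²⁰ Hz, so the "wave" a piece of matter corresponds to oscillates extraordinarily fast.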
Now, since we are all made up of the same matter, the same atoms of carbon, hydrogen and the like, the point to be considered is when and how these atoms become intelligent, and where consciousness steps in. When a person dies, his material composition remains the same; what, then, is it that goes out of him? Where does this consciousness lie?
When we have found the answer to the above question, then probably it will be child's play to create "artificial intelligence".
Now, coming back to what we have, quantum physics and neural networks, it is seen that these two do make a good model for animal brains. But there are other organisms, such as amoebae and bacteria, which live without a brain and have survived probably since the inception of life on earth. One could argue that we are talking about intelligence here, but survival is the basic reason for intelligence to be present. So do we define intelligence only by the ability to manipulate nature artificially?
If only we could create enough neural connections and feed in the basic rule of survival, a silicon-chip robot might perhaps become an intelligent being over a million years of evolution.
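The "feed in the basic rule of survival" idea above is, in miniature, what genetic algorithms already do: keep a population of candidate designs, let the fitter ones survive, and mutate their offspring. Below is a toy sketch of that selection-plus-mutation loop; the bit-string genome, the fitness function, and every parameter are invented for this illustration and are not a model of real neuro-evolution.

```python
# Toy sketch of "selection plus mutation": a genetic algorithm evolving
# bit-string genomes. Purely illustrative -- the genome, the fitness
# function, and all parameters below are invented for this sketch.
import random

random.seed(1)
GENOME_LEN, POP, GENERATIONS = 32, 50, 200

def fitness(genome):
    # "Survival" here is just the count of 1 bits -- a stand-in objective.
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: the fitter half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    # Reproduction: each survivor yields one mutated child (2% bit-flip rate).
    children = [[bit ^ (random.random() < 0.02) for bit in parent]
                for parent in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # climbs toward the optimum of 32 under selection pressure
```

Nothing here "understands" survival; the appearance of purpose comes entirely from repeated selection, which is the point the post is gesturing at.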
Many scientists have argued that we cannot create intelligence through the conventional means of implementing equations in a silicon computer, but physics has revealed that almost every phenomenon in nature is governed by an equation, all the laws of physics and chemistry. Thus, isn't nature (which is a wider form of intelligence) a set of equations?
The only reason conventional methods cannot lead to intelligence is that an equation is a form of perfection, while intelligence is imperfect: its main aim is survival, not perfection. Thus, as argued in the paper, a silicon reorganizing computer would soon settle into a state of minimal power and then no longer respond to outer stimuli.
Considering the manner in which studies of intelligence are conducted on animals, I totally discount them. In one such study it is said that chimps and bonobos can be taught a vocabulary of several hundred words but not the syntax of a language. Now, if we teach a person how to add, divide and multiply, will he be able to solve a quadratic equation? And not only this: if we teach a person who does not know what numbers are, a nomad in the strict sense from the jungles of Africa, will he even be able to count? And we say the chimps and bonobos are not intelligent "enough"??? We try to judge animals according to our own parameters.
Maybe a bird is thinking that humans are idiots, since they cannot fly and only crawl on the ground!!!
The paper also brings out that pigeons and preschool children were given a similar test and their results were roughly comparable. In my view this only proves that pigeons are more intelligent than humans. I mean, if we left a newborn baby in a jungle, suddenly brought him back, gave him some training and put him through a test, would he even be able to take the test??? Let alone the question of comparison!!!
Also, it is argued that animals don't have language. But lately we have discovered that dolphins do in fact communicate, and chimps even play politics (watched in a documentary on National Geographic once). So the question might in fact be put this way: humans are still not intelligent enough to understand animal languages.
Now, coming back to quantum computing, which is a very good model of what we describe in human language as free will: it is very much possible that free will is just the manifestation of quantum phenomena in the human brain.
The model of a superorganism arising from ant and bee colonies is a very good example of collective intelligence, wherein all the parts of the colony perform their own function, the brain being mapped to the queen of the hive.
I am at a loss to comment on the issues discussed in the paper about split brains and blindsight.
I would say that, when nature took millions of years to create intelligent beings like humans, how can we expect to do the same within a century? If at any stage science can answer the question "where does consciousness lie?", we will be 90% of the way to our goal. In fact, it seems that one of the prerequisites for intelligence is an opposable THUMB!!!
Thus, all this research amounts to only small steps that we take in the direction of discovering the source of consciousness. Eventually, we will see AI walking the face of the earth.
Date Posted: Tuesday November 22, 2005 08:38:01 PM
I think consciousness is a form of light. As an example, we can see an image or any kind of picture in our dreams, with all the light focused on the image. Where has this light come from? We can understand that there is a light embedded in the compact, vacuum-like space of our brains, which tells us that there is a consciousness. Human beings have the capability to expand consciousness through experiences and to create new things, whereas animals don't have the capability to expand consciousness beyond a threshold. The expansion probability, or say elasticity, of consciousness for humans is unlimited. And the expansion of consciousness is the very purpose of the SELF.
As far as Wassermann's experiment with pigeons and children is concerned, in which he drew a distinction between animals and humans, cognitive thinking may also vary from pigeon to pigeon, from pigeon to child, and from child to child.
Now, a machine could act like a human being or an animal if we could provide it with that consciousness.
Probably a machine (say, a car with AI) could expand its consciousness by storing its daily experience (say, an AI car which can store images and sounds from its daily journey) and also by learning from other machines' experience and coming up with an imaginative idea of its own (say, a car which understands a fellow car's experience of the possible problems in a long journey and comes up with its own idea of how to deal with them). Scientists have succeeded in giving peripheral intelligence to machines, such as sensing and working on some predefined algorithms. The biological, physical, neural and quantum sciences each have their own deficiencies in predicting the exact working of the mind.
Even if we build the entire complex human structure, say a humanoid robot, it is impossible to provide it with the consciousness of an ideal human being.
Message edited by: sandeepkumar on Tuesday November 22, 2005 08:41:16 PM
Date Posted: Wednesday November 23, 2005 01:41:55 AM
This article brings forth many questions about intelligence, conscience, free-will vs. determinism, etc. The question that intrigues me the most is the question of the will and the conscience.
Determinism (as I understand it) was a popular philosophical view before the advent of quantum theory. Every event, it was said, has a deterministic cause. And yet, those who promulgated this (dare we say, fatalistic) theory still made moral judgments. Since quantum theory has told us that there exist in nature things which have no deterministic cause, is this the answer to the moral inconsistency of determinism? Dr. Kak seems to believe that it is indeed quantum theory that provides us with the basis for saying that humans do have genuine freedom: "One striking success of the quantum models is that they provide a resolution to the determinism-free will problem." He says that consciousness is the "observing" that breaks the "strict regime of causality."
If quantum theory does indeed provide us with a sufficient answer to declare every human (and some sub-humans) as endowed with free-will, what then? The questions of morality lurk just beneath the surface of the ocean of differences of opinion between determinism and free-will. Either option will face the obvious question, "what of morality?" The determinist, as I stated above, is forced to go against his or her own stated opinion if he or she is to make any moral judgments. The free-willer must also face several difficult questions. How are differences in will related to conscience? Are there inherent differences in the will of different individuals? Are people assigned a disposition of will at birth (by genetics or some other means)? What about the hedonistic assumption ("people always choose the perceived path of greatest pleasure" [Dilbert Cartoon, 2004-11-12] )? In addition to questions such as these, many questions of morality also surface.
In passing, I would also like to note that Paul Davies in "The Mind of God" uses examples such as the "universe as a giant machine" to make some interesting observations. "When it comes to computation, obviously only those regions of the universe between which information can flow can be considered as a part of a single computing system; this will be the region within our horizon. Imagine that every particle in this region is commandeered and incorporated into a gigantic cosmic computer. Then even this awesome machine would still have limited computational capabilities, because it contains a finite number of particles (about 10^80, in fact). It could not, for example, even compute pi to infinite precision. . . This carries the implication that [pi] could not be considered to be a precise, fixed number. . . but would be subject to uncertainty (Davies, 147)."
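Davies's point is that any physical computer must fix a finite precision before it starts. As a small illustration (not from the article), here is Machin's 1706 formula, pi/4 = 4·arctan(1/5) − arctan(1/239), carried out in exact integer arithmetic: the number of digits must be chosen up front, and no finite machine can run the loop "to infinite precision".

```python
# Digits of pi via Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239),
# using only integer arithmetic. The precision is fixed before we start.

def arctan_inv(x, scale):
    # arctan(1/x) * scale, via the series 1/x - 1/(3x^3) + 1/(5x^5) - ...
    val = scale // x
    total, n = val, 1
    while val:
        val //= x * x
        n += 2
        if (n // 2) % 2:
            total -= val // n
        else:
            total += val // n
    return total

def pi_digits(digits):
    scale = 10 ** (digits + 10)           # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi // 10 ** 10                 # drop the guard digits

print(pi_digits(20))  # 314159265358979323846, i.e. 3.14159265358979323846
```

Doubling `digits` doubles the work and the storage; an infinite-precision pi would need unbounded amounts of both, which is exactly the constraint Davies draws from the universe's finite particle count.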
Date Posted: Wednesday November 23, 2005 05:47:32 AM
The complexity of the thought process has not been fully understood, and perhaps will not be understood for thousands of years, if ever. Thought has historically been perceived to be either analytical/logical or abstract. Abstract thought has been romanticized, perhaps deservedly. However, it is unclear to me whether the abstract thought process has been understood. The underlying similarities between quantum physics and abstract thought are intriguing: the minute you try to analyze a thought, it collapses. So here arises the question of the utility of AI from the point of view of trying to generate serendipity, as opposed to focused and cold computational power. I have not understood the mechanisms by which serendipity manifests itself.
The question now arises if AI attempts to create an environment where serendipity can manifest itself in a synthetic, "artificial" manner, or to utilize the intelligence of machines to carry out tasks in a focused analytical manner that is rarely seen in humans. This prompts me to question the goals of AI, and wonder if the goals are to understand the human thought process, or to create an environment that creates and sustains productive thought.
The concept of neural networks is intricately woven into AI. Neural networks provide us with a basis for AI because they offer a wide variety of models that are roughly similar to the way the human brain works. The perception of biological intelligence evidenced by the experiments conducted on non-humans is very intriguing, but it can be mimicked to some extent by the use of neural networks. Training algorithms are constantly being developed to achieve a higher level of pattern recognition through multi-layered networks, although these are still based on traditional computing methods. The question of whether or not this AI will emulate the complex and massive parallelism employed by the human brain is a thought-provoking one. The closest we might come to explaining certain biological processes just might be quantum computing, as the author suggests.
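The multi-layered pattern recognition mentioned above can be made concrete with a minimal sketch: a one-hidden-layer network trained by plain gradient descent to learn XOR, a pattern no single-layer network can represent. The layer sizes, learning rate, and step count here are arbitrary choices for the illustration, not anything from the article.

```python
# Minimal multi-layer network: one hidden layer of sigmoid units trained by
# full-batch gradient descent on XOR. Illustrative only; all hyperparameters
# (8 hidden units, learning rate 1.0, 10000 steps) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Mean squared error before any training, for comparison.
loss0 = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)              # forward pass, output layer
    d_out = (out - y) * out * (1 - out)     # backprop of squared-error loss
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

loss = float(((out - y) ** 2).mean())
print(round(loss0, 3), round(loss, 4))  # the error should fall well below its start
```

The "training algorithm" here is just repeated error correction; the multi-layer structure is what lets the network carve out the non-linearly-separable XOR pattern.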
Having stated the above points, I agree with the view point that biological intelligence is multi-layered and complex. The idea of a universal consciousness has been dealt with in Jungian philosophy and to a large extent in eastern, particularly, Hindu mysticism. While this provides a rich, complex and intelligent view of consciousness, to extend this to AI would require an understanding of the philosophy that transcends the philosophy itself, into the physical realm.
Therefore, the conclusions that ensue would be to define the goals of AI in the first place, and to place an increased emphasis on understanding the human thought process. I strongly feel that the goal of computation and thought itself is to find order in chaos, or rather to organize and analyze logic, to arrive at conclusions that are practical to the sustenance and advancement of civilization, and to maybe someday understand the abstract.
Date Posted: Wednesday November 23, 2005 11:31:02 AM
I would just like to make a comment at an abstract level. I feel human beings are desperately striving to understand creation, or "God" as religions call it, either through science or through religion. I do not believe either field (science or religion) has done so, even though both like to say they have. For if "God" were truly understood, none of us would still be trying to discover new things in science or founding new or modified religions, because once the source of everything is understood, there are no more mysteries. I believe AI is a field which will evolve constantly, because we have yet to understand the basics of one of the most complex components of the universe, human consciousness, at least the most complex among what we know. Another thing I would like to point out is that nowadays more and more theories are being built on previous assumptions. I don't know, but I fear there may come a day when a "Copernicus" will say "the earth is round" and everything we know will fall apart.
Date Posted: Wednesday November 23, 2005 02:51:31 PM
I agree that the brain is not an ordinary machine. Its capabilities are unlimited. We cannot design a machine which has all the capabilities of the brain. The capability of the brain comes out according to the situation; all we can do is design a machine that handles certain situations. Moreover, machines can do only the work assigned to them, or the work they are designed to do. They cannot do their own work, because they do not grasp the meaning of "I".
Machines can do things based on logic. The brain can do things based on logic and intuition. Can machines have intuition? Can they think on their own? I don't think so.
The brain can adjust to anything on its own; it can adjust to a new language. But a machine cannot do that on its own. It is dependent on its logic and on the person who designed it.
Machines fall behind living organisms in many ways: in understanding their surroundings, and not just understanding them, but in having some special relation to the things they have come across. The physical structures may be the same, but the relationships associated with them may change according to the events associated with them. Machines can only recognize and perform the tasks assigned to them.
Date Posted: Wednesday November 23, 2005 03:33:13 PM
The article was a little lengthy but interesting enough to hold the reader's attention. I especially liked the arguments, "Brain and Mind", "Biological Intelligence", "Reorganizing Signals", "Anomalous Abilities and Deficits Amongst Humans" and "Split Brains and Unification", given by the author to support his thesis that knowledge of self-organization and quantum characteristics is inadequate to explain the capacities associated with brains.
I believe that there is a supernatural power (God) which is responsible for providing intelligence to animals and human beings (human beings being his best creation). Further, human beings are also given the ability to adapt and change their thinking, to develop consciousness of their environment. Humans have come a long way, inventing outstanding things, and in the future it may be possible to create machines with consciousness with the help of quantum mechanics and AI, but they will have certain limitations.
Finally, I agree with the author's concluding remark that if conscious machines are created, then humans have the ability to adapt and develop their consciousness to accept this new life form, thus making the universe a conscious machine.
Department of Electrical Engineering.
Date Posted: Thursday November 24, 2005 05:27:49 PM
I believe that consciousness arises from the curiosity of humankind. This curiosity arises in humans from questioning everything around them, including their own existence. It is not merely collecting data from the surrounding environment with our 'sensors' and processing this information. Machines can collect huge amounts of data and process them in a short amount of time, yet there is still something missing. You can call it the sparkle, desire, or intuition which triggers a phase of thinking. Interpreting this process the other way leads to considering humans as machines, a view which started in the '30s, as the author states. This school of thought perceived humans as machines having inputs and outputs. So every industrial discipline treated workers as machines and took the necessary measures to maximize the production of these machines. What was left out was the humanistic side of the machine. I believe that humankind still suffers from this misinterpretation.
Date Posted: Thursday November 24, 2005 05:30:08 PM
Things that seem impossible for a period may turn out to be very easy after a while. People laughed at the Wright brothers when they were trying to fly; however, they demonstrated to people that it is possible to fly... Furthermore, history is full of similar examples. I do not agree with arguments such as "we cannot design a machine like the brain", "we cannot do this", "we cannot do that"... I believe that everything begins with imagination. How many people, 50 years ago, would have believed someone who told them that genetics, as a science, was going to advance this much?
The human body is a wonderful system; it has extraordinary capabilities, such as thinking, and I believe it also has the capability to design a machine like the brain. Humans have come a very long way from the invention of the wheel to the computer age, and have a very, very long way to go. I think humans are going to be able to design systems which have something like a brain, consciousness, even intuition. Judging by the acceleration of technology over the last several decades, I would say that day is not too far off. Maybe not we, but our children are going to see those days; maybe not our children, but our grandchildren...
But who knows, maybe we will see... everything is possible...
My concern is with the usage of the technology. I am afraid that there is going to be a day when no humans survive, due to a disease which is a product of humans, and the human race will be finished.
To conclude, I think conscious machines are going to be designed, even with intuition, and the universe is going to become a conscious machine, if humans do not annihilate their own kind first. "Never say impossible."
Date Posted: Friday November 25, 2005 03:17:52 AM
I do not think we will ever be able to model consciousness. Consciousness embodies a sense of existence and meaning ("raison d'etre") which is neither absolute nor rational at times. These qualities are inherent in humans. How do we model something that humans have yet to understand? How far do we go with this limited knowledge? How much biological behavior are we going to mimic in our attempt to give the machines consciousness? Do we trust these machines to operate autonomously without oversight? If so, what kinds of activities will they be allowed to operate in this capacity?
I do not deny that one day machines will be created that closely parallel human behavior. I just do not believe that we will ever get there 100%. Assuming that consciousness can be achieved, in time humans will be able to populate the world with these machines. I also assume that the technology of the future will be sufficient to connect these machines so that communication is easily achieved. They could therefore operate as one, forming one big conscious machine.
Date Posted: Friday November 25, 2005 02:29:21 PM
My opinion is that abstract conceptualization is the key to realizing how far we can go with machine automation, because humans can only simulate or program something that is known to them. So to implement brain and mind, as defined in the article, we need to understand the full functionality of the brain, and that cannot be satisfactorily explained by neural networks and other available methods.
Machine-learning procedures try to learn from the environment (similar to our daily life experience) or from oracles (similar to our books). But is that good enough to trigger an impulse when some unknown process takes place in the environment, without any prior knowledge (intuition)? Genes might be able to answer why humans behave regularly or randomly, but that is still beyond our knowledge and a subject of speculation.
So if we are not able to explain the behavior of our own brains even now, how can we think of implementing it properly? We may try something that partly (in a very small part) looks human, like the automated car mentioned in the article, or self-learning machines. But we do not know how to implement abstract conceptualizations like love, respect, hatred, wit, gratefulness, responsibility, etc., and, importantly, how to differentiate among those concepts.
Speaking of the article, I found it a well-argued one with good examples to support its points, and I agree with the author's point of view.
It's very easy to say "nothing is impossible" (Napoleon Bonaparte) and hope that these concepts will be implemented one day. But asking how is much more difficult. I'd prefer Einstein to Napoleon and hail his quote: "No, this trick won't work...How on earth are you ever going to explain in terms of chemistry and physics so important a biological phenomenon as first love?"
Date Posted: Friday November 25, 2005 06:15:15 PM
I think it is too optimistic to believe that humans will be able to create machines with consciousness in the future. Humans were created by a superior power, and they were created in such a manner as to be able to develop, be influenced by the environment, and freely take decisions based on their consciousness. Machines, on the other hand, were designed by humans to solve problems based on classical logic that is generally accepted by most people. In my opinion, consciousness is something that machines will never be able to have. Consequently, machines will never have "free will", and they will always remain under human control.