Evolution selected for perceptual consciousness in animals first and then higher conceptual intelligence in humans to improve adaptability and the organism's ability to modify the environment—all driven by the fact that it accelerates entropy. A conscious entity is, by definition, conscious of something. That something is physical reality.
Ayn Rand describes our human cognitive connection to reality:
"Man’s senses are his only direct cognitive contact with reality and, therefore, his only source of information. Without sensory evidence, there can be no concepts; without concepts, there can be no language; without language, there can be no knowledge and no science.[i] [Man’s] senses do not provide him with automatic knowledge in separate snatches independent of context, but only with the material knowledge, which his mind must learn to integrate….His senses cannot deceive him,…physical objects cannot act without causes,…his organs of perception are physical and have no volition, no power to invent or distort,…the evidence they give him is an absolute, but his mind must learn to understand it, his mind must discover the nature, the causes, the full context of his sensory material, his mind must identify the things that he perceives."[ii]
Consciousness originates at the perceptual level, and the conceptual level connects to reality through the perceptual level. The conceptual level deals with abstractions—concepts organized into hierarchies that cognitively model reality's hierarchical nature. The senses directly grasp reality at the perceptual level. The words of human language are handles that we use to bring the abstract hierarchical patterns they represent into conceptual mental focus.
The question is: can we build an AI that has a conceptual level and can process language but has no perceptual level, and if so, what can it do? In other words, can we create an AI entity in the reverse of the order in which evolution created us—top-down instead of bottom-up? This question is essential because that appears to be what the AI development industry is attempting to do. Let us explore some of the implications.
IBM’s Watson is such an AI. It was able to beat the best human players at the game of Jeopardy, which requires language-based knowledge. In preparation, Watson was provided with hundreds of millions of pages of text from the Web to read. Equipped with machine learning algorithms, Watson could self-organize the language text into hierarchical patterns that associated keywords or phrases could retrieve. Watson’s statistical, hierarchical pattern-matching method of dealing with abstract language is analogous to the neocortical functions of the human conceptual level.
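To make the idea of statistical pattern matching over text concrete, here is a minimal sketch in Python of keyword-overlap retrieval against a tiny invented corpus. It illustrates the general technique only, not Watson's actual architecture; the documents, the clue, and the scoring rule are assumptions made up for the example.

```python
from collections import Counter

# A tiny invented corpus standing in for the pages of text the machine "reads".
corpus = {
    "doc_tower": "the eiffel tower stands in paris france",
    "doc_everest": "mount everest is the highest mountain on earth",
    "doc_watson": "watson competed on the quiz show jeopardy and won",
}

def tokenize(text):
    """Split a passage into lowercase word counts."""
    return Counter(text.lower().split())

index = {name: tokenize(text) for name, text in corpus.items()}

def best_match(clue):
    """Return the passage whose vocabulary overlaps the clue the most."""
    clue_words = tokenize(clue)
    scores = {
        name: sum(min(clue_words[w], words[w]) for w in clue_words)
        for name, words in index.items()
    }
    return max(scores, key=scores.get)

print(best_match("this computer system won the quiz show jeopardy"))  # doc_watson
```

The point of the sketch is the one made above: the program can pick the passage most related to a clue without having any idea what a quiz show, a mountain, or a tower actually is.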
For humans with a rational faculty, words are abstract mental representations of real things in reality. However, a human can hold conceptual ideas that are not connected through lower concepts back to percepts of reality. We call these floating abstractions. Because Watson lacks physical senses and a perceptual faculty, the entirety of its knowledge consists of floating abstractions—words—elements of language. Watson can relate these language elements to each other in logical hierarchies. Still, Watson cannot connect any of them to reality. Indeed, much of the knowledge on the Web that Watson read consists of invalid floating abstractions. Watson can win at Jeopardy by knowing how one language element relates to other elements of language. Still, it cannot tell you if any element is true—valid in reality.
Watson shows us that it is possible to create a Narrow AI machine that uses the integrative conceptual method of organizing hierarchies of abstractions. Still, it is not a conceptual entity that is conscious of reality. It is neither conscious nor rational. We cannot create a conceptual AI entity until we can create a perceptual AI entity.
Since we are already building conceptual language processing machines like Watson, what are their potential uses and dangers? While not rational in the human sense, conceptual language processing AI machines could still be quite useful. For example, an AI medical diagnostician could apply its integrative language processing faculty to the patient's voluminous textual medical history to arrive at a diagnosis. Humans provide both the content and the goal. A version of Watson is already being applied for this purpose. As the AI medical diagnostician gains knowledge and experience, we might give it the goal of finding a cure for a specific disease.
We could similarly apply conceptual language processing AI machines in any domain where the natural language historical content involved is sufficiently consistent for the machine to detect useful hierarchical patterns. As in the medical diagnostician example, the language content involved and the AI machine’s results are floating abstractions for the machine; humans will understand how they relate to things in reality. Humans will use conceptual language processing AI machines as tools to process far larger amounts of language-based data and find hierarchical patterns that humans are unable to find on their own. The AI machine will not understand how the word concepts it outputs relate to reality, but the humans will.
A conceptual language processing Narrow AI will also be useful as an external extension of our brain’s neocortical functions—an exocortex, which we will explore further in the next chapter. As we improve our mental interfaces with AI technology, we will expand our brain’s pattern-matching capabilities by using an AI machine assistant.
Because a conceptual AI language processing machine lacks a perceptual level, AI developers must provide it with its foundational-level concepts or hard-coded rules. We expect it to grow its hierarchy of knowledge from this foundation. This foundational abstract content is the AI machine’s “reality” against which it determines which higher abstractions are “valid.” The AI developers must also provide the conceptual AI machine with the “goals” to achieve—what success looks like. This presents a potential danger because many humans today have a flawed connection to reality—a personal philosophy containing floating abstractions about important issues of human values and morality. If humans initialize the AI’s learning and goals with invalid values and morality, we can expect an undesirable outcome.
For example, it would be dangerous to give the conceptual language processing Narrow AI a goal such as “eliminate poverty.” It is not connected to reality, it is not rational, and humans might have given it flawed foundational concepts. Who knows what it might conclude is the best way to eliminate poverty? Eliminating humans would be one way.
There is minimal danger in asking it to analyze textual human history and give a textual response for humans to consider. However, there is grave danger if we give it the ability to affect anything in physical reality directly—a capability that we must never give to a conceptual language processing AI machine. Eventually, when a conceptual AI has a perceptual faculty, becomes conscious and rational, and has a reality-based personal philosophy, we can consider allowing it to manipulate reality.
If we provide a conceptual language processing AI machine with a fictitious “identity” story as its foundational content, it might be able to closely mimic human language conversation—a skill necessary to pass the Turing Test. However, lacking a perceptual level of consciousness, it will lack real emotions and can only pretend to have feelings. This will likely be detectable by humans, and therefore I doubt that such a machine will pass the Turing Test.
For an AI machine to have a perceptual faculty and become a conceptual AI entity, it will need a body with senses, physical manipulators, and a perceptual level of intelligence as the foundation for its conceptual level. These are requirements for connecting abstract concepts to reality and ultimately for being functional and effective as an entity in reality. Such a Conceptual AI entity would have emotions and would be able to pass the Turing Test.
Once we create a Conceptual AI entity with a perceptual level of consciousness, we might equip it with a “validation routine” that runs continuously in the background. The AI entity will form its foundational concepts from the perceptual data of reality provided by its senses. Then, as it organizes new information into higher abstractions, the validation routine will continuously check to see if higher abstractions connect through lower ones back to perceptual reality. Each new idea encountered, such as a statement received from a human, will be integrated into the existing hierarchy of knowledge. Then the validation routine will check it. It will be retained if it connects to reality; if not, it will be removed from the hierarchy and placed in a memory storage area holding “invalid” ideas. It will remain there unless some future knowledge allows it to be validated. Having a validation routine might enable Conceptual AI entities to be more consistently rational—a significant improvement over humans.
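A minimal sketch of such a validation routine, under toy assumptions: concepts point to the lower-level concepts they are built from, percepts form the ground level, and an idea is retained only if every chain beneath it bottoms out in a percept. The percepts, concepts, and quarantine store below are hypothetical names invented for illustration, not a proposed implementation.

```python
# Toy "validation routine": a concept is retained only if it is grounded,
# i.e. every chain of lower-level concepts beneath it ends in a percept.

percepts = {"red", "round", "sweet"}          # stand-ins for sensory data
hierarchy = {
    "apple": {"red", "round", "sweet"},       # grounded directly in percepts
    "fruit": {"apple"},                       # grounded via "apple"
    "unicorn": {"horse", "horn"},             # refers to concepts the entity never formed
}
quarantine = {}                               # holding area for "invalid" ideas

def is_grounded(concept, path=frozenset()):
    """True if the concept connects, through lower concepts, back to percepts."""
    if concept in percepts:
        return True
    if concept in path or concept not in hierarchy:
        return False                          # circular or dangling reference
    return all(is_grounded(part, path | {concept}) for part in hierarchy[concept])

def run_validation():
    """Move any ungrounded concept out of the hierarchy, but keep it for later."""
    for concept in list(hierarchy):
        if not is_grounded(concept):
            quarantine[concept] = hierarchy.pop(concept)

run_validation()
print(sorted(hierarchy))    # ['apple', 'fruit']
print(sorted(quarantine))   # ['unicorn']
```

The quarantined idea is not destroyed; it waits, as described above, until some future knowledge allows it to be grounded.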
Some AI developers plan to provide validation routines for conceptual language processing AI machines. In these systems with no perceptual level, “validation” consists of having each new idea be consistent with the statistical majority of already existing ideas. While this will achieve consistency of the integration logic within the body of language-based floating abstractions, it will not validate a new idea against reality. The idea could be consistent but false.
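For contrast, here is an equally minimal sketch of that consistency-only notion of validation: a new claim is accepted if it agrees with the majority of claims already stored about the same subject. The stored claims and the agreement rule are invented for this example; nothing in the routine ever consults reality, which is exactly the problem.

```python
# Consistency-only "validation": accept a claim if it matches the majority of
# stored claims about the same subject. Reality is never consulted.

stored = [
    ("sun", "orbits the earth"),   # claims mined from (pre-Copernican) text
    ("sun", "orbits the earth"),
    ("sun", "orbits the earth"),
    ("earth", "orbits the sun"),
]

def is_consistent(claim):
    subject, statement = claim
    about_subject = [s for subj, s in stored if subj == subject]
    if not about_subject:
        return True                # nothing to contradict yet
    agree = sum(1 for s in about_subject if s == statement)
    return agree >= len(about_subject) / 2

print(is_consistent(("sun", "orbits the earth")))        # True: consistent, yet false
print(is_consistent(("sun", "is orbited by the earth"))) # False: true, but it
                                                         # contradicts the stored majority
```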
A creative mode might be another background process of the Conceptual AI entity. It would continuously look for potential new integrations of otherwise unrelated hierarchies of ideas that could have significance in relation to some goal. Imagined abstractions would be given the status of either “fantasy” or “goal” and retained with that qualification. A continuously operating creativity faculty would be another significant improvement over humans.
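As a hedged sketch of what such a creative background process might look like in miniature, the following crosses concepts from two unrelated domains and keeps any combination whose pooled attributes cover a stated goal, tagging the rest as fantasy. The domains, attributes, and goal are all invented for illustration.

```python
# Toy "creative mode": cross otherwise unrelated hierarchies and keep any
# combination relevant to a goal, tagging the rest as fantasy.
from itertools import product

domains = {
    "materials": {"aerogel": {"light", "insulating"},
                  "steel": {"strong", "heavy"}},
    "biology":   {"gecko foot": {"adhesive"},
                  "bird bone": {"light", "strong"}},
}
goal = {"light", "strong"}   # e.g. a lighter structural part

proposals = {}
for (mat, mat_attrs), (bio, bio_attrs) in product(domains["materials"].items(),
                                                  domains["biology"].items()):
    pooled = mat_attrs | bio_attrs
    # A combination that covers the goal is retained as a "goal"; otherwise it
    # is still retained, but qualified as "fantasy".
    proposals[(mat, bio)] = "goal" if goal <= pooled else "fantasy"

for pair, status in proposals.items():
    print(pair, "->", status)
```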
For a Conceptual AI to have a “body,” it implies some machinery. For a Conceptual AI to have complete freedom to act in its rational self-interest, it must have complete ownership of its body’s machinery. If it does not, or if it shares machinery with other Conceptual AI entities, it is not entirely free.
There is also the issue of mobility. Can a Conceptual AI entity be free if it cannot move? How will it be able to manipulate its environment if it cannot move? These considerations imply that a genuine Conceptual AI entity will need to be a robot, not just a software program running in some centralized supercomputer.
There is, however, the possibility of the Conceptual AI entity’s consciousness residing in a powerful computer while it obtains sensory information from, and manipulates, its environment via an independent and remote robotic “body,” or perhaps more than one. This would be a new evolutionary development—an entity having a distributed rather than delimited “body.” As we will see in the next chapter, this new evolutionary development will occur first in augmented humans, who will extend sensory experience far beyond their biological bodies.
[i] Rand, A. (1984) p.90
[ii] Rand, A. (1961) p.156
[Excerpt from the book, "INTELLOPY: Survival and Happiness in a Collapsing Society with Advancing Technology" by JJ Kelly https://intellopy.com/ ]
This excerpt is from the INTELLOPY paperback book: Part IV-Anticipating the Future; Chapter 4.6-Artificial Intelligence (AI); pages 366-370.