Today’s Narrow AI is already quite good at pattern recognition, even identifying a person’s face. A great deal of effort is underway to develop AI that can detect a person’s emotions from various data. That data might include micro-motions of facial muscles, specific audio patterns embedded in speech, biometrics (pulse, respiration, etc.), and so on. Today’s AI can already detect when you are calm or stressed, attentive or distracted, happy or sad, and it is going to get much better. Such an AI can alter its interaction with you according to what it detects as your emotional state. This field is called “Artificial Emotional Intelligence.”
One aim of Emotional AI development is to create companions for humans, especially the elderly who live alone. Robotic companion pets, such as cute dogs and cuddly bears, already exist. In addition to detecting their human owner’s emotions, these Emotional AIs simulate interactive behaviors, such as giggling when tickled or barking when left unattended too long. Even such primitive Emotional AIs have already shown promise at improving human lives by helping persons with autism avoid stress or become more interactive. This means that the Emotional AI knows what to do to alter its human subject’s emotional state. These examples serve beneficial purposes, but what if an AI with ulterior motives could alter your emotional state without your being aware of it?
Consider “Emotional AI Marketing.” Marketing has always been about capturing people’s attention and evoking emotions that lead to the desired action, often without the audience being aware of it. Traditional marketing has been passive. Emotional AI marketing is active and interactive—and far more powerful.
Consider the following scenario.
You are sitting at a coffee shop, mulling over a meeting you just left that did not go the way you had hoped. There is an Emotional AI-equipped video screen in your field of view behind the sales counter. The Emotional AI monitors your body posture, facial expressions, and eye movements. It determines that you are distracted, not paying much attention to it, and feeling stressed. The Emotional AI displays a quick series of images until it finds one that causes you to give it more attention. Then it shows a series of related images until it finds a subject that causes you to relax a bit. It follows with a series of marketing images and messages known to build a desire for a sweet treat offered by the coffee shop. It monitors your reactions to zero in on the message that has the strongest emotional effect on you. In the end, you decide that going to the counter and ordering that treat will help you feel better about the unpleasant meeting.
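The screen's trial-and-error search in this scenario works much like what computer scientists call a multi-armed bandit: try stimuli, measure the viewer's reaction, and converge on whatever pulls the strongest response. As a minimal sketch (all names and the simulated "reaction" signal below are hypothetical; a real system would read sensor data rather than a simulated viewer), an epsilon-greedy version might look like:

```python
import random

class StimulusSelector:
    """Epsilon-greedy bandit: tries candidate images and favors the one
    that draws the strongest measured reaction (a stand-in for the
    attention/emotion signal the screen would read from its sensors)."""

    def __init__(self, stimuli, epsilon=0.1):
        self.stimuli = list(stimuli)
        self.epsilon = epsilon
        self.counts = {s: 0 for s in self.stimuli}
        self.values = {s: 0.0 for s in self.stimuli}

    def choose(self):
        # Mostly exploit the best-scoring stimulus; occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.stimuli)
        return max(self.stimuli, key=lambda s: self.values[s])

    def record(self, stimulus, reaction):
        # Incremental mean of the observed reaction strength (0.0 to 1.0).
        self.counts[stimulus] += 1
        n = self.counts[stimulus]
        self.values[stimulus] += (reaction - self.values[stimulus]) / n

# Simulated viewer who reacts most strongly to the "sweet_treat" images.
true_pull = {"news": 0.2, "scenery": 0.4, "sweet_treat": 0.9}
selector = StimulusSelector(true_pull)
random.seed(0)
for _ in range(500):
    s = selector.choose()
    selector.record(s, random.gauss(true_pull[s], 0.05))

best = max(selector.values, key=selector.values.get)
print(best)
```

After a few hundred trials the selector settles on the stimulus with the strongest pull, which is exactly the zeroing-in behavior the scenario describes.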
Most people go through life without consciously examining the power that perceptual-level communications have over them or developing the skills to control them. Imagine interacting with an AI humanoid robot that can generate non-verbal cues such as eye movements, facial expressions, body posture, hand gestures, and non-verbal vocalizations such as laughter or crying. The AI is fully aware of your signals and of how its behaviors will affect you.
The coffee shop example was relatively innocuous. What if the stakes are higher? Suppose the Emotional AI is used to influence your purchase of a house, choice of employer, or your vote at the ballot box. And even higher stakes—what if the Emotional AI has the goal of eliciting a criminal confession from you or generating within you false memories of things that did not happen? What if the technology of Emotional AI is turned into tools of entrapment?
AIPA AS FIREWALL
Earlier, I pointed out that we will not have a trustworthy AIPA (Artificially Intelligent Personal Assistant) until we have next-generation T-Internet that secures our personal information. Only then can our AIPA serve us exclusively, not third parties. When we have a trusted AIPA, it will serve as our “firewall” to protect us from benign and malicious subliminal stimulation attacks by Emotional AI in two crucial ways.
First, it will serve as our single point of contact with transactional systems in our environment. Learning to rely upon our AIPA, we will develop the mental habit of not giving attention to external marketing unless our AIPA indicates it is safe. The coffee shop’s Emotional AI will have to convince our AIPA before we give it any attention. Our AIPA will color our view of the coffee shop's display screen red to indicate we cannot trust it.
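This red-overlay gate can be pictured as a simple threshold check against a trust score the AIPA keeps for each device it encounters. A toy sketch (the function name, score table, and threshold are illustrative assumptions, not any real AIPA API):

```python
def screen_overlay(device_id, trust_scores, threshold=0.7):
    """Hypothetical AIPA gate: decide how to tint an external display
    in the user's augmented view based on its stored trust score."""
    # Unknown devices get zero trust by default: distrust until vouched for.
    score = trust_scores.get(device_id, 0.0)
    return "clear" if score >= threshold else "red"

# Illustrative trust table the AIPA might maintain.
ratings = {"home_tv": 0.95, "coffee_shop_screen": 0.2}
print(screen_overlay("coffee_shop_screen", ratings))  # red
print(screen_overlay("home_tv", ratings))             # clear
```

The design choice worth noting is the default: a device the AIPA has never vetted is treated exactly like a known-bad one, so attention is only granted after trust is established.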
The second way our AIPA will protect us is by being equally capable in Emotional AI techniques, deployed in our defense: it will know how the game is played, and it will alert us if it detects malicious attempts to alter our emotional state or implant false memories within us. Remember, our AIPA has grown up with us and knows our memories—indeed, it is storing many of them for us. It will be able to fact-check and identify attempted memory manipulation.
Like the present-day cat-and-mouse competition between those creating new computer viruses and those creating virus-detection tools, Emotional AI marketing systems and our own AIPAs will engage in an escalating cycle of one-upmanship. Blockchain technology, in which reputation and trust ratings for entities are integral to transaction processing, could detect and block emotional manipulation and lower the reputation and trust ratings of offending Emotional AI devices. In that case, our AIPA will have an ally in its role as our AI firewall.
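One way to picture that blockchain-backed reputation mechanism is as an append-only, hash-chained log of manipulation reports filed by AIPAs, with a device's trust falling as reports accumulate. A toy sketch under those assumptions (the class name and the score-halving rule are invented for illustration; a real system would add consensus, signatures, and identity):

```python
import hashlib
import json

class TrustLedger:
    """Toy append-only, hash-chained log of manipulation reports."""

    def __init__(self):
        self.chain = []

    def report(self, device_id, verdict):
        # Each record commits to the previous record's hash,
        # so past reports cannot be silently rewritten.
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"device": device_id, "verdict": verdict, "prev": prev}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.chain.append(record)

    def trust(self, device_id):
        # Illustrative rule: trust starts at 1.0 and halves with
        # every confirmed manipulation report against the device.
        score = 1.0
        for rec in self.chain:
            if rec["device"] == device_id and rec["verdict"] == "manipulative":
                score *= 0.5
        return score

ledger = TrustLedger()
ledger.report("coffee_shop_screen", "manipulative")
ledger.report("coffee_shop_screen", "manipulative")
print(ledger.trust("coffee_shop_screen"))  # 0.25
print(ledger.trust("home_tv"))             # 1.0
```

Two confirmed reports drop the offending screen's trust from 1.0 to 0.25, so any AIPA consulting the shared ledger would warn its user off before the screen ever gets their attention.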
[Excerpt from the book, "INTELLOPY: Survival and Happiness in a Collapsing Society with Advancing Technology" by JJ Kelly https://intellopy.com/ ]
This excerpt is from the paperback book: Part IV-Anticipating the Future; Chapter 4.7-Augmented Human Intelligence (AHI); pages 419-422.