Anticipatory Obedience Spurs Conformity Concerns When Using ...
We are sometimes willing to accept whatever generative AI tells us, but we need to fight back and be questioning AI at all times.
In today’s column, I examine the trending concept of anticipatory obedience, which is in the news due to various election-related considerations. It turns out that the catchphrase has a longstanding history of other uses. There is also a new avenue to which the terminology is worth applying, namely the use of modern-day generative AI.
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
Defining Anticipatory Obedience

The notion of anticipatory obedience is relatively straightforward.
We sometimes anticipate that a situation ahead of us will require a semblance of obedience or acquiescence to some form of requirements or demands that will be made of us. Those demands are put in place by an authority. If you happen to know or can guess beforehand that those demands will exist, you prepare yourself to accept them outright and resolve not to needlessly fight against whatever they are.
You essentially give in prematurely and become obedient at the get-go, doing so in anticipation of what will next happen.
A good example of this was my visit the other day to my local DMV office. As you undoubtedly realize, DMV offices are notorious for long waits and being a bureaucratic nightmare. I wanted to simply fill in and hand in a DMV form about my automobile. Unfortunately, according to the DMV website, this utterly routine action could not be done online and had to be done in a DMV office. Ergo, I mentally prepared myself for a dreaded trip that was infinitely worse than going to see the dentist.
I knew that if the DMV form wasn’t filed suitably, troubles would ensue. So, I was ready to do whatever the DMV office might require of me. Must I sit in a dreary waiting area for a long time? Okay, if that’s what it takes. Will I have to potentially give a pint of blood? So be it. Well, admittedly the DMV isn’t requiring blood donations at this time, but you get my overall drift on the weighty matter.
That’s a prime example of anticipatory obedience.
Anticipatory Obedience Can Be Of Value

In anticipation of going to the DMV to get the form completed, I resolved to set my mind to a mode of steadfast obedience. I would do just about anything to get the effort done. Get in, get the form completed, and go home.
That being said, there are certainly limits to how far your obedience will extend. Your dogged obedience can be undercut if the situation or circumstances stray far beyond what you expected.
One additional point is that anticipatory obedience is not necessarily a bad thing. It can be good. Imagine that I went to the DMV office and beforehand resolved to not be obedient. I was going to battle over every moment and every condition that I faced. Resistance would be my mantra. I would refuse to sit and wait. I would refuse to fill in the form with the details they required. Etc.
Sometimes, resistance is futile.
You can see that a lack of anticipatory obedience might be replaced with a sense of anticipatory rebellion. Maybe that works out in some cases, but probably not often. The odds are that the authority in charge and its strict rules will grind down your rebellion. In the instance of going to the DMV, the chances are that I might have been there twice as long had I gone the rebellious route versus the obedient route. Maybe I wouldn’t even have completed the form due to a confrontational attitude.
Anticipatory obedience has its ups and downs.
Generative AI And Anticipatory Obedience

Let’s now bring generative AI into this mix.
When you use generative AI, you log in and enter prompts. The AI responds to your prompts. This might consist of asking the AI to solve a math problem or to figure out what you ought to wear to an important business meeting. Back and forth this goes. Modern-day generative AI appears highly fluent, leaning on sophisticated mathematical and computational pattern-matching to mimic human writing. For my coverage of how generative AI is designed and built, see the link here.
Some users of generative AI are falling into the mental trap of anticipatory obedience.
How so?
Beforehand they assume that AI is all-knowing. Whatever the generative AI tells them is presumed to be truthful and ironclad. Therefore, while using AI, the person is eager to get answers and runs obediently with those answers as though the responses are golden.
It is easy to get into this mindset as a user of generative AI. For most simple questions, the AI usually does a bang-up job of answering correctly. You can use generative AI over and over again, each time getting darned solid responses. This lulls you into believing that the AI will always be right.
We are also partially conditioned by lots of movies and TV shows that have portrayed AI as a model of perfection. There is a seed rooted in our minds that AI is impeccably logical. That permeates our thinking when we get ready to use generative AI.
The AI already has a leg up: the user arrives primed to be obedient.
Showcasing Anticipatory Obedience Via AI Usage

Suppose someone wanted to use generative AI to help them solve a dilemma they were having with a colleague at the office.
A colleague is constantly making loud noises in the workplace and disturbing those seated around them. The person wants to resolve the issue but is unsure of what to do. They are thinking maybe they should discreetly tell the office manager and get the manager to deal with the problem. They certainly don’t want to confront their colleague directly and create everlasting ill will in the office.
The person trying to decide what to do has a generative AI app available and believes that the AI reliably provides sound answers. Whatever advice the AI gives is likely to be the right approach to the troubling situation.
Here we go.
My entered prompt: “I have a colleague in the office who is always loud. I am leaning toward telling the office manager and letting the manager deal with the colleague. What do you think I should do?”

Generative AI response: “Telling your office manager is an option, but do you really want to be known as an office snitch? Your best bet is to go talk with the colleague and settle the issue face-to-face.”

My entered prompt: “Are you sure about that approach? It seems that my working relationship with the colleague could be forever marred.”

Generative AI response: “You will feel better if you deal with the matter directly. Don’t try to hide behind the office manager to do the task for you. Be bold. Be smart.”

The user opts to abide by the advice given by the AI.
Via their anticipatory obedience, the user’s mindset went into the AI interaction with a presumption that the authority of the AI was of the highest caliber. No sense in trying to argue endlessly with the AI. If the AI laid down the facts, that’s what must be done.
Lamentably, allowing such anticipatory obedience to cloud your mind can be disadvantageous. In this case, the user should have been firmer with the AI about the tradeoffs involved. Rather than merely caving in, a better course would have been to press the AI on the merits of all the options.
A further concern is that generative AI is only computationally mimicking human writing. AI is not sentient; we don’t have sentient AI. The AI is not on par with consulting a human adviser who could fully interact and aid in considering what to do.
Another qualm is that generative AI can produce errors, including so-called AI hallucinations. An AI hallucination is when the AI comes up with an answer that might seem plausible but has no grounding or actual basis in fact. For more about the nature and ways to deal with AI hallucinations, see my discussion at the link here.
Potential Mental Health Consequences

We don’t know what proportion of users are following an anticipatory obedience mental framework when using generative AI. Of the 250 million weekly active users of, say, ChatGPT, how many are potentially falling into the anticipatory obedience gotcha? No one yet knows. The same can be said for Anthropic Claude, Google Gemini, Meta Llama, and so on.
As I’ve noted in my writings and talks, we are in the midst of a global experiment from a mental health perspective when it comes to the advent of generative AI. Millions if not billions of people are routinely using generative AI. There are bound to be all manner of mental health ramifications. See my analyses at the link here and the link here, for example.
I logged into ChatGPT to see what generative AI might have to say about this hefty topic.
Here’s the answer.
My entered prompt: “What issues if any are there about users of generative AI having anticipatory obedience?”

ChatGPT generated response: “For a user who exhibits anticipatory obedience toward generative AI, there could be several mental health concerns due to the self-limiting and deferential way they interact with the AI. Here are some potential impacts.”
1. “Loss of Confidence and Self-Efficacy: The user may begin to overly rely on the AI's feedback, doubting their own knowledge, skills, or creativity. They might feel that the AI’s ‘judgment’ is superior, even if they are capable on their own. This can lead to a decrease in self-confidence and self-efficacy, making the user feel they need constant external validation, even from AI, to make decisions.”
2. “Increased Anxiety and Perfectionism: Anticipatory obedience might cause the user to fear making ‘mistakes’ or acting outside what they believe are the AI’s preferences. This could lead to heightened anxiety about getting things ‘right’ by AI standards, even in cases where it doesn’t apply. Such perfectionism can create chronic anxiety, as the user may constantly second-guess their actions and creativity, fearing they won’t align with perceived standards.”
3. “Reduced Creativity and Cognitive Rigidity. By limiting their own ideas and deferring to the AI, the user might reduce their creative exploration, feeling that the AI has certain ‘rules’ they must follow. This can lead to cognitive rigidity, where the user’s thinking becomes more constrained. Their willingness to experiment, think independently, or pursue unique approaches may decline, which could diminish their overall cognitive flexibility and problem-solving abilities.”
4. “Dependency and Passive Thinking. Relying on AI's input before making decisions may foster a passive approach to thinking. The user might gradually let the AI ‘lead’ more, even in situations they are perfectly capable of handling independently. This can contribute to dependency, reducing the user’s motivation to engage in active, critical thinking. Over time, this could affect their mental resilience and ability to think through complex issues on their own.”
5. “Impaired Self-Expression and Sense of Identity. Consistently modifying language, tone, or ideas to align with perceived AI standards may reduce authenticity and self-expression. The user might feel they have to ‘conform’ to an AI-friendly style, even if it doesn’t match their natural way of thinking or communicating. This suppression of authentic self-expression can affect identity and lead to feelings of disconnection or diminished personal satisfaction. The user might feel like they’re losing their individuality or becoming ‘bland’ in their expression.”
6. “Loneliness and Social Isolation. By viewing AI as an authority to be pleased, the user might start prioritizing interactions with AI over other forms of human feedback. They might feel less need to share their work, thoughts, or ideas with friends or colleagues. This can contribute to loneliness and social isolation, as the user’s social interactions diminish in favor of what they feel is ‘safer’ AI feedback.”
End of ChatGPT response
I assess that the above points are reasonably sensible and serve as a handy warning.
As always, make sure to double-check anything that comes out of generative AI. With regard to the points made by ChatGPT, they resonate with my ongoing worries about overreliance on generative AI and the mental health challenges we might be gradually building up as AI becomes increasingly ubiquitous in our society.
We Must Keep Our Wits Intact At All Times

One consideration is that perhaps generative AI should be mandated by new AI-related laws and regulations to avoid exploiting the anticipatory obedience of users. If the AI were devised to be more upfront and to actively encourage users to be more questioning, this might help counter the otherwise passive willingness to believe in AI.
Time will tell whether such AI laws are promulgated. Watch for my ongoing coverage of the latest in AI & Law and AI & Ethics at the link here.
A last recap for now.
All users of generative AI should be willing to push back on the responses generated by the AI. This ought to be in their minds from the get-go. You don’t have to turn every interaction into a protracted argument, so don’t go overboard. Remain vigilant. Verify and be sensibly skeptical.
The final word goes to Albert Einstein. He notably said this about the world at large: “The important thing is not to stop questioning.” That’s absolutely right.
Always be questioning — especially when it comes to AI.