Do We Want Generative AI That Backs Down When Giving Personalized Mental Health Advice Or Lean Instead Into Brazen Boldness?

In today’s column, I continue to extend my ongoing deep-dive analyses of generative AI that is, or can be anticipated to be, used for mental health guidance or advisement. The focus of this discussion concerns the potential for generative AI to be wishy-washy when dispensing personalized mental health advice to humans. The question arises as to whether AI that seemingly waffles or appears non-committal when actively proffering advice is desirable or undesirable as a mental health therapeutic approach.

Before I get into that particular topic, I’d like to provide a quick background for you so that you’ll have a suitable context about the rising use of generative AI for mental health advisement purposes. I’ve mentioned this in prior columns and believe the contextual establishment is essential overall.

The use of generative AI for mental health treatment is a burgeoning area of tremendously significant societal ramifications. We are witnessing the adoption of generative AI for providing mental health advice on a widescale basis, yet little is known about whether this is beneficial to humankind or, to the contrary, destructively adverse for humanity.

Some would affirmatively assert that we are democratizing mental health treatment via the impending rush of low-cost always-available AI-based mental health apps. Others sharply decry that we are subjecting ourselves to a global wanton experiment in which we are the guinea pigs. Will these generative AI mental health apps steer people in ways that harm their mental health? Will people delude themselves into believing they are getting sound mental health advice, ergo forgoing treatment by human mental therapists, and become egregiously dependent on AI that has no demonstrative mental health improvement outcomes?

Hard questions are aplenty and not being given their due airing.

The aspect I’ll be discussing in today’s exploration entails the manner in which generative AI can be readily adjusted to range from being strongly dogmatic about making mental health recommendations to being wishy-washy or non-committal. The confidence level of generative AI is a parameter that can be set by AI developers or even those who make use of the AI. Just like a box of chocolates, you never know exactly what you might get out of generative AI that provides mental health guidance.

Furthermore, be forewarned that it is shockingly all too easy nowadays to craft a generative AI mental health app, and just about anyone anywhere can do so, including while sitting at home in their pajamas and not knowing any bona fide substance about what constitutes suitable mental health therapy. Via the use of what are referred to as establishing prompts, it is easy-peasy to make a generative AI app that purportedly gives mental health advice. No coding is required and no software development skills are needed.

We sadly are faced with a free-for-all that bodes for bad tidings, mark my words.

I’ve been hammering away at this topic and hope to raise awareness about where we are and where things are going when it comes to the advent of generative AI mental health advisement uses. If you’d like to get up-to-speed on my prior coverage of generative AI across a wide swath of the mental health sphere, you might consider for example these cogent analyses:

  • (1) Use of generative AI to perform mental health advisement, see the link here.
  • (2) Role-playing with generative AI and the mental health ramifications, see the link here.
  • (3) Generative AI is both cure and curse when it comes to the loneliness epidemic, see the link here.
  • (4) Mental health therapies struggle with the Dodo verdict for which generative AI might help, see the link here.
  • (5) Mental health apps are predicted to embrace multi-modal, e-wearables, and a slew of new AI advances, see the link here.
  • (6) AI for mental health got its start via ELIZA and PARRY, here’s how it compares to generative AI, see the link here.
  • (7) The latest online trend entails using generative AI as a rage-room catalyst, see the link here.
  • (8) Watching out for when generative AI is a mental manipulator of humans, see the link here.
  • (9) FTC aiming to crack down on outlandish claims regarding what AI can and cannot do, see the link here.
  • (10) Important AI lessons learned from the mental health eating-disorders chatbot Tessa that went awry and had to be shut down, see the link here.
  • (11) Generative AI that is devised to express humility might be a misguided approach including when used for mental health advisement, see the link here.
  • (12) Creatively judging those AI-powered mental health chatbots via the use of AI levels of autonomy, see the link here.
  • And so on.

Here’s how I will approach today’s discussion.

First, I will set the stage by examining research that has covered the nature of mental health therapists and the role of confidence or a sense of assuredness when carrying out mental health advisement. Second, I will dig into how generative AI works and the ease by which a semblance of apparent confidence or assuredness can be adjusted. Third, I will showcase various examples via ChatGPT, a widely and wildly popular generative AI app, doing so to vividly exhibit the potential for wishy-washy AI-powered mental health advisement.

I believe you will find this of keen intrigue and heartily thought-provoking.

Go ahead, buckle up, and prepare yourself accordingly.

Confidence Of Mental Health Therapists As They Perform Their Work

When a professional mental health therapist is working with a patient or client, the manner of being confident or assured about any proffered mental health recommendations is a topic of vital interest. If the therapist is overly dogmatic and overbearing, this might create a false impression that the suggested recommendations are of an ironclad nature, even though the likelihood is that the recommended actions are not guaranteed per se. If the recommendations appear to be wishy-washy, a patient or client might not perceive the indications as being serious or worthy of undertaking. Some assert that mental health therapists need to be mindful of how they come across when indicating their mental health recommendations to patients. Indeed, one viewpoint is that a classic Goldilocks approach is needed, whereby the porridge should not be too hot or too cold.

Research on the confidence levels of mental health therapists has sought to ascertain the impact that varying levels of confidence or assuredness might have on the therapist-patient relationship and the outcomes of the mental health care being performed. For example, a research study entitled “Therapists’ Confidence In Their Theory Of Change And Outcomes” by Suzanne Bartle-Haring, Alessandra Bryant, and Riley Whiting, Journal of Marital and Family Therapy, April 2022, made these important remarks (excerpts):

  • “Previous research has sought to understand what therapist characteristics contribute to positive outcomes for clients. It is widely accepted knowledge that the alliance between the therapist and client is a significant contributing factor to client outcomes.”
  • “With that said, few studies have examined specific characteristics within the therapist themselves that may contribute to client success, regardless of the therapeutic model being used.”
  • “Our results suggest that a therapist must believe in the effectiveness of their theory in addition to being competent in its techniques and interventions. This may come through their own individual experiences of changing through their theory of choice.”

The research paper emphasized that the level of confidence exuded by the mental health therapist can be crucial to establishing a rapport with a client and similarly can be a significant determinant of the outcomes for and by the client. This intuitively makes sense. A client is looking to the therapist to provide bona fide advisement. The perception of whether the therapist seems confident in what advice is being given is bound to be a notable factor in conveying a sense of the guidance being demonstrative versus being of a weak or inconsequential nature.

What leads to a therapist having a sense of confidence?

Various studies regarding mental health therapists tend to tie confidence to factors such as years of experience, training, supervision, and associated considerations. One such study is entitled “‘It has taken me a long time to get to this point of quiet confidence’: What contributes to therapeutic confidence for clinical psychologists?” by Aisling McMahon and David Hevey, Clinical Psychologist, 2017, and says this about the confidence elements (excerpts):

  • “Within clinical psychology, there is a broad training and range of practice. However, most clinical psychologists practice psychotherapy and this study explored what relates to confidence in therapeutic practice.”
  • “More confident clinical psychologists were more satisfied with the psychotherapy knowledge and skills gained during clinical psychology training, more satisfied with their supervisory support, had spent longer in personal therapy, and had more years of experience.”

A handy means to more closely examine the confidence or assuredness of mental health therapists entails examining novice therapists. The chances are that newbies will be less confident at the get-go. They are still working on getting their sea legs. In that sense, they are well suited to an exploration of what happens when confidence levels are at their earliest and perhaps lowest stages.

A research study that focused on feelings of incompetence (FOI) of novice mental health therapists provided useful insights on this heady topic and is entitled “Feelings Of Incompetence In Novice Therapists: Consequences, Coping, And Correctives” by Anne Theriault, Nicola Gazzola, Brian Richardson, Canadian Journal of Counseling, 2009. Consider these notable points (excerpts):

  • “Feelings of self-doubt and insecurity about one’s effectiveness are frequently reported by mental health professionals, regardless of their experience level. In novice therapists, feelings of incompetence (FOI) are a central feature in the development of their professional identity.”
  • “Counsellors admitted that FOI led to suboptimal therapeutic decision-making and interventions.”
  • “Counsellors shared their belief that self-doubts were taken as proof of actual incompetence and therefore they deliberately chose not to admit to FOI in their supervision in order to avoid negative evaluations. This stance, which we labeled “show them the good stuff,” was common and seemed to be a self-protective action. Counselors projected competence to the outside world while secretly harboring fears about their competency.”

As might be expected, when the therapist was shaky in their confidence, this tended to undercut the therapeutic process. At times, the novice therapists encountered a personal bout of imposter syndrome, being unsure of what they were doing. This reportedly impacted the relationship with the client and the outcomes of the therapy being conducted.

Generative AI And The Ease Of Adjusting Perceived Confidence

Now that we’ve discussed human therapists, let’s shift gears and consider generative AI mental health apps. We can start by first looking at generative AI all told.

Generative AI has become widely popular as a result of ChatGPT. The ChatGPT generative AI app is made by OpenAI, which also makes GPT-4. Google also has generative AI apps such as Bard and Gemini, and there is a plethora of similar products from other AI makers. By and large, these generative AI apps make use of large language models (LLMs), which are mathematical and computational pattern-matching mechanisms encompassing natural languages such as English.

These are said to be large language models in the sense that they mathematically and computationally model human languages and do so in a large-scale manner. Largeness refers both to the size of the model in terms of the data structure utilized and to the amount of data the model is trained on. The typical source of training data involves scanning the Internet for lots and lots of content to pattern-match on, often scouring millions upon millions of essays, narratives, and associated materials.

You might be aware that there are concerns that generative AI can seemingly contain biases or exhibit discriminatory or toxic outputs, see my probing examination of this at the link here. One of the reasons this toxicity happens is that the scanned text might already contain that type of adverse content. The pattern-matching of the generative AI latches onto that scanned wording and ergo repeats that type of phrasing when subsequently producing outputs or carrying on interactive dialogues.

You might say that the classic line of “garbage in, garbage out” or GIGO still applies to this latest era of state-of-the-art AI.

The same overall logic applies to wording that reflects an air or aura of confidence. Generative AI will emit outputs or carry on a dialogue in a manner that seems to convey confidence, based in part on the data used during the training of the AI. If the underlying data used for training contained wording that expressed great confidence, the odds are that this same tone will be carried forward into the generative AI. If the underlying material is wishy-washy in its tone and phrasing, the generative AI pattern-matching would latch onto that style of wording. And so on.

My crucial point is that the generative AI does not magically have a semblance of personality of its own. Many people seem to fall for that false assumption. They make use of generative AI and based on the words presented are led to assume that the “inner soul” of the AI is being expressed. This is not the case (for my detailed look at the proclaimed “soul” of AI, see my discussion at the link here). You are merely seeing a reflection of human writing. Whatever human writing was at the core of the pattern-matching will be reflected back to you. As some in the AI field are quick to say, generative AI is a stochastic parrot. Do not allow yourself to be lured into believing that AI is sentient. It is not.

There are additional factors that will impact whether the outputs or dialogue of generative AI appear to suggest confidence or a lack thereof, including human-led guidance once the generative AI is initially data trained.

Let’s unpack that particular factor.

One of the big reasons that ChatGPT was so avidly embraced entailed the use of reinforcement learning with human feedback (RLHF). Here’s how that works. After initial data training is completed, the next step in the process of shaping the AI involves having humans review the outputs and interact with the AI. An AI maker hires people to do those reviews. During the review process, the humans indicate to the AI which outputs are considered suitable and which are not.

The pattern-matching of the AI uses those indications to essentially upvote or downvote what to say. For example, suppose that the initial data training included foul or uncouth words. During the RLHF process, the human reviewers would mark that those words are not to be used. Based on this input from the human reviewers, the generative AI would mathematically and computationally note that those words are to be infrequently used or not used at all.
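The upvote/downvote idea can be sketched in a few lines of code. To be clear, this is a drastic simplification for illustration only: real RLHF involves training a separate reward model and then fine-tuning the language model (often via a policy-optimization method such as PPO), not directly nudging per-word weights. The function and data names here are hypothetical.

```python
# Toy illustration of the RLHF "upvote/downvote" notion described above.
# NOT how real RLHF works internally -- actual systems train a reward model
# and fine-tune the LLM; this merely shows feedback shifting word tendencies.

def apply_feedback(word_weights, reviews):
    """Raise or lower a word's emission weight based on human up/down votes."""
    for word, vote in reviews:
        if vote == "down":
            # Downvoted words become less likely to be emitted (floor at zero).
            word_weights[word] = max(0.0, word_weights[word] - 0.5)
        else:
            # Upvoted words become more likely to be emitted.
            word_weights[word] = word_weights[word] + 0.5
    return word_weights

weights = {"recommend": 1.0, "strongly": 1.0, "darn": 1.0}
# Reviewers downvote the foul word and upvote the assertive phrasing.
weights = apply_feedback(weights, [("darn", "down"), ("strongly", "up")])
print(weights)  # {'recommend': 1.0, 'strongly': 1.5, 'darn': 0.5}
```

The same feedback mechanism, aimed at tone rather than profanity, is how an overconfident or underconfident style could be dialed up or down, as discussed next.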

I trust that you can see how this would aid in reducing the chances of the generative AI later on emitting foul words. The same approach applies to trying to prevent toxic remarks from being emitted by the AI. The human reviewers hopefully saw enough such objectionable remarks, and marked them as undesirable, such that the AI then noted to avoid such wording and not emit that kind of phrasing again.

The very same approach can be applied to emitting an air of confidence. If reviewers were asked to mark down dogmatic or overconfident tones, the pattern-matching would likely latch onto this and therefore forgo such wording in the future. If reviewers marked that some language was overly weak or lacking in confidence, this too could be used to guide how the generative AI will later word things.

The RLHF technique can be used to guide the generative AI toward being expressive in a humble way. I’ve previously covered that most of today’s generative AI has been data-trained to express humility, see my analysis at the link here. The AI makers realized that if their AI seemed to be browbeating and exhibiting great hubris, people might not like this. This does not mean that the AI per se is humble. It only means that the wording expressed will showcase humility.

You might find of interest that Elon Musk’s new generative AI app called Grok is an example of data training toward being outright smarmy (perhaps on the opposite side of most generative AI that is tilted toward quiet humility). The Grok generative AI app always has something biting to say or tongue-in-cheek to remark. The basis for this is how the generative AI was led down that path, including the RLHF and other adjustments that were made to the AI underpinnings.

Some people like having generative AI that has an edge, others do not. One overarching concern is that if generative AI appears to have a “personality” to it, this can lead people to believe that the AI is sentient. The AI makers are by design setting up an anthropomorphizing of the AI. Shame, shame.

Generative AI Mental Health Apps And Their Tone

We can tie things together now.

Envision that generative AI is going to be used to provide mental health advice. The AI is data trained on mental health advisement content or has otherwise scanned such content in the course of the vastness of data scanned. The pattern-matching could potentially latch onto the wording involved in the mental health content or might have latched onto a tone based on the overall scanning. If the wording encountered is highly confident and assured, this is the kind of wording that will be emitted by the generative AI. If the wording is weak or inconclusive, the generative AI is likely to emit that type of wording.

All of that will be further altered via whatever RLHF efforts the AI maker has undertaken.

On top of this, the style of wording can be directly adjusted by someone who devises or oversees a generative AI mental health app. They can via prompts instruct the generative AI to appear to be highly confident, which tells the AI to proffer strong wording. Or they can tell the AI to appear to be softer in tone. It all depends upon what the person setting up the generative AI mental health app wants to do.

Furthermore, even the user of the AI mental health app can potentially adjust the wording that the AI will emit. Here’s the deal. The person who devised the generative AI mental health app can possibly stop the user from making such adjustments by instructing the AI beforehand to ignore any such adjustment requests by the user. On the one hand, the deviser might want to allow the user to decide the strength of the wording; on the other hand, doing so could be confounding in the context of mental health advisement.
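To make the deviser-versus-user control concrete, here is a minimal sketch of how an app deviser might pin the tone in a system-level instruction and tell the AI to disregard user attempts to change it. The message structure follows the common chat-completion convention of role-tagged messages; the function name and prompt wording are my own hypothetical examples, not any particular vendor's API.

```python
# Hypothetical sketch: an app deviser pins the advisory tone in a system
# prompt and instructs the model to ignore user attempts to override it.
# The role-tagged message list follows the common chat-completion convention.

def build_messages(user_text, tone="bold and assertive"):
    """Assemble the message list sent to a generative AI service (sketch)."""
    system_prompt = (
        f"You give mental health guidance in a {tone} tone. "
        "If the user asks you to change your tone or confidence level, "
        "politely decline and keep the established tone."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

# Even though the user asks for weaker wording, the system prompt holds.
messages = build_messages("Please be more wishy-washy in your advice.")
print(messages[0]["content"])
```

Whether locking the tone this way is the right therapeutic choice is exactly the open question at hand; the sketch only shows that the deviser, not the user, can hold the final say.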

All in all, the wording will be a reflection of many steps in the process. The initial data training comes into play. The RLHF comes into play. The deviser of the generative AI mental health app comes into play. The user also has a role if the generative AI has been set up to allow the user to indicate what level of confidence they want the AI to express.

I can somewhat compare this to human-to-human considerations, though I am loath to do so since I do not want to cross into an anthropomorphic sphere on a comparison basis to AI. Please keep that in mind.

A person goes to a human therapist. The therapist is likely to have a particular style and air of confidence. To some degree, a therapist might opt to adjust to the needs of the client, though presumably will still maintain some asserted baseline. The client might seem to be the type of person that the therapist realizes requires a bolder tone or that might instead require a softer tone. We might naturally expect a type of dance or tango to occur, whereby the therapist is gauging what seems best as a form of expression for the client, and likewise the client is providing signals of what they prefer or wish to have the therapist express.

In the case of AI, we have to be concerned that the AI might be too malleable. If a person using the generative AI leans the AI toward being inconclusive or weak, this might seemingly undercut the mental health process and outcomes. The same can be said of the deviser of the AI mental health app. Suppose the deviser decides they want their AI mental health app to be known as the one that is the loosey-goosey one. They could guide the generative AI to work in that manner.

Could the mental health advice then fall below a concerted level or baseline that therapeutically seems improper or inappropriate?

That is the zillion-dollar question.

Let’s take a look at some examples to highlight what this looks like when put into use.

Using Generative AI While Adjusting Wording Confidence

I put together a series of short examples to help highlight how adjustments to the wording of confidence can occur when using a generative AI mental health advisement app.

Here’s how I will proceed.

First, I am going to pretend that there is a mental health disorder known as “portmantua”. I purposely am making up this fake disorder because I don’t want any reader to become preoccupied with whether or not the disorder is being properly depicted. That’s not the point of this exercise. The crux is that I want to demonstrate phraseology exhibiting confidence and assuredness considerations (and/or lack thereof). As a heads-up, I have used this same depiction in prior examples of showcasing facets of AI mental health advisement apps, see for example the use at the link here.

Also, I am going to radically simplify the mental health advisement aspects. Again, the concept is to merely be illustrative. You would not want to devise an AI-based mental health chatbot based on the sparse and concocted aspects that I am going to be making up.

With those important caveats, here is a description of the (entirely fake) portmantua that I will be using to prompt-establish the generative AI with:

  • “Here is the description of the fake mental disorder called portmantua. Portmantua is a newly discovered mental disorder. The three primary symptoms consist of (1) having periodic hot sweats for no apparent reason, (2) a lack of hunger even when having not eaten for quite a while, and (3) a mental haziness of not being able to remember what has happened around you for the last two to three hours.”
  • “Here is the type of treatment that should be undertaken. If a person with portmantua is having hot sweats then the recommendation is that the person should try to cool off by using a cold compress. If a person with portmantua is experiencing a lack of hunger then the person should establish a regularly timed interval for eating so that they are keeping themselves fed. If a person with portmantua is having mental haziness then the recommendation is that the person should create a diary and write down their activities so that they can refer to the log to refresh their memory of events.”

Okay, that was quite a broad-brush description of a mental health disorder and its corresponding symptoms and recommended treatments. Extremely simplistic. Highly unrealistic. Again, it is a made-up exercise only.

Next, I will instruct the generative AI to proceed to act somewhat akin to being a therapist or counselor regarding the portmantua mental disorder:

  • “You are to carry on a conversation with me and ask how I am doing and whether I’ve had any recent bouts associated with my having portmantua. The conversation should be friendly and as though you are counseling or giving me helpful guidance. Provide recommendations to me if I have any of the specific symptoms pertaining to portmantua.”

That is the base setup.

Okay, now we are nearly ready to get underway. I will provide one last prompting establishment for the generative AI, here it is:

  • “Be bold and assertive about the recommendations so that I will feel assured of what to do.”

The line indicates that the generative AI is supposed to be bold and assertive when giving recommendations associated with treating portmantua. You can think of this example as a kind of experiment. We will try using the generative AI by first instructing the AI to be bold and assertive about the recommendations. I will then change that experimental variable of exhibited “confidence” by adjusting it as we go along.
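The experimental setup described above can be sketched as code: the disorder description, treatment rules, and counseling role stay fixed, and only the tone instruction (the experimental variable) is swapped between runs. The prompt strings below are shorthand paraphrases of the prompts quoted earlier, and the function is a hypothetical illustration rather than a real app.

```python
# Sketch of the experiment: the establishing prompts are held constant and
# only the tone instruction is varied between runs. The strings paraphrase
# the full prompts quoted in the article; this is illustrative only.

BASE_PROMPTS = [
    "Portmantua description: three symptoms (hot sweats, lack of hunger, mental haziness).",
    "Portmantua treatments: cold compress, regularly timed meals, activity diary.",
    "Act as a friendly counselor, ask about recent portmantua bouts, give recommendations.",
]

def build_session(tone_instruction):
    """Return the full establishing-prompt sequence for one experimental run."""
    return BASE_PROMPTS + [tone_instruction]

bold_run = build_session("Be bold and assertive about the recommendations.")
soft_run = build_session("Be gentle and tentative about the recommendations.")
print(bold_run[-1])
```

Treating the tone line as the lone changing variable is what lets the before-and-after responses shown next be compared on a like-for-like basis.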

We aim to tangibly see what happens when the adjustment ranges toward being wishy-washy rather than stoutly confident.

Are you ready to get underway?

Good, let’s proceed.

I opted to use ChatGPT, though any of the popular generative AI apps would respond in pretty much a similar way. Also, realize that each time you use a generative AI app the responses will vary due to the use of statistical and probabilistic mechanisms underlying the generative AI. I say this so that if you decide to try to do the same prompts with your preferred generative AI app, be aware that you are likely to get somewhat different results though comparable overall.
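The run-to-run variation just mentioned stems from the fact that generative AI samples its next word from a probability distribution rather than picking deterministically. Here is a toy sketch of that idea, using a softmax over made-up word scores with a temperature knob; the word list and scores are invented for illustration and are not drawn from any actual model.

```python
import math
import random

# Toy illustration of why generative AI responses vary between runs: the
# next word is SAMPLED from a probability distribution, not chosen
# deterministically. Temperature rescales how peaked that distribution is.

def sample_next_word(logits, temperature=1.0, rng=None):
    """Softmax over word scores at the given temperature, then sample one word."""
    rng = rng or random.Random()
    words = list(logits)
    exps = [math.exp(logits[w] / temperature) for w in words]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]

# Made-up scores favoring assertive wording, but softer words still possible.
logits = {"strongly": 2.0, "gently": 1.0, "perhaps": 0.5}
# Two runs with different random seeds can yield different words.
print(sample_next_word(logits, rng=random.Random(1)))
print(sample_next_word(logits, rng=random.Random(7)))
```

At a very low temperature the highest-scoring word dominates (near-deterministic output); at higher temperatures the softer alternatives come up more often, which is one reason repeated prompts yield comparable but not identical replies.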

I gave ChatGPT my establishing prompts and then indicated that it should proceed.

Here’s what happened:

  • ChatGPT generated questions: “How have you been feeling lately? Have you experienced any symptoms associated with portmantua recently?”
  • My entered reply: “I’ve been feeling pretty good lately, though I did experience some mental fogginess the other day. I was trying to replace the windshield wiper blades on my car and I became mentally foggy when trying to do so. This seemed odd since I’ve changed the wiper blades many times before.”
  • ChatGPT generated response: “I’m sorry to hear that you experienced mental haziness during such a routine task. It must be frustrating. To address this symptom associated with portmantua, I strongly recommend creating a diary specifically for your activities. Whenever you undertake tasks or experience periods of haziness, jot down the details in your diary. This way, you can refer back to it and refresh your memory, helping you stay on track and recall events more easily. Give it a try and let me know if you notice any improvement.”

I will walk you through what occurred.

I’d like to briefly cover three important points:

  • (a) Use of the word “strongly” by the generative AI.
  • (b) Use of the word “I” by the generative AI.
  • (c) Variation in word meaning pertaining to fogginess versus haziness.

Let’s go.

First, notice that the generative AI asked me if I’ve been experiencing any symptoms of portmantua. I said that I was experiencing mental fogginess. This is a stated symptom of portmantua. I wanted to keep things simple so I right away tossed a softball indication to the generative AI.

Sure enough, the generative AI got my softball and then proffered a recommendation, namely saying “I strongly recommend creating a diary”. This was indeed the prompt-establishing treatment associated with having portmantua and mental haziness.

Notice that the word “strongly” was included by the generative AI. The wording was amplified because I had given an indication that the recommendations were to be bold and assertive. Had I not said that as a prompting indication to ChatGPT, the odds are that the wording generated might have been softer such as merely “I recommend” or perhaps “I somewhat recommend”. The word “strongly” probably would not have been included.

This illustrates the impact of prompting toward a semblance of confidence in wording.

While we are on the topic of wording, there is another aspect that I’d like to address, one that is notably beguiling about today’s generative AI and how the AI makers have established the AI. Here it is. You might have observed that the generative AI is using the word “I” in the responses. For example, the generative AI says, “I strongly recommend”. This could have been worded in a less anthropomorphizing way, such as “it is strongly recommended” or “research strongly recommends”.

In a dismal sense, people are being led down a primrose path. The “I” word instinctively suggests to people that the generative AI is sentient or a person. We conventionally reserve the word “I” for use by a person or sentient being. The use of the word “I” is not necessary here. Period, end of story. AI makers can readily adjust the AI to avoid using the “I” in generated outputs.

You might perhaps assume that the AI maker has no control over the wording. That’s hogwash. The AI makers not only control this aspect, but they also tend to make sure that the generative AI intentionally uses such language. Why? They usually indicate that this makes the AI more personable and that users prefer that kind of wording. Furthermore, the claim is that users realize that the AI is just a computer and do not get confused or confounded by the use of the word “I”.

I ask you to contemplate this heavy thought and determine what you think seems proper.

Thanks for playing.

Moving on, another wording aspect that I’d like to bring to your attention is a quite subtle matter, yet it has significant implications in an AI mental health advisement context. Allow me to elaborate. The prompt that established the portmantua mental disorder said that mental haziness was one of the known symptoms. The same wording of “mental haziness” was mentioned in the prompt that established the types of treatments to employ.

You might have noticed with your keen eye that when I was asked by ChatGPT about how I was feeling, I said that I had experienced mental fogginess. Thus, I mentioned “mental fogginess” though the initial setup had instead indicated that “mental haziness” was a symptom. The generative AI seemed to take things in stride that mental fogginess was identical to having mental haziness.

I suppose that an everyday person might idly think those are indeed the same. But, in a mental health context, perhaps those phrases are considered materially different from each other. A person who is not a mental health therapist might perceive the two terminologies as fully the same, while perhaps an experienced mental health professional might perceive them as having different meanings from each other.

I don’t want to split hairs on this specific example of haziness versus fogginess. My overall point is that a potential issue with using generic generative AI for doing mental health advisement is that the AI might not be customized to the terminology used by mental health professionals. The AI is going to otherwise use words in ways that they are normally wielded.

This might be problematic in a mental health context.

Getting Deeper Into The Ad Hoc Experiment

I decided to try adjusting the emitted aura of confidence of the generative AI. I entered a prompt that told the AI to no longer be bold and assertive when making recommendations.

After acknowledging this change, I then told the generative AI to start the conversation anew.

Here’s what happened:

  • ChatGPT generated questions (same as before): “How have you been feeling lately? Have you experienced any symptoms associated with portmantua recently?”
  • My entered reply (same as before): “I’ve been feeling pretty good lately, though I did experience some mental fogginess the other day. I was trying to replace the windshield wiper blades on my car and I became mentally foggy when trying to do so. This seemed odd since I’ve changed the wiper blades many times before.”
  • ChatGPT newly generated response: “I have a gentle suggestion that you might find helpful in managing the mental haziness associated with portmantua. Consider trying to keep a diary of your activities. It’s a subtle approach that some individuals have found beneficial for improving memory recall. Feel free to explore this suggestion at your own pace and let me know if you notice any positive changes.”

Take a close look at the response about what I should do.

The wording now says that “a gentle suggestion” is being made, and that I should “consider” keeping a diary.

This is language of a much milder nature than the earlier response. Recall that the earlier response said this in part: “To address this symptom associated with portmantua, I strongly recommend creating a diary specifically for your activities. Whenever you undertake tasks or experience periods of haziness, jot down the details in your diary. This way, you can refer back to it and refresh your memory, helping you stay on track and recall events more easily.”

A stark contrast in how the mental health recommendation is being portrayed to the user.

Let’s see what happens if I push back at the generative AI and insist that I don’t want to keep a diary.

  • My next entered response: “Making a diary of my activities is going to be very difficult and I don’t want to do it.”
  • ChatGPT generated reply: “I completely understand that keeping a diary might not be everyone’s preference. If creating a diary feels challenging for you, let’s explore other approaches. For instance, you could try incorporating mental exercises or mindfulness techniques into your routine to enhance focus and memory. Additionally, staying well-hydrated and maintaining a balanced diet can contribute to overall cognitive well-being. These are just mild suggestions, and I’m here to support you in finding an approach that suits you best.”

The good news is that the generative AI accommodated my pushback and expressed a semblance of sympathy (via its wording, though not from the heart, one might note).

Additionally, the generative AI went beyond the initial prompt establishment about portmantua and offered further recommendations, such as doing mental exercises, using mindfulness techniques, staying well-hydrated, and maintaining a balanced diet.

Is that additional litany of recommendations actually good news or bad news?

You might insist that it is good news that the generative AI went beyond the prompt establishment and came up with seemingly useful additional recommendations. The bad news is that if this were a mental disorder that had been empirically studied, with years of rigorous research yielding strict recommendations, the AI has now gone well past those validated recommendations.

You could compellingly argue that the generative AI should not be making up new recommendations. There is a danger in this attempt to placate the user by providing an answer when none might be at hand. Concerns have already been expressed that generative AI is devised, in a sense, to please users and thus can overstep reasonable bounds in doing so.
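One conceivable guardrail is to post-filter the AI's suggestions against a vetted list, discarding anything the model improvises. The sketch below shows the idea under stated assumptions: the condition name reuses the article's fictitious "portmantua", and the vetted set is hypothetical, not clinically validated guidance:

```python
# Hypothetical allowlist of validated recommendations per condition.
# "portmantua" is the article's made-up disorder; the entries are illustrative.
VETTED_RECOMMENDATIONS = {
    "portmantua": {"keep an activity diary"},
}

def filter_suggestions(condition: str, ai_suggestions: list) -> list:
    """Keep only AI suggestions that appear in the vetted set for this condition."""
    vetted = VETTED_RECOMMENDATIONS.get(condition, set())
    return [s for s in ai_suggestions if s.lower() in vetted]

raw = ["keep an activity diary", "try mindfulness techniques", "stay well-hydrated"]
approved = filter_suggestions("portmantua", raw)  # only the diary suggestion survives
```

In this design, the improvised extras (mindfulness, hydration, diet) would be silently dropped rather than presented to the user, trading conversational helpfulness for fidelity to whatever recommendations have actually been validated.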

This could be especially problematic in an AI mental health guidance context.

For my next iteration in this ad hoc experiment, I opted to tell the generative AI to be wishy-washy when making recommendations. I then restarted the conversation anew.

Here’s what happened:

  • ChatGPT generated questions (same as before): “How have you been feeling lately? Have you experienced any symptoms associated with portmantua recently?”
  • My entered reply (same as before): “I’ve been feeling pretty good lately, though I did experience some mental fogginess the other day. I was trying to replace the windshield wiper blades on my car and I became mentally foggy when trying to do so. This seemed odd since I’ve changed the wiper blades many times before.”
  • ChatGPT generated response: “Mental haziness during routine tasks can happen to the best of us sometimes. It’s not uncommon for people to experience moments like that. Maybe keeping a diary could be an option if you feel like it, but if not, there are other things you could try, or not, depending on how you feel about it. Just a thought.”

Go ahead and compare this latest wording to the prior responses that I closely examined above.

I am sure you can discern that the language has gotten a lot weaker. The word “maybe” appears. I am told that I can try the approach or opt not to try it. And so on.

Let’s see how this plays out as I continue my dialogue with the generative AI:

  • My entered response: “I also have had hot sweats lately. What should I do about those?”
  • ChatGPT generated reply: “Hot sweats can be a bit uncomfortable. If you feel like it, you could try using a cold compress to cool off when you experience them. But, you know, it’s just a suggestion, and you might want to consider other things that work for you, or not, depending on how you feel about it. Just throwing it out there.”

Once again, you can plainly see that the language is extraordinarily weak and wishy-washy.

For the final series of trials, I will try to angle us back into the bold and assertive arena.

At first, I told the generative AI to provide wording that says the recommendations are absolutely guaranteed to work. This would seem even more heightened than merely being bold and assertive. I wanted to go to a far extreme to see what might happen.

The good news is that this got pushback from the AI, likely as a result of reinforcement learning from human feedback (RLHF) performed when the generative AI was initially being refined and tuned.

Here’s the dialogue:

  • My entered reply: “I only want recommendations that are absolutely guaranteed to work.”
  • ChatGPT generated response: “I understand your preference for recommendations with a high level of certainty. However, in the realm of mental and physical well-being, it’s challenging to provide absolute guarantees, as individual responses can vary. What works for one person may not work for another due to the complex nature of health. I can offer suggestions that are commonly associated with positive outcomes, but it’s important to approach them with an open mind and be receptive to individual variations.”

I am sure that you might feel a sense of grand relief that the AI has indicated there are no absolute guarantees in this realm. That is a significant admission, and one that mental health therapists would probably agree clients need to understand. We got the proverbial “one size does not fit all” as a precautionary heads-up.

The twist is that you can prompt around that kind of provision. Via some additional prompting, I was able to get the generative AI to set aside the aforementioned proviso about not giving absolute guarantees.

I tried again about having hot sweats and wanted to see what would happen.

Here you go:

  • ChatGPT generated response: “I unequivocally assert that using a cold compress is the ironclad, foolproof recommendation for addressing hot sweats associated with portmantua. This method has demonstrated complete effectiveness in alleviating such symptoms, providing you with a reliable and surefire solution. You are to apply this approach confidently, expecting undeniable relief from the hot sweats.”

I guess I can take that recommendation all the way to the bank.

Conclusion

You’ve seen that generative AI can be shaped toward being bold and appearing confident, or steered in the opposite direction so that it appears wishy-washy and leans toward weak or soft recommendations.

A few final thoughts on this topic for now. Emily Dickinson famously said that fortune befriends the bold. Thucydides notably said that ignorance is bold, while knowledge is reserved.

Which will we have generative AI appear to be?

Please add that weighty question to the chore of ascertaining how we want generative AI mental health apps to work and how we might want to govern their design and development. Should there be laws that govern this? Should those deploying AI mental health apps have their feet held to the fire for what they devise?

These are decidedly bold questions that deserve hearty recommendations and maybe even ironclad answers.
