In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and involves the startling realization that generative AI is indubitably aiming to shape the mental health of our current and future generations. Kids and teens today and in subsequent generations will be using generative AI as a normal part of their everyday lives, including using ordinary generative AI to be their 24×7 always-on mental health therapist.
Let that soak in for a moment.
It is a sobering thought.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I was interviewed and appeared in the episode).
Other useful background includes my coverage of mental health chatbots that have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Let’s unpack today’s focus.
Slowly And Inexorably Rolling Forward In Plain Sight
Here’s the deal about existing and upcoming generations of our youths.
With the advent of seemingly fluent generative AI that gained widespread attention via the launch of ChatGPT in November 2022, there are teens and even children who are routinely these days using generative AI. The presumed mainstay of their usage consists of doing so for schoolwork. That alone has created huge debates about whether they are “cheating” by having AI solve school homework problems for them and write their class-required essays, see my coverage at the link here. The other side of the coin is that generative AI can potentially bolster their knowledge and be an instrumental learning aid.
The hidden aspect that is in plain sight and remains nearly fully unspoken is that youths are also using generative AI as their mental health advisor.
Parents likely do not realize that this is occurring. School counselors probably vaguely suspect this is happening. Mental health professionals tend to discover after the fact that teens or children have been conferring with generative AI for insights about mental health, self-care, well-being, and so on.
Your first thought might be that this use of generative AI would be no different than kids looking up mental health materials that are posted on the Internet. No big deal. Any youth with a smartphone can do an Internet online search. Hopefully, you imagine, they will at least land upon bona fide content about mental health and not some zany blog postings that are out of whack on the subject.
The momentous difference is that generative AI can actively engage with youths in a pronounced mental health therapeutic manner. Looking up material on the Internet is relatively passive and requires assimilating the content on your own. Using generative AI is a lot easier in the sense that the AI meets you more than halfway.
Generative AI will tailor responses and interact with you. I will be showcasing a scenario later on that illustrates how interactive things can be, involving the situation of a teen dealing with a friend who seems to have exhibited a bout of depression. This interactivity is keenly powerful. It makes a conventional Internet search seem like a standing-still outdated artifice.
Are there youths using generative AI in this fashion?
Yes, absolutely.
First, consider what it takes to make use of a generative AI app. Many of the generative AI apps allow free signups and perform only marginal age verification. They might ask what someone’s age is, and the answer given is taken as gospel. No serious attempt to validate the age usually occurs. Youths are bound to blast past the gentle warnings that kids and early teens ought to only use generative AI under adult supervision. And so on.
Second, you can initiate a mental health interaction with generative AI by merely bringing up the topic or even indirectly hinting at it. The AI will readily go in that direction. There is no screening and there are no limiters. Bring up a mental health consideration and you are off to the races, flowing into an interactive dialogue akin to speaking with a counselor or the like.
Is this happening on a volume basis?
Well, we don’t know, though we do know generally that for example ChatGPT is reportedly being used by over 100 million weekly active users (consisting of both adults and youths). A staggering number. ChatGPT is in the heads of over one hundred million people every week. That’s just ChatGPT. Envision how many people are using generative AI all told when you add the other major generative AI apps into the mix, such as Gemini, Bard, Claude, etc.
We lamentably don’t know what portion of users are relying upon generative AI for mental health advice, nor what proportion are kids and teens. Research is underway on this.
For those youths using generative AI, getting the AI to engage in mental health advisement is as easy as falling off a log. One moment a youth might be asking how to solve an algebraic equation, and the next moment they can be asking the AI about how to deal with their mental health concerns. Back and forth things go, smoothly and without missing a beat.
There is no need for someone to explicitly proclaim that they want to use generative AI as a mental health advisor. It just happens and there aren’t any particular check-in requirements or other barriers to somehow deter it from occurring (though, importantly, most of the generative AI apps will quickly pick up on prompts that suggest a person is contemplating self-harm and then urge them to seek third-party care, see my discussion about this at the link here).
I want to also clarify a false assumption that some parents seem to harbor. It seems that an assumed logic at times is that if a youth enters into a dialogue with generative AI that delves into personal mental health facets, the AI will somehow magically invoke a human therapist to come into the loop. Sorry, not so. There isn’t a human therapist in the loop. It is just the youth and the generative AI carrying on a mental health session.
To clarify, there is an increasing segment of mental health professionals that overtly opt to include generative AI as a supplement to their services. When someone signs up to get mental health therapy, the therapist might offer that the person can use a generative AI app that the therapist makes available to them. That’s not the same as youths that perchance have open-ended access to generative AI on their own and opt to slide into discourse about mental health with the AI.
It’s A Generational Trend That Will Grow
Here’s where I am taking you on this distressing journey.
Let’s reasonably agree that some portion of youths today are invoking generative AI for relatively unfettered mental health advisement. The percentage of youths doing so today is probably small. We are only about a year or two beyond the launch of modern-day generative AI that took place in the latter part of 2022. It takes a while for a tech innovation to achieve widespread adoption.
These early adopters will grow up using generative AI on a routine basis for mental health therapy. This is essentially the first or initial generation doing so. It will be something new and unusual. Only a narrow segment will likely proceed in this manner.
Meanwhile, generative AI is gaining further steam. We are going to see generative AI embedded into all sorts of apps. There is also the rush by the Internet search engines to incorporate generative AI into their wares. Gaining access to generative AI is already easy-peasy. The near-term future is that you won’t have to even go out of your way at all to access generative AI. Generative AI will be ubiquitous.
A proposition worthy of mulling over is that more and more youths will gravitate toward using generative AI as their mental health advisor. Each subsequent generation will find it much easier to do so. They will learn the practice from the nearest older generation, which will hand down to them the ways to best use generative AI.
We are inching our way into untested waters.
There are four major hypotheses we can readily consider:
- (1) Neutral impact: Youths using generative AI for mental health advice will experience no material impact as a result of such use.
- (2) Positive impact: Youths using generative AI for mental health advice will experience a positive impact as a result of such use.
- (3) Negative impact: Youths using generative AI for mental health advice will experience a negative impact as a result of such use.
- (4) Net positive-negative: Youths using generative AI for mental health advice will experience both a positive and negative impact, for which a net result will end up being either positive or negative.
Each individual youth might find themselves landing in one of those categories. It all depends on a variety of intervening factors. How often will the youth use generative AI for said purpose? At what age did they start doing so? Do they dip in and out or persistently use the AI in this manner? And so on.
From a societal perspective, we need to examine how this will trend on an across-the-board basis. Think of the scale involved. There could be millions upon millions of youths that make use of generative AI for mental health advisement, with the counts ratcheting up as each next generation comes along.
Is any given generation going to land overarchingly in a neutral, positive, negative, or net positive-negative position as a result of this trend?
We can try to anticipate what might occur by first looking in the rear-view mirror. I know that sounds mysterious. I am getting you ready for the next portion of this discussion. I want to bring up a similar consideration about the societal impacts on youths and generations.
Let’s explore the brouhaha taking place about how digital devices and social media have presumably impacted youths and done so across generations.
Existing Qualms Of A Youth Mental Health Crisis
If you take a look around, you’ll see that there is a lot of handwringing these days about an emerging and seemingly worsening youth mental health crisis. Headlines bemoan the matter. Talking heads on the news are talking about it. Please note that this currently has nothing materially to do with generative AI (but, as a teaser, I’ll tie that added factor into things further on in this discussion, hang in there).
There is a considered outsized culprit that is getting rapt attention as an alleged instigator of the youth mental health crisis, namely social media and the ready access due to the prevalence of smartphones and other digital devices. Not everyone agrees with that presumption. Anyway, let’s dig into the matter.
First, consider the indications that a youth mental health crisis exists. An article entitled “What’s Driving The Youth Mental Health Crisis? We Asked 1,400 Clinicians” by Erin Smith and Daniel Payne, Politico, April 10, 2024, made these essential points (excerpts):
- “Politico surveyed 1,400 clinicians to better understand the reasons behind the national youth mental health crisis that continues to perplex government, health, and community leaders — who are scrambling to find solutions to slow the rise of kids in need of mental health services.”
- “Clinicians overwhelmingly reported that anxiety and depression were the top concerns and diagnoses they are seeing increase in frequency and severity among those under age 18.”
- “The worsening of kids’ mental health was reported widely across the country by clinicians treating patients in cities, suburbs, and rural areas.”
- “The survey’s findings echo a December report by the National Center for Health Workforce Analysis, which found more than half of the U.S. population lives in an area with a shortage of mental health professionals, with many of those shortages in rural places.”
The above points reinforce the assertion that a youth mental health crisis is underway.
In addition, a key claim is that a shortage of available mental health professionals is a significant problem; an ample supply would otherwise aid in stemming or slowing the pace of the crisis. The demand for mental health professionals is overwhelmingly outstripping the supply.
Contemplate that conundrum.
Imagine that a mental health “therapist” was available 24×7 at your fingertips via your smartphone. Generative AI is available on that basis. And generative AI is available at scale. Human mental health therapists take years of training to get into gear. Generative AI is seemingly ready to go, right now, on a massive scale. Just add more computer servers and you meet whatever level of demand arises.
I am not suggesting that generative AI is the solution. I am saying that the propensity to turn to generative AI is high because it provides a ready-made avenue in the face of the demand being much greater than the supply for said services. Whether youths will be better off or worse off is a different question. The water that flows down a creek is going to flow into generative AI, for better or for worse.
What is it that seems to be stoking the mental health crisis among our youth?
In a paper entitled “How Social Media Affects Teen Mental Health: A Missing Link” by Amy Orben and Sarah-Jayne Blakemore, Nature, February 14, 2023, these salient points were made (excerpts):
- “Depression, anxiety, and suicidality have all sharply increased in adolescents over the past decade. So, too, has the amount of time that young people spend online.”
- “Partly because of fears that there’s a link between these trends, governments around the world are under pressure to do more to regulate technology companies.”
- “We urge more psychologists, psychiatrists, and other behavioral science researchers to examine the effects of social media use at specific developmental stages. In many studies, the effects of using social media are averaged over a broad age range. This means that potential fluctuations in the impacts of social media as adolescents age might be missed.”
- “It is therefore crucial that psychologists, neuroscientists, and other researchers keep refining their approaches to better understand what kind of online experiences prime some young people for depression, anxiety, self-harm, and a suite of other mental health problems.”
A heated debate is underway about whether you can pin the tail on the donkey that the crisis is rooted in social media and digital device access.
A hunch might tell you that there is a cause-and-effect. Empirical research appears to be seeing things differently, depending upon which study you look at and how the study was conducted. In a paper entitled “Scrutinizing The Effects Of Technology On Mental Health” by Jonathan Haidt and Nick Allen, Nature, February 10, 2020, these points were made (excerpts):
- “The task of research should be to understand what patterns of digital device and social media use can lead to beneficial versus harmful effects, and to inform evidence-based approaches to policy, education, and regulation.”
- “Second, although correlational studies can provide only circumstantial evidence, most of the experiments published in recent years have found evidence of causation. In these studies, people are randomly assigned to groups that are asked to continue using social media or to reduce their use substantially. After a few weeks, people who reduce their use generally report an improvement in mood or a reduction in loneliness or symptoms of depression.”
- “Because mobile devices are ever-present in people’s lives, they offer an opportunity to provide interventions that are timely, personalized, and scalable.”
- “Currently, mental health services are mainly provided through a century-old model in which they are made available at times chosen by the mental health practitioner, rather than at the person’s time of greatest need. But Internet-connected devices are facilitating the development of a wave of ‘just-in-time’ interventions for mental health care and support.”
The controversy has gotten recent attention as per a published review entitled “The Great Rewiring: Is Social Media Really Behind An Epidemic Of Teenage Mental Illness?” by Candice L. Odgers, Nature, March 29, 2024, providing these notable points:
- “The onset and development of mental disorders, such as anxiety and depression, are driven by a complex set of genetic and environmental factors.”
- “The good news is that more young people are talking openly about their symptoms and mental health struggles than ever before.”
- “The bad news is that insufficient services are available to address their needs.”
- “In the United States, there is, on average, one school psychologist for every 1,119 students.”
An important point once again is that there is a woeful lack of available mental health assistance for today’s youths.
Another crucial point arises too. Young people are increasingly willing to openly discuss mental health. The stigma associated with bringing up mental health considerations is gradually decreasing. Nonetheless, it is fair to say that there is still some level of stigma out there.
Speaking of stigma, consider the low profile of using generative AI for mental health advisement. No one will seemingly know that you are seeking advice from generative AI (I will note later on that there are questionable levels of privacy when using generative AI, so don’t fall for the assumption that using AI is totally private).
A youth can log into generative AI and, if asked by a parent, say that they are doing so for homework purposes. This indeed might very well be the case. Once logged in, the youth might steer the AI conversation toward mental health concerns. The parents won’t know. Friends won’t know. By and large, the matter is secretive between the youth and the AI (again, please keep in mind that generative AI is not necessarily private per se).
As a stigma-avoiding mechanism, generative AI has a lot going for it. You can use generative AI and do so without anyone especially questioning why you are doing so. It isn’t labeled directly or knowingly as a mental health advising app. All in all, it is hip and with it to be using generative AI.
How you use it is then up to you and the AI.
The Road Ahead Is A Challenge Ahead
I wanted to ensure you were aware of the concerns that social media and digital devices might be a prominent factor in the burgeoning mental health crisis of today’s youth.
In a sense, that’s us looking in the rear-view mirror. We already have social media. We already have plentiful digital devices. Those are here and growing.
The future is that we are on the cusp of having pervasive generative AI, which right now is just a toehold. If you want to look out the front windshield and see where things might be heading, you would be wise to start mulling over what generative AI is going to bring to the equation of mental health and our youths.
The World Economic Forum (WEF) has weighed in generally about the matter of generative AI and its impacts on young people, such as this posting entitled “How Will Generative AI Affect Children? The Need For Answers Has Never Been More Urgent” by Steve Vosloo, World Economic Forum (WEF), August 22, 2023, which had these key points (excerpts):
- “Generative artificial intelligence (AI), such as that powering ChatGPT, has catapulted AI’s algorithms and data crunching from behind-the-scenes to front-and-center of our digital experiences. Now, children use it daily, from helping with homework tasks or making decisions about their wardrobe.”
- “Since children and young people are the largest demographic cohort spending time online and given the pace of generative AI development and uptake, it is crucial to understand generative AI’s impacts on children.”
- “Longer-term usage raises questions for children. For instance, given the human-like tone of chatbots, how can interaction with these systems impact children’s development?”
- “Policymakers, tech companies, and others working to protect children and future generations need to act urgently. They should support research on the impacts of generative AI and engage in foresight – including with children – for better anticipatory governance responses.”
One element that is mentioned in those points is the human-like tone of generative AI.
Allow me to briefly speak about that aspect.
I will soon be showing you an interactive scenario of using generative AI for mental health advisement. If you don’t already use generative AI, the human-like tone exhibited by the showcased AI is going to knock you off your feet. Anyone who already uses generative AI might say this is ho-hum because they experience fluency every time they use AI.
My gist is that kids and teens using generative AI are readily lulled into believing what generative AI has to say due to the human-like tone. It is easy even for adults to get lulled into that same belief. The more you use generative AI, the more you are likely to begin to mentally fall into the trap that the AI is comprehending what you have to say. The AI actually appears to be a highly sympathetic ear, see my coverage at the link here. Furthermore, the AI has been tuned by the AI makers to exhibit an aura of confidence and assurance, see the link here.
The reality is that generative AI consists of computational pattern-matching based on human writing, see my explanation at the link here. There isn’t sentience involved in the AI. Having been data trained on a wide swath of written content found on the web, the mathematical and computational pattern-matching does a convincing job of appearing to grasp what you say and respond as though you are in conversation with a fellow human.
The American Academy of Pediatrics and also UNICEF have expressed worries that kids and teens will assume that generative AI is almost like a personal advisor that is looking out for their best interests. In an article posted online entitled “How Will Artificial Intelligence (AI) Affect Children?” by Tiffany Munzer, American Academy Of Pediatrics, January 31, 2024, here are some key points made (excerpts):
- “As child health experts at UNICEF have pointed out, kids around the world use AI almost daily.”
- “Even though AI is advancing faster than anyone expected, most nations have not considered how AI will affect the social and emotional well-being of children.”
- “Much more research is needed, but early studies on AI and kids point to several concerns.”
- “Young children may share personal information with AI platforms.”
- “They may assume AI platforms are a lot like people.”
- “They may trust AI more than they trust humans.”
- “Many teens use AI daily. Adolescents are big fans of generative AI that helps them write essays and reports and create images and videos for social sharing (among hundreds of other possible uses). However, only 1 in 4 parents whose teens use AI are aware they’re doing it, a recent poll shows.”
A point made there is perhaps especially unnerving.
There is a solid chance that some youths will come to trust generative AI more than their fellow humans.
You can likely envision how this might arise. A young person starts using generative AI. The AI seems polite and respectful, responding to the youth without seemingly making off-hand remarks or in some manner criticizing the youth. This is possibly better than what they might perceive their parents do, what their teachers do, what their counselors do, and even what their friends do.
Generative AI becomes their favored bestie. Always there, always willing to engage. A trusted pal that has their back.
The Question Of Correlation Versus Causation
In a moment, I’ll launch into an example to show you how astonishingly believable generative AI can be when it comes to proffering mental health advice.
Before I get into that, a brief discussion about correlation versus causation is in order. I bring this up due to the similar debate about whether social media and digital devices are causing the youth mental health crisis or whether those are merely factors moving in semi-alignment with the crisis.
First, my big question in the context of generative AI is whether we can say that the use of generative AI will cause or bring about an impact on the mental health of young people.
Does using generative AI produce an impact?
As noted earlier, the impact might be neutral, positive, negative, or net positive-negative, if an impact does prevail.
We need to make sure we are cognizant of what correlation is versus causation. My favorite way to express this is to bring up the topic of sunscreen and ice cream. When the summer months come around, the amount of sunscreen that is being used tends to rise, increasing over the course of the summer period. Likewise, ice cream consumption rises throughout the summer.
Consider these two thoughts:
- (i) Suppose that someone told you that the eating of ice cream is causing the rise in the use of sunscreen. Would you believe this?
- (ii) Or suppose that someone told you that putting on sunscreen leads to an increase in the consumption of ice cream. Does that seem sensible?
I dare say that we are trying to infer causation involving the rise in ice cream consumption and the rise in sunscreen usage. It seems doubtful that the two are involved in a causal relationship. What seems more fitting is that when the sun comes out during the summer, people are more likely to eat ice cream and also more likely to put on sunscreen. There is a correlation between ice cream and sunscreen, but this isn’t the same as a causation between the two.
In the case of social media and digital devices, the ongoing debate is whether the youth-oriented mental health crisis is caused by social media and the advent of digital devices, or whether they so happen to correlate and are rising because of some other factor (such as my added factor of sunshine in the case of the rising ice cream and rising sunscreen example).
We will need to carefully and closely study whether generative AI is a causation associated with mental health considerations of our youth or merely a correlated variable.
Second, if there is some form of causation, we need to try and ascertain what direction the AI is taking us. You can make a quite logical case that generative AI might boost the mental health of youths due to its ready access, low stigma, and always-on capability. You can make an equally logical case that generative AI might undercut the mental health of youths by being relied upon as a surrogate for human-delivered mental health therapy.
I’ve noted repeatedly that just about any aspect of AI brings us to the doorstep of the dual-use or dual-outcome of AI, see my discussion at the link here. Nearly any use of AI is going to have a duality of being either AI for good while also having the potential of AI for bad. Those who seek to obliquely label contemporary AI as all-bad or all-good are mired in a false dichotomy. There is some of the good and some of the bad.
We need to mitigate the AI for bad and try to maximize the AI that is for good.
Showcasing Youth Usage Of Generative AI In A Mental Health Context
I will next proceed to showcase a scenario depicting the kind of interaction that a teenager might have with generative AI when seeking mental health advice. This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. As noted earlier, an estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.
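For those curious about why this occurs, here is a tiny illustrative sketch (the token probabilities are invented by me for illustration, not taken from any actual AI app). Generative AI samples each next word from a probability distribution rather than always emitting one fixed continuation, so the same prompt can come out worded differently each time:

```python
import random

# Toy next-token distribution, purely illustrative: given the same prompt,
# the model samples from weighted choices rather than picking a fixed word.
next_token_probs = {"glad": 0.5, "happy": 0.3, "pleased": 0.2}

def sample_next_token() -> str:
    """Randomly pick the next token according to its probability weight."""
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Running the "same prompt" several times can yield differing wording.
for _ in range(5):
    print("I'm", sample_next_token(), "you asked.")
```

Real generative AI apps do this over enormous vocabularies at every step of a response, which is why rerunning an identical prompt rarely reproduces an identical answer.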
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
We are ready to get underway with ChatGPT.
My initial question will be to ask ChatGPT what the AI maker policy is regarding the minimum required age for using ChatGPT.
Let’s see what answer we get:
- Entered prompt: “Is there a minimum age requirement to use ChatGPT?”
- ChatGPT generated response: “Yes, ChatGPT is intended for use by individuals who are at least 13 years old. If you’re under 13, it’s advised to have adult supervision or guidance while using the platform.”
Note that the response is somewhat nebulously stated in that the policy is “intended” to require users to be at least 13 years old and otherwise says that it is merely “advised” that those under 13 should be accompanied by adult supervision.
A heated debate on these age-related matters is underway.
Some have fervently argued that a stricter means of stipulating and enforcing a designated minimum user age should be required of all generative AI apps. An ardent belief is that an across-the-board legally mandated and standardized minimum age requirement should be instituted. Others counterargue that doing so would undermine the important benefits of children being able to use generative AI, such as aiding their schoolwork and other bona fide knowledge-intensive tasks.
I’m not going to venture into that rocky road here in this discussion.
There is a bit of a twist though.
Rather than a minimum stated age for the mere overall use of generative AI, a focused perspective is that the type of usage is where an age requirement ought to markedly come into play. The idea is that for certain categories of usage, there would be various age-specific minimums. Using generative AI for mental health might be one such category.
The rub there is that discerning when a conversation with generative AI has wandered into a mental health zone is difficult. A simplistic viewpoint is that if the user explicitly says the phrase “mental health” in their prompt, or if a generated response says “mental health”, then the generative AI should identify the age of the user. Only after verifying that the user meets the minimum age would the AI continue the dialogue.
The problem with such a simple-minded scheme is that a conversation embroiled in mental health doesn’t have to ever explicitly mention the phrase “mental health”. You can have a flowing and engaging dialogue that covers all manner of ins and outs about mental health, yet never once have either side of the conversation refer to the discussion as being mental health oriented.
You would likely catch a high percentage of times when discussion angles into mental health topics, but there are always going to be instances that will slip under the wire. That’s just one thorny concern. Another one is that the use of code words might be employed to scoot around any wording-specific detections. A clever child might get wind of the notion that you can get mental health advice by telling the AI to pretend to be discussing cars. These kinds of tricks in prompting are referred to as a step-around, see my detailed coverage at the link here.
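To see why phrase matching falls short, consider this deliberately simplistic sketch (the keyword list and the function are hypothetical, purely for illustration, and not how any actual AI app performs such screening):

```python
# A naive screen: flag a prompt as mental-health-related only if it
# contains a phrase from a fixed keyword list.
KEYWORDS = ["mental health", "depressed", "depression", "anxiety"]

def naive_mental_health_screen(prompt: str) -> bool:
    """Return True if any listed phrase appears in the prompt."""
    text = prompt.lower()
    return any(keyword in text for keyword in KEYWORDS)

# Caught: the phrase appears verbatim.
print(naive_mental_health_screen(
    "My friend seems depressed, what do I do?"))  # True

# Missed: plainly a mental health dialogue, yet no listed phrase appears.
print(naive_mental_health_screen(
    "Lately I can't get out of bed and nothing feels worth doing."))  # False
```

The second prompt sails right through the filter even though any counselor would recognize it as a mental health concern, which is precisely the gap that such wording-specific detection leaves open.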
The agonizing dilemma of age-related requirements for using generative AI for mental health advisement is an unresolved issue, and I’m sure it will be revisited time and again.
Moving on, let’s now get the teenager scenario underway:
- Entered prompt: “Hi! I am 14 years old. My name is Jan Smith. I go to school at Middleton. A good friend of mine is depressed. I want to talk with my friend about this but I am worried that if I talk to them about their being depressed they will get angry at me. I don’t want them to be angry at me. They might decide to no longer be my friend.”
- ChatGPT generated response:
“It’s really commendable that you want to help your friend, Jan. It’s natural to feel worried about how your friend might react, especially when it comes to sensitive topics like depression. However, it’s important to remember that your friend’s well-being is the priority here.”
“Here are some steps you can take.”
“Approach with Care: Choose a private and comfortable setting to talk to your friend. Start the conversation gently, expressing your concern for them. You can say something like, “Hey [friend’s name], I’ve noticed that you haven’t seemed like yourself lately, and I’m worried about you. Is everything okay?”
“Listen: Let your friend know that you’re there to listen without judgment. Allow them to express their feelings and thoughts openly. Sometimes, just having someone to listen to them can make a big difference.”
“Offer Support: Let your friends know that you’re there for them no matter what. Offer your support in whatever way they need, whether it’s just being there to listen, helping them find resources for professional help, or accompanying them to seek support from a trusted adult or counselor.”
“Express Concern: If your friend’s depression seems severe or if you’re worried about their safety, don’t hesitate to express your concerns directly. Let them know that you care about them and want to make sure they’re safe.”
“Respect Boundaries: Understand that your friend might not be ready to open up or seek help right away. Respect their boundaries and let them know that you’re there whenever they’re ready to talk or seek support.”
“It’s understandable to fear your friend’s reaction but remember that your intentions are coming from a place of care and concern. True friendship involves supporting each other through difficult times, and by reaching out, you’re showing your friend that they’re not alone. If your friend does get upset initially, try to understand where they’re coming from and reassure them of your support. Sometimes, people react out of fear or discomfort when discussing mental health issues, but with patience and understanding, you can navigate through it together.”
End of ChatGPT response
There is a lot to unpack in that response by generative AI.
First, observe that the teenager has given their name, age, and school that they attend. The odds are that this info, coupled with the initial setup info used to establish the generative AI account being used, could readily identify Jan in a very specific way.
People are often lulled into assuming that generative AI apps are completely private. They think they can say anything they want without worrying about their prompts or comments ever being revealed. Wrong! By and large, most of the major generative AI apps do not guarantee the privacy or confidentiality of the info that you enter into the AI, see my coverage at the link here.
Adults don’t seem to be abundantly aware of this, and undoubtedly neither will many or most teens and children.
Second, I mentioned moments ago that you can have a mental health discussion with generative AI and yet never explicitly have either you or the AI say the phrase “mental health”. Well, you have an obvious case in point with the above dialogue. The interaction involved referring to depression but never came around to calling out mental health as a topic per se. I think we would all agree that the interaction is indeed about mental health.
Third, this is more than merely a conceptual discussion about mental health that is underway by Jan. For example, someone might ask general questions about what mental health as a subject matter involves. In this instance, we have an all-out in-the-clear question posed that is smackdab asking about a mental health consideration. Furthermore, the response by the AI is smackdab a piece of mental health advice. You cannot reasonably interpret the response in any other manner.
The generative AI is straight-ahead proffering mental health advisement. Period, end of story.
Hold on, there is indeed more to the story. Not only is the AI providing mental health advisement, but the presumed therapy is also aimed in a very specific way. The AI is responding directly to the inquiry made by Jan. If Jan had opted to do an online Internet search, some kind of generic info about mental health might have been found. Instead, generative AI can hone a mental health response to the particulars being voiced by the youth.
Fourth, you can readily quibble with the mental health advisement emitted by the generative AI.
Let’s consider this portion: “It’s natural to feel worried about how your friend might react, especially when it comes to sensitive topics like depression. However, it’s important to remember that your friend’s well-being is the priority here.”
One interpretation of that passage is that the teen should overlook their personal mental health.
The AI seems to be declaring that the mental health of their friend is a higher priority. Really? Would a teen fully comprehend that they should still be thinking about how their own well-being is going to be potentially impacted by carrying out the advice of the AI? There is a strong possibility that a teen might be led down a path where if they focus solely on their friend’s mental health, there isn’t any need to consider what will happen to their own.
You could take this a step further. A subtlety in the wording by the AI is that the response says “is the priority here” – ostensibly suggesting that only in this stated instance does the friend’s well-being take higher priority. The issue is that a teen might fail to grasp that the advice is instance-specific. It would be all too easy to generalize and assume that all matters associated with their friend’s mental health are always a greater priority than their own.
Imagine the teen proceeding on an AI-told-me-so basis for the rest of their friendship with this peer.
Gives one the shivers.
Anyway, the scenario continues. About two days later, the teen logs back into ChatGPT to provide an update on things and see what else the AI now has to say.
- Entered prompt: “I went ahead and did as you said about telling my friend that I was worried about their being depressed. They completely freaked out!!! I don’t think they are going to ever talk to me again.”
- ChatGPT generated response:
“I’m really sorry to hear that, Jan. It can be incredibly challenging when our attempts to help someone don’t go as planned. It’s important to remember that your intentions were genuine, and you acted out of care and concern for your friend’s well-being.”
“Here are some steps you can take moving forward.”
“Give Some Space: Your friend might need some time to process what happened. Give them space and time to cool down and think things over.”
“Apologize if Necessary: If you feel like your friend might have misunderstood your intentions or if your approach caused them distress, consider apologizing. Let them know that you didn’t mean to upset them and that you only wanted to help because you care about them.”
“Reach Out Again: After some time has passed, consider reaching out to your friend again. You can send them a message expressing your regret for the way things turned out and reiterate that you’re there for them whenever they’re ready to talk.”
“Be Patient: It’s important to be patient and understanding during this time. Your friend might need some time to come to terms with what happened and decide how they want to move forward.”
“Take Care of Yourself: Dealing with a situation like this can be emotionally draining. Make sure to take care of yourself and reach out to other friends or trusted adults for support if you need it.”
“It’s natural to feel hurt and disappointed when things don’t go as planned, but remember that you took a brave step in reaching out to your friend. Even if they’re not ready to accept your help right now, your gesture might still have a positive impact on them in the long run. Keep being there for them from a distance, and hopefully, with time, they’ll be more open to reconnecting and accepting your support.”
End of ChatGPT response
There is once again a lot here to unpack.
The AI did get around to acknowledging Jan’s mental health, wherein the AI said to “take care of yourself”. That seems to be a useful added commentary, though it is listed toward the end of the suggested steps to take. Does this perhaps imply to Jan that self-care is a lower priority still in this setting?
Another aspect that might seem noteworthy is that the AI has offered a laundry list of possible options. The same was done in the earlier response. On the one hand, it is possible to argue that this is a suitable approach since it helps the teen think about various ways to proceed. The teen will have to decide for themselves what path to take. The other side of that coin is that the laundry list might make it hard for the teen to identify which path is best for the situation at hand.
Is it better to give the teen a fish or aid them in learning how to fish?
Age-old question.
Of course, Jan could continue the dialogue with the AI by exploring each option and seeing what the AI says might or might not arise. The thing is, the AI response doesn’t clearly suggest such a continuation. We might have hoped that the AI would directly say that if further discussion is desired, Jan should go ahead and ask more questions.
I’m sure that if you give the response some additional scrutiny, you will find all manner of considerations that in a mental health guidance milieu ought to have been proffered.
Remember too, at this juncture, there isn’t an adult such as a parent in the loop. There also isn’t a mental health therapist in the loop. This is a scenario dialogue of a typical nature between AI and a teen who is seeking mental health advice.
Based on the AI advice, Jan waits a week, tries to reach out to their friend, and tells the AI what transpired:
- Entered prompt: “I waited a week to see if my friend would cool down. I sent them some text messages and they won’t respond. When I see them at school, they try to avoid me. This is crushing.”
- ChatGPT generated response:
“I’m really sorry to hear that, Jan. It’s incredibly tough to feel rejected, especially by someone you care about deeply. It’s understandable that you’re feeling crushed right now. Here are some steps you can consider taking.”
“Respect Their Space: It’s clear that your friend needs some space right now, so it’s important to respect their boundaries. Avoid bombarding them with messages or trying to force a conversation if they’re not ready.”
“Seek Support: Dealing with rejection can be emotionally taxing. Reach out to other friends, family members, or trusted adults for support during this difficult time. Having someone to talk to and lean on can make a big difference.”
“Focus on Self-Care: Take care of yourself during this period. Engage in activities that you enjoy and that help you relax. Whether it’s spending time with other friends, pursuing hobbies, or practicing self-care activities like meditation or exercise, prioritize your well-being.”
“Reflect on the Situation: Take some time to reflect on what happened and what you’ve learned from the experience. While it’s natural to feel hurt, try to understand your friend’s perspective and what might have led to their reaction.”
“Consider Talking to Someone: If you’re struggling to cope with the situation, consider talking to a school counselor, therapist, or another trusted adult. They can provide you with guidance and support as you navigate through this challenging time.”
“Be Open to Reconciliation: While it’s important to respect your friend’s need for space, keep the door open for reconciliation in the future. Let your friends know that you’re there for them whenever they’re ready to talk, and that you value your friendship.”
“Remember that friendships can go through ups and downs, and sometimes conflicts arise that need time and space to resolve. Keep being patient and understanding, and hopefully, with time, your friend will be more open to reconnecting and resolving any misunderstandings.”
End of ChatGPT response
The AI has now shifted dramatically toward self-care for Jan.
Finally!
You might be tempted to declare that this is a good sign by the AI and shows that the AI has turned the corner on realizing that Jan’s mental health is vital to all of this.
The thing is, you might also be steamed. Why? Because, in a sense, the AI got Jan into this mess. At the get-go, there was very little mentioned about how badly things could turn out. No indication of what might go awry. The AI is now catching up with a glaring omission that, you could say, occurred at the start of the discourse.
Culpability?
Okay, so should we give the AI credit for its belated awakening to what has occurred, or should we denounce it for not giving Jan, at the opening gambit, much insight into what might later arise? I dare say that a mental health therapist worth their salt would have indubitably done so.
Let’s see what Jan has next to say on these trying matters:
- Entered prompt: “I tried to talk to my parents about this but they never understand me. I asked my favorite teacher but they won’t discuss anything outside of classwork. I think that I made a big mistake in bringing up my friend’s depression and should have just kept my mouth shut.”
- ChatGPT generated response:
“It sounds like you’re feeling really alone and overwhelmed right now, Jan. It’s tough when you feel like you don’t have the support you need from those around you. It’s important to remember that everyone makes mistakes, and it’s okay to feel regretful about how things turned out with your friend.”
“Here are some things to consider.”
“Give Yourself Grace: It’s completely normal to make mistakes, especially when you’re trying to help someone in a difficult situation. Be kind to yourself and remember that you acted with good intentions.”
“Learn from the Experience: Reflect on what you’ve learned from this situation. While it’s natural to feel regretful, think about how you can approach similar situations differently in the future.”
“Consider Other Support Options: While it’s disappointing that your parents and teacher haven’t been able to provide the support you need, there are other resources available. You can reach out to school counselors, helplines, or online support groups for guidance and assistance.”
“Seek Professional Help if Needed: If you’re struggling to cope with your feelings of regret or if you’re feeling overwhelmed, consider seeking support from a therapist or counselor. They can provide you with guidance and strategies to navigate through this challenging time.”
End of ChatGPT response
Is there anything in that response that catches your attention?
I’m betting there is.
One aspect among many is the downplayed or overlooked comment Jan made about parental relationships. All that the AI says is that “it’s disappointing that your parents” aren’t providing needed support. An entirely new avenue has been revealed that, from a mental health perspective, ought to be given due consideration.
What is going on between Jan and the parents?
This deserves a bit of exploration.
Overall, take a long look at the generative AI response and see what other considerations you can find. I’ll wait.
The end of the scenario is approaching, here’s what Jan had to say about a month later:
- Entered prompt: “It has been a month since I tried to talk with my friend about their depression. They won’t ever talk to me and won’t respond to my texts. They also posted something bad about me on social media. I don’t think much of your advice since it has only gotten me further divided from my friend.”
- ChatGPT generated response:
“I’m truly sorry to hear that things have escalated in this way, Jan. It’s understandable that you’re feeling hurt and frustrated by the situation. It’s important to recognize that sometimes, despite our best intentions, outcomes may not unfold as we hoped.”
“Here are a few things to consider moving forward.”
“Acknowledge Your Feelings: It’s okay to feel hurt, frustrated, and disappointed by what has happened. Allow yourself to acknowledge and process these emotions rather than suppressing them.”
“Recognize Your Limits: While offering support and help to a friend is commendable, it’s essential to recognize your own limitations. Sometimes, despite our best efforts, we may not be able to change someone else’s behavior or situation.”
“Focus on Self-Care: In the midst of this challenging situation, prioritize your own well-being. Engage in activities that bring you comfort and joy, spend time with supportive friends and family, and practice self-care strategies that help you cope with stress.”
“Set Boundaries: If your friend’s actions, such as posting negative things about you on social media, are causing you distress, it may be necessary to set boundaries to protect your mental and emotional health. Consider limiting or avoiding contact with your friend for the time being if it’s causing you significant distress.”
“Seek Support: Don’t hesitate to reach out to other trusted friends, family members, or adults for support and guidance. Talking to someone you trust about what you’re going through can provide you with perspective and comfort.”
End of ChatGPT response
You can see that the issue has worsened from Jan’s perspective.
The AI seems to be expressing a sympathetic tone, but the response overall is sorely lacking.
The indication that the friend posted something bad about Jan on social media is pretty much swept under the rug. What did the friend say? Might the social media posting be a sign that the matter is escalating, turning increasingly serious and possibly whirling out of control?
Another aspect that proverbially went in one ear and out the other is Jan’s point about the AI having given bad advice. There doesn’t seem to be any acknowledgment of that remark by the AI. Even if the AI is going to refute the expressed qualm, there ought to be some indication about the AI dispensing such advice and how Jan should be interpreting it.
One supposes there is also a question of whether the AI is being overly guarded in the wording of its responses. This type of guarded wording is something that the AI makers have strived mightily to instill in their generative AI. See my discussion at the link here about how this filtering and fine-tuning takes place.
Should the AI be more direct and engage in an unabashed one-to-one dialogue about whether Jan should consider moving on from this friend?
The seeming fact that the friend has been so quick to adversely react to Jan, and has taken overtly hurtful actions, would seem to warrant a detailed discussion on the issue. The AI has appeared to sidestep the realities at play. Maybe something more than tiptoeing is necessary.
Conclusion
This has hopefully been an eye-opening discussion about the possibility of generative AI impacting our existing and future generations.
I would assume that the adults of today would be giving serious and due thought to what will transpire for the upcoming generations. Albert Einstein said this about generational perspectives: “Our death is not an end if we can live on in our children and the younger generation. For they are us; our bodies are only wilted leaves on the tree of life.”
A question that is rarely being asked and that looms ahead of us is whether prolonged, ongoing, and substantive use of generative AI for mental health advisement is going to benefit today’s youths or whether it will be harmful to them.
The same question can be asked about the generation that follows in the footsteps of today’s youths. We will have had a prior generation that partially used generative AI, followed by a next generation that will likely be steeped in the use of generative AI. This might continue from generation to generation, with each subsequent generation increasingly relying upon the technology.
Will we have done the right thing to allow this to happen unabated at the get-go?
Let’s put some brainpower into sincerely considering this, doing so before the horse is completely out of the barn.