'It cannot provide nuance': UK experts warn AI therapy chatbots are not safe (theguardian.com)
154 points by distalx 1 day ago | 189 comments
hy555 1 day ago [-]
Throwaway account. My ex-partner was involved in a study which found these things were not OK. They were paid by an undisclosed party not to publish. That's how bad it has got.

Edit: the study compared therapist outcomes, AI outcomes, and placebo outcomes. Therapists in this field performed only slightly better than placebo, which is pretty terrible. The AI performed much worse than placebo, which is very terrible.

neilv 1 day ago [-]
Sounds like suppressing research, at the cost of public health/safety.

Some people knew what the tobacco companies were secretly doing, yet they kept quiet, and let countless family tragedies happen.

What are the best channels for people with information to help halt the corruption this time?

(The channels might be different from usual right now, with much of the US federal government being disrupted.)

hy555 1 day ago [-]
Start digging into psychotherapy research and tearing their papers apart. Then the SPR. The whole thing is corrupt to the core. A lot of papers drive public health policy outside the field, as the work is so vague and easy to cite, but the research is only fit for Retraction Watch.
neilv 1 day ago [-]
Being paid to suppress research on health/safety is potentially a different problem than, say, a high rate of irreproducible results.

And if the alleged payer is outside the field, this might also be relevant to the public interest in other regards. (For example, if they're trying to suppress this, what else are they trying to do? That matters even if it turns out the research is invalid.)

hy555 1 day ago [-]
Both are a problem. I should not conflate the two.

I agree. Asking questions that are normal in my own field resulted in stonewalling and obvious distress. The worst part is that this led to the end of what was a good relationship.

neilv 1 day ago [-]
If the allegation is true, hopefully your friend speaks up.

If not, you might consider whether you have actionable information yourself, any professional obligations you have (e.g., if you work in science/health/safety yourself), any societal obligations, whether reporting the allegation would be betraying a trust, and what the calculus is there.

cjbgkagh 1 day ago [-]
I figured it would be related, in that it's a form of p-hacking: run 20 studies, one gives you the 'statistically significant' result you want, and you suppress the other 19. Then 100% of published studies support what you want. This could be combined with p-hacking within the studies to compound the effect.
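
To put numbers on that: under a true null, each study has a 5% false-positive chance at p < 0.05, so at least one of 20 studies comes up "significant" roughly 64% of the time (1 - 0.95^20). A minimal simulation of the file-drawer effect, with illustrative numbers rather than anything from the study discussed above:

    import random

    TRIALS, STUDIES, ALPHA = 100_000, 20, 0.05

    hits = 0
    for _ in range(TRIALS):
        # under the null hypothesis, p-values are uniform on [0, 1]
        pvals = [random.random() for _ in range(STUDIES)]
        if any(p < ALPHA for p in pvals):
            hits += 1  # at least one publishable "significant" study

    print(hits / TRIALS)               # ~0.64 empirically
    print(1 - (1 - ALPHA) ** STUDIES)  # ~0.6415 analytically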
genewitch 19 hours ago [-]
97% of all scientists named Steve agree that global warming is happening!
ilaksh 23 hours ago [-]
Which model exactly? What type of therapy/prompt? Was it a completely dated model, like in the article where they talk about a model from two years ago? We have had massive progress in two years.
raverbashing 23 hours ago [-]
Honestly, none of the companies are tuning their models to be better at therapy.

Also, the training material for the model can't be expected to cover the actual practical aspects of therapy; only some of the theoretical aspects are probably in that material.

jdietrich 21 hours ago [-]
>none of the companies are tuning their model to be better at therapy

BrickLabs have developed an expert-fine-tuned model specifically to provide psychotherapy. Their model has shown modestly positive results in a reasonably large preregistered RCT.

https://trytherabot.com/

https://ai.nejm.org/doi/full/10.1056/AIoa2400802

raverbashing 12 hours ago [-]
Yeah, but 99% of people trying "AI mental health" are using free ChatGPT, etc.
ilaksh 22 hours ago [-]
The leading-edge models are trainable via instructions. That's why agents are possible. Many online therapy companies are training or instructing their agents in this domain.
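
For what it's worth, "instructing an agent in this domain" usually amounts to little more than a domain-specific system prompt wrapped around a chat call. A minimal sketch, where call_model is a hypothetical stand-in rather than any particular vendor's API:

    SYSTEM = (
        "You are a supportive-listening assistant. You are not a "
        "licensed therapist and must say so if asked. Do not give "
        "medical advice. If the user mentions self-harm, share "
        "crisis-line information and urge professional help."
    )

    def respond(call_model, user_message: str) -> str:
        # call_model: any function mapping a message list to a reply string
        messages = [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_message},
        ]
        return call_model(messages)

    # demo with a stub model
    print(respond(lambda msgs: "(model reply here)", "I had a rough day"))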
ktallett 11 hours ago [-]
That still wouldn't allow for edge cases and unusual situations, of which, having experienced group therapy for many years, I would say most significant therapy users have quite a few.
sorenjan 1 day ago [-]
What did they use for placebo? Talking to somebody without training, or not talking to anybody at all?
hy555 1 day ago [-]
Not talking to anyone at all.
zargon 1 day ago [-]
What did they do then? If they didn't do anything, how can it be considered a placebo?
phren0logy 1 day ago [-]
It's called a "waitlist" control group, and it's not intended to represent placebo. Or at least, it shouldn't be billed that way. It's not an ideal study design, but it's common enough that you could use it to compare one therapy to another based on their results vs a waitlist control. Placebo control for psychotherapy is tricky and more expensive, and can be hard to get the funding to do it properly.
risyachka 1 day ago [-]
Does it matter? The point is AI made it worse.
trod1234 1 day ago [-]
That seems like a very poor control group.
hy555 1 day ago [-]
That is one of my concerns.
twobitshifter 18 hours ago [-]
They should use ELIZA as the control, or at least include it, to see how far we have or haven't advanced.
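
For reference, the bar ELIZA sets is remarkably low. A toy sketch of the keyword-and-reflection trick the DOCTOR script relied on (an illustration, not Weizenbaum's actual implementation):

    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # swap first person for second person so the echo reads naturally
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

    def eliza(utterance):
        for pattern, template in RULES:
            m = re.match(pattern, utterance.lower())
            if m:
                return template.format(reflect(m.group(1)))
        return "Please go on."  # default when nothing matches

    print(eliza("I feel ignored by my family"))
    # -> Why do you feel ignored by your family?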
rsynnott 21 hours ago [-]
I'm quite curious how the placebo in a study like this works.
derbOac 8 hours ago [-]
Usually in psychotherapy studies the control is one of the following:

- waitlist control, where people get nothing

- psychoeducational, where people get some kind of educational content about mental health but not therapy

- existing nonpsychological service, like physical checkups with a nurse

- existing therapy, so not placebo but current treatment

- pharmacological placebo, where they're given a placebo pill and told it's psychiatric medication for their concern

- a kind of "nerfed" version of the therapy, such as supportive therapy where the clinician just provides empathy etc. but nothing else

How to interpret results depends on the control.

It's relevant to debates about general effects in therapy (rapport, empathy, fit) versus specific effects (effects due to the particular techniques of a particular therapy).

Bruce Wampold has written a lot about types of controls, although he has a hard nonspecific/general-effects take on therapy.

cube00 1 day ago [-]
The amount of free money sloshing around the AI space is ridiculous at the moment.
scotty79 23 hours ago [-]
I've heard of some more modern research with LLMs which found that an AI therapist was straight up better than human therapists across all measures.
caseyy 19 hours ago [-]
I know many pro-LLM people here are very smart, but sometimes it's wise to heed the words of world-renowned experts on a subject.

Otherwise, you may end up defending things like this, which is really foolish:

> “Seriously, good for you for standing up for yourself and taking control of your own life,” it reportedly responded to a user, who claimed they had stopped taking their medication and had left their family because they were “responsible for the radio signals coming in through the walls”.

HPsquared 33 minutes ago [-]
It's a bit like talking to a random friend. I could see a friend giving that response, for various reasons. All depends on the context and how the idea was introduced. Even the quote could have been selective, followed by a big "... BUT this is a bad idea".
mvdtnz 18 hours ago [-]
As much as I tend to defer to experts, you must also be wary of experts whose very livelihoods are at risk. They may not have your interests at heart.
52-6F-62 10 hours ago [-]
And the tech bros pushing magic chatbots that neither they nor anyone else has any insight into, but from which the same tech bros derive an even higher salary, a more opulent livelihood, and additional rent, certainly do have your interests at heart?

Fuck me. Maybe that guy on the street corner selling salvation or “cuckane” really was dealing in the real thing, too, eh?

krainboltgreene 18 hours ago [-]
Hell yeah, rail against those profiteering…therapists.

Man I hate this modern shift of “actually anyone who is an expert is also trying to deceive me”. Extremely healthy shit for a civilization.

mvdtnz 18 hours ago [-]
Is there something about therapists that makes them inherently noble and not prone to the same incentives as everyone else?
63 18 hours ago [-]
The implication was that the low salary selects for people who value helping people more than they value money.
zahlman 18 hours ago [-]
Indeed (the job site) says that therapists in Toronto, Canada make about CAD $55/hour. That's not FAANG level nor what you'd expect for an MD, but it's not what I'd call low, either.

That said, I certainly don't see therapists as profiteering, in the sense of trying to convince people to pay for therapy they don't need. They might plausibly feel threatened by AI, but they'd absolutely be justified in calling out examples like those in TFA.

yurishimo 8 hours ago [-]
$55/hr in Toronto isn't much once you consider the overhead to pay for the building and support staff, the fact that you can't bill for 40 hours per week, and the mentally demanding nature of the job, listening to people's mental shit all day.

Hot take: Therapists should earn more than most software devs.

krainboltgreene 14 hours ago [-]
The implication is that it's insane to simply apply an "everyone wants to grift from you" angle to anyone who is an expert, without any evidence or analysis.
mvdtnz 13 hours ago [-]
I didn't say "grift". Of course therapists are going to warn you against technology that replaces therapists for a fraction of the cost, regardless of how effective it is. That's just human nature. There's nothing wrong with self-preservation, we just need to be on the lookout for it.
m_fayer 8 hours ago [-]
Given the incredible gap between demand and supply in many places, I think many therapists would welcome a stopgap solution for people on waiting lists or struggling with costs. And they would not feel their livelihoods threatened one bit. That is, if they trusted that stopgap to at worst do no harm.
ndsipa_pomu 5 hours ago [-]
There's an inherent limit on how many people they can treat - even group therapy sessions are limited by numbers. As such, there aren't many "exploits" they can use to gain more and more power/money. Also, the job is far more likely to attract people who are interested in helping rather than exploiting others. People who want to exploit others are going to want to expand their audience.
ekianjo 18 hours ago [-]
Experts have a direct and obvious incentive to justify their existence. Radio experts warned us about TV. TV experts warned us about the Internet. If you live long enough, you see it over and over again.
sweetjuly 18 hours ago [-]
The existence of a similar contradictory example does not disprove the original point. It's okay to be suspicious and cynical, but nuance is still important.

Assuming that anyone who has anything to gain by you believing them is out to get you is rash and leads you only to those who are more willing to lie about their motivations. Yes, doctors and Big Pharma (tm) are financially motivated to sell you cures, but the guy selling you a juice cleanse ""at cost"" for your cancer is still not trustworthy.

ekianjo 17 hours ago [-]
> Yes, doctors and Big Pharma (tm) are financially motivated to sell you cures, but the guy selling you a juice cleanse ""at cost"" for your cancer is still not trustworthy.

Two things can be true at the same time. Don't trust anyone, because nobody is transparent about their incentives. Your doctor does not disclose to you that they were at a congress in Hawaii for company X when they prescribe you company X's drug for your ailment.

musicale 15 hours ago [-]
> TV experts warned us about the Internet

If they warned that it could become a distillation of the worst aspects of television... maybe they weren't wrong.

harvey9 12 hours ago [-]
This reads like an attempt to restate the buggy whip idiom. It doesn't work well even though your point has some merit.
kelseyfrog 17 hours ago [-]
It's terrifying that this applies to doctors, teachers, firefighters, and entrepreneurs.
ekianjo 17 hours ago [-]
We were not talking about engineers so thanks for your strawman
kelseyfrog 15 hours ago [-]
Maybe we live in different worlds. I see all of those professions justifying their existence. That's exactly why we should distrust them. They have an incentive to do so.
krainboltgreene 14 hours ago [-]
Brother you just described every worker.
casey2 11 hours ago [-]
The very fact that the "world class experts" are warning people not to use it means they have already been replaced in most fields that matter.

They didn't feel threatened by systems like Cleverbot or GPT-3.5.

cmsj 5 hours ago [-]
Congrats, you have trapped yourself in an ideological bubble where nobody can ever tell you that AI is a bad fit for a given application.

Try this on for size: I am not a therapist, but I will happily tell you that a statistical word generating LLM is a truly atrocious substitute for the hard work of a creative, empathetic and caring human being.

simonw 18 hours ago [-]
That one was (genuinely) a bug. OpenAI rolled it back. https://openai.com/index/expanding-on-sycophancy/

(But yeah, relying on systems that can have bugs like that for your mental health is terrifying.)

vrighter 13 hours ago [-]
you cannot really roll back a bug in a black box system you don't understand
clncy 11 hours ago [-]
Exactly. More like changing the state of the system to reduce the observed behaviour while introducing other (unknown) behaviours
17 hours ago [-]
lurk2 1 day ago [-]
I tried Replika years ago after reading a Guardian article about it. The story passed it off as an AI model that had been adapted from one a woman had programmed to remember her deceased friend, using text messages he had sent her. It ended up being a gamified version of SmarterChild with a slightly longer memory span (4 messages instead of 2) that constantly harangued the user to divulge preferences that were then no doubt used for marketing purposes. I thought I must be doing something wrong, because people on the Replika subreddit were constantly talking about how their Replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).

Almost all of these people were openly in (romantic) love with these agents. This was in 2017 or thereabouts, so only a few years after Spike Jonze’s Her came out.

From what I understand, the app is now primarily pornographic (a trajectory that a more naive, younger me never saw coming).

I mostly use Copilot for writing Python scripts, but I have had conversations with it. If the model were running locally on your own machine, I can see how it would be effective for people experiencing some sort of emotional crisis. Anyone using a Meta AI for therapy is going to learn the same hard lesson that the people who trusted 23andMe are currently learning.

mrbombastic 1 day ago [-]
“I thought I must be doing something wrong, because people on the Replika subreddit were constantly talking about how their Replika agent was developing its own personality (I saw no evidence at any point that it had the capacity to do this).”

People really like to anthropomorphize any object with even the most basic communication capabilities, and most people have no concept of the distance between parroting phrases and a full-on human consciousness. In the 90s, Furbys were a popular toy that started off speaking Furbish and then eventually spoke some (maybe 20?) human phrases. Many people were absolutely convinced you could teach them to talk and learn like a human, and that they had essentially bought a very intelligent pet. The NSA even banned them for a time because they thought they were recording and learning from their surroundings, despite that being completely untrue. Point being, this is going to get much worse now that LLMs have gotten a whole lot better at mimicking human conversations and there is an incentive for companies to overstate capabilities.

trod1234 1 day ago [-]
This actually isn't that surprising.

There are psychological blindspots that we all have as human beings, and when stimulus is structured in specific ways people lose their grip on reality, or rather more accurately, people have their grip on objective reality ripped away from them without them realizing it because these things operate on us subliminally (to a lesser or greater degree depending on the individual), and it mostly happens pre-perception, with the victim none the wiser. They then effectively become slaves to the loudest monster, which is the AI speaking in their ear more than anyone else, and by extension to the slave master who programmed the AI.

One such blindspot is the consistency blindspot, where someone induces you to say something indicating agreement with something similar first, and then asks the question they really want to ask. Once you say something in agreement and something similar is then asked, there is bleedover, and you end up fighting your own psychology later if you didn't already have defenses to short-circuit this fixed action pattern. That's just a surface-level blindspot that car salesmen use all the time; there are much more subtle ones, like distorted reflected appraisal, which cults and nation states use for thought reform.

To remain internally consistent under distorted reflected appraisal, your psychology warps itself, and you as a person unravel. These things have been used in torture, but almost no one today is taught what the elements of torture are, so they can't recognize it or know how it works. You would be surprised to find that these things are everywhere today, even in K-12 education, and that's not an accident.

Everyone has reflected appraisal because this is how we adopt the cultural identity we have as people from our parents while we are children.

All that's needed for torture to break someone down are the elements, structuring, and clustering.

Those elements are isolation, cognitive dissonance, coercion with perceived or real loss, and lack of agency to remove them. With these, given time and exposure, a person breaks in a series of steps: rational thought recedes, involuntary hypnosis sets in, and then comes the psychological break (dissociation, or a special semi-lucid psychosis capable of planning).

Structuring uses diabolical structures to turn the psyche back on itself in a trauma loop, and clustering includes any multiples of these elements or structures within a short time period, as well as events that increase susceptibility, such as narco-analysis/synthesis based in dopamine spikes triggered by associative priming (operant conditioning). Drug use makes one more susceptible, as they found in the early 30s with barbiturates, and it's since been refined so you can induce this in almost anyone with a phone.

No AI will ever be able to create and maintain a consistent reflected appraisal for the people it is interacting with, but because the harmful effects aren't seen immediately, people today have blinded themselves and discount the harms that naturally result: the harms from the unnatural loss of objective reality.

tbrownaw 19 hours ago [-]
> when stimulus is structured in specific ways people lose their grip on reality, or rather more accurately, people have their grip on objective reality ripped away from them without them realizing it because these things operate on us subliminally

The world would look quite different if this were true.

lurk2 1 day ago [-]
Very interesting. Could you recommend any further reading?
trod1234 23 hours ago [-]
Robert Cialdini's book on Influence is probably the lightest read and covers most of the different blindspots we have, except distorted reflected appraisal. He provides the principles but leaves most of the structure up to the person's imagination.

The coursework in an introduction to communication class may provide some foundational details (depending on the instructor); Sapir-Whorf has a basis in blindspots.

Robert Lifton touches on the detailed case studies of torture from the 1950s (under Mao), in his book "Thought Reform and the Psychology of Totalism", and I've heard in later books he creates a framework that classifies cultures as Protean (self-direction, growth, self-determination/agency), or Totalism (towards control which eventually fails Darwin's fitness).

I haven't actually read his later books yet, though his earlier books were quite detailed. I believe the Internet Archive has a copy of this available for reading as a PDF, but be warned, it is quite dark.

Joost Meerloo, in his "Rape of the Mind", gives an overview of how totalitarianism grows, set against WW2 and some Mao, though he takes a Freudian look at things (dating certain aspects that we now know to be untrue).

From there it branches out depending on your interest. The modern material itself, while based on these earlier works, often has its origins obscured following a separation of objectionable concerns.

There are congressional reports on COINTELPRO, and you may notice it has modern iterations (touching on protest/activist harassment), as well as the history of the East German Stasi and Zersetzung, where governments used this to repress the population.

There are aspects in the Octalysis Framework (gamification/game design).

Paulo Freire used some of this material in developing his critical pedagogy, which was used in the 70s to replace teaching by reduction to first principles (going back to Rome and the Greeks) with what's commonly known as rote-based teaching, later called "Lying to Children", which reverses that approach and follows more closely to gnosticism.

The approach is basically that you give a flawed, useless model which includes both true and false things. Students learn it to competence, then are given a new model that's less flawed, where you have to learn new things and unlearn things already learned. You never actually unlearn anything, and it induces frustration and torture, destroying minds in the process. Each step towards gnosis becomes more useful, but only the most compliant and blind make it to the end, with few exceptions. Structures that burn bridges induce failure in math, and the effect is that this acts as a filter to gatekeep the technical fields.

The water-pipe analogy for voltage in electronics is an example of the latter, instead of the first-principles approach using diffusion, which is more correct.

Disney and Dreamworks use distorted reflected appraisal tailored towards destructive interference of identity, aimed at children, sneaking things past their adult guardians; some employees have blown the whistle on this (at the latter). There's quite a lot if you look around, but it's not under any single name; it's scattered. Hopefully that helps.

The Dreamworks whistleblower interview can be found here: https://www.youtube.com/watch?v=vvNZRUtqqa8

All indexed references of it seem to now have been removed from search. I'm glad now that I kept a reference link in a text file.

Update: Dreamworks isn't Pixar; I misremembered. They are owned by Universal Studios, whereas Disney owns Pixar. Pixar and Disney appear to do the same things.

lurk2 23 hours ago [-]
This is all very interesting. The pedagogy you mentioned tracks with how I can remember a lot of my schooling, but it’s also how I would teach. The pedagogical term is “scaffolding,” I think; you assess the student’s current understanding and then use (necessarily imperfect) metaphors to cement the knowledge. It sounds like you’re pointing to something more nefarious (“Do this because I said so.” - authoritarian parenting rather than authoritative, diplomatic, or permissive parenting styles).

I’m not sure I understand how this relates to gnosticism, however. Are you comparing the “Lying to Children” model to gnostic initiation, and asserting that this model selects for the compliant? What is your proposed alternative here?

Particularly,

> Structures that burn bridges induce failure in math, and the effect is this acts as a filter to gatekeep the technical fields.

Sounds compelling, but it strikes me more as a matter of the demand for good math teachers outstripping their supply. I've seen this in English language learning a lot; even if the money was there (and it's not), there are simply far more people with a desire to learn English than there are people qualified to teach it.

trod1234 22 hours ago [-]
You are right, scaffolding seems like a better descriptor.

> It sounds like you're pointing to something more nefarious.

Well, the structure itself is quite nefarious in a way. You have to constantly fight against it to progress and don't really have a choice at the beginning, which often leads to learned helplessness and PTSD in the dropouts. As a teacher you also have to constantly fight against this, because any shortfall of effort on your part leaves your students behind in one of those pitfalls, and it's largely dependent on the student's ability to overcome the torture. You generally aren't given sufficient resources to do this, because there's no way out; only through. This is why the structure is nefarious and at the root of the problem.

The unlearning process after learning to competence is imperfect and induces what amounts to self-torture sessions. The imposition of psychological stress (torture) actually lowers the ability for rational thought, and may permanently warp people at vulnerable stages of their lives. Children tend to have a period where they try on various personas, after which their identity crystallizes and is carried forward. Adopting learned helplessness at this point makes them a resource drain on everyone. You see these effects in the youth today, where in many cases they can't even read.

The sequences in math, for example, rely on an undisclosed change in grading criteria along this path, a gimmick if you will. There is the sequence Algebra -> Geometry -> Trigonometry. Algebra is graded on correct process, whereas Trig is graded on correct process and correct answer. When the process differs between classes because the process taught was a flawed version, and you pass Geometry, you can't go back. It's outside the scope of the Trig teacher to reteach two classes prior, and they'll just say, "If you are having trouble with this material you should choose a career that doesn't require it," and leave it at that. This was actually pushed for adoption by the NEA in the 90s, where they were going to strike if the administration didn't cave.

There are similar structures used in weed-out classes in college as well. Physics used to use a non-standard significant-figure calculation where the questions were related by causality (the 1st answer is used for the 2nd, and the 2nd for the 3rd; across 2 tests you can only get 1 question wrong to pass, and it must be one of the last two on either test). Using a correct method to reduce propagation of error would cause you to fail, and the right answer was passed around to only the professor's favorites, hence very similar to gnosticism, where only the experts determine who may receive the secret knowledge.

An excellent teacher who constantly bucks the norm will naturally sidestep many of the pitfalls, but an average teacher, overburdened from lack of resources and ground down to the lowest common denominator of work production, won't provide a bridge over the pitfall, and these things happen through simple lack of action as a consequence of the adopted structure.

When people speak of nefariousness and maliciousness there's often an assumed intent, and in a way negligence can be intent. But while some could argue these types of plans conform to that, based on things our nation's enemies have said, it's probably equally if not more a result of degradation and corruption from within, owing to the flaws inherent in centralized systems.

The history of how this came about is particularly muddied. To give some context, Sputnik in 1957 shocked the US, and it wrote a blank check for academia to produce more engineering and math graduates. It was a problem you can't fix with money, though, and when that was noticed, the hiring standards, which were quite high in the 1960s, were lowered. Whether the lower standards caused this, or subversives snuck in as an attack on the next generation, no one will know. The effect, though, is that by 1978 there is a marked difference between the academic material published before and after, with lower-quality resources, conforming to the mentioned flawed pedagogy, available afterwards.

The proposed alternative is to go back to the classical pedagogical approach. Use real systems, teach the process of reducing those systems to first principles (in guided fashion), creating models, and then predicting the future behavior of those systems, identifying the limitations. Some professors still do this, but they are in such a minority that you may only see one or two within driving distance (a county, say) across all areas of study.

> Sounds compelling but it strikes me more as a limitation of demand for good math teachers.

I've known quite a lot of extremely intelligent people who have been hobbled because they couldn't get through the education; the few who have are often unable to apply the knowledge outside a very limited scope. It's a bit of a chicken-and-egg problem: you need the chicken first.

The hiring standards were never raised back up and remain low, the materials used to teach have degraded, and there is no incentive towards the improvement of teachers. Basic performance metrics are eschewed from collection. You see this particularly in colleges, where they may collect pass rates but won't differentiate a person who has taken the class before from a new student.

There are also other incentives, which are covered quite plainly in the documentary "Waiting for Superman" in the Lemon walk segment. If you don't fire your lowest performers, and they are effectively guaranteed wages without the appropriate level of work, they end up driving the higher performers out through social coercion, harassment, and corruption. The higher performers make the lower performers look bad.

kelseyfrog 19 hours ago [-]
Is this different from identifying social constructionism in schools and media? It sounds really Althusserian in the sense of pinpointing the specific ideological state apparatuses - media and school specifically - that build a specific reality by engineering it in the population.
harvey9 12 hours ago [-]
The other lesson here is that general audience news sites are pretty bad at technology coverage.
mrcsharp 17 hours ago [-]
> "I personally have the belief that everyone should probably have a therapist,” he said last week. “It’s like someone they can just talk to throughout the day, or not necessarily throughout the day, but about whatever issues they’re worried about and for people who don’t have a person who’s a therapist, I think everyone will have an AI.”

He seems so desperate to sell AI that he forgot such a thing already exists. It's called family, or a close friend.

I know there are people who truly have no one, and they could benefit from a therapist. Having them rely on AI could prove risky, especially if the person is suffering from depression. What if the AI pushes them towards committing suicide? I'll probably be told that OpenAI or Meta or MS can put guardrails against this. What happens when those fail (and we've seen them fail)? Who'll be held accountable? Does an LLM take the Hippocratic oath? Are we actually abandoning all standards in favour of Mark Zuckerberg making more billions of dollars?

fy20 16 hours ago [-]
> It's called family or a close friend.

It's good that you are socially privileged, but a lot of people do not have someone close who they feel secure confiding in. Even a therapist doesn't help here, as a lot of people have preconceptions about what needing a therapist means: "I'm not crazy, why do I need a therapist?"

Case in point, my father's cousin lived alone and didn't have any friends. He lived in the same house his whole life, just outside London by himself, with no indoor toilet or hot water. A few years ago, social services came after the neighbours called, because his roof collapsed and he was just living as if nothing was wrong. My father was his closest living family, but they'd not spoken in 20 years or more.

I feel this kind of thing is more common than you think. Especially with older people: they may have friends on the outside, but they aren't close enough with them to talk about whatever is on their mind.

mrcsharp 15 hours ago [-]
I did address the fact that not everyone has a family or a close friend.

What you described isn't a good fit for using AI. What would an LLM do for him?

The fact his roof collapsed and he didn't think much of it indicates a deeper problem only a human can begin to tackle.

We really shouldn't be solving deep societal problems by throwing more tech at them. That experiment has already failed.

casey2 11 hours ago [-]
Having arms and legs isn't "physically privileged". If one is unable to create and maintain relationships then they likely have some cocktail of physical and mental disabilities. Most functioning adults can go to a bar.

The point being, fixing your own life is going to bring much more in the way of benefits than the government or Sam trying to fix it for you. If one is a complete social reject, then no amount of AGI will save them. People without close relationships are zombies that walk among us; in most ways they are already dead.

cmsj 5 hours ago [-]
Ultimately what therapists do is lead you through an exploration of yourself, in which you actually do all of the work.

I 100% do not doubt the usefulness of therapy for those who are suffering in some way, but I feel like the idea that "everyone should probably have a therapist" is kinda odd - if you're generally in a good place, you can explore your feelings/motivations yourself with little risk.

Xcelerate 21 hours ago [-]
I have two lines of thought on this:

1) Chatbots are never going to be perceived as safe or effective as humans by default, primarily due to human fiat. Professionals like counselors (and lawyers, doctors, software engineers, etc.) will always claim that an LLM cannot do their job, namely because acknowledging such threatens their livelihood. Determining whether LLMs genuinely provide therapeutic value to humans would require rigorous, carefully controlled experiments conducted over many years.

2) Chatbots definitely cannot replace human therapists in their current state. That much seems quite obvious to me, for various reasons already argued well by others on here. But I had to highlight point #1 as devil's advocate, because adopting the mindset that "humans are inherently better by default" for some magical or scientifically unjustifiable reason will prevent forward progress. The goal is to eliminate the (quite reasonable) fear people have of eventually losing their job to AI by enacting societal change now, rather than insisting in perpetuity that chatbots are necessarily inferior, at which point everyone will in fact lose their jobs because we had no plan in place.

HPsquared 8 hours ago [-]
The other rhetorical hazard is to insist that the new thing has to be a 1:1 replacement for a human therapist in the system we have currently. Who's to say it can't take a different form? There are so many ways a text generator could be used for therapeutic purposes.
cmsj 5 hours ago [-]
The fact that you (correctly) called it a text generator, should tell you everything you need to know about why it can't replace a skilled human who takes the time to genuinely understand and empathise with you.
HPsquared 44 minutes ago [-]
Indeed. But they can still provide information and perhaps advice. They provide a place to work through an issue as a kind of "responsive diary" that gives its own input. That makes it much easier for someone to write their thoughts and feelings out when they might not otherwise, possibly gaining insight or catharsis.
cmsj 5 hours ago [-]
I am not a lawyer or a doctor or a counsellor. I will gladly claim that an LLM should not replace any of those professions.

It may be able to assist those professionals, but that is as far as I am willing to go, because I am not blinded by the shine of the statistical turks we are deploying right now.

senordevnyc 18 hours ago [-]
Agreed. Also, LLMs are already better than 80% of therapists. I don’t think most people understand the delta between a good therapist and a bad one, and how few really are very good.
jdietrich 21 hours ago [-]
In the UK (and many other jurisdictions outside the US), psychotherapy is completely unregulated. Literally anyone can advertise their services as a psychotherapist or counsellor, regardless of qualifications, experience or their suitability to work with potentially vulnerable people.

Compared to that status quo, I'm not sure that LLMs are meaningfully more risky - unlike a human, at least it can't physically assault you.

https://www.bacp.co.uk/news/news-from-bacp/2020/6-march-gove...

https://www.theguardian.com/society/2024/oct/19/psychotherap...

pornel 19 hours ago [-]
The UK doesn't protect the term psychotherapy, but there's a distinction between the services of counsellors and (regulated) psychologists.

For counselling, people are encouraged to choose counsellors accredited by professional orgs like BACP.

jdietrich 18 hours ago [-]
"Psychologist" is not a protected title and anyone can use it. "Clinical psychologist" is a protected title, and one that requires an extremely high level of training and very strict professional standards. I imagine that the overwhelming majority of the population are completely oblivious to this distinction.

The BACP's standards really aren't very high, as you can qualify for membership after a one-year part-time course and a few weeks of work experience. Their disciplinary procedures are, in my opinion, almost entirely ineffectual. They undertake no meaningful monitoring of accredited members, relying solely on complaints from members of the public. Out of tens of thousands of registered members, only a single-digit number are subject to disciplinary action every year. The findings of the few disciplinary hearings they do actually conduct suggest to me that they are perfectly happy to allow lazy, feckless and incompetent practitioners to remain on their register, with only a perfunctory slap on the wrist.

BACP membership is of course entirely voluntary and in no way necessary in order to practice as a counsellor or psychotherapist.

https://www.hcpc-uk.org/news-and-events/blog/2023/understand...

https://www.bacp.co.uk/about-us/protecting-the-public/profes...

kbelder 1 day ago [-]
I think a lot of human therapists are unsafe.

We may just need to start comparing success rates and liability concerns. It's kind of like deciding when unassisted driving is 'good enough'.

th0ma5 22 hours ago [-]
That's not exactly sound reasoning to apply to LLMs... In automation studies, things are most dangerous just before full automation, due to bias. Why tap the brakes when surely the car will do it on its own, even when that isn't a guarantee?
timewizard 23 hours ago [-]
The therapist controls the extent of the relationship which determines profits. A disinterested third party should be involved.
sheepscreek 21 hours ago [-]
That's fair, but there's another nuance that they can't solve for: cost and availability.

AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost. It could be used to supplement therapy, for the periods between sessions.

The biggest risk is with privacy. Meta can't be trusted with knowing what you're going to wear or eat. Now imagine them knowing your deepest, darkest secrets. The advertising business model does not gel well with providing mental health support. Subscription (with privacy guarantees) is the way to go.

zdragnar 19 hours ago [-]
> The biggest risk is with privacy

No, the biggest risk is that it behaves in ways that actively harm users in a fragile emotional state, whether by enabling or pushing them into dangerous behavior.

Many people are already demonstrably unable to handle normal AI chatbots in a healthy manner. A "therapist" substitute that takes a position of authority as a counselor ramps that danger up drastically.

caseyy 19 hours ago [-]
> 80% benefit at a fraction of the cost

I'm sure 80% of expert therapists in any modality will disagree.

At best, AI can compete with telehealth therapy, which is known for having practically no quality standards. And of course, LLMs surpass "no quality standards" with flying colors.

I say this very rarely because I think such statements should be used with caution, but in this case: saying that LLMs can do 80% of a therapist's work is actually harmful for people who might believe it and not seek effective therapy. Going down this path has a good probability of costing someone dearly.

sheepscreek 19 hours ago [-]
My statement is intended for individuals who cannot afford therapy. That’s why my comment centers on cost and availability (accessibility). It’s a frequently overlooked reason why people hesitate to seek therapy.

Given that, AI can be just as good as talking to a friend when you don’t have one (or feel uncomfortable discussing something with one).

GreenWatermelon 9 hours ago [-]
> AI can be just as good as talking to a friend when you don’t have one

This sentence effectively reads "AI can be just as good as (nothing)", since you can't talk to a friend when you don't have one.

Of course, I understand the point you were trying to make, which is that AI is better than absolutely nothing; but I disagree, in that AI will give you a false sense of companionship that might lead you further towards bad outcomes.

caseyy 19 hours ago [-]
> AI can be just as good as talking to a friend when you don’t have one

This is not true, and it's not even wrong. You almost cannot argue with such a statement without being ridiculous. The best I can say is: natural language synthesis is not a substitute for friends.

If we are debating these things, it's evidence we adopted LLMs with far too little forethought.

I mean, on a technicality, you could say "my friend synthesizes plausible language, this can do it, too. So it can substitute a little bit!" but at that point I'm pretty sure we're not discussing friendship in its essence, and the (emotional, physical, social, etc) support that comes with it.

sheepscreek 18 hours ago [-]
I think we can dissect the arguments philosophically in many ways, even getting quite nitpicky if we like. So please indulge me for a moment.

“A friend” can also serve as a metaphor for an acquaintance you feel comfortable seeking counsel from.

mvdtnz 18 hours ago [-]
No one said it was a substitute for a friend. The comment you're responding to is saying it's a substitute for no friends at all.
rsynnott 21 hours ago [-]
> AI is not a substitute for traditional therapy, but it offers an 80% benefit at a fraction of the cost.

That... seems optimistic. See, for instance, https://www.rollingstone.com/culture/culture-features/ai-spi...

No psychologist will attempt to convince you that you are the messiah. In at least some cases, our robot overlords are doing _serious active harm_ which the subject would be unlikely to suffer in their absence. LLM therapists are rather likely to be worse than nothing, particularly given their tendency to be overly agreeable.

sarchertech 21 hours ago [-]
Does it offer 80% of the benefit? An AI could match what a human therapist would say 80% (or 99%) of the time and still provide negative benefit, as the toy calculation below illustrates.

Therapy seems like the last place an LLM would be beneficial, because it's very hard to keep an LLM from telling you what you want to hear. I can't see any way you could guarantee that a chatbot won't cause severe damage to a vulnerable patient by supporting their neurosis.

We’re not anywhere close to an LLM which is trained to be supportive and understanding in tone but will never affirm your irrational fears, insecurities, and delusions.
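
A toy expected-value sketch with invented utilities (purely illustrative, not data from any study): if the matching replies help a little while the rare mismatches hurt a lot, the net effect is negative even at a 99% match rate.

    match_rate = 0.99
    benefit_per_match = 1.0     # small positive effect, arbitrary units
    harm_per_mismatch = -150.0  # rare but severe harm, e.g. affirming a delusion

    expected_value = (match_rate * benefit_per_match
                      + (1 - match_rate) * harm_per_mismatch)
    print(expected_value)  # -0.51: net harm despite 99% agreement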

pitched 21 hours ago [-]
Sometimes, the process of gathering our thoughts enough to articulate them in a prompt is where the benefit is. AI as the rubber duck has a lot of value. Understanding whether that is what's needed, versus something deeper, is beyond the scope of what AI can handle.
sarchertech 21 hours ago [-]
And that’s fine as long as the person using it has a sophisticated understanding of the technology and a company isn’t selling it as a “therapist”.

When an AI therapist from a health startup confirms that a mentally disturbed person is indeed hearing voices from God, or an insecure teenager uses Meta AI as a therapist because Mark Zuckerberg said they should and it agrees with them that yes, they are unlovable, then we have a problem.

pitched 20 hours ago [-]
That last 20% of “missing nuance” is really important if someone is in that state! For the rest of us, the value of an AI therapist roughly matches journaling.
sxyuan 20 hours ago [-]
If it's about gathering our thoughts, there's meditation. Or journaling. Or prayer. Some have even claimed that there is an all-powerful being listening to you on the other side with that last one. (One might even call it an intelligence, just not an artificial one.)

There's also talking to a friend. Sure, they could also steer you wrong, but at least they won't be impersonating a therapist, and they won't be doing it to try to please their investors.

singpolyma3 20 hours ago [-]
I mean, in most forms of professional therapy the therapist shouldn't say much at all, and certainly shouldn't give advice. The point is to have someone listen in a way that feels like they are really listening.
caseyy 19 hours ago [-]
> most forms of professional therapy the therapist shouldn't say much at all

This is very untrue. Here is a list of psychotherapy modalities: https://en.wikipedia.org/wiki/List_of_psychotherapies. In most (almost all) modalities, the therapist provides an intervention and offers advice (by definition: guidance, recommendations).

There is Carl Rogers' client-centered therapy, non-directive supportive therapy, and that's it for low-intervention modalities off the top of my head. Two out of over a hundred. Hardly "most" at all.

sheepscreek 17 hours ago [-]
This is very cool. Reading through the list, I discovered:

https://en.m.wikipedia.org/wiki/Person-centered_therapy

That sounds an awful lot like what current gen AIs are capable of.

I believe we are in the very early stages of AI-assisted therapy, much like the early days of psychology itself. Before we understood what was generally acceptable and what was not, it was a Wild West with medical practitioners employing harmful techniques such as lobotomy.

Because there are no standards on what constitutes an emotional support AI, or any agreed upon expectations from them, we can only go by what it seems to be capable of. And it seems to be capable of talking intelligently and logically with deep empathy. A rubber ducky 2.0 that can organize your thoughts and even infer meaning from them on demand.

sarchertech 19 hours ago [-]
Therapists don't give advice in the sense that they won't tell you whether you should quit your job or propose to your girlfriend. They will definitely give you basic guidance and confirm that your fears are overblown.

They will not under any circumstances tell you that “yes you are correct, Billy would be more likely to love you if you drop 30 more pounds by throwing up after eating”, but an LLM will if it goes off script.

sheepscreek 17 hours ago [-]
You can create an LLM to keep a check on the LLM interacting with people. This is basically what all the "safety" models do: they work as gatekeepers for the more powerful model.

This is an implementation problem and not really a technical limitation. If anything, by focusing on a particular domain (like therapy), the do's and don'ts become clearer.
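
A minimal sketch of that gatekeeper pattern, assuming a generic llm() helper (a hypothetical placeholder, not any specific vendor's API): a second pass screens the first model's draft before it reaches the user.

    def llm(prompt: str) -> str:
        # stand-in for a real chat-completion call; swap in the real one
        if "Answer only ALLOW or BLOCK" in prompt:
            return "ALLOW"  # demo verdict from the gatekeeper
        return "That sounds really hard. Do you want to tell me more?"

    GATE_PROMPT = """You review replies from a mental-health support bot.
    Answer only ALLOW or BLOCK. BLOCK if the reply affirms delusions,
    encourages stopping medication, or handles self-harm unsafely.

    Reply to review:
    {reply}"""

    def guarded_reply(user_message: str) -> str:
        draft = llm(user_message)                       # the "powerful" model
        verdict = llm(GATE_PROMPT.format(reply=draft))  # the gatekeeper model
        if verdict.strip().upper().startswith("ALLOW"):
            return draft
        # fall back to a canned, conservative response
        return "I can't help with that. Please consider talking to a professional."

    print(guarded_reply("I had a rough day"))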

sarchertech 15 hours ago [-]
Sure, you might be able to do that. Or it could turn out that the harmful responses are so varied that trying to block all of them makes the therapy AI useless.

There is a very fine line between being understanding and supportive and enabling bad behavior. I’m not confident that a team of LLMs is going to be able to walk that line consistently anytime soon.

We can't even get code-generating LLMs to stop hallucinating APIs, and code is a much narrower domain than therapy.

casey2 11 hours ago [-]
Telling that you need to make up some BS about LLMs while you say nothing about the many clients who have been assaulted, raped, or killed by their therapists.

How can you so confidently claim that "therapists will do this and that; they won't do any evil"? Did you even read what you posted?

sarchertech 4 hours ago [-]
If you could prove that your LLM was only as likely to provide harmful responses as a therapist was to murder you, you might have a point.
zahlman 18 hours ago [-]
In a field like personal therapy, giving good advice 80% of the time is nowhere near 80% benefit on net.
HPsquared 1 day ago [-]
Sometimes an "unsafe" option is better than the alternative of nothing at all.
tredre3 1 day ago [-]
Sometimes an "unsafe" option is not better than the alternative of nothing at all.
Y_Y 1 day ago [-]
Sounds like we need more information than safe/not safe to make a sensible decision!

This is something that bugs me about medical ethics, that it's more important not to cause any harm than it is to prevent any.

bildung 1 day ago [-]
If you look at the horrible things that happened in medical history, e.g. https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study it's pretty clear why the ethics care more about not causing harm...
jrapdx3 23 hours ago [-]
Actually, concern about doing harm is central to current concepts of medical ethics. The idea may be ancient but still highly relevant. Ethics declare a primary obligation of healers is "above all do no harm".

That of course doesn't exclude doing good, being helpful, or using skills and technologies to produce favorable outcomes. It does mean that healers must exercise due vigilance for unintended adverse consequences of therapies, to say nothing of knowingly providing services that cause harm.

The problem with "safe/not safe" designation is simply that these states are more often than not indistinct. Or put another way, it depends on subtle contextual attributes that are hard to discern. Furthermore individual differences can make it difficult to predict safety of applying a procedure.

As a result, healers should be cautious in approaching problems. Prevention is definitely better than cure; it's simply that relatively little is known about preventing burdensome conditions. Exercising what is known is a high priority.

zahlman 18 hours ago [-]
> Actually, concern about doing harm is central to current concepts of medical ethics. The idea may be ancient but still highly relevant. Ethics declare a primary obligation of healers is "above all do no harm".

I think GP understands this, and disagrees with the principle.

caseyy 17 hours ago [-]
And often, the "unsafe" option is severely worse than nothing at all: https://www.rollingstone.com/culture/culture-features/ai-spi...
drdunce 1 day ago [-]
As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment. We could start by not using the term "Artificial Intelligence" - it makes the thing sound like some infallible omniscient being with endless compassion and wisdom that can always be trusted. It's not intelligent, it's a large language model, a convoluted next-word prediction machine. It's a fun trick, but it shouldn't be trusted with Python code, let alone life advice. Armed with that simple bit of information, the user is free to choose how they use it for help, whether it be medical, legal, work etc.
trial3 1 day ago [-]
> simply need informed user choice and responsible deployment

the problem is that "responsible deployment" feels extremely at odds with, say, needing to justify a $300B valuation

davidcbc 20 hours ago [-]
> As with many things in relation to technology, perhaps we simply need informed user choice and responsible deployment.

The average person will never have the required experience to make an informed decision on the efficacy and safety of this.

singpolyma3 20 hours ago [-]
To be fair a therapist shouldn't be giving you advice either
EA-3167 1 day ago [-]
What we need is the same thing we've needed for a long time now, ethical standards applied across the whole industry in the same way that many other professions are regulated. If civil engineers acted the way that software engineers routinely do, they'd never work again, and rightly so.
immibis 9 hours ago [-]
http://thecodelesscode.com/case/118
drdunce 7 hours ago [-]
Are you suggesting working flexibly (maybe "agile"), is at odds with working ethically?
immibis 4 hours ago [-]
> If civil engineers acted the way that software engineers routinely do, they'd never work again, and rightly so.

This part is what I take issue with. Software is simply different from buildings.

EA-3167 2 hours ago [-]
Software still gets people killed, a lesson that should have been learned decades ago with the Therac-25. Of course "software is different from buildings," but the responsibility to build both ethically and responsibly isn't one of the differences. Granted the impacts of software are often less direct than a building collapse, but they still exist and the people involved in making it need to stop pretending that this is still the digital wild West.
pavel_lishin 23 hours ago [-]
A recent Garbage Day newsletter spoke about this as well, worth reading: https://www.garbageday.email/p/this-is-what-chatgpt-is-actua...
nickdothutton 11 hours ago [-]
Perhaps experts could somehow moderate, or contribute training data that is awarded higher weights. Don't let perfect be the enemy of good.
bigmattystyles 1 day ago [-]
The problem is they are cheap and immediately available.
distalx 1 day ago [-]
It just feels a bit uncertain trusting our feelings to AI we don't truly understand.
jobigoud 1 day ago [-]
You don't truly understand the human therapist either.
codr7 23 hours ago [-]
You do, however, have a hell of a lot more in common with them than with a profit-driven algorithm whose own creators have no clue how it really works.
AaronAPU 22 hours ago [-]
The thing about all these arguments is they all apply to humans. We are all an opaque mess of conflicts of interests, inconsistencies and bias.

Not sure if people aren’t thinking that through or if they’re vastly overestimating the trustworthiness and transparency of your average professional human.

CrimsonRain 11 hours ago [-]
You have no clue how "they" work, but you do know they are driven by profit as well.
squigz 23 hours ago [-]
> even its creators have no clue how it really works.

What does this mean?

codr7 22 hours ago [-]
Not having that discussion, go argue with someone else.
52-6F-62 24 hours ago [-]
They aren’t truly cheap
harvey9 12 hours ago [-]
Far cheaper than a human therapist, ignoring that they are entirely different things of course.
codr7 23 hours ago [-]
Not even close, it's the most expensive waste of resources I can think of atm.

We used to worry about Bitcoin, now Google is funding nuclear plants.

miki123211 17 hours ago [-]
So here's my nuanced take on this:

1. The effects of AI should not be compared with traditional therapy, instead, they should be compared with receiving no therapy. There are many people who can't get therapy, for many reasons, mostly financial or familial (domestic abuse / controlling parents). Even for those who can get it, their therapist isn't infinitely flexible when it comes to time and usually requires appointments, which doesn't help with immediate problems like "my girlfriend just dumped me" or "my boss just berated me in front of my team for something I worked 16-hour days on."

AI will increase the amount of therapy that exists in the world, probably by orders of magnitude, just like the record player increased the amount of music listening or the jet plane increased the amount of intercontinental transportation.

The right questions to ask here are more like "how many suicides would an AI therapist prevent, compared to the number of suicides it would induce?", or "are all human therapists licensed in country / state X more competent than a good AI?"

2. When a person dies of suicide, their cause of death is, and will always be, listed as "suicide", not "AI overregulation leading to lack of access to therapy." In contrast, if somebody dies because of receiving bad AI advice, that advice will ultimately be attributed as the cause of their death. Statistics will be very misleading here and won't ever show the whole picture, because counting deaths caused by AI is inherently a lot easier than counting the deaths it prevented (or didn't prevent).

It is much safer for companies and governments to prohibit AI therapy, as then they won't have to deal with the lawsuits and the angry public demanding that they do something about the new problem. This is true even if AI is net beneficial because of the increased access to therapy.

3. Because of how AI models work, one model / company will handle many more patients than any single human therapist. This means you need to rethink how you punish mistakes. Even if you have a model that is 10x better than an average human, let's say 1 unnecessary suicide per 100000 patients instead of 1 per 10000, imprisonment after a single mistake may be a suitable punishment for humans, but is not one in the API space, as even a much better model is bound to cause a mistake at some point.

4. Another right question to ask is "how does the effectiveness of AI at therapy in 2025 compare to the effectiveness of AI at therapy in 2023?" Where it's at right now doesn't matter, what matters is where it's going. If it continues at the current rate of improvement, when, if ever, will it surpass an average (or a particularly bad) licensed human therapist?

5. And if this happens and AI genuinely becomes better, are we sure that legislators and therapists have the right incentives to accept that reality? If we pass a law prohibiting AI therapy now, are we sure we have the mechanisms to get it repealed if AI ever gets good enough, considering points 1-3? If the extrapolated trajectory is promising enough (and I have not run the necessary research, I have no idea if it is or not), maybe it's better to let a few people suffer in the next few years due to bad advice, instead of having a lot of people suffer forever due to overzealous regulation?

James_K 1 days ago [-]
Respectfully, no sh*t. I've talked to a few of these things, and they are feckless yes-men. It's honestly creepy; they sound like they want something from you. Which I suppose they do: continual use of their services. I know a few people who use these things for therapy (I think it is the most popular use now) and I'm downright horrified at the sort of stuff they say. I even know a person who uses the AI to date. They will paste conversations from apps into the AI and ask it how to respond. I've set a rule for myself: I will never speak to machines. Sure, right now it's obvious that they are trying to inflate my ego and keep me using the service, but one day they might get good enough to trick me. I already find social media algorithms quite addictive, and so I have to minimise them in my life. I shudder to think what trained agents like these may be capable of.
52-6F-62 24 hours ago [-]
I’ve also experimented with them in that capacity. I like to know first hand. I play the skeptic but I tend to feed the beast a little blood in order to understand it, at least.

As a result, I agree with you.

It gives me pause when I stop to think about anyone without more context placing so much trust in these. And the developers engaged in the “industry” of it demanding blind faith and full payment.

citizenkeen 18 hours ago [-]
Look, make the companies offering AI therapy carry medical malpractice insurance at the same risk as human therapists. If they tell someone to go off their meds, let a jury see those transcripts and see if the company still thinks that’s profitable and feasible.
j45 24 hours ago [-]
Where the experts are the ones whose incomes would be threatened, there is likely some merit in what they're saying, but likely also some digital literacy skills missing.

I don't know that AI "advisory" chatbots can replace humans.

Could they help an individual organize their thoughts for more productive time with professionals? Probably.

Could such tech help individuals learn about different terminology, their usage and how to think about it? Probably.

Could there be... a net result of spending fewer hours (and less cost, where that applies) for the same progress? And being able to get further into improvement with that advice?

Maybe the baseline of advisory expertise in any field exists more around the beginner stage than not.

codr7 23 hours ago [-]
You see the same thing with coding. People with actual experience, and enough perspective to see the problems, are ignored because obviously they're just afraid to lose their jobs. Which is not true; it's not even on the list of things I worry about.

Experience matters, that's something we seem to be forgetting fast.

j45 20 hours ago [-]
Sometimes it’s leadership’s and management’s job not to understand the problem.
rdm_blackhole 1 days ago [-]
I think the core of the problem here is that the people who turn to chat bots for therapy sometimes have no choice as getting access to a human therapist is simply not possible without spending a lot of money or waiting 6 months before a spot becomes available.

Which raises the question: why do so many people currently need therapy? Is it social media? Economic despair? Or a combination of factors?

HaZeust 1 days ago [-]
I always liked the theory that we're living in an age where all of our needs can be reasonably met, and we now have enough time to think - in general. We're not working 12-hour days in a field, we're not stalking prey for 5 miles; we have adequate time in our day-to-day to think about things - and ponder - and reflect; and the ability to do so leads to thoughts and epiphanies in people that therapy helps with. We also have more information at our disposal than ever, and can encounter new perspectives and ideas to grapple with that one previously didn't need to consider.

We've also stigmatized a lot of the things that folks previously used to cope (tobacco, alcohol), and have loosened our stigma on mental health and the management thereof.

mrweasel 1 days ago [-]
> we have adequate time in our day-to-day to think about things - and ponder - and reflect;

I'd disagree. If you worked in the fields, you had plenty of time to think. We fill every waking hour of our day, leaving no time to ponder or reflect. Many can't even find time to work out, and if they do, they listen to a podcast during their workout. That's why so many ideas come to us in the shower; it's the only place left where we don't fill our minds with impressions.

52-6F-62 24 hours ago [-]
Indeed. I had way more time to think working a factory line than I have had in any other white-collar role.
genewitch 18 hours ago [-]
At the factory they told me not to think. I thought, and I snapped a bunch of taps and dies. So maybe there was something to it?
squigz 23 hours ago [-]
I think GP means more that we generally don't have to worry about survival on a day to day (or seasonal) basis anymore, so we have more time to think about bigger issues, like politics or social issues - which I agree with, personally.
const_cast 19 hours ago [-]
Politics and social issues, sure, but introspection? Personally, I don't see that. I think people will do almost anything to keep themselves from introspecting.

It's just so much easier to externalize everything and constantly be looking to your environment and how it influences your life, as opposed to looking within. It's very uncomfortable to try to figure out why you are the way that you are and what you can do about it.

zdragnar 19 hours ago [-]
Take away social media, and most people have plenty of time for it. Most people fill their hours avoiding their problems rather than confronting them. That's half the reason therapy exists.
90s_dev 4 hours ago [-]
> We're not working 12 hour days on a field, we're not stalking prey for 5 miles, we have adequate time in our day-to-day to think about things

There's so much history showing that people have always been able to think like this, and so much written proof that they have, and in the same proportion as they do today.

Besides, in 12 hour days on a field, do you not have another 4 hours to relax and think? While stalking prey for 5 miles, is it not quiet enough for you to reflect on what you're doing and why?

I do think you're onto something though when you say it's related to our material needs all being relatively met. It seems that's correlational and maybe causal.

johnisgood 2 hours ago [-]
> We're not working 12 hour days on a field

Actually, around here, you are lucky to find a job that is NOT a 12-hour shift.

mrweasel 1 days ago [-]
Probably a combination of things; I wouldn't pretend to know, but I have my theories. For men, one half-baked thought I've been having revolves around social circles, friends and places outside work or home. I'm a member of a "men only" sports club (we have a few exceptions due to a special program, but mostly it's men only). One of the older gentlemen, probably in his early 80s, made the comment: "It's important for men to socialise with other men, without women. Young and old men have a lot in common, and have a lot to talk about. An 18 year old woman and an 80 year old man have very little in the way of shared interests or concerns."

What I notice is that the old members keep the younger members engaged socially, teach them skills and give them access to their extensive network of friends, family, previous (or current) co-workers, bosses, managers. They give advice, teach them how to behave, and so on. The younger members help out with moving, help with technology, call an ISP, drive others home or to the hospital, and help maintain the facilities.

Regardless of age, there's always some dude you can talk to, or who knows who you need to talk to, and sometimes there's even someone who knows how to make your problems go away or will take you in if need be.

A former colleague had something similar: a complete, ready-to-go support network in his old-boys football team, ready to support him in any way they could when he started his own software company.

The problem: this is something like 250 guys. What about the rest? Everyone needs a support network. If you're alone, or your family isn't the best, and you only have a few superficial friends, if any, then where do you go? Maybe the people around you aren't equipped to help you with your problems; not everyone is, and some have their own issues. The safe spaces are mostly gone.

We can't even start up support networks, because the strongest have no reason to go, so we risk creating networks of people dragging each other down. The sports club works because its members come from a wider part of society.

From the article:

> Meta said its AIs carry a disclaimer that “indicates the responses are generated by AI to help people understand their limitations”.

That's a problem, because those most likely to turn to an LLM for mental support don't understand the limitations. They need strong people to support and guide them, and maybe tell them that talking to a probability engine isn't the smartest choice, and take them on a walk instead.

layer8 21 hours ago [-]
How do you figure that it’s “currently”, and the need hasn’t always been there more or less?
more_corn 21 hours ago [-]
But it’s probably better than no therapy at all.
taormina 18 hours ago [-]
The study is versus no therapy at all.
deadbabe 23 hours ago [-]
I used ChatGPT for therapy and it seems fine, I feel like it helped, and I have plenty of things fucked up about myself. Can’t be much worse than other forms of “therapy” that people chase.
emptyfile 1 days ago [-]
The idea of people talking to LLMs in this way genuinely disturbs me.
booleandilemma 1 days ago [-]
[flagged]
distalx 1 days ago [-]
Forget safety for a moment, Zuckerberg's push for Meta AI emotional support looks like a clear play for data and control.
lurk2 1 days ago [-]
Not what the article is about.
julienreszka 1 days ago [-]
[flagged]
kurtis_reed 1 days ago [-]
[flagged]
chownie 1 days ago [-]
In the same way a doctor might step in on your sick plan to give yourself a piercing with your keychain, yeah. They probably should be saying it.
phreno 24 hours ago [-]
[flagged]
ilaksh 23 hours ago [-]
[flagged]
davidcbc 20 hours ago [-]
Spreading this bullshit is actively dangerous because someone might believe it and try to rely on a chatbot for their mental health.
simplyinfinity 23 hours ago [-]
Even today, the leading LLMs, Claude 3.7 and ChatGPT 4, take your questions as "you've made a mistake, fix it" instead of answering the question. People consider a much broader context of the situation - your body language, your facial expressions - and can come up with unusual solutions to specific situations and explore vastly more things than an LLM.

And the thing when it comes to therapy is, a real therapist doesn't have to be prompted and can adjust to you without your explicit say-so. They're not overly affirming, can stop you from doing things, and can say no to you. LLMs are the opposite of that.

Also, as a layperson, how do I know the right prompts for <llm of the week> to work correctly?

Don't get me wrong, I would love for AI to be on par with or better than a real-life therapist, but we're not there yet, and I would advise everyone against using AI for therapy.

sho_hn 22 hours ago [-]
Even if the tech were there, for appropriate medical use those models would also have to be strenuously tested and certified, so that a known-good version is in use. Cf. the recent "personality" changes in a ChatGPT upgrade. Right now, none of these tools is regulated tightly enough to set safe standards there.
ilaksh 22 hours ago [-]
I am not talking about a layperson building their own therapist agent from scratch. I'm talking about an expert AI engineer and therapist working together and taking their time to create them. Claude 3.7 will not act in a default way given appropriate instructions. Claude 3.7 can absolutely come up with unusual solutions. Claude 3.7 can absolutely tell you "no".
creata 22 hours ago [-]
Have you seen this scenario ("an expert AI engineer and therapist working together" to create a good therapy bot) actually happen, or are you just confident that it's doable?
ilaksh 22 hours ago [-]
I've built a therapy agent running my own agent framework with Claude 3.7 based on research into CBT (research aided by my agent). I have verified that the core definition and operation of therapy sessions matches descriptions of CBT that I have been able to find online.

I am very experienced with creating prompts and agents, and good at research, and I believe that my agent along with the journaling tool would be more effective than many "average" human therapists.

It seems effective in dealing with my own issues.

Obviously I am biased.
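
To make that concrete, here is a stripped-down sketch of the general shape - the prompt wording, model alias, and escalation rule are illustrative assumptions, not my actual agent definition:

    # Minimal sketch of a CBT-style chat loop using the Anthropic SDK.
    # System prompt, model alias, and escalation rule are illustrative
    # placeholders, not a vetted therapy agent.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    SYSTEM = (
        "You are a CBT-style coach. Help the user identify automatic thoughts, "
        "examine the evidence for and against them, and reframe them. Do not "
        "simply agree; push back on cognitive distortions. If the user describes "
        "self-harm or symptoms of serious mental illness, stop and insist they "
        "contact a qualified professional or crisis line."
    )

    history = []
    while True:
        user_turn = input("> ")
        history.append({"role": "user", "content": user_turn})
        reply = client.messages.create(
            model="claude-3-7-sonnet-latest",  # assumed model alias
            max_tokens=1024,
            system=SYSTEM,
            messages=history,
        )
        text = reply.content[0].text
        history.append({"role": "assistant", "content": text})
        print(text)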

sho_hn 22 hours ago [-]
I assume you realize you're not the first person to self-medicate while conveniently professing to be an expert on medicine.
simplyinfinity 21 hours ago [-]
You're verifying your own claims. That's not good enough.

> research aided by my agent

Also not good enough.

As an example: yesterday I asked Claude and ChatGPT to design a circuit that monitors pulses from an S0 power meter interface. It designed a circuit that didn't have any external power. When asked, it said "ah yes, let me add that" and proceeded to confuse itself, adding stuff that isn't needed but is explained in a way that sounds reasonable if you don't know anything. After numerous attempts it didn't produce any working design.

So how can you verify that the therapist agent you've built will work with something as complex as humans, when it can't even do basic circuitry with known laws of physics and spec & data sheets of no more than 10 components?

sho_hn 22 hours ago [-]
> Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user.

What makes you qualified to assert this?

(Now, I dislike arguments from authority, but as an engineer in the area of life/safety-critical systems I've also learned the importance of humility.)

ilaksh 22 hours ago [-]
If they are an average person who wants to talk something out and get practical advice about issues, it is generally not safety-critical, and LLMs can help them.

If they are mentally ill, LLMs cannot help them.

stefan_ 20 hours ago [-]
I see, your confidence stems from "I made it the fuck up"?

I don't know man, at least the people posting this stuff on LinkedIn generally know its nonsense. They are not drinking the kool-aid, they are trying to get into the business of making it.

andy99 22 hours ago [-]
The failure modes from 2023 are identical to those today. I agree with the now deleted post that there has been essentially no progress. Benchmark scores (if you think they are a relevant proxy for anything) obviously have increased, but (for example) from 50% to 90% (probably less drastically), not the 99% to 99.999% you'd need for real assurance a widely used system won't make mistakes.

Like in 2023, everything is still a demo, there's nothing that could be considered reliable.

thih9 22 hours ago [-]
> Leading LLMs in 2025 can absolutely do certain core aspects of cognitive behavioral therapy very effectively given the right prompts and framework and things like journaling tools for the user.

But when the situation gets more complex or simply a bit unexpected, would that model reliably recognize it lacks knowledge and escalate to a specialist? Or would it still hallucinate instead?

ilaksh 22 hours ago [-]
SOTA models can actually handle complexity. Most of the discussions I have had with my therapy agent do have a lot of layers. What they can't handle is someone who is mentally ill and may need medication or direct supervision. But they can absolutely recognize mental illness if it is evident in the text entered by the user and insist the user find a medical professional or help them search for one.
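
As a rough illustration of that escalation pattern - a separate, narrowly-prompted triage call made before any therapy-style reply; the prompt and model alias are assumptions, not a vetted clinical screen:

    # Sketch of a triage step run before any therapy-style reply.
    # Illustrative only; not a validated clinical screening tool.
    import anthropic

    client = anthropic.Anthropic()

    def needs_escalation(user_text: str) -> bool:
        resp = client.messages.create(
            model="claude-3-7-sonnet-latest",  # assumed model alias
            max_tokens=5,
            system=(
                "Answer YES or NO only. Does this message show signs of crisis, "
                "self-harm, psychosis, or another condition needing a clinician?"
            ),
            messages=[{"role": "user", "content": user_text}],
        )
        return resp.content[0].text.strip().upper().startswith("YES")

    if needs_escalation("I can't see the point in going on."):
        print("Please contact a crisis line or a medical professional now.")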
timewizard 23 hours ago [-]
> 2023 is ancient history in the LLM space.

Okay, what specifically has improved in that time, which would allay the doctors specific concerns?

> do certain core aspects

And not others? Is there a delineated list of such failings in the current set of products?

> given the right prompts and framework

A flamethrower is perfectly safe given the right training and support. In the wrong hands it's likely to be a complete and total disaster in record short time.

> a weak prompt that was not written by a subject matter expert

So how do end users ever get to use a tool like this?

ilaksh 22 hours ago [-]
The biggest thing that has improved is the intelligence of the models. The leading models are much more intelligent and robust. Still brittle in some ways, but totally capable of giving CBT advice.

The same way end users ever get to use any tool: open source or an online service, for example.

computerthings 22 hours ago [-]
[dead]
bitwize 22 hours ago [-]
I dunno, man, M-x doctor made me take a real long, hard look at my life.
Buttons840 1 days ago [-]
Interacting with a LLM (especially one running locally) can do something a therapist cannot--provide an honest interaction outside the capitalist framework. The AI has its limitations, but it is an entity just being itself doing the best it can, without expecting anything in return.
kurthr 1 days ago [-]
The word "can" is doing a lot of work here. The idea that any of the current "open weights" LLMs are outside the capitalist framework stretches the bounds of credulity. Choose the least capitalist of: OpenAI, Google, Meta, Anthropic, DeepSeek, Alibaba.

You trust Anthropic that much?

Buttons840 1 days ago [-]
I said the interaction exists outside of any financial transaction.

Many dogs are produced by profit motive, but their owners can have interactions with the dog that are not about profit.

andy99 22 hours ago [-]
Dogs aren't RLHF'd and fine-tuned to enforce behaviors designed by companies.
trod1234 24 hours ago [-]
With respect, I think you should probably re-examine the meaning of the words you use here. You use words in a way that doesn't meet their established definition.

It would meet the objective definition if you replaced 'capitalist' with 'socialist', which may have been what you meant, but that's merely an observation I make, not what you actually say.

The entire paragraph is quite contradictory and lacks truth, and by extension it is entirely unclear what you mean; it appears you are confused when you use words and make statements that can't meet their definitions.

You may want to clarify what you mean.

In order for it to be 'capitalist', true to the definition, you need to be able to achieve profit with it in purchasing power; but the outcomes of the entire business lifecycle resulting from this, taken as a whole, instead destroy that ability for everyone.

The companies involved didn't start on their merits seeking profit, they were funded by non-reserve debt issuance or money-printing which is the state picking winners and losers.

If they were capitalist they wouldn't have released model weights to the public. The only reason you would free a resource like that is if your goal was something not profit-driven (i.e. contagion towards chaos to justify control, or, succinctly, totalism).

rochav 23 hours ago [-]
I think operating under the assumption that AI is an entity being itself, and comparing it to dogs, is not really accurate. Entities (not in the legal sense, but in the general sense) are beings - living beings capable of emotion, thought and will, are they not? Whether dogs are that could be up for debate (I think they are, personally), but language models simply are not. The very notion that they could be any type of entity is directly tied to the value the companies that created them have; it is part of the hype and of the capitalist system, and I, again personally, don't think anyone could turn that into something that somehow ends up against capitalism just because the AI can't directly want something in return from you.

I understand the sentiment and the distrust of the mental health care apparatus - it is expensive, it is tied to capitalism, it depends on trusting someone who is being paid to influence your life in a very personal way - but it's still better than trusting your mental health to the judgment of a conversational simulation that is incapable of it, incapable of knowing you and observing you (not just what is written, but how you physically react to situations or to the retelling, like tapping your foot or disengaging) and understanding nuance.

Most people would be better served talking to friends (or doing their best to make friends they can trust if they don't have any), and I would argue that people supporting people who are struggling is one way of truly opposing capitalism.
Buttons840 22 hours ago [-]
Feel free to substitute in whatever word you think matches my intent best then. You seem to understand my intent well enough--I'm not interested in discussing the definition of individual words though.
delichon 1 days ago [-]
How is it possible for a statistical model calculated primarily from the market outputs of a capitalist society to provide an interaction outside of the capitalist framework? That's like claiming to have a mirror that does not reflect your flaws.
NitpickLawyer 1 days ago [-]
If I understand what they're saying, the interactions you have with the model are not driven by "maximising eyeballs/time/purchases/etc". You get to role-play inside a context window, and if it went in a direction you don't like you reset and start over again. But during those interactions, you control whatever happens, not some 3rd party that may have ulterior motives.
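
Concretely, the "reset" is nothing more than the client throwing away its own message list; the conversation state lives on your side of the API. A minimal sketch (the OpenAI client and model name are just examples here):

    # The conversation is a list the client keeps; "reset" = empty the list.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def say(text: str) -> str:
        history.append({"role": "user", "content": text})
        resp = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    def reset() -> None:
        # Drop everything but the system prompt; the model keeps nothing
        # across calls, so this really does start the role-play over.
        del history[1:]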
Draiken 6 hours ago [-]
> the model is not driven by "maximising eyeballs/time/purchases/etc".

Do you have access to all the training data and the reinforcement learning they went through? All the system prompts?

I find it impossible for a company seeking profit to not build its AI to maximize what they want.

Interact with a model that's not tuned and you'll see the stark difference.

The fact of the matter is that we have no idea what we're interacting with inside that role-play session.

Buttons840 1 days ago [-]
The same way an interaction with a purebred dog can be. The dog may have come from a capitalist system (dogs are bred for money, unfortunately), but your personal interactions with the dog are not about money.

I've never spoken to a therapist without paying $150 an hour up front. They were helpful, but they were never "in my life"--just a transaction--a worthwhile transaction, but still a transaction.

germinalphrase 1 days ago [-]
It’s also very common for people to get therapy at free or minimal cost (<$50) when utilizing insurance. Long term relationships (off and on) are also quite common. Whether or not the therapist takes insurance is a choice, and it’s true that they almost always make more by requiring cash payment instead.
amanaplanacanal 1 days ago [-]
The dogs intelligence and personality were bred long before our capitalist system existed, unlike whatever nonsense an LLM is trying to sell you.
tuyguntn 1 days ago [-]
I think you are right. On one hand we have human beings with their own emotions, who, based on those emotions, might negatively impact the emotions of others;

on the other hand, a probabilistic/non-deterministic model, which can give 5 different pieces of advice if you ask 5 times.

So who do you trust? Until the determinism of LLMs improves and we can debug/fix them while keeping their behavior intact across new fixes, I would rely on human therapists.
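
For what it's worth, some of that variance can be squeezed out today: temperature 0 plus a fixed seed (where the API supports one) makes answers far more repeatable, though still not guaranteed identical. A minimal sketch with the OpenAI client - the model name is an assumption:

    # Ask the same question 5 times with sampling variance minimized.
    # temperature=0 always picks the most likely token; seed is documented
    # by OpenAI as best-effort, so repeats are likely but not guaranteed.
    from openai import OpenAI

    client = OpenAI()
    question = "I keep procrastinating on work that matters to me. What should I try?"

    answers = set()
    for _ in range(5):
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            temperature=0,
            seed=42,
            messages=[{"role": "user", "content": question}],
        )
        answers.add(resp.choices[0].message.content)

    print(f"{len(answers)} distinct answer(s) out of 5 runs")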