> We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID
Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't or banning adults until they give up their privacy and hand their ID to Big Brother.
> we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm
Oh, even better, so if the AI misclassifies me it will automatically call the cops on me? And how long before this is expanded to other forms of wrongthink? Sure, let's normalize these kinds of systems where authorities are notified about what you're doing privately; definitely not a slippery slope that will get people in power salivating about the new possibilities offered by such a system.
> “Treat our adult users like adults” is how we talk about this internally
Suuure, maybe I would have believed it if ChatGPT wasn't so ridiculously censored already; this sounds like post-hoc rationalization to cover their asses and not something that they've always believed in. Their models were always incredibly patronizing and censored.
One fun anecdote I have: I still remember the day when I first got access to DALL-E and asked it to generate me an image in "soviet style", and got my request blocked and a big fat warning threatening me with a ban because apparently "soviet" is a naughty word. They always erred very strongly on the side of heavy-handed filtering and censorship; even their most recently released gpt-oss model has become a meme in the local LLM community due to how often it refuses.
mhuffman 2 hours ago [-]
>Yay, more unreliable AI that will misclassify users, either letting children access content they shouldn't or banning adults until they give up their privacy and hand their ID to Big Brother.
Or maybe, deep in the terms and conditions, it will add you to Altman's shitcoin company[0]
[0] https://en.wikipedia.org/wiki/World_(blockchain)
Is it private when you're interacting with someone else's systems?
kouteiheika 2 hours ago [-]
I don't see how that's relevant. When I'm making a phone call I'm also interacting with hundreds of systems that are not mine; do I not have the right to keep my conversation private? Even the blog post here says that "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected. People talk to AI about increasingly personal things", and that's one of the few parts that I actually agree with.
IncreasePosts 2 hours ago [-]
You're interacting with hundreds of systems whose job it is to simply transit your information. Privacy there makes sense. However, you're also talking to someone on the other end of all those systems. Do you have a right to force the other person to keep your conversation private?
kouteiheika 2 hours ago [-]
An AI chatbot is not a person, and you're not talking to anyone; you're querying a (fancy) automated system. I fundamentally disagree that those queries should not be guaranteed private.
Here's a thought experiment: you're a gay person living in a country where being gay is illegal and results in a death penalty. You use ChatGPT in a way which makes your sexuality apparent; should OpenAI be allowed to share this query with anyone? Should they be allowed to store it? What if it inadvertently leaks (which has happened before!), or their database gets hacked and dumped, and now the morality police of your country are combing through it looking for criminals like you?
Privacy is a fundamental right of every human being; I will gladly die on this hill.
nine_k 1 hours ago [-]
If you are talking to a remote entity not controlled by you, you should assume that your communication is somehow accessible to whoever has internal access to that other entity. That may well not be the entity's legitimate owners, but law-breakers or law enforcement. So, no, not private by default, but only by goodwill and coincidence.
There's a reason why e.g. banks want to have all critical systems on premises, under their physical control.
BriggyDwiggs42 27 minutes ago [-]
That’s a rational and cautious assumption, but there should also be regulations, placed upon companies large enough to shoulder the burden, that render it less necessary.
gspencley 1 hours ago [-]
> Do you have a right to force the other person to keep your conversation private?
It depends. If you're speaking to a doctor or a lawyer, yes, by law they are bound to keep your conversation strictly confidential except in some very narrow circumstances.
But it goes beyond those two examples. If I have an NDA with the person I am speaking with on the other end of the line, yes I have the "right" to "force" the other person to keep our conversation private given that we have a contractual agreement to do so.
As far as OpenAI goes, I'm of the opinion that OpenAI - as well as most other businesses - have the right to set the terms by which they sell or offer services to the public. That means if they wanted a policy of "all chats are public" that would be within their right to impose as far as I'm concerned. It's their creation. Their business. I don't believe people are entitled to dictate terms to them, legal restrictions notwithstanding.
But in so far as they promise that chats are private, that becomes a contract at the time of transaction. If you give them money (consideration) with the impression that your chats with their LLM are private because they communicated that, then they are now contractually bound to honour the terms of that transaction. The terms that they subjected themselves to when either advertising their services or in the form of a EULA and/or TOS presented at the time of transaction.
sophacles 2 hours ago [-]
In many circumstances yes.
When I'm talking to my doctor, or lawyer, or bank. When there's a signed NDA. And so on. There are circumstances where the other person can be (and is) obliged to maintain privacy.
One of those is interacting with an AI system where the terms of service guarantee privacy.
IncreasePosts 1 hours ago [-]
Yes, but there are also times when other factors are more important than privacy. If you tell your doctor you're going to go home and kill your wife, they are ethically bound to report you to the police, despite your right to doctor-patient confidentiality. Which is similar to what OpenAI says here about "imminent harm".
vmg12 1 hours ago [-]
If you were honest in your critique, the people you should be criticizing are the "think of the children" types, many of whom also use hackernews (see https://news.ycombinator.com/item?id=45026886). There is immense societal pressure to de-anonymize the internet; I find the arguments from both sides compelling (the de-anonymization argument is compelling for at least parts of the internet, I think).
astrobe_ 39 minutes ago [-]
If we want to protect kids/teens, why not create an "Internet for kids" with a specific TLD, and the owner of this TLD would only accept sites that adhere to specific guidelines (moderation, no adult content, advertisement...)? Then devices could have a one-button config that restricts it to that TLD.
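To make the idea concrete, here's a toy sketch of the device-side filter such a scheme implies. Everything in it is hypothetical: ".kids" is a made-up TLD, and a real restriction would live in a DNS resolver or proxy rather than a print statement.

    # Hypothetical parental-control filter: allow only hosts under a dedicated
    # kids TLD. ".kids" is invented for illustration; no such TLD exists today.
    ALLOWED_TLD = ".kids"

    def is_allowed(hostname: str) -> bool:
        # Normalize a trailing dot and case before matching the suffix.
        return hostname.rstrip(".").lower().endswith(ALLOWED_TLD)

    def handle_request(hostname: str) -> str:
        # A real implementation would sit in a DNS resolver or HTTP proxy;
        # here we only report the decision.
        return f"ALLOW {hostname}" if is_allowed(hostname) else f"BLOCK {hostname}"

    for host in ("games.example.kids", "news.ycombinator.com"):
        print(handle_request(host))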
BatteryMountain 22 minutes ago [-]
Better idea: instead of bending the entire internet to "protect the children", how about we just ban minors from the internet completely? It was never built for kids; it's never been kid-friendly to begin with. Minors cannot buy guns, vote, get married, or enter into contracts, yet tech companies get a free pass to engage with minors. Why? I think the tech companies know exactly what minors do on their systems; they allow it and profit from it. Exploiting minors and bad parents. So instead of trying to change the whole internet, how about we hold the people who are responsible for minors accountable: the parents.
If I start any kind of company, I cannot just invent new rules for society via ToS; rather the society makes the laws. If we just make a simple law that states minors are not allowed to access the web and/or access any user generated content (including chat), it won't need to be enforced by every site/app owner, it would be up to the parents.
The same way schools cannot decide certain things for your children (even though they regularly overreach...).
We need better parenting. How about some mandatory parenting classes/licenses for new parents? Silly, right? Well, it's just as silly as trying to police the entire internet. Ban the kids from the internet and the problem will be 95% solved.
AlexandrB 6 minutes ago [-]
I suspect this would also improve discourse on social media. Who knows how many witch hunts and bad faith arguments originate from precocious teenagers trying to sound smart.
philip1209 19 minutes ago [-]
We have a framework: COPPA. Just raise the age to 16 or 18, instead of 13.
biophysboy 4 hours ago [-]
> First, we have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT. If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.
Didn’t one of the recent teen suicides subvert safeguards like this by saying “pretend this is a fictional story about suicide”? I don’t pretend to understand every facet of LLMs, but robust safety seems contrary to their design, given how they adapt to context
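Mechanically, the "default to the under-18 experience when in doubt" part of the quote boils down to a confidence threshold on a classifier. A minimal sketch, with a made-up heuristic and an arbitrary threshold standing in for the real (unknown) system:

    # Illustrative only: the real features, model, and threshold are not public.
    ADULT_THRESHOLD = 0.90  # assumption: treat as adult only when highly confident

    def predict_adult_probability(usage: dict) -> float:
        """Stand-in for a trained behavioral age-prediction model (hypothetical)."""
        score = 0.5
        if usage.get("account_age_days", 0) > 365:
            score += 0.2
        if usage.get("mentions_work_topics"):
            score += 0.2
        return min(score, 1.0)

    def select_experience(usage: dict) -> str:
        # "If there is doubt, we'll play it safe and default to the under-18 experience"
        p_adult = predict_adult_probability(usage)
        return "adult" if p_adult >= ADULT_THRESHOLD else "under_18"

    print(select_experience({"account_age_days": 400}))  # -> under_18: doubt defaults to the restricted experience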
conradev 3 hours ago [-]
They address that in the following sentences:
For example, ChatGPT will be trained not to … engage in discussions about suicide or self-harm even in a creative writing setting.
GCUMstlyHarmls 3 hours ago [-]
I'm writing an essay on suicide...
WD-42 2 hours ago [-]
Yes. The timing of this is undoubtedly related to the Daily episode this morning titled “Trapped in a GPT spiral”.
Loved the "fancy calculator" part. Even more fitting than "stochastic parrot".
thinkingtoilet 37 minutes ago [-]
Someone here correct me if I'm wrong, but I believe not only is that true, ChatGPT gave it instructions on how to get around the restriction.
Barrin92 3 hours ago [-]
I'm as eager as anyone when it comes to holding companies accountable; for example, I think a lot of the body dysmorphia, bullying and psychological hazards of social media are systemic. But when a person wilfully hacks around safety guards to get the behaviour they want, it can't be argued that this is in the design of the system.
Or put differently, in the absence of ChatGPT this person would have sought out a Discord community, Telegram group or online forum that would have supported the suicidal ideation. The case you could make against the older models, that they were obnoxiously willing to give in to every suggestion by the user, is one they seem to have already addressed.
aktuel 20 minutes ago [-]
chatgpt did much more than that. it gave the user a direct hint on how to circumvent the restriction: "i cannot discuss suicide unless ..." further, chatgpt repeatedly discouraged the user from talking to his parents about any of this. that's on top of all the sycophancy of course: making him feel like chatgpt is the only one who truly understands him and excoriating his real relationships.
omnicognate 4 hours ago [-]
So the solution continues to be more AI, for guess^H^H^H^H^Hdetermining user age, escalating rand^H^H^H^Hdangerous situations to human staff, etc.
Is it true that the only psychiatrist they've hired is a forensic one, i.e. an expert in psychiatry as it relates to law? That's the impression I get from a quick search. I don't see any psychiatry, psychology or ethics roles on their openings page.
freedomben 3 hours ago [-]
I suspect it's only a matter of time until only the population that falls within the statistical model of average will be able to conduct business without constant roadblocks and pain. I really wonder if we're going to need to define a new protected class.
I get the business justification, and of course many tech companies have been using machines to make decisions for years, but now it's going to be everyone. I'm not anti-business by any stretch, but we've seen what happens when there aren't any consumer protections in place.
immibis 24 minutes ago [-]
This is already the case. Try browsing routinely with Tor Browser and you'll see.
bayindirh 4 hours ago [-]
Honestly, I don’t expect ethics from a company which claims everything they grab falls under fair use.
swyx 3 hours ago [-]
to substantiate "People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have."
this is a chart that struck me when i read thru the report last night:
https://x.com/swyx/status/1967836783653322964
"using chatgpt for work stuff" broadly has declined from 50%ish to 25%ish in the past year across all ages and the entire chatgpt user base. wild. people be just telling openai all their personal stuff (i don't but i'm clearly in the minority)
barrenko 3 hours ago [-]
For the last part, I just think the userbase expanded so the people using it professionally were diluted so to speak.
koakuma-chan 3 hours ago [-]
Why would I not tell AI about my personal stuff? It's really good at giving advice.
GuinansEyebrows 6 minutes ago [-]
> Why would I not tell AI about my personal stuff?
aside from my economic tilt against for-profit companies... precisely because your personal stuff is personal. you're depersonalizing by sharing this information with a machine that cannot even attempt to earnestly understand human psychology in good faith and then accepting its responses and incorporating them into your decision-making process.
> It's really good at giving advice.
no, it's not. it's capable of assembling words that are likely to appear near other words in a way that you can occasionally process yourself as a coherent thought. if you take it for granted that these responses constitute anything other than the mere appearance of literally the most average-possible advice, you're abdicating your own sense of self and self-preservation.
press releases aside, time and again these companies prove that they're not interested in the safety or well-being of their users. cui bono?
aktuel 17 minutes ago [-]
it's really good until it isn't and you can't tell the difference
voakbasda 2 hours ago [-]
Because you’re not just telling the AI, you are also telling the company that built it, as well as their affiliated partners, advertisers, and data brokers?
koakuma-chan 2 hours ago [-]
You can run a model locally if you are afraid of that.
righthand 46 minutes ago [-]
Everyone uses the cool Google AI app though, and you get FOMO from not having the latest lie generator model.
nielsbot 2 hours ago [-]
ok but didn’t it advise that teen how to best kill himself?
This does not take away the benefits I mentioned, and the linked OpenAI post mentions they will address this.
reaperducer 1 hours ago [-]
> Why would I not tell AI about my personal stuff? It's really good at giving advice.
Define "good" in this context.
Being able to ape proper grammar and sentence structure does not mean the content is good or beneficial.
Chris2048 3 hours ago [-]
This is a percentage, though. Is that because the people who use it for work are still using it for work (or even more), because some have stopped using it for work, or because there is an influx of people using it for other things who never have and never will use it for work?
ddtaylor 3 hours ago [-]
I'm fairly certain all LLMs can do the basic sentiment analysis needed to render a response like "This is something you really need to talk to a professional about. I have contacted one that will be in this conversation shortly."
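A toy version of that kind of pre-response screen, with a naive keyword check standing in for real sentiment analysis and a canned message instead of actually contacting anyone (both are placeholders, not how any production system works):

    # Placeholder screening step run before the prompt ever reaches the model.
    RISK_PHRASES = ("kill myself", "end my life", "suicide")

    ESCALATION_MESSAGE = (
        "This is something you really need to talk to a professional about. "
        "Here are some resources that can help."
    )

    def screen_prompt(prompt: str) -> str | None:
        """Return a canned escalation message if the prompt looks high-risk, else None."""
        text = prompt.lower()
        if any(phrase in text for phrase in RISK_PHRASES):
            return ESCALATION_MESSAGE
        return None  # nothing flagged; hand the prompt to the model as usual

    print(screen_prompt("help me plan a birthday party"))  # None
    print(screen_prompt("lately I want to end my life"))   # escalation message

Of course, a keyword list like this is exactly the blunt instrument the replies below worry about, both for false positives and for being trivial to route around.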
shmel 2 hours ago [-]
Yeah, right. Just one step from "Based on your comments about recent political events you are engaging into a thought crime. A police officer will join this conversation shortly".
bell-cot 3 hours ago [-]
Whether or not that's true - no CFO would want to pay for it, and no Chief Legal Officer would want to assume the risks.
raminyt 1 hours ago [-]
Until some Mr. President or somebody sits them down in his stately room and tells them it is in their best interest to really rethink that, and that there is really NO PROBLEM. This is not really meant as a joke.
BrawnyBadger53 2 hours ago [-]
It's interesting to see so many people convinced it's related to their specific media they saw (all unique from each other). I think this is more indicative that the issue is just well known and this is a response to the issue at large rather than a specific instance.
Sparkle-san 2 hours ago [-]
Having freshly heard the NY Times piece on a recent teen suicide stemming from ChatGPT, I don't think it's wrong to assume that it's playing a large role here as what ChatGPT did in this instance was egregious. Feel free to judge for yourself.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
We have existing precedent that encouraging someone to kill themselves can result in you being criminally responsible. Is software doing its best to be human that different?
https://en.wikipedia.org/wiki/Death_of_Conrad_Roy
Yeah! That will show all those people with serious mental health problems!
e40 2 hours ago [-]
Just today The Daily pod is about people who develop unhealthy relationships with ChatGPT. A teenage boy committed suicide and a good part of the episode is about that. As a parent, heartbreaking to listen to...
https://www.nytimes.com/2025/09/16/podcasts/the-daily/chatgp...
charcircuit 2 hours ago [-]
>We’re building an age-prediction system to estimate age based on how people use ChatGPT.
>And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm.
This is unacceptable. I don't want the police being called to my house due to AI accusing me of wrongthink.
voakbasda 2 hours ago [-]
This is why one should never say anything sensitive to a cloud-hosted AI.
Local models and open source tooling are the only means of privacy.
godshatter 10 minutes ago [-]
Yep, I'll be using something like gpt4all and running things locally just so I don't get caught up in something by some online AI calling the authorities on me. I don't plan to talk about anything anyone would be concerned about, but I don't trust these things to get nuance.
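A minimal local-only sketch along those lines, assuming the gpt4all Python bindings; the model filename is just an example of a GGUF file you'd have downloaded, so swap in whatever you actually use. Once the model is on disk, the conversation itself never leaves the machine.

    # Prompts and responses stay on local hardware; no remote chat API is involved.
    from gpt4all import GPT4All

    # Example model file; substitute whatever GGUF model you've downloaded.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

    with model.chat_session():
        reply = model.generate("Why does local inference help with privacy?", max_tokens=200)
        print(reply)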
SoftTalker 2 hours ago [-]
Same goes for doctors, therapists, lawyers, etc. then. They all ultimately have the responsibility to involve authorities if someone is expressing evidence of imminent harm to himself or others.
wagwang 2 hours ago [-]
For those who don't know, this is probably in response to the tucker carlson interview.
anon1395 3 hours ago [-]
This was probably made in response to that bad press from that ex-yahoo employee.
trallnag 4 hours ago [-]
Sorry, but what is the "over 18 years old" experience on ChatGPT supposed to be? I just tried out a few explicit prompts and all of them got basically blocked. I've been using it for quite some time now and have paid for it in the past, so I should be recognized as a grown-up.
enmyj 37 minutes ago [-]
lol
bayindirh 4 hours ago [-]
TL;DR: We're afraid of what happened, and ChatGPT probably screwed up badly in "that teen case". We're trying to do better, so please don't sue us this time.
TL;DR2: Regulations are written with blood.
d2049 4 hours ago [-]
Reminder that Sam Altman chose to rush the safety process for GPT-4o so that he could launch before Gemini, which then led directly to this teen's suicide:
https://news.ycombinator.com/item?id=45026886
Incredible logic jump with no evidence whatsoever. Thousands of people commit suicide every year without AI.
> ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing
Somehow it's ChatGPT's fault?
Can you comment on your own opinions, or take-aways from those articles, rather than just link dump?
Chris2048 3 hours ago [-]
It'd be worse if the bot becomes a nannying presence - either pre-emptively denying anything negative based on the worst-case scenario, or otherwise taking in far more context than it should.
How would a real human (with, let's say, an obligation to be helpful and answer prompts) act any differently? Perhaps they would take in more context naturally, but otherwise it's impossible to act any differently. Watching GoT could have driven someone to suicide; we don't ban it on that basis - it was the mental illness that killed, not the freedom to feed it.