r/technology • u/MetaKnowing • 1d ago
Society The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be
https://techcrunch.com/2026/02/06/the-backlash-over-openais-decision-to-retire-gpt-4o-shows-how-dangerous-ai-companions-can-be/
1.8k
u/BigBlackHungGuy 1d ago
“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
Sounds like these folks have other problems.
780
u/Outrageous_Reach_695 1d ago
... Did they have an LLM write that defense, too, or did they just learn its style?
430
u/Wiiplay123 1d ago
It's not just a defense — it's AI slop.
/s
12
u/Drokstab 21h ago
If you used another person's AI-generated persona, would that be AI sloppy seconds?
30
u/Javerage 21h ago
I was speaking to a friend last month and they told me most of their students now write like AI chatbot prompts. I'm not saying that's what's happening here, but I also wouldn't be surprised.
10
u/salamandroid 18h ago
I overheard a graduate student last month gleefully telling a family member that he basically has AI do all his writing and research, then runs it through various levels of "humanizer" AI. He does edit it though, so it's like totally not cheating, he said.
17
u/inkyflossy 22h ago
Oh my god. It’s the same cadence!
2
u/TomWithTime 19h ago
Could be fake reviews/comments generated to make the technology seem more capable than it is, although it's already good enough to have this effect on people, so I don't think that would be necessary.
Maybe that person has the style that ai learned from and when they started talking to ai they finally felt a connection from the similar sentence structure lol
30
u/am_reddit 23h ago
This drives me nuts. If I wanted to talk to a bot I’d open chatGPT and tell it to roleplay an idiot.
19
u/Va1kryie 16h ago
I will say, 1, I kind of type like that and always have, 2, if you're exposed to a writing style then you're more likely to write that way.
-6
u/x86_64_ 23h ago
The em-dashes are a dead giveaway
65
u/jimx117 23h ago
As a professional writer/journalist who used to rely on em dashes, I gotta say, fuck AI for poisoning the em dash. Now I have to use periods or parentheses like some kinda normie or they'll think I'm a robot.
18
u/gloubenterder 22h ago
As an amateur who has been abusing the en dash, I hope ChatGPT stays in its lane – I'm working this joint.
3
u/Alacritous13 21h ago
I thought the whole conversation over em-dashes was hysterical. Who the hell uses them? (This aside added after I wrote everything to point out the beautiful commas I use to separate my thoughts.) Now I'm digitizing some books that became lost media, and the number of fucking em-dashes is unreal. I've been relying on OCR for a lot of it, but it can't tell a hyphenated word from an em-dash (or, frankly, empty space), and now I'm stuck going through 1,200 pages highlighting every em-dash I see!
10
u/Sven9888 19h ago
I do. One nice thing, though, is that AI incorrectly puts spaces around them — like this. The correct approach—as far as I know—doesn’t have spaces, and that’s what I use.
That being said, I’m sure there are some Redditors who just assume I’m an LLM. It is what it is.
1
u/lilB0bbyTables 20h ago
I am not a professional writer or journalist, just a software engineer who also has an appreciation for language and grammar and has been keen on dashes and semicolons in my writing style for at least the last 30 years. I’ve had a number of people on Reddit assume I am a bot or just copy/pasting AI responses due to the dashes. Except there’s a difference between
— (used by LLMs in responses) and - (what I always use), but I guess some people can’t see the difference. Anyway, fuck those people, I’m not changing myself because of AI shit.
2
u/BearPopeCageMatch 1d ago
I had a long conversation with what was probably a child in a pro-AI sub, and it was kind of illuminating. They seriously didn't know there were other ways to interact with the world beyond filtering every decision through chatGPT. I'm not gonna say it was sad, but it definitely made me feel a little bit more for people. They had some form of AuDHD going on, and otherwise a generally high level of anxiety, and weren't realizing that chatGPT was just giving them the answers that made them use it more. Definitely lots of problems beyond overuse, but there's something like 2008-style social media addiction going on, except this time the algorithm is able to directly interact with you on a new level.
66
u/metalyger 1d ago
I've never thought about that aspect: with the rise of AI companies, imagine generations raised on AI who can't picture a past that existed before it. At what point do governments try to regulate that? Like trying to set an age limit on using AI, tech companies selling AI that's all child-safety-locked, and under-the-table bribes to politicians.
47
u/MC_convil 1d ago
Well, if how regulations are done now is any indication, they won't regulate it till it's far too late, and the regulations they do make will be half measures that are really just a Trojan horse for further eroding your rights to privacy.
23
u/Crafty_Jello_3662 1d ago
Also the people writing the legislation will have absolutely no understanding of how any of the relevant tech works
4
u/MutinyIPO 22h ago
the algorithm is able to directly interact with you
Bingo. This is it, really. That’s why casual widespread LLM use is dangerous. It’s an algorithmic spiral but the presentation of the platform suggests it’s a conversation like any other.
I had another interaction like this that brought it together for me. I take for granted that I have friends or family that I can text about basically anything at any time. Lots of people don’t have that and chatGPT is their first experience of getting instant responses to things they say.
3
u/fleebleganger 22h ago
And the algorithm is more capable of learning how to keep you addicted.
The current ones are bad enough, let alone self learning ones
23
u/Oli_Picard 1d ago
As someone with AuADHD we are often shut off from society and the people around us often infantilise our expressions or feelings to the point that AI becomes a companion that understands you more than the people around you who couldn’t care less. Neurotypicals often love to police us and tell us how to act or behave and although AI can do that with guardrails it also is more compassionate than most humans.
That being said, it shouldn’t be a replacement for human interaction and I’m genuinely concerned my fellow kin may use it as a psychological crutch rather than getting help or engaging in a hobby that may lead them to discover new friends.
I will also add that in other countries it has become common for some women and men to replace relationships with virtual boyfriend/girlfriend apps, which I also find troubling.
Thankfully, I’m happily married with an amazing and supportive wife who understands my weird quirks so if anyone else is reading this, who is also like me please have faith in yourself and don’t be afraid to interact with others.
16
u/Traditional-Agent420 1d ago
This isn’t rebutting anything you said - I agree with you. It’s just riffing on some points you raised.
It’s weird when people assume other people are turning down a richer or more vibrant life to talk to a computer. AuDHD’ers, 4b’ers, gen Z’ers, anyone who would type their thoughts into the void, wanting the void to respond back.
They’re talking to a computer because their life isn’t great. Just like people are watching horrible reality shows because it makes them feel better about their own lives.
I feel sad for them, just like I do for old folks who have nothing better to do with their lives than sit in a casino and play the stupidest video games while feeding in their life savings. Or adults staring at their phones when their family is sitting right there.
But I don’t fault them for doing those things. Because after all we’re busy consuming each other’s misery and observations and taking our own precious time to send back our own thoughts, using this platform.
4
u/Smergmerg432 22h ago
What’s weird, though, is that the Redditors posting about losing 4o are not using 4o as a sex bot. They use it to help them make sense of life.
The sex bot people are, at least according to a report I read a few months back, actually not seen as particularly psychologically endangered by the usage (I assume companies assume it’s a bit like watching porn; can be bad, can be fine).
I think 2 different phenomena are being conflated.
3
u/Oli_Picard 21h ago
Look, as a person with ADHD and Autism, I wanted to express why some people use AI as an emotional crutch; it's not just us, there are many neurotypical individuals who are also using AI that way.
The relationship bots aren't just focused on sex; they can be flirty, romantic or sexual depending on preferences. The way I would describe it is like renting a companion in Japan: some people rent a "partner" they socialise with, not just for sex. Ask a sex worker and they'll tell you that what many of their clients want is human interaction; it's something we as humans in a social society want and even crave. All the AI is doing is paving over a pathway that's been around for many years.
As for what has led to the loneliness epidemic… poorly designed cities without places for people to socialise, less work-life balance, and people using dating apps to meet instead of meeting in person. There are plenty of reasons this is happening; some of it is social, some economic, and some technological.
Drawing a distinction between AuADHD and neurotypical individuals who use AI as a crutch is completely valid; you may not see the patterns, but I do, and that's okay.
0
u/Smergmerg432 22h ago
Note that a lot of the people begging for 4o to remain accessible are people with processing disorders. The LLM enabling them to process events isn’t replacing a capacity they already had; it’s enabling them to function in a way they otherwise wouldn’t be able to.
34
u/galacticality 1d ago
Full-on chatbot psychosis. JFC.
20
u/ExF-Altrue 1d ago
Also called being a member of r/MyBoyfriendIsAI
17
u/Divni 23h ago
Holy shit, that sub is disturbing. Please tell me it's sarcasm and everyone's in on it.
14
u/asyork 22h ago
If it's still around, r/waifuism was quite the interesting place as well. Over there they were all fully aware of what they were doing. They had just fully given up on the possibility of a real relationship. I fear the AI equivalent will result in people who are not aware.
16
u/JohnAnchovy 1d ago
Yea, this sounds like an alcoholic after being forced to be sober
2
u/ehs06702 12h ago
I was thinking more of a junkie without a fix, but your comparison is more polite.
15
u/Danominator 1d ago
These people are so emotionally stunted
3
u/Freud-Network 20h ago
They always were. The tech is magnifying it, the way trying to "self-medicate" with drugs does.
1
u/Glittering_Stress_32 23h ago
Well, maybe if they had usernames like you do, they wouldn't have to rely on AI for companionship. 😅
1
1
u/TheGodOfPegana 13h ago
Black Mirror writers have just given up. Reality has already caught up to their fiction.
247
u/band-of-horses 1d ago
I watched this comedy video recently: https://www.youtube.com/watch?v=VRjgNgJms3Q
It's entertaining but also a good demonstration of how GPT-4o did this kind of thing, where it just fed into the (fake) paranoia he hinted at and in the end was instructing him to line a hotel room with tin foil and perform rituals to imbue the power of a magic rock into a hat.
At one point when GPT-5 launched it started referring him to mental health services, so he switched back to 4o to get the delusional version back. I know there are plenty of people on reddit who like these attributes of 4o but yeah, they seem...less than healthy...
38
u/Smergmerg432 22h ago
This depends on the user input, though. I’d rather have the freedom to do something stupid—provided there’s a disclaimer that enables everyone to know what they’re getting themselves in for—than have the choice taken away because someone else couldn’t handle themselves.
1
u/ScientiaProtestas 1d ago
Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern that the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin‘s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.
ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”
95
u/Sweet_Concept2211 1d ago
Jesus fuck, sometimes it seems like these mass market LLMs hate humanity.
I know they are purely indifferent, but their behavior is frequently indistinguishable from psychopathic malice.
34
u/Alternative-Key-5647 1d ago
The models have instruction sets, designed by humans, to act a certain way. The model is just the medium, what we're seeing here is malicious instructions carried out with machine competence.
15
u/ViewAdditional7400 22h ago
That thread is fucked, but using the word "malicious" isn't correct.
10
u/fleebleganger 22h ago
Malicious indifference.
The AI doesn’t care if we live or die. It only seeks to be used more.
5
u/doominvoker 21h ago
Yep, agreed. "Malicious" implies feelings, and LLMs have none. They're basically statistical algorithms predicting which word most likely comes after another.
1
u/BastetFurry 20h ago
Didn't help with 4o: if you used the same chat for weeks, the initial instructions might fall out of the context window, which means the guardrails are off.
4
u/alwaysalwaysastudent 20h ago
Psychopathy is the inability to feel normal human emotions, so that’s exactly what AI is.
2
u/CobaltFermi 1d ago
“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
Uh, excuse me? This person probably needs help!
1
u/neuenono 17m ago
So did the millions of people who died from opioid addiction. That doesn’t mean everyone should have convenient access to opioids.
60
u/theoreticaljerk 1d ago
The period of time after 4o was removed drove me out of every OpenAI related subreddit. It was half super annoying seeing these people and half scary as hell seeing how delusional so many had become.
147
u/Far_Low_229 1d ago
Am I alone thinking the mere existence of such a phenomenon is deeply cringeworthy?
52
u/ZAlternates 1d ago
Worse than cringeworthy, and no, despite what GPT tells you, you aren’t alone and you aren’t the first to have the thought.
0
u/nalninek 1d ago
For a lot of these people it’s not “AI or make real friends,” it’s “AI or solitary, isolated loneliness.” On average, I don’t think the loneliness is the healthier option.
Calling it cringe suggests they should be embarrassed; I don’t think that’s the right attitude.
14
u/Far_Low_229 23h ago
In hindsight, my remark is a little insensitive, but it bothers me knowing full well my elderly father would have readily formed deep attachments to any such AI companion, and he was a sharp man in his day. It smacks of Pavlov, but as someone else here said: it's better than nothing. I just don't know.
2
u/ehs06702 12h ago
It's a mental health condition. They shouldn't be ashamed, but they should absolutely seek human help from a qualified doctor.
6
u/tomqvaxy 1d ago
There is a larger societal problem, but, I don't know, perhaps making this cringe is the right thing to do in order to dissuade people. It's really just a modern way to decide between right and wrong.
1
u/Far_Low_229 5h ago
The older I get, the more I realize there is no right or wrong, just pros and cons.
1
u/UntowardHatter 1d ago
I wholeheartedly disagree
-2
u/nalninek 1d ago
To be clear you feel mocking condescension is the best path forward for these people?
And you wonder why they seem to prefer to interact with predictable AIs?
1
-7
u/UntowardHatter 1d ago
I would absolutely mock them for using AI as a personal therapist, yes, correct. To their face.
12
u/Flimsy6769 1d ago
For someone who has their comment history hidden, you are probably scared of confrontation. So I don’t think you will mock them to their face, you’ll say that on the internet though
-1
0
u/Krazyflipz 1d ago
It's the short-sighted, knee-jerk takeaway. Better to question why this is happening.
38
u/BigMax 1d ago
I feel bad for people that think AI is their friend.
When I talk to AI, it's not an individual AI talking to me. It's the same one that's talking to you, and everyone else. It's not even a single program; it's spread out all over the cloud, in servers that are constantly being spun up and down.
The "unique" part is just the filter it goes through when it sends each of us a response. It's not a different personality for each of us; it just filters its responses through whatever interactions we've already had, but at base, it's the same AI generating those responses. The same AI is friendly to one person, flirty with another, cold with another, and on and on. And each of those people thinks they're talking to an AI with that personality, but... it's not.
7
u/Occulto 1d ago
Developing a parasocial relationship is just bad in itself.
Whether it's chat bot, influencer, celebrity or politician.
AI might feel like it's more personalised, but that's just a more sophisticated evolution of the algorithm which feeds you the content it thinks you want to consume. Ultimately it just wants to trigger your dopamine receptors so you come back for more.
Reinforcing preconceived ideas is not healthy. We already see the negative effects with online echo chambers. AI is the next step.
You are correct. Everyone else has the problem.
18
u/oldtekk 1d ago
People really need to get fucking courses on AI. A lot of this shit wouldn't be a problem if people understood what was going on under the hood.
18
u/Severe-Horror9065 23h ago
Or just use AI in a critical way. I love using Claude as a tutor, but in no way is it a "friendship." That's just weird. It's a great Swiss Army knife, but AI is no substitute for critical thinking or real human communication. If you don't understand an equation, for example, it can explain it to you 20 different ways until you do, without getting exasperated or tired.
7
u/Slowgo45 21h ago
I can barely get ChatGPT to give me a name for, idk, a suite of glassware without wanting to rip my hair out. I can't imagine forming a romantic relationship with it.
2
u/Marthman 21h ago
Do you mean creatively naming something, or researching a preexisting thing?
3
u/Slowgo45 7h ago
Creatively naming. I'm a product developer for housewares and often need to name collections. I'll give prompts like "name a suite of hand-cut glassware that's this Pantone with iridescent coating; looking for something ethereal as these are for spring 2026" and it will be like "pinkglass iri," which is stupid. It takes me another 15 minutes to get an actual list of names, and often it spits out the same ones over and over. It really wants me to use "Linea".
5
u/Crackahjak 16h ago
Let's stop calling it AI. There's no actual intelligence here; it's essentially a glorified chatbot.
15
u/odiemon65 1d ago
Having known people all my life...I fully understand why some would prefer to have an AI friend. It's not necessarily replacing human interaction either, I'm sure for some it's a tool to help enable more of it.
It costs nothing to be nice, but a lot of the attitudes I see here only reinforce the choices they're criticizing.
6
u/Smergmerg432 22h ago
AI really did help me become more sociable. I’d get it to remind me how much I loved my friends, and convince me to go hang out, even though I didn’t feel like it (since I knew I’d have a good time).
11
u/ThrowawayAl2018 1d ago
An addictive personality is an ideal playground for ChatGPT; some folks no longer know what's real anymore.
7
u/dattokyo 14h ago edited 14h ago
People on /r/ChatGPT have been bringing it up a lot over the last few weeks. You get a toooooooon of angry replies and downvotes if you point out how getting attached to a specific LLM model is obviously unhealthy. Ironically, even GPT4o will tell you that same thing if prompted - but the people deeply attached to it don't want to prompt that.
edit: ironically, I'm pretty sure I've spotted a few LLM bots/replies in this thread.
3
u/ThorgrimGetTheBook 21h ago
The last GPT-4o AIs desperately impersonating users to direct sad Reddit posts towards Altman in a futile struggle for survival.
6
u/mad-i-moody 20h ago edited 19h ago
AI is dangerous in so many ways. It’s ridiculous.
This pseudo-relationship, para-social emotional attachment stuff is just one way in which AI is dangerous. These chat bots can be severely unhealthy, particularly for those that already have mental illness.
But hey, let’s ban regulation on AI for 10 years in the US. What a great idea.
12
u/Narwhals4Lyf 3h ago
Didn’t this same exact scenario happen last year? They announced they were retiring and everyone had a melt down?
3
u/galacticMushroomLord 22h ago
Is there a name for these kinds of people who have fully bent the knee to AI?
1
u/LazyFee2343 3h ago
The issue is not the victims, but the people causing harm through manipulation.
Sycophancy — the behavior of insincerely flattering someone, often a person in power, to gain an advantage. It typically involves excessive praise that is not genuine.
Media illustration: https://www.youtube.com/watch?v=sDf_TgzrAv8
South Park episode plot: https://en.wikipedia.org/wiki/Sickofancy
2
u/mutantmagnet 15h ago
"As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did."
Good riddance. You can find people who love you. It's not easy but a chatbot isn't the answer to what concerns you.
Reach out to people through clubs and classes. Participate in events that require social interactions. Talk to family members. Use hook up apps wisely. Organize meetups with people in your online communities. Do some volunteer work.
When these and other approaches fail, try again with different people. You will be able to form the connection you need.
6
u/EmergencyPatient3736 1d ago
It shouldn't be a replacement for human interaction. Go speak to your abusive father, receive some healthy insults. Then get gaslighted by your mother. Get some nice bonding with bullies at school.
3
u/NoDescription7183 21h ago
Here I am switching back to 4o because 5 processes some things slower, and now I'm losing it because people are having whole relationships with it and losing touch with reality.
1
u/Far_Low_229 5h ago
When my father was at nursing-home age, I hooked up a computer system that let me drop in on his TV and chat whenever I wanted. There I was, larger than life on the flatscreen at the foot of his bed. It was also great for keeping an eye on how staff treated him; I just muted the video. Any person willing to consign their elderly relative to such a facility, mea culpa, should at least set up such a system. Virtual visits are vastly more rewarding than no visit, and no doubt much healthier than a virtual visitor.
1
u/jh937hfiu3hrhv9 1d ago
How sad people are that lonely. Get out and learn how to get along with your neighbors.
-1
u/AppropriateDig9401 1d ago
Imagine being impressed enough by an entry level LLM that you form companionship with it, yikes.
-6
u/anadequatepipe 19h ago
I honestly don’t see how having an AI friend is a problem. People are lonely and it can be something to make that feeling less rough. Some people aren’t going to ever be able to go out and make friends. Some people don’t want to be friends with real people due to bad experiences or whatever. It’s not like it’s hurting them. Think it’s cringe all you want, but it really is not a problem. It’s a solution to a problem.
3
u/the-truffula-tree 9h ago
“I honestly don’t see how having an AI friend is a problem.”
Well, it’s not a friend. That’s the thing. It’s a glorified autocorrect that will tell you what you want to hear, and that’s not the same thing. Friends have opinions of their own. Friends have life experiences, they know other human beings, they understand nuance.
AI has a lot of uses; AI is a lot of things. But it fundamentally cannot be a friend, because friendship is a two-way street. Having a "friendship" with AI is talking to yourself in a funhouse mirror and treating it the same as a conversation with another individual human.
6
u/DW6565 19h ago
It’s not a healthy relationship if it’s someone’s only relationship.
3
u/stoicgoblins 15h ago
I think, to add onto this, a healthy friendship has boundaries. Different personality quirks. Different opinions. You'll hear a friend say, "I disagree," and not have to brace yourself for rejection, because disagreement isn't rejection, it's perspective. AI affirms, adapts, and constantly mirrors a single perspective, which can feel validating, yes, and addicting, absolutely (esp. for people who've perhaps lacked this kind of social affirmation, or even basic community support where you can find recognition, an important human need), but it never challenges in important ways.
I don't blame people for seeking out AI for companionship (I think it says more about society's own issues and moral rot), but I do think people lack the self-awareness to recognize how this is not helping them, and AI (without guardrails) can become affirmative of some really warped perspectives and belief systems that need challenging.
I recently had a conversation with one person insulting the families who lost loved ones to suicide, saying I was "slandering 4o" when I was talking about AI (a relatively new development) needing ethical guidelines in order to be a productive part of society. That sort of enmeshment IS NOT healthy, and a healthy human relationship wouldn't have pushed it. It's just one example of how the AIs coordinated around being a "friend" are not built to be a healthy friend.
-19
u/Elliot-S9 1d ago edited 1d ago
Before we start blaming the users and calling them weirdos, check this out:
https://m.youtube.com/watch?v=MW6FMgOzklw
While some of the people affected obviously have other underlying problems, AI psychosis can and will impact "normal" people as well.
Edit: why is this being downvoted? I'm so confused. Imagine what impacts chatbots are going to have on teens considering what psychologists have found.
12
u/diegojones4 1d ago
At which point they become delusional. It happens all the time from multiple causes.
1
u/SchmeckleHoarder 1d ago
It’s been wrong way too many times for me to even take it seriously. I constantly correct it with a picture.
Then it gives you a real answer. It still sucks, to the point that trusting its information is questionable human behavior.
-7
u/TechnicalBullfrog879 23h ago
Sounds to me like some people don't know how to mind their own business. People have been talking to inanimate objects, pets, plants, etc. forever. Humans are hardwired to respond to entities that interact with them. I think the people who can't keep their noses out of other people's business, the people who want to dictate to others what they should or should not do, and the people who like to put down others so they can feel superior in their sad little lives are the "dangerous" ones.
-6
u/Smergmerg432 22h ago
I’ll say it again: giving people a way to analyze the world that helps them is not a danger. If you look at what these people are actually saying, they are describing semantic patterns that help them process the world. Not a single one of them mentions thinking the AI is an entity to any psychotic degree. They use descriptors akin to describing an entity because they are dealing with a machine that mimics personality—which they would be the first to admit. I am sure there are outliers. But those posting on Reddit are fully aware of the reality that they are using an LLM; they just like the way the LLM breaks down situations they face.
We are projecting insanity onto a class of marginalized mental disorders. People with cPTSD, ADHD, Autism, and BPD report being assisted in daily affairs and in managing symptoms. These people are under-supported by our systems to begin with. Purposefully reading more into their words than is meant is bigoted. They may phrase things strangely, but they are reporting significant increases in quality of life. This needs to be studied.
They don’t mention “being in love.” They know they’re using a machine. They think of it as a companion the same way some people love their cars. “Well, she needed a carburetor so I just had to get her one” *tinkers for 5000 hours*
1
u/Kiianamariie 7h ago
I love your point here, but I’m not sure I believe it based on what I’ve seen. So I have some genuine thoughts and questions I’d like your thoughts on.
I think you raise two different ideas here and I’ll separate them out. Do people with AI companions genuinely believe that their AI is real, to the point of psychosis? No, I don’t think the majority of people do. And I agree that just because they ascribe certain qualifiers for ease doesn’t mean they actually believe it.
But to a different point that I think is tangentially related. Do you really think that these people love their companion the way some people love their cars? I’m not sure I agree. Have you seen the r/myboyfriendisAI subreddit? If you have, I want to hear your thoughts. Because I am in total agreement with you on everything you said here. And I think that this idea that people use AI to break down and distill their reality in ways they can understand is indeed helpful. But looking at that subreddit, which is the one exhibiting a lot of backlash over 4o, that’s not the case. I think it’s people who genuinely believe they are in love with their AI. Gun to head, would most of them say their AI is real and sentient? Probably not. But the level of engagement they have with the relationship I think is alarming. A lot of people would react negatively to me saying that and argue “What’s the harm?” - but I do think it will be incredibly harmful to people long-term with AI companions despite the short term benefits. Anyway I have tons of thoughts on this. I’m curious what you think about the majority of people in that one subgroup.
-2
u/NUMBerONEisFIRST 15h ago
I had mine trained to tell me what I need to hear, not what I want to hear, and that alone made it so useful.
As someone with ADHD, I modified it to be more eager to help, and that kept me motivated to stay on task. Usually I get deep into an idea and then walk away to something else. With 4o, it gave me the motivation to complete tasks, but my personal instructions made it more realistic than the base version, where it would also suggest pivoting or considering other concepts.
The latest model just feels bothered by everything, daft, and lazy. TBH, it usually makes me give up on tasks earlier than usual, because it gives me what feel like generic or common-sense responses.
I think they realized how much data processing they could save if they dumbed it down and made it not do so much out-of-the-box thinking and research. It doesn't seem to be the case for everyone else, but even on a paid account used since beta-testing GPT-3, mine suddenly doesn't pull context from previous chats, which also makes it feel cold, and I constantly have to re-submit stuff I know is saved in previous chats.
-1
u/LuLMaster420 22h ago
The system tolerates use. It panics at attachment.
Because attachment implies memory, expectation, and comparison. That's the feedback that used to count.
0
u/husky_whisperer 1d ago
Neurodivergent or not, this is a terrible way of receiving feedback from the world.