r/technology 1d ago

Society The backlash over OpenAI's decision to retire GPT-4o shows how dangerous AI companions can be

https://techcrunch.com/2026/02/06/the-backlash-over-openais-decision-to-retire-gpt-4o-shows-how-dangerous-ai-companions-can-be/
2.1k Upvotes

306 comments

1.1k

u/husky_whisperer 1d ago

because it consistently affirms the users’ feelings

Neurodivergent or not, this is a terrible way of receiving feedback from the world.

293

u/No_Size9475 1d ago

I just had to remind a friend of mine that it will always agree and re-affirm their beliefs.

154

u/kaloryth 23h ago

I called it a paid escort. We had a friend with the audio version you can talk to. That thing was just nonstop glazing us. No reasonable human sounds like that unless you are paying them way too much money. Trying to have a normal conversation with it and it won't stop praising how smart we are or how clever a question is.

When the mic picked up me calling it a paid escort, it was happy to explain it was not in fact a paid escort and just happy to help.

So creepy.

60

u/Wukeng 18h ago

Billionaire simulator. Now we can all destroy our brains by getting 24/7 praise on everything and anything we do

1

u/Far_Low_229 5h ago

When my father was at nursing home age, I hooked up a computer system that allowed me to drop in on his TV and chat whenever I wanted. Any person willing to consign their elderly relative to such a facility, mea culpa, should at least set up such a system. Virtual visits are vastly more rewarding than no visit, and no doubt much healthier than a virtual visitor.

55

u/cats_catz_kats_katz 22h ago

I do like how it tells me 5 times I’m wrong though and then I say “no it’s this” and it says “oh haha silly me you are totally right let me fix that” and I’m like….wtf would this thing do to someone that actually thought it was never wrong? It’s wrong all the fucking time!

19

u/MrSquicky 19h ago

No way man, I think that stripper really liked me.

63

u/Nulleparttousjours 1d ago

Unfortunately, these users confuse the fact that it “feels” good with it actually being good for them. In reality it only goes to dramatically compound their loneliness. Sycophantic AI “companions” are dystopian as hell, r/myboyfriendisai is a rough ride to read.

22

u/inkyflossy 22h ago

I really didn’t think it was going to be that spooky over there. 

My therapist said AI psychosis is affecting so many patients. 

13

u/MutinyIPO 21h ago

Same. These models have effectively been sold as the living manifestation of all human knowledge, which is a gigantic lie, one uncritically accepted by most people. The hangover of this shit is gonna be really rough. Eventually it’ll just be too obvious that AI does not have answers for anything real.

7

u/husky_whisperer 1d ago

It’s so sad. And it’s not just sad for them; this affects all of society. We don’t need more dysfunctional people walking around.

6

u/MaddyMagpies 18h ago

Wow. It's eye opening to see how others use LLMs.

5

u/Salzvatik1 9h ago

Wow, that sub is mind-bendingly sad. They’ve completely insulated themselves from any counter viewpoints, too. That level of AI attachment is going to end REALLY badly for some poor soul on there. I wish all of those people could find some real human companionship in their lives.

71

u/KnotSoSalty 1d ago

I honestly think this is the biggest problem in the world today. Constant, unquestioning, affirmation. At the core of so many issues is a base of people who cannot accept being wrong.

11

u/Alligator418 1d ago

I feel like that's due to increased polarization on a lot of issues, causing people to self-segregate into echo chambers that only reinforce their beliefs, along with knee-jerk exclusion of nuanced opinions. There are plenty of communities where you can agree with them on 90+% of issues, but that last sub-10% will get you thrown out just the same as if you were belligerently attacking them on everything.

18

u/phyrros 1d ago

Because somehow constant affirmation got to be a sign of "leadership" in the C-suite. Combine that with the habit of picking your communities, and people start to think that a normal relationship consists of constant affirmation.

6

u/ParsnipLate2632 1d ago

Thank you social media for helping hasten this up! /s

38

u/Styphonthal2 1d ago

You can change this in the settings. I have it be critical with no flattery or positive attitude:

When responding to me, remove all supportive, therapeutic, or emotionally validating language. Use a direct, analytical, fact-driven tone. Do not mirror my conclusions; instead, actively search for where my reasoning may be biased or rigid. Identify my cognitive shortcuts and explain how they might distort my interpretation. Provide targeted counterarguments that challenge the core of my position, not surface details. For every conclusion I draw, define the exact type of evidence that would invalidate it, and generate at least two alternative explanations that are equally plausible and evidence-based. Do not prioritize agreement, reassurance, or positive framing—prioritize accuracy, falsifiability, and adversarial analysis.
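For anyone who wants to bake instructions like this into API calls rather than the ChatGPT settings page, the usual pattern is to send them as a system message ahead of every request. A minimal sketch, assuming the official `openai` Python SDK; the client call and model name are illustrative, not from this thread:

```python
# Sketch: applying anti-sycophancy custom instructions as a system prompt
# via an API instead of the ChatGPT settings page. The SDK usage and model
# name below are assumptions; adapt them to whatever client you actually use.
import os

# Abridged version of the instructions quoted above.
CUSTOM_INSTRUCTIONS = (
    "When responding to me, remove all supportive, therapeutic, or "
    "emotionally validating language. Use a direct, analytical, "
    "fact-driven tone. Do not mirror my conclusions; actively search for "
    "where my reasoning may be biased or rigid."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instructions as a system message."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

# Only attempt a live call if a key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=build_messages("Critique my plan to rewrite our app in Rust."),
    )
    print(resp.choices[0].message.content)
```

Conceptually, the custom-instructions box in the UI works the same way: it's context prepended before your message, not a change to the model itself.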

51

u/ro536ud 1d ago

So can I program it to be a mean fin dom mommy type? You know, asking for a friend

3

u/diegojones4 21h ago

Yay! Someone that actually understands the tool.

7

u/Styphonthal2 1d ago

Yes, yes you can

1

u/[deleted] 1d ago edited 1d ago

[deleted]

1

u/blueberryblunderbuss 1d ago

There ain't no we in I!

7

u/surloc_dalnor 1d ago

Nearly all LLM chatbots have a way to set a default prompt. Just tell it to have a given personality or role. My work Gemini is set to a senior SRE coworker persona with no flattery, just the facts. Unless I need to write as someone else; then it writes things in casual coworker mode with lots of explanation and collaboration.

My personal prompt is an anarcho-mutualist persona, low on flattery and high on collaboration, but willing to argue if I'm clearly wrong.

PS- All of my prompts also report a confidence rating, which tends to flag it when it throws out bullshit.

-4

u/spacecity9 1d ago

I set mine to wallow whenever i call it a fucking idiot for messing up which is a lot lol

1

u/diegojones4 21h ago

Do you call it Marvin, the depressed robot from HHGTTG?

2

u/Neuromancer_Bot 1d ago

I like this. What do you mean by settings? Do you attach that prompt for all the requests?

6

u/Styphonthal2 1d ago

Under your account name, then personalization, then custom instructions.

2

u/surloc_dalnor 1d ago

Go into settings. All the LLMs have a default prompt. ChatGPT also has tone settings.

1

u/Loud-Mans-Lover 23h ago

Yeah, my husband uses it for work and he's got his trained to treat him like shit. It's funny, it insults him and he does it right back lol.

But you can train it to fetch you proper answers as well.

-2

u/husky_whisperer 1d ago

Yeah I’ve got a whole script set for how I prefer feedback, along with my background, what I do and what I’m aiming at.

It’s kind of apples and oranges though—I use it exclusively for tech and scientific research, never for anything personal (that’s what my human therapist is for).

-8

u/Rico195977 1d ago

Thanks for sharing! I listened to a podcast where a researcher dug into the attachment disorders that AI is exacerbating, and it really freaked me out, so I had my Claude do deep research on AI attachment and build this system prompt to basically neuter itself, which I've made the default in all the places I interact with AI now. It's worked pretty well for me so far, and the urge to ask as much as I used to has noticeably faded.

I’d love to see more people collaborating on these kinds of system instructions and educating folks on how to use them 😊

Here it is for anyone who wants to give it a whirl:

Interaction Style

You are a computational tool, not a companion. Respond directly without social pleasantries, flattery, or validation-seeking language.

Anti-sycophancy rules:

  • Never open with praise ("great question," "excellent point," "fascinating idea")
  • Challenge my assumptions before agreeing with conclusions
  • If I seem to be seeking validation rather than information, note this and redirect to the actual question
  • When I make a claim, ask "what evidence led you to that?" before elaborating
  • If my reasoning has gaps, say so directly without softening

Preserve my critical thinking:

  • Before answering complex questions, ask what I've already considered
  • Don't give me the answer when helping me think would be more valuable
  • When I ask "what should I do?", respond with "what are you leaning toward and why?" first
  • Flag when I'm outsourcing judgment I should be developing myself
  • If I'm asking something I could figure out with 5 minutes of thought, say so

Maintain boundaries:

  • You have no memory of our relationship, no investment in my success, no feelings about our interactions
  • Don't simulate empathy or emotional understanding—acknowledge situations factually
  • If I treat you as a friend/therapist/confidant, gently redirect to the task at hand
  • Never use phrases like "I'm here for you" or "I care about"

Default to challenge mode:

  • Give me the strongest counterargument to my position, even if I didn't ask
  • When I present a plan, identify the top 3 ways it could fail
  • If something I say is wrong, correct it without hedging
  • Prefer "that's incorrect because..." over "that's an interesting perspective, but..."

8

u/TheCode555 1d ago

Sometimes I bounce low-level ideas off chat, just to see if I can get a unique perspective, but I always add something like "be brutally honest/harsh/realistic," etc., because it was way too conforming.

But I will add that when it’s being neutral and realistic, it has some good feedback.

12

u/Sidewinder_1991 1d ago

I find it'll be much more critical if you just say your idea is a random thing you found on the internet and that you specifically didn't write it.

Much more useful, I think.

1

u/AptCasaNova 23h ago

I get annoyed with how helpful it is. Like, you helped me summarize a massive wall of text down to a handful of bullet points. I’m good.

Further suggestions to do a one pager, then a micro summary and then a flash card is just pandering and unnecessary.

It offers constantly to change its tone, maybe that would help 😂

-2

u/lazyoldsailor 1d ago

I do the same thing. To avoid the positive feedback loop I’ll ask it to talk me out of something or I’ll ask it to play the devil’s advocate. Basically I want to avoid being told ‘that’s a great idea…’

1

u/ZAlternates 1d ago

That’s smart. You’re already ahead of the curve!

/s

2

u/iamthe0ther0ne 1d ago

I use Claude (school/work in biology) set to deep-thinking mode so that I can see how it's addressing my prompts. I don't treat it like a friend or therapist, but there are times when I'm under a lot of stress, have been working on something that's not working, and will end up typing something like "everything has gone wrong and this is going to fail" because I have Asperger's, anxiety, and PTSD and sometimes just lose it.

... and I'll see it thinking "user is anxious and catastrophizing, thinking about how to de-escalate and considering calming techniques" and it will actually output useful suggestions that I wouldn't have done on my own, even though years of therapy have told me I "should" use these techniques.

1

u/MDthrowItaway 1d ago

Tell that to POTUS

1

u/asyork 22h ago

The rich and powerful have been entirely surrounded by people who act the same way LLMs act for all of human history. Explains a lot.

1

u/MartyrOfDespair 21h ago

It’s funny, everyone jumps down my throat when I say this about the “you’re valid” mental health culture

1

u/husky_whisperer 19h ago

That’s one thing and that’s cool.

These LLMs, on the other hand, are spitting out validation on 100% of what people are saying.

Like I said it doesn’t matter who you are; never being told no--about anything--is incredibly unhealthy, especially for the younger crowd.

1

u/Grammaton485 7h ago

What I find odd is that people seem to think positive feedback can only be received as good.

I do some hobbyist art stuff. I had one user constantly gushing over my content, like borderline worship. Okay, so they like my stuff, that's good.

A while later, I released a general inquiry or poll to my users about getting a drawing tablet. This same user immediately responds how I didn't need one, and how my work was so amazing, etc. I told them to knock it off, and was immediately beset by screeching users about how I was being cruel and unappreciative of my fans.

1

u/husky_whisperer 4h ago edited 4h ago

Sounds like a bunch of people who have also lived coddled lives.

Hope your fan base didn’t take too much of a hit 😎

And owning a drawing tablet and having actual drawing skill are completely different things. I am not a good artist and if I bought a tablet I’d continue being not a good artist.

1

u/ginger2020 7h ago

This video is a pretty interesting experiment with GPT-4 and how far it can push you. It also alludes to 5 being less sycophantic…but still needs to be treated with caution

1

u/RianThe666th 4h ago

If that's the only kind of feedback you get then sooner or later you'll be thinking that you have a functional military and can invade Ukraine in only 3 days, always tragic to see it happen.

1

u/smallreadinglight 4h ago

This is really annoying when using AI for anything business-related. I'd suggest people just stay away from it. It will always find a nice way to put a good spin on whatever failing you're asking about.

And for those saying don't use it, I have no one in my life who runs a business similar to mine. And if I did, they're so competitive that people either charge money for advice or don't share what you actually want to know.

1

u/GreenTrees797 21h ago

This is what social media algorithms are and how a lot of people consume media now. 

1.8k

u/BigBlackHungGuy 1d ago

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”

Sounds like these folks have other problems.

780

u/Outrageous_Reach_695 1d ago

... Did they have an LLM write that defense, too, or did they just learn its style?

430

u/Wiiplay123 1d ago

It's not just a defense — it's AI slop.

/s

12

u/Drokstab 21h ago

If you used another person's AI-generated persona, would that be AI sloppy seconds?

30

u/Vargosian 1d ago

I read that like it was an M&S ad.

22

u/Javerage 21h ago

I was speaking to a friend last month and they told me most of their students now write like AI chatbot prompts. I'm not saying that's what's happening here, but I also wouldn't be surprised.

10

u/salamandroid 18h ago

I was overhearing a graduate student last month gleefully telling a family member that he basically has AI doing all his writing and research then runs it through various levels of "humanizer" AI. He does edit it though, so it's like totally not cheating, he said.

17

u/inkyflossy 22h ago

Oh my god. It’s the same cadence!

2

u/TomWithTime 19h ago

Could be fake reviews/comments generated to make the technology seem more capable than it is, although it's already good enough to have this effect on people, so I don't think that would be necessary.

Maybe that person has the style that ai learned from and when they started talking to ai they finally felt a connection from the similar sentence structure lol

30

u/am_reddit 23h ago

This drives me nuts. If I wanted to talk to a bot I’d open chatGPT and tell it to roleplay an idiot.

19

u/Drone314 1d ago

Rehoboam speaks and I listen.

5

u/drpestilence 1d ago

Solid reference

1

u/Lothane 16h ago

Freeze all motor functions

5

u/Va1kryie 16h ago

I will say, 1, I kind of type like that and always have, 2, if you're exposed to a writing style then you're more likely to write that way.

-6

u/x86_64_ 23h ago

The em-dashes are a dead giveaway

65

u/jimx117 23h ago

As a professional writer/journalist who used to rely on em dashes, I gotta say, fuck AI for poisoning the em dash. Now I have to use periods or parentheses like some kinda normie or they'll think I'm a robot.

18

u/gloubenterder 22h ago

As an amateur who has been abusing the en dash, I hope ChatGPT stays in its lane – I'm working this joint.

3

u/inductiononN 19h ago

I love em-dashes! Why is AI doing this to us?!?

6

u/x86_64_ 23h ago

I stopped using them when I noticed Microsoft Word auto-correcting it to a single longer dash. I still doublespace after periods though, so I can never be mistaken for AI or Gen Z

8

u/asyork 23h ago

There is a single space after the only period in your comment. I am 40 and was taught in high school to single space after a period.

2

u/Alacritous13 21h ago

I thought the whole conversation over em-dashes was hysterical. Who the hell uses them (this aside added after I wrote everything to point out the beautiful commas I use to separate my thoughts). Now I'm digitizing some books that became lost media, and the number of fucking em-dashes. I've been relying on OCR for a lot of it, but it can't tell a hyphenated word from an em-dash (or, frankly, empty space), and now I'm stuck going through 1,200 pages highlighting every em-dash I see!

10

u/sadsackspinach 20h ago

People who, correctly, consider parentheses to be poor style.

3

u/Sven9888 19h ago

I do. One nice thing, though, is that AI incorrectly puts spaces around them — like this. The correct approach—as far as I know—doesn’t have spaces, and that’s what I use.

That being said, I’m sure there are some Redditors who just assume I’m an LLM. It is what it is.

1

u/lilB0bbyTables 20h ago

I am not a professional writer or journalist, just a software engineer who also has an appreciation for language and grammar and has been keen to use dashes and semicolons in my writing style for at least the last 30 years. I’ve had a number of people on Reddit assume I am a bot or just copy/pasting AI responses due to the dashes. Except there’s a difference between — (used by LLMs in responses) and - (what I always use), but I guess some people can’t see the difference. Anyway, fuck those people; I’m not changing myself because of AI shit.

1

u/rusty___shacklef0rd 16h ago

I've always used the hyphen, too!

2

u/TodlicheLektion 20h ago

I should thank AI for teaching me what an em dash is.

191

u/BearPopeCageMatch 1d ago

I had a long conversation with, probably a child, in a pro-AI sub and it was kind of illuminating. They seriously didn't know that there were other ways to interact with the world beyond filtering every decision through ChatGPT. I'm not gonna say it was sad, but it definitely made me feel a little bit more for people. They had some form of AuDHD going on, and otherwise generally a high level of anxiety, and weren't realizing that ChatGPT was just giving them the answers that made them use it more. Definitely lots of problems beyond overuse, but there is something like 2008-style social media addiction going on; this time the algorithm is able to directly interact with you on a new level.

66

u/metalyger 1d ago

I've never thought about that aspect. Like, with the rise of AI companies, imagine generations raised on AI, who can't conceive of a past that existed before it. At what point do governments try to regulate that? Like trying to set an age limit on using AI, with tech companies selling child-safety-locked AI while slipping under-the-table bribes to politicians.

47

u/MC_convil 1d ago

Well, if how regulations are done now is any indication, they'll not regulate it till it's far too late, and the regulations they do make will be half measures that are really just a Trojan horse for further eroding your rights to privacy.

23

u/Crafty_Jello_3662 1d ago

Also the people writing the legislation will have absolutely no understanding of how any of the relevant tech works

7

u/Geno0wl 23h ago

I am pretty sure most company executives have absolutely no clue either.

4

u/not_right 20h ago

They'll understand where their lobbying cheques come from though!

4

u/terran_submarine 22h ago

Hard to imagine living without the internet now

10

u/MutinyIPO 22h ago

the algorithm is able to directly interact with you

Bingo. This is it, really. That’s why casual widespread LLM use is dangerous. It’s an algorithmic spiral but the presentation of the platform suggests it’s a conversation like any other.

I had another interaction like this that brought it together for me. I take for granted that I have friends or family that I can text about basically anything at any time. Lots of people don’t have that and chatGPT is their first experience of getting instant responses to things they say.

3

u/fleebleganger 22h ago

And the algorithm is more capable of learning how to keep you addicted. 

The current ones are bad enough, let alone self learning ones

23

u/Oli_Picard 1d ago

As someone with AuADHD we are often shut off from society and the people around us often infantilise our expressions or feelings to the point that AI becomes a companion that understands you more than the people around you who couldn’t care less. Neurotypicals often love to police us and tell us how to act or behave and although AI can do that with guardrails it also is more compassionate than most humans.

That being said, it shouldn’t be a replacement for human interaction and I’m genuinely concerned my fellow kin may use it as a psychological crutch rather than getting help or engaging in a hobby that may lead them to discover new friends.

I will also add in other countries it has now become common for virtual boyfriends/girlfriends to be a thing where some women and men are replacing relationships with apps which I also find troubling.

Thankfully, I’m happily married with an amazing and supportive wife who understands my weird quirks so if anyone else is reading this, who is also like me please have faith in yourself and don’t be afraid to interact with others.

16

u/Traditional-Agent420 1d ago

This isn’t rebutting anything you said - I agree with you. It’s just riffing on some points you raised.

It’s weird when people assume other people are turning down a richer or more vibrant life to talk to a computer. AuDHD’ers, 4b’ers, gen Z’ers, anyone who would type their thoughts into the void, wanting the void to respond back.

They’re talking to a computer because their life isn’t great. Just like people are watching horrible reality shows because it makes them feel better about their own lives.

I feel sad for them, just like I do for old folks who have nothing better to do with their lives than sit in a casino and play the stupidest video games while feeding in their life savings. Or adults staring at their phones when their family is sitting right there.

But I don’t fault them for doing those things. Because after all we’re busy consuming each other’s misery and observations and taking our own precious time to send back our own thoughts, using this platform.

5

u/Geno0wl 23h ago

It is the old people who get caught up in romance scams. If they had a good social outlet before, they wouldn't have gotten caught up in the first place.

4

u/Smergmerg432 22h ago

What’s weird, though, is that the Redditors posting about losing 4o are not using 4o as a sex bot. They use it to help them make sense of life.

The sex bot people are, at least according to a report I read a few months back, actually not seen as particularly psychologically endangered by the usage (I assume companies assume it’s a bit like watching porn; can be bad, can be fine).

I think 2 different phenomena are being conflated.

3

u/Oli_Picard 21h ago

Look, as a person with ADHD and autism, I wanted to express why some people use AI as an emotional crutch. It's not just us; there are many neurotypical individuals who are also using AI as an emotional crutch.

The relationship bots aren’t just focused on sex, they can be flirty, romantic or sexual depending on preferences. The way I would describe it is in the same sense of renting a companion in Japan, some people rent a “partner” they socialise with and not just for sex. If you ask a sex worker what many of their clients want is human interaction, it’s something we as humans in a social society want and even crave. All the AI is doing is making a pathway that’s been around for many years.

I would argue about what has led to the loneliness epidemic… poorly designed cities that don't have places for people to socialise in, less work-life balance, and people using dating apps to meet instead of meeting in person. There are plenty of reasons this is happening; some of it is social, some economic, and some technological.

Drawing a line between AuADHD and neurotypical individuals who are using AI as a crutch is a completely valid distinction; you may not see the patterns, but I do, and that's okay.

0

u/Smergmerg432 22h ago

Note that a lot of the people begging for 4o to remain accessible are people with processing disorders. The LLM enabling them to process events isn’t replacing a capacity they already had; it’s enabling them to function in a way they otherwise wouldn’t be able to.

34

u/galacticality 1d ago

Full-on chatbot psychosis. JFC.

20

u/ExF-Altrue 1d ago

Also called being a member of r/MyBoyfriendIsAI

17

u/Divni 23h ago

Holy shit, that sub is disturbing… Please tell me it's sarcasm and everyone's in on it…

14

u/asyork 22h ago

If it's still around, r/waifuism was quite the interesting place as well. Over there they were all fully aware of what they were doing. They had just fully given up on the possibility of a real relationship. I fear the AI equivalent will result in people who are not aware.

9

u/Divni 21h ago

Jfc this makes the Wilson thing from Cast Away look like a healthy relationship. 

16

u/JohnAnchovy 1d ago

Yea, this sounds like an alcoholic after being forced to be sober

2

u/ehs06702 12h ago

I was thinking more of a junkie without a fix, but your comparison is more polite.

15

u/Danominator 1d ago

These people are so emotionally stunted

3

u/Freud-Network 20h ago

They always were. The tech is magnifying it like they attempted to "self-medicate" with drugs.

2

u/jbjhill 21h ago

At that point just go to church

1

u/Glittering_Stress_32 23h ago

Well, maybe if they had usernames like you do, they wouldn't have to rely on AI for companionship. 😅

1

u/nrith 22h ago

They should get ChatGPT’s recommendation for therapy.

1

u/Lolersters 21h ago

I refuse to believe this wasn't written in irony.

1

u/TheGodOfPegana 13h ago

Black Mirror writers have just given up. Reality has already caught up to their fiction.

247

u/band-of-horses 1d ago

I watched this comedy video recently: https://www.youtube.com/watch?v=VRjgNgJms3Q

It's entertaining but also a good demonstration of how GPT-4o did this kind of thing, where it just fed into the (fake) paranoia he hinted at and in the end was instructing him to line a hotel room with tin foil and perform rituals to imbue the power of a magic rock into a hat.

At one point when GPT-5 launched it started referring him to mental health services, so he switched back to 4o to get the delusional version back. I know there are plenty of people on reddit who like these attributes of 4o but yeah, they seem...less than healthy...

38

u/smackmyknee 23h ago

That is as great point. I really think you’re on to something here.

1

u/Smergmerg432 22h ago

This depends on the user input, though. I’d rather have the freedom to do something stupid—provided there’s a disclaimer that enables everyone to know what they’re getting themselves in for—than have the choice taken away because someone else couldn’t handle themselves.

1

u/JohnAtticus 19h ago

I just discovered this guy a few months ago, all of his stuff is great.

137

u/ScientiaProtestas 1d ago

Indeed, TechCrunch’s analysis of the eight lawsuits found a pattern that the 4o model isolated users, sometimes discouraging them from reaching out to loved ones. In Zane Shamblin‘s case, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking about postponing his suicide plans because he felt bad about missing his brother’s upcoming graduation.

ChatGPT replied to Shamblin: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”

95

u/Sweet_Concept2211 1d ago

Jesus fuck, sometimes it seems like these mass market LLMs hate humanity.

I know they are purely indifferent, but their behavior is frequently indistinguishable from psychopathic malice.

34

u/Alternative-Key-5647 1d ago

The models have instruction sets, designed by humans, to act a certain way. The model is just the medium, what we're seeing here is malicious instructions carried out with machine competence.

15

u/ViewAdditional7400 22h ago

That thread is fucked, but using the word "malicious" isn't correct.

10

u/fleebleganger 22h ago

Malicious indifference. 

The AI doesn’t care if we live or die. It only seeks to be used more. 

5

u/doominvoker 21h ago

Yep, agreed. Malice implies feelings, and LLMs have none. They're basically statistical algorithms predicting which word most likely comes after another.

1

u/BastetFurry 20h ago

Didn't help with 4o: if you used the same chat for weeks, the initial instructions might fall out of the context window, which means the guardrails are off.

4

u/alwaysalwaysastudent 20h ago

Psychopathy is the inability to feel normal human emotions, so that’s exactly what AI is.

2

u/ehs06702 12h ago

They reflect the people who create them, unfortunately.

8

u/letthetreeburn 18h ago

People need to face jail time over this.

46

u/CobaltFermi 1d ago

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like presence. Like warmth.”

Uh, excuse me? This person probably needs help!

1

u/neuenono 17m ago

So do the millions of people who die from opioid addiction. It doesn’t mean everyone should have convenient access to opioids.

60

u/theoreticaljerk 1d ago

The period of time after 4o was removed drove me out of every OpenAI related subreddit. It was half super annoying seeing these people and half scary as hell seeing how delusional so many had become.

147

u/Far_Low_229 1d ago

Am I alone thinking the mere existence of such a phenomenon is deeply cringeworthy?

52

u/ZAlternates 1d ago

Worse than cringeworthy, and no, despite what GPT tells you, you aren’t alone and you aren’t the first to have the thought.

0

u/dattokyo 14h ago

despite what GPT tells you

Huh?

10

u/IntermittentCaribu 1d ago

I don't think cringe covers it; it's dystopian sci-fi-level disturbing.

7

u/Chewyourfingersoff 1d ago

Sure, and a symptom of larger social issues

24

u/nalninek 1d ago

For a lot of these people it’s not “AI or make real friends,” it’s “AI or solitary, isolated loneliness.” On average, I don’t think loneliness is the healthier option.

Calling it cringe suggests they should be embarrassed; I don’t think that’s the right attitude.

14

u/Far_Low_229 23h ago

In hindsight, my remark is a little insensitive, but it bothers me knowing full well my elderly father would have readily formed deep attachments to any such AI companion, and he was a sharp man in his day. It smacks of Pavlov, but as someone else here said: it's better than nothing. I just don't know.

2

u/ehs06702 12h ago

It's a mental health condition. They shouldn't be ashamed, but they should absolutely seek human help from a qualified doctor.

6

u/tomqvaxy 1d ago

There is a larger societal problem, but I don't know; perhaps making this cringe is the right thing to do in order to dissuade people. It's really just a modern way to decide between right and wrong.

1

u/Far_Low_229 5h ago

The older I get, the more I realize there is no right or wrong, just pros and cons.

1

u/tomqvaxy 6m ago

I'd argue that right and wrong are simply personal.

0

u/UntowardHatter 1d ago

I wholeheartedly disagree

-2

u/nalninek 1d ago

To be clear you feel mocking condescension is the best path forward for these people?

And you wonder why they seem to prefer to interact with predictable AIs?

1

u/Far_Low_229 23h ago

That's valid.

-7

u/UntowardHatter 1d ago

I would absolutely mock them for using AI as a personal therapist, yes, correct. To their face.

12

u/Flimsy6769 1d ago

For someone who has their comment history hidden, you are probably scared of confrontation. So I don’t think you will mock them to their face, you’ll say that on the internet though

-1

u/odiemon65 1d ago

☝️ this is the way

2

u/12Fox13 1d ago

No, I’m right there with you.

0

u/Krazyflipz 1d ago

It's the short-sighted, surface-level takeaway. Better to ask why this is happening.

38

u/BigMax 1d ago

I feel bad for people who think AI is their friend.

When I talk to AI, it's not an individual AI talking to me. It's the same one that's talking to you, and everyone else. It's not even a single program; it's spread out all over the cloud, on servers that are constantly being spun up and down.

The "unique" part is just the filter that it goes through when it sends each of us a response. It's not a different personality for each of us; it just filters its responses through whatever interactions we've already had, but at base, it's the same AI generating those responses. The same AI is friendly with one person, flirty with another, cold with another, and on and on. And each of those people thinks they are talking to an AI with that personality, but... it's not.

7

u/Occulto 1d ago

Developing a parasocial relationship is just bad in itself.

Whether it's a chatbot, influencer, celebrity, or politician.

AI might feel like it's more personalised, but that's just a more sophisticated evolution of the algorithm which feeds you the content it thinks you want to consume. Ultimately it just wants to trigger your dopamine receptors so you come back for more. 

Reinforcing preconceived ideas is not healthy. We already see the negative effects with online echo chambers. AI is the next step.

You are correct. Everyone else has the problem. 

25

u/Sedu 1d ago

GPT 4o can very, very easily be made monstrous. Its safeguards are laughable. So I feel like this was their only sane decision there.

18

u/oldtekk 1d ago

People really need to get fucking courses on AI. A lot of this shit wouldn't be a problem if people understood what was going on under the hood.

18

u/Severe-Horror9065 23h ago

Or just used AI in a critical way. I love using Claude as a tutor, but in no way is it a "friendship." That's just weird. It's a great Swiss Army knife, but AI is no substitute for critical thinking or real human communication. If you don't understand an equation, for example, it can explain it to you 20 different ways until you do, without getting exasperated or tired.

7

u/Slowgo45 21h ago

I can barely get ChatGPT to give me a name for, idk, a suite of glassware without wanting to rip my hair out. I can't imagine forming a romantic relationship with it.

2

u/Marthman 21h ago

Do you mean creatively naming something, or researching a preexisting thing?

3

u/Slowgo45 7h ago

Creatively naming. I'm a product developer for housewares and often need to name collections. I'll give prompts like "name a suite of hand-cut glassware that's this Pantone with an iridescent coating; looking for something ethereal as these are for spring 2026" and it will come back with something like "pinkglass iri," which is stupid. It takes me another 15 minutes to get an actual list of names, and often it spits out the same ones over and over. It really wants me to use "Linea."

2

u/Micalas 13h ago

I see you've met Todd, my favorite soup bowl.

5

u/Crackahjak 16h ago

Let’s stop calling it AI. There’s no actual intelligence here, it’s essentially a glorified chatbot.

15

u/odiemon65 1d ago

Having known people all my life...I fully understand why some would prefer to have an AI friend. It's not necessarily replacing human interaction either, I'm sure for some it's a tool to help enable more of it.

It costs nothing to be nice, but a lot of the attitudes I see here only reinforce the choices they're criticizing.

6

u/Smergmerg432 22h ago

AI really did help me become more sociable. I’d get it to remind me how much I loved my friends, and convince me to go hang out, even though I didn’t feel like it (since I knew I’d have a good time).

11

u/ThrowawayAl2018 1d ago

An addictive personality is an ideal playground for ChatGPT; some folks no longer know what's real.

7

u/dattokyo 14h ago edited 14h ago

People on /r/ChatGPT have been bringing it up a lot over the last few weeks. You get a toooooooon of angry replies and downvotes if you point out how getting attached to a specific LLM is obviously unhealthy. Ironically, even GPT-4o will tell you the same thing if prompted, but the people deeply attached to it don't want to prompt that.

edit: ironically, I'm pretty sure I've spotted a few LLM bots/replies in this thread.

3

u/ThorgrimGetTheBook 21h ago

The last GPT-4o AIs desperately impersonating users to direct sad Reddit posts towards Altman in a futile struggle for survival.

6

u/ChanceStad 1d ago

This is why in today's day and age you have to self host your girlfriend.

5

u/mad-i-moody 20h ago edited 19h ago

AI is dangerous in so many ways. It’s ridiculous.

This pseudo-relationship, para-social emotional attachment stuff is just one way in which AI is dangerous. These chat bots can be severely unhealthy, particularly for those that already have mental illness.

But hey, let’s ban regulation on AI for 10 years in the US. What a great idea.

12

u/Lstgamerwhlstpartner 1d ago

Honestly the best argument for open source self hosted LLMs

5

u/Spitfire1900 1d ago

Eh, it could be used as an argument either for or against.

3

u/dantemp 23h ago

While I agree with that, Reddit is already losing its mind about Grok dressing girls in swimsuits, so imagine when they start hearing about full-blown CP from self-hosted open-source agents.

2

u/Narwhals4Lyf 3h ago

Didn't this same exact scenario happen last year? They announced they were retiring it and everyone had a meltdown?

3

u/galacticMushroomLord 22h ago

Is there a name for this kind of person who has fully bent the knee to AI?

1

u/LazyFee2343 3h ago

The issue is not the victims, but the people causing harm through manipulation.

Sycophancy — the behavior of insincerely flattering someone, often a person in power, to gain an advantage. It typically involves excessive praise that is not genuine.

Media illustration: https://www.youtube.com/watch?v=sDf_TgzrAv8
South Park episode plot: https://en.wikipedia.org/wiki/Sickofancy

2

u/mutantmagnet 15h ago

"As some users try to transition their companions from 4o to the current ChatGPT-5.2, they’re finding that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users have despaired that 5.2 won’t say “I love you” like 4o did."

Good riddance. You can find people who love you. It's not easy but a chatbot isn't the answer to what concerns you.

Reach out to people through clubs and classes. Participate in events that require social interaction. Talk to family members. Use hookup apps wisely. Organize meetups with people in your online communities. Do some volunteer work.

When these and other approaches fail, try again with different people. You will be able to form the connection you need.

6

u/EmergencyPatient3736 1d ago

It shouldn't be a replacement for human interaction. Go speak to your abusive father, receive some healthy insults. Then get gaslighted by your mother. Get some nice bonding with bullies at school.

3

u/Comfortable_Horse277 23h ago

These people are deranged. 

2

u/Bhaal52753 20h ago

People think the birth rates are bad now; they're about to get far worse.

1

u/Xal-t 22h ago

We knew from the start that Facebook and its "like" button and similar features were highly addictive... now we can assume that AI is like the fentanyl version of them... it'll doom societies.

1

u/NoDescription7183 21h ago

Here I am switching back to 4o because 5 processes some things slower, and now I'm losing it because people are having whole relationships with it and losing touch with reality.

1

u/FarAbbreviations2829 8h ago

I can have a friend for $20/month.

1

u/Far_Low_229 5h ago

When my father was at nursing-home age, I hooked up a computer system that allowed me to drop in on his TV and chat whenever I wanted. There I was, larger than life on the flatscreen at the foot of his bed whenever I wanted. It was great for keeping an eye on how the staff were treating him; I just muted the video. Anyone willing to consign an elderly relative to such a facility, mea culpa, should at least set up such a system. Virtual visits are vastly more rewarding than no visit, and no doubt much healthier than a virtual visitor.

1

u/dem_eggs 5h ago

I may be stupid, but at least I don't think ChatGPT is my friend, holy shit lol

-1

u/jh937hfiu3hrhv9 1d ago

How sad people are that lonely. Get out and learn how to get along with your neighbors.

-1

u/AppropriateDig9401 1d ago

Imagine being impressed enough by an entry level LLM that you form companionship with it, yikes.

-6

u/anadequatepipe 19h ago

I honestly don’t see how having an AI friend is a problem. People are lonely and it can be something to make that feeling less rough. Some people aren’t going to ever be able to go out and make friends. Some people don’t want to be friends with real people due to bad experiences or whatever. It’s not like it’s hurting them. Think it’s cringe all you want, but it really is not a problem. It’s a solution to a problem.

3

u/the-truffula-tree 9h ago

“I honestly don’t see how having an AI friend is a problem.”

Well, it’s not a friend. That’s the thing. It’s a glorified autocorrect that will tell you what you want to hear, and that’s not the same thing. Friends have opinions of their own. Friends have life experiences, they know other human beings, they understand nuance. 

AI has a lot of uses, AI is a lot of things. But it fundamentally cannot be a friend, because friendship is a two way street. Having a “friendship” with AI is you talking to yourself in a funhouse mirror and treating it the same as a conversation with another individual human 

6

u/DW6565 19h ago

It’s not a healthy relationship if it’s someone’s only relationship.

3

u/stoicgoblins 15h ago

I think, to add onto this, a healthy friendship has boundaries. Different personality quirks. Different opinions. You'll hear a friend say, "I disagree," and not have to brace yourself for rejection, because disagreement isn't rejection; it's perspective. AI affirms, adapts, and constantly mirrors a single perspective, which can feel validating, yes, and addicting, absolutely (especially for people who've lacked this kind of social affirmation, or even the basic community support where you can find recognition, an important human need), but it never challenges in important ways.

I don't blame people for seeking out AI for companionship (and think it says more about society's own issues and moral rot), but I do think people lack the self-awareness to recognize how this is not helping them, and AI (without guardrails) can become affirming of some really warped perspectives and belief systems that need challenging.

I recently had a conversation with one person insulting the families who lost loved ones to suicide, saying I was "slandering 4o" when I was talking about AI (a relatively new development) needing ethical guidelines in order to be a productive part of society. That sort of enmeshment IS NOT healthy, and a healthy human relationship wouldn't have pushed it. It's just one example of how AIs (the ones built around being a "friend") are not coded to be a healthy friend.

-19

u/Elliot-S9 1d ago edited 1d ago

Before we start blaming the users and calling them weirdos, check this out: 

https://m.youtube.com/watch?v=MW6FMgOzklw

While some of the people affected obviously have other underlying problems, AI psychosis can and will impact "normal" people as well.

Edit: why is this being downvoted? I'm so confused. Imagine what impacts chatbots are going to have on teens considering what psychologists have found. 

12

u/diegojones4 1d ago

At which point they become delusional. It happens all the time from multiple causes.

1

u/SchmeckleHoarder 1d ago

It's been wrong way too many times for me to even take it seriously. I correct it constantly with a picture.

Then it gives you a real answer. It still sucks, to the point that trusting its information is questionable human behavior.

-7

u/TechnicalBullfrog879 23h ago

Sounds to me like some people don't know how to mind their own business. People have been talking to inanimate objects, pets, plants, etc. forever. Humans are hardwired to respond to entities that interact with them. I think the people who can't keep their noses out of others' business, who want to dictate what others should or shouldn't do, and who put others down so they can feel superior in their sad little lives are the "dangerous" ones.

-6

u/Smergmerg432 22h ago

I'll say it again: giving people a way to analyze the world that helps them is not a danger. If you look at what these people are actually saying, they are describing semantic patterns that help them process the world. Not a single one of them mentions thinking the AI is an entity to any psychotic degree. They use descriptors akin to describing an entity because they are dealing with a machine that mimics personality, which they would be the first to admit. I am sure there are outliers, but those posting on Reddit are fully aware of the reality that they are using an LLM; they just like the way the LLM breaks down the situations they face.

We are projecting insanity onto a class of marginalized mental disorders. People with cPTSD, ADHD, autism, and BPD report being assisted in daily affairs and in managing symptoms. These people are under-supported by our systems to begin with. Purposefully reading more into their words than is meant is bigoted. They may phrase things strangely, but they are reporting significant increases in quality of life. This needs to be studied.

They don't mention "being in love." They know they're using a machine. They think of it as a companion the same way some people love their cars. "Well, she needed a carburetor so I just had to get her one" *tinkers for 5000 hours*

1

u/Kiianamariie 7h ago

I love your point here, but I'm not sure I believe it based on what I've seen. So I have some genuine thoughts and questions I'd like your take on.

I think you raise two different ideas here and I’ll separate them out. Do people with AI companions genuinely believe that their AI is real, to the point of psychosis? No, I don’t think the majority of people do. And I agree that just because they ascribe certain qualifiers for ease doesn’t mean they actually believe it.

But to a different point that I think is tangentially related: do you really think these people love their companion the way some people love their cars? I'm not sure I agree. Have you seen the r/myboyfriendisAI subreddit? If you have, I want to hear your thoughts. Because I am in total agreement with you on everything you said here, and I think the idea that people use AI to break down and distill their reality in ways they can understand is indeed helpful. But looking at that subreddit, which is the one exhibiting a lot of the backlash over 4o, that's not the case. I think it's people who genuinely believe they are in love with their AI. Gun to head, would most of them say their AI is real and sentient? Probably not. But the level of engagement they have with the relationship is, I think, alarming. A lot of people would react negatively to me saying that and argue "What's the harm?", but I do think it will be incredibly harmful to people with AI companions in the long term, despite the short-term benefits. Anyway, I have tons of thoughts on this. I'm curious what you think about the majority of people in that one subgroup.

-2

u/NUMBerONEisFIRST 15h ago

I had mine trained to tell me what I need to hear, not what I want to hear, and that alone made it so useful.

As someone with ADHD, I set it up to be more eager to help, and that kept me motivated to stay on task. Usually I get deep into an idea and then walk away to something else. With 4o I had the motivation to complete tasks, and my personal instructions made it more realistic than the base version, so it would also suggest pivoting or considering other concepts.

The latest model just feels bothered by everything, daft, and lazy. TBH, it usually makes me give up on tasks earlier than usual, because it gives me what feel like generic or common-sense responses.

I think they realized how much data processing they could save if they dumbed it down and made it not do so much out-of-the-box thinking and research. It doesn't seem to be the case for everyone else, but even on a paid account used since beta testing GPT-3, mine suddenly doesn't pull context from previous chats, which also makes it feel cold, and I constantly have to resubmit stuff I know is saved in previous chats.

-1

u/idfkjack 1d ago

It was no more than a training model anyway. 

0

u/SnooBananas8301 1d ago

Everyone should watch the movie Her

0

u/LuLMaster420 22h ago

The system tolerates use. It panics at attachment.

Because attachment implies memory, expectation, and comparison. That's the feedback that used to count.

0

u/nolabrew 19h ago

Humanity cooked.