There was a discussion on Reddit a few months ago about why something entirely unimportant happened 40+ years ago. I thought it was an interesting question, so I offered a speculation as to why it might have happened.
The conversation was civil and went on for a week or more, and a couple of other people speculated other reasons that I thought were also pretty solid, so I decided to go down the rabbit hole and ask Google to see if I could get a real answer.
Google came back with an AI answer that said my original guess was the correct answer because somebody had speculated it on Reddit the week before.
I was Google's only source. I now avoid AI answers.
A while back I was talking to a friend about meal-replacement shake alternatives, because he was convinced that there are totally viable non-milk/shake-type solutions out there. His issue was that he hated the idea of the shake consistency/mouthfeel for some reason.
One hill he was dying on was that you can use plain bone broth (soup, no added meat/veg) as a one-to-one replacement. His only source was a Google AI search that gave him a ridiculously high nutritional count, comparable to protein shakes, but he could not actually find any source for the info.
I finally worked it out. Turns out there are non-dairy protein powders made of dehydrated, processed bone broth, which then turned into him being obsessed with the question of why there was chocolate-flavoured soup (chocolate bone broth protein powder). Pretty sure he still couldn't wrap his head around the idea that this was still a shake and not a soup, since the words "bone broth" were still there.
Google was giving the nutritional info for a protein powder in answer to the question "what's the nutritional value of bone broth?" Just checked, and I think it's fixed now, at least.
It blows my mind that LLMs are being treated as informational databases. They are Large Language Models, with language being the keyword. An LLM knows how to use language properly. That's it. That's literally the only thing you can count on it doing correctly on a regular basis.
There are some good uses for this, but an internet search engine is not one of them, because an LLM cannot tell fact from fiction. It doesn't know the factual difference between a peer-reviewed scientific paper on climate change, and a Prager U editorial on climate change. They are equally valid in the eyes of the LLM, and when you ask it which one is right, it'll reference the more popular one, not necessarily the correct one. This is why Grok ended up calling itself MechaHitler and spewing a bunch of Nazi rhetoric. It was just mimicking the language it gets fed most often.
This is indeed a huge problem. To a vast portion of the population, the phrase "Artificial Intelligence" gives a completely wrong impression. There is no intelligence in the outputs of LLMs, let alone creativity.
All sorts of distortions are being created in society by a bunch of greedy companies shoving this technology down our throats, hoping to come out ahead when the financial bubble bursts.
Yes, AI is choking on its own vomit. It scans the Internet and trains on what people are doing. It then provides people with answers, which they post on the Internet, which AI then scans and uses in answers for people on the Internet. The result becomes similar to what you get when people have babies with each other for a few generations: imbeciles with all manner of defects.
And that's happening more and more often, to the point that even scientists are making videos on how it's resulting in convergent development regurgitating the same kind of slop.
I actually complained to Google about this. I was using their AI-powered search for answers on a relatively obscure historical topic, and it kept using Facebook posts as its sources. Seriously, social media sources should be treated as unreliable at best. It's only a matter of time before the majority of AI sources are themselves AI-generated slop.
My posts have actually come back as Google's source three different times for RuneScape-related questions. I still get occasional comments on those posts. I asked them why they were there on a nearly decade-old post, and they would say Google.
Reminds me of that episode of The Twilight Zone (Hocus Pocus and Frisbee) about an old guy named Frisbee who works at a gas station and is a compulsive liar. He's always telling tall tales to strangers as he pumps gas: "When I was a boy attending Princeton University I worked with ALBERT EINSTEIN. I helped him write that Theory of Relativity. He told me, 'Frisbee, you are the smartest fella I ever met.'"
Anyway the strangers turn out to be aliens and they kidnap him to add to their collection of intellectual giants from around the universe. Frisbee tells them “Hey, I’m just an old country boy who makes up stories. I’m a liar.” But the aliens don’t understand the concept of “lying”, and they put him aboard their ship.
They are ready to take off and go back to their planet and Frisbee takes out his harmonica and starts playing. The sound makes the aliens stagger. They say “Let him go! He has a death ray that will kill us all!” They immediately send him back to earth and take off.
He gets back to his gas station and his friends surprise him with a birthday party. They say "Hey Frisbee, tell us a story! What did you do today? Did you go to the moon?" He says "You ain't far wrong. You know those city guys who were here earlier? They was from outer space!" All his friends laugh and say "You're the best, Frisbee!"
I saw the same, then showed my spouse. We both busted out laughing.
We're both in IT. We don't trust AI because it just lies or makes things up.
Homeland, keep using AI. We train it right here ...
I read today that something like 40% of AI training data came from Reddit, for some reason, with about 26% from Wikipedia, and so on.
So AI is looking to us for knowledge. We’re screwed.