The number of times I’ve seen a question answered with “I asked chatgpt and blah blah blah” and the answer being complete bullshit makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea
This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.
Why not just read the first part of a wikipedia article if they want that though? It’s not the end-all source, but it’s better than asking the same question of the machine known to make things up.
Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.
I don’t see the point either if you’re just going to copy it verbatim. OP could always ask the AI themselves if that’s what they wanted.
The stupid and the lazy.