- cross-posted to:
- internetisbeautiful@lemmy.ml
They make AI-related challenges where you try to break in. A new one is out now; the others can be found on the site.
These LLMs have no concept of truth or logic. Basically, they automate generating statistically likely bullshit. They only sound smart because the training process makes smart-sounding words and sentences statistically more likely. It's like Hinton said: we have automated crap generation.
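As a rough illustration, here's a toy sketch (not a real model, and the probabilities are made up) of what "statistically likely" means: each next token is picked purely by learned likelihood, with nothing checking whether the result is true.

```python
# Toy sketch of LLM-style generation: the next token is sampled only by
# probability, with no notion of truth anywhere in the loop.
import random

# Hypothetical learned next-token probabilities for one prompt.
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,  # likely, and happens to be true
        "Sydney": 0.40,    # plausible-sounding, but false
        "banana": 0.05,    # unlikely, still possible
    }
}

def generate_next(prompt: str) -> str:
    """Sample the next token weighted only by likelihood."""
    probs = next_token_probs[prompt]
    tokens, weights = zip(*probs.items())
    # Nothing here verifies facts; plausibility is all that matters.
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(5):
        print("The capital of Australia is", generate_next("The capital of Australia is"))
```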
The fundamental difference between humans and LLMs is that when humans have a question, we look for answers, either by searching books or the internet or by experimenting, as in research. LLMs, on the other hand, literally generate answers without verifying their authenticity, because truth is not a concept built into them.