Nope, it’s only matching the prompt to the most likely answer from its training set. Remember in the early days, when it would be asked slightly tweaked riddles and get them wrong? It would spew out something that sounded like the original answer but was completely wrong in the new context. Or how it made up nonexistent court cases for that lawyer who used it without checking whether any of it was real?
LLMs just guess the answer based on the millions of similar answers they were trained on. An LLM is a language syntax generator; it has no clue what it is actually saying. They are extremely advanced and getting better at hiding their flaws, but at their core they are not actual intelligence.