David Gerard@awful.systems to TechTakes@awful.systems · English · 1 month ago
LLMs can’t reason — they just crib reasoning-like steps from their training data (pivot-to-ai.com)
lunarul@lemmy.world · English · 1 month ago

> My best guess is it generates several possible replies and then does some sort of token match to determine which one may potentially be the most accurate.

Didn’t the previous models already do this?
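For what it’s worth, that guess roughly matches the best-of-N sampling idea: draw several candidate replies and keep whichever one a scorer ranks highest. A minimal sketch in Python, with hypothetical generate() and score() stand-ins rather than anything OpenAI has confirmed about its models:

```python
import random

def generate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled model completion.
    return f"candidate reply to {prompt!r} #{random.randint(0, 9999)}"

def score(prompt: str, reply: str) -> float:
    # Hypothetical scorer (e.g. a reward model or token-overlap heuristic);
    # here it's just a random placeholder.
    return random.random()

def best_of_n(prompt: str, n: int = 5) -> str:
    """Sample n candidate replies and return the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: score(prompt, reply))

if __name__ == "__main__":
    print(best_of_n("Why can't LLMs reason?"))
```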
DarkThoughts@fedia.io · 1 month ago

No idea. I’m not actually using any OpenAI products.