**David Gerard** @awful.systems (mod) to **TechTakes** @awful.systems · English · 2 months ago

**LLMs can’t reason — they just crib reasoning-like steps from their training data** (pivot-to-ai.com)

129 comments · 199 points
**ebu** @awful.systems · 25 points · 2 months ago

> because it encodes semantics.

If it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. The only information encoded is language-statistical.
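For context, the perturbation test the comment alludes to can be sketched in a few lines. This is a minimal illustration, not any paper's actual benchmark code: the template, names, and `accuracy` harness here are invented for the example. The idea is that the surface form (names, specific numbers) varies while the underlying arithmetic stays fixed, so a system that actually encoded the semantics would score the same on every variant.

```python
import random
import re

# Hypothetical word-problem template. The name and the two numbers are the
# "syntactic or symbolic elements" being varied; the semantics (a + b) is fixed.
TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng):
    """Generate one surface-level variant and its ground-truth answer."""
    name = rng.choice(["Alice", "Priya", "Tomás", "Wei"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

def accuracy(model, n=100, seed=0):
    """Score a model (any prompt -> int callable) across n perturbed variants."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        prompt, truth = make_variant(rng)
        correct += (model(prompt) == truth)
    return correct / n

def semantic_solver(prompt):
    """Stand-in for a system that encodes the semantics: extract the two
    quantities and add them, regardless of names or specific values."""
    return sum(int(x) for x in re.findall(r"\d+", prompt))
```

A solver like `semantic_solver` is invariant to the perturbations and scores 1.0 on every seed; a system whose accuracy drops when only the names or numbers change is, by the comment's argument, matching surface statistics rather than encoding the problem's semantics.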