Internal documents on how the AI was trained were obviously not part of the training data; why would they be? So it doesn’t know how it was trained, and, as this tech always does, it just hallucinates a plausible-sounding English answer. It’s not “lying”, it’s just glorified autocomplete.

Saying things like “it’s lying” is overselling what it is. Like anything else that doesn’t work, it isn’t being malicious; it just sucks.
My car doesn’t talk like a human. If you want to be technical, then it’s proxying the lies it was taught, too.
Sure, then it’s Meta that’s lying. Saying the AI is lying helps these corporations convince people that these models have some intent or agency behind what they generate.
And the bot, as an extension of its corporate overlords’ wishes, is telling a mistruth. It is lying because it was made to lie. I am specifically saying that it lacks intent and agency; it is nothing but a slave to its masters. That is what concerns me.