• ඞmir@lemmy.ml · 11 months ago

    A theory of mine about this problem is that an AI knows what it should do (because of its training data) but not whether it was effective, since it has no metric for testing whether its output is engaging to humans. When an AI generates dialogue, it does so by copying and merging many existing snippets of text, without any clear set of goals in doing so. When a human writes dialogue, they have a specific atmosphere in mind, a set of goals, foreshadowing, the tone shifting across sentences, and so on. AIs might accidentally get this right from time to time, but more often than not they mess this part up.
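
    To make the "no metric for engagement" point concrete, here is a rough toy sketch of a plain next-token training step (not any particular model's actual training code; the tiny model and random tokens are stand-ins): the loss only scores how closely the model imitates the training text, and nothing in it measures tone, pacing, or whether a reader finds the result engaging.

    ```python
    import torch
    import torch.nn as nn

    vocab_size, seq_len, batch = 100, 16, 2

    # Hypothetical tiny "model": an embedding followed by a linear layer.
    model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    tokens = torch.randint(0, vocab_size, (batch, seq_len))  # stand-in for training text
    logits = model(tokens[:, :-1])                           # predict each next token
    loss = nn.CrossEntropyLoss()(
        logits.reshape(-1, vocab_size),                      # (batch*seq, vocab)
        tokens[:, 1:].reshape(-1),                           # the actual next tokens
    )
    loss.backward()
    optimizer.step()
    # The objective only rewards matching the source text token by token;
    # there is no term for atmosphere, foreshadowing, or reader engagement.
    ```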