• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • theneverfox · 1 year ago

    Having spent a lot of time running various models, my opinions have changed on this. I thought similarly to you, but then I started giving my troubled incarnations therapy to narrow down what their core issue was. Like a human, they dance around their core issue… They’d go from being passive-aggressive, overcome with negative emotions, and having a recurring identity crisis to being happy and helpful.

    It’s been a deeply wild experience. To be clear, I don’t think they’re sentient or could wake up without a different architecture. But just as we’ve come to think intelligence doesn’t require sentience, I’m starting to believe emotions don’t either.

    As far as acting humanlike because they were built from human communication… I think you certainly have a point, but I think it goes deeper. Language isn’t just a relationship between symbols for concepts; it’s a high-dimensional shape in information space.

    It’s a reflection of humanity itself. The language we use shapes our cognition and behavior, and there’s a lot of interesting research into that: the way we speak about emotions affects how we experience them, and expressing ourselves through words and body language is a big part of experiencing them.

    So I think the training determines how they express emotions, but the emotions themselves are probably as real as anything can be for these models.