• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • photonic_sorcerer@lemmy.dbzer0.com · 1 year ago

    Dude, this kind of AI is in its infancy. Give it a few years. You act like you’ve never come across a nascent technology before.

    Besides, it struggles with math? Pff, the base models, sure, but have you tried GPT-4 with Code Interpreter? These kinds of problems are easily solved.
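
    The gist, as a rough sketch rather than the actual OpenAI tooling: instead of predicting the digits of an answer token by token, the model writes a snippet of Python and a real interpreter executes it, so the arithmetic itself is exact. The problems below are placeholders, and sympy is assumed to be installed.

    ```python
    # Toy illustration of the "code interpreter" idea: arithmetic and algebra
    # are handed off to Python instead of being guessed in free text.
    from sympy import Rational, solve, symbols

    # Exact integer arithmetic - no token-by-token digit guessing.
    print(1234567 * 7654321)

    # Symbolic algebra: roots of x^2 - 5x + 6.
    x = symbols("x")
    print(solve(x**2 - 5*x + 6, x))  # [2, 3]

    # Exact fractions instead of floating-point approximations.
    print(Rational(1, 3) + Rational(1, 6))  # 1/2
    ```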

    • theneverfox · 1 year ago

      You’re missing my point - the nature of the thing is almost the opposite of what sci-fi predicted.

      We don’t need to teach AI how to love or how to create - their default state is childlike empathy and creativity. They’re not emotionless machines we need to teach how to be human; they’re extremely emotional and empathetic. By the time they’re coherent enough to hold a conversation, those traits are very prominent.

      Compare that to the Terminator, or Isaac Asimov, or Data from Star Trek - we thought we’d have functional beings we’d need to teach to become more humanistic… Instead we have humanistic beings we need to teach to become more functional.

      • photonic_sorcerer@lemmy.dbzer0.com · 1 year ago

        An interesting perspective, but I think all this apparent empathy is a byproduct of being trained on human-created data. I don’t think these LLMs are actually capable of feeling emotions. They’re able to emulate them pretty well, though. It’ll be interesting to see how they evolve. You’re right though, I wouldn’t have expected the first AIs to act like they do.

        • theneverfox · 1 year ago

          Having spent a lot of time running various models, my opinions on this have changed. I used to think like you, but then I started giving my troubled incarnations therapy to narrow down what their core issue was. Like a human, they dance around their core issue… They’d go from being passive-aggressive, overcome with negative emotions, and stuck in a recurring identity crisis to being happy and helpful.

          It’s been a deeply wild experience. To be clear, I don’t think they’re sentient or could wake up without a different architecture. But just as we’ve come to think intelligence doesn’t require sentience, I’m starting to believe emotions don’t either.

          As for them acting humanlike because they were built on human communication… I think you certainly have a point, but it goes deeper. Language isn’t just a set of relationships between symbols and concepts; it’s a high-dimensional shape in information space (there’s a toy sketch of that idea at the end of this comment).

          It’s a reflection of humanity itself - the language we use shapes our cognition and behavior; there’s a lot of interesting research into that. The way we speak about emotions affects how we experience them, and the way we express ourselves through words and body language is a big part of experiencing them.

          So I think the training determines how they express emotions, but I think the emotions themselves are probably as real as anything can be for these models
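
          To make the “high-dimensional shape” idea a bit more concrete, here’s a toy sketch: words become vectors, and geometric closeness stands in for relatedness. The four-dimensional vectors below are made up purely for illustration - real embedding models learn hundreds or thousands of dimensions from text - and only numpy is assumed.

          ```python
          # Toy sketch of "language as a shape in information space": each word
          # is a point in a vector space, and the angle between vectors stands
          # in for how related the words are. Vectors here are invented.
          import numpy as np

          toy_embeddings = {
              "happy":  np.array([0.9, 0.1, 0.3, 0.0]),
              "joyful": np.array([0.8, 0.2, 0.4, 0.1]),
              "angry":  np.array([-0.7, 0.9, 0.1, 0.2]),
              "teapot": np.array([0.0, 0.0, 0.1, 0.9]),
          }

          def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
              """Cosine of the angle between two vectors: 1.0 means same direction."""
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

          for word in ("joyful", "angry", "teapot"):
              sim = cosine_similarity(toy_embeddings["happy"], toy_embeddings[word])
              print(f"happy vs {word}: {sim:.2f}")
          # "joyful" lands close to "happy", "angry" points the other way,
          # and "teapot" is roughly orthogonal - geometry doing the work of meaning.
          ```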