• birdcat@lemmy.ml · 1 year ago

    “If hallucinations aren’t fixable, generative AI probably isn’t going to make a trillion dollars a year,” he said. “And if it probably isn’t going to make a trillion dollars a year, it probably isn’t going to have the impact people seem to be expecting,” he continued. “And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.”

    Well he sure proves one does not need an AI to hallucinate…

      • birdcat@lemmy.ml · edited · 1 year ago

        The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!

        And, let us be clear, mere optimism for this ‘new knowledge’ does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church’s teachings, but also the broader societal fabric that relies on a stable cosmological understanding.

        This new theory probably isn’t going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

    • Southern WolfA · edited · 1 year ago

      Imagine if someone had said something like this about the first-generation iPhone… Oh wait, that did happen, and his name was Steve Ballmer.

    • Pelicanen@sopuli.xyz · edited · 1 year ago

      maybe we should not be building our world around the premise that it is

      I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that’s fine; every technology has limits, but we need to be aware of them. A fallible machine learning model is not dangerous in itself; AI-based grading, plagiarism checking, resume filtering, coding, and so on applied without skepticism is what’s dangerous.

      LLMs probably have genuinely good applications in tasks that could not be automated before, but we should be very careful about what we assume those applications to be.