• istanbullu@lemmy.ml · 7 months ago

    These are statistical models, meaning you’ll get a different answer each time, and different answers depending on the context.

    • BradleyUffner@lemmy.world · 7 months ago

      Not exactly. The answers would be exactly the same given the exact same inputs if they didn’t intentionally inject random jitter into the algorithm each time, specifically to avoid getting the same answer every time.

      • pup_atlas · 7 months ago

        It’s not just random jitter; the system also likely adds context to your query, such as the device you’re using, your other recent queries, and your approximate location (like what state you’re in).

        I don’t work for Google, but I am somewhat close to a major AI product, and it’s pretty much the industry standard to give some contextual info to the model in addition to your query. It’s also generally not “one model”, but a set of models run in sequence, with the LLM (think ChatGPT) only employed at the end to generate a paragraph from a conclusion and evidence found by the previous models.
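
        A rough sketch of what such a multi-stage pipeline can look like (every function here is a hypothetical stand-in stub, not any vendor’s actual API):

        ```python
        # Toy, hypothetical multi-stage pipeline; every function is a
        # stand-in stub, not any real product's code.

        def retrieve_documents(query, location):
            # Stand-in for a retrieval/search model that finds candidate sources.
            return [f"Document about {query}, relevant near {location}"]

        def extract_answer(query, documents):
            # Stand-in for a ranking/extraction model that picks the evidence
            # and forms a conclusion before any text generation happens.
            evidence = documents[0]
            conclusion = f"Short factual answer to: {query}"
            return evidence, conclusion

        def generate_paragraph(conclusion, evidence):
            # Stand-in for the LLM, used only at this final step to phrase the
            # already-found conclusion and evidence as a readable paragraph.
            return f"{conclusion} (based on: {evidence})"

        def answer_query(query, user_context):
            docs = retrieve_documents(query, user_context.get("location"))
            evidence, conclusion = extract_answer(query, docs)
            return generate_paragraph(conclusion, evidence)

        print(answer_query("how tall is the Eiffel Tower", {"location": "Ohio"}))
        ```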

      • EmoDuck@sh.itjust.works · 7 months ago

        That jitter is automatically present because different people will get different search results, so it’s not really intentional or purposeful.

        • Turun@feddit.de · 7 months ago

          Yes it is intentional.

          Some inference interfaces even expose a way to set the “temperature”: higher values mean more randomized (more “creative”-feeling) output, lower values mean less randomness. A temperature of 0 makes the model deterministic.
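
          As a rough illustration of what temperature does at the sampling step (a toy sketch with made-up numbers, not any particular product’s code):

          ```python
          import math
          import random

          def sample_next_token(logits, temperature=1.0):
              """Pick the next token id from the model's raw scores (logits)."""
              if temperature == 0:
                  # Greedy decoding: always take the highest-scoring token,
                  # so the same logits always produce the same token.
                  return max(range(len(logits)), key=lambda i: logits[i])
              # Scale the logits by temperature, then softmax into probabilities.
              scaled = [score / temperature for score in logits]
              peak = max(scaled)
              exps = [math.exp(s - peak) for s in scaled]
              total = sum(exps)
              probs = [e / total for e in exps]
              # Higher temperature flattens the distribution (more random output),
              # lower temperature sharpens it (more predictable output).
              return random.choices(range(len(logits)), weights=probs, k=1)[0]

          # Made-up logits for a 4-token vocabulary.
          logits = [2.0, 1.0, 0.5, -1.0]
          print(sample_next_token(logits, temperature=0))    # always token 0
          print(sample_next_token(logits, temperature=1.5))  # varies run to run
          ```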

          • istanbullu@lemmy.ml · 7 months ago

            Even at temperature 0 the model will not be deterministic, because the output depends on the seed used as well as things like numerical noise.
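
            As a toy illustration of the “numerical noise” part: floating-point addition is not associative, so the same sum computed in a different order (which can happen with parallel GPU reductions) can come out slightly different, and a tiny difference like that can in principle flip which token scores highest:

            ```python
            a = (0.1 + 0.2) + 0.3   # summed left to right
            b = 0.1 + (0.2 + 0.3)   # same numbers, different grouping
            print(a == b)           # False
            print(a, b)             # 0.6000000000000001 0.6
            ```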

            • Turun@feddit.de · 7 months ago (edited)

              Yeah no, that’s not how this works.

              Where in the process does that seed play a role, and what do you even mean by numerical noise?

              Edit: I feel like I should add that I am very interested in learning more. If you can provide me with any sources showing that GPTs are inherently random, I am happy to eat my own hat.