Computer pioneer Alan Turing’s 1950 remarks on the question “Can machines think?” have been misquoted, misinterpreted, and morphed into the so-called “Turing Test”. The modern version says that if you can’t tell the difference between communicating with a machine and communicating with a human, the machine is intelligent. What Turing actually said was that by the year 2000 people would be using words like “thinking” and “intelligent” to describe computers, because interacting with them would be so similar to interacting with people. Computer scientists do not sit down and say, alrighty, let’s put this new software through the Turing Test - by Grabthar’s Hammer, it passed! We’ve achieved Artificial Intelligence!

  • kromem@lemmy.world

    The problem with the experiment is that there exist sets of instructions that cannot be completed without understanding, because each iteration conditionally depends on the state produced by the previous ones.

    In that case, only agents that can actually understand the state described in the Chinese text would be able to continue successfully.

    So it’s a great experiment for the solipsism of understanding as it relates to pure functional operations, but not for functions with state-changing side effects, where future results depend on understanding the current state.
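
    Roughly, the distinction looks like this (a toy Python sketch with made-up rules and phrases, not anything from the literature): a Chinese-Room-style rulebook is a pure function from input to output, while an instruction like “repeat the third thing I told you” can only be carried out correctly by an agent that tracks the conversation state.

    ```python
    # Hypothetical illustration: a stateless rulebook vs. an instruction whose
    # correct output depends on the accumulated state of the exchange.

    # Classic Chinese-Room-style rulebook: a pure function from input to output.
    RULEBOOK = {
        "你好吗？": "我很好。",          # "How are you?" -> "I'm fine."
        "你叫什么名字？": "我叫房间。",  # "What's your name?" -> "My name is Room."
    }

    def stateless_reply(message: str) -> str:
        """Pure symbol shuffling: no memory of earlier turns is needed."""
        return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

    def stateful_reply(message: str, history: list[str]) -> str:
        """Correct only if the agent tracks what was said before, e.g.
        'repeat the third thing I told you'. A fixed per-message lookup
        cannot satisfy this without modelling the conversation state."""
        if message == "重复我告诉你的第三件事":  # "Repeat the third thing I told you."
            return history[2] if len(history) >= 3 else "你还没有告诉我三件事。"
        history.append(message)
        return stateless_reply(message)
    ```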

    There’s a pretty significant body of evidence by now that transformers can in fact ‘understand’ in this sense: interpretability research on neural network features in the sparse autoencoder (SAE) work, linear representations of world models starting with the Othello-GPT work, and the Skill-Mix work, where GPT-4 and later models combine different skills at levels of complexity that are beyond reasonable statistical chance to reach without understanding them.

    If the models were just Markov chains (where the next step depends only on the current state, never on the rest of the history), the Chinese room would be very applicable. But pretty much by definition, transformer self-attention violates the Markov property: every generated token is conditioned on the entire context.
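
    To make the Markov point concrete, here’s a minimal sketch (my own toy code: a single attention head with no learned projections, so Q = K = V): a Markov step looks only at the current token, while self-attention mixes every position of the context into each operation.

    ```python
    # Toy contrast between a Markov-chain step and a self-attention step.
    import numpy as np

    def markov_step(current_token: int, transition: np.ndarray) -> int:
        """Markov property: the next-token choice is a function of the current
        token alone; anything earlier in the history is irrelevant."""
        return int(np.argmax(transition[current_token]))

    def self_attention(X: np.ndarray) -> np.ndarray:
        """Single-head attention over the whole sequence X (seq_len x d_model):
        every output position is a weighted mix of all positions, so each step
        depends on the entire prior context, not just the latest state."""
        d = X.shape[-1]
        scores = X @ X.T / np.sqrt(d)                   # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ X                              # context-dependent output
    ```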

    TL;DR: It’s an obsolete thought experiment whose continued misapplication flies in the face of empirical evidence going back to at least early 2023.

    • Blue_Morpho@lemmy.world

      It was invalid when he originally proposed it, because it assumes a unique, mystical ability in the atoms that make up our brains. For Searle, the atoms in our brain have a quality that cannot be duplicated by other atoms, simply because they aren’t part of what he recognizes as a human being.

      It’s why he claims the machine translation system is incapable of understanding - because the claim assumes such a non-duplicable quality is possible in the first place.

      It’s self-contradictory: he won’t consider it possible because it hasn’t been shown to be possible.

    • deranger@sh.itjust.works

      The Chinese room experiment only demonstrates that the Turing test isn’t valid. It’s got nothing to do with LLMs.

      I would be curious about that significant body of research though, if you’ve got a link to some papers.

      • DragonTypeWyvern@midwest.social

        No, it doesn’t render the Turing Test invalid, because the premise of the test is not to prove that machines are intelligent but to point out that if you can’t tell the difference, you must either assume they are or risk becoming a monster.

        • CheeseNoodle@lemmy.world

          Okay, but while in casual conversation I probably couldn’t spot a really good LLM on a thread like this, on the back end that LLM is completely incapable of learning or changing in any meaningful way. It’s not quite a Chinese room, as previously mentioned, but it’s still a fixed model that can’t learn or understand context; even with infinite context memory, it could still only interact with that data within the confines of the original model.

          E.g. if I train the model to understand a spoon and a fork, it will never come up with the idea of a spork unless I retrain it to include the concept of sporks or directly tell it. Even after I tell it what a spork is, it can’t infer the properties of a spork from those of a fork or a spoon without additional leading prompts from me.

          • Blue_Morpho@lemmy.world

            “even with infinite context memory”

            Interestingly, infinite context memory is functionally identical to learning.

            It seems wildly different, but it’s the same as if you had already learned absolutely everything there is to know. There is nothing you could do or ask for which the infinite context memory doesn’t already have a stored response ready to go.
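
            A toy sketch of what I mean (hypothetical classes, not any real framework): an agent that “learns” by updating its internal memory and a frozen agent that just drags every past exchange along in an unbounded context can be made to behave identically from the outside.

            ```python
            # Hypothetical sketch: weight updates vs. unbounded context retrieval.

            class LearningAgent:
                """Updates its internal memory ("weights") after every exchange."""
                def __init__(self):
                    self.memory = {}

                def observe(self, question: str, answer: str) -> None:
                    self.memory[question] = answer

                def reply(self, question: str) -> str:
                    return self.memory.get(question, "I don't know yet.")

            class FrozenAgentWithContext:
                """Never updates anything; just conditions on everything in context."""
                def __init__(self):
                    self.context = []  # unbounded context window

                def observe(self, question: str, answer: str) -> None:
                    self.context.append((question, answer))

                def reply(self, question: str) -> str:
                    for q, a in reversed(self.context):
                        if q == question:
                            return a
                    return "I don't know yet."
            ```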

        • deranger@sh.itjust.works

          The premise of the test is to determine if machines can think. The opening line of Turing’s paper is:

          I propose to consider the question, ‘Can machines think?’

          I believe the Chinese room argument demonstrates that the Turing test is not valid for determining whether a machine has intelligence. The human in the Chinese room experiment is not thinking to generate their replies; they’re just following instructions - just like the computer. There is no comprehension of what’s being said.