Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years.
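
Stated as a rough growth law (just a restatement of that doubling trend, with $N_0$ the transistor count at a reference year $t_0$ and $t$ in years):

$$N(t) \approx N_0 \cdot 2^{(t - t_0)/2}$$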

Is there anything similar for the sophistication of AI, or AGI in particular?

  • Chrüsimüsi · 21 · 11 months ago

    Some in the AI industry have proposed concepts similar to Moore’s Law to describe the rapid growth of AI capabilities.

    Although there is no universally accepted law or principle akin to Moore’s Law for AI, people often refer to trends that describe the doubling of model sizes or capabilities over a specific time frame.

    For instance, OpenAI has previously described a trend where the amount of computing power used to train the largest AI models has been doubling roughly every 3.5 months since 2012.

    Source
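
    Just to put the two doubling rates side by side, here is a rough back-of-the-envelope sketch (the 6-year span and the resulting factors are only illustrative, not measured values):

    ```python
    # Back-of-the-envelope comparison of two doubling rates:
    # transistors doubling roughly every 2 years (Moore's law) vs.
    # training compute doubling roughly every 3.5 months (the trend above).
    # The 6-year span is an arbitrary, illustrative choice.

    def growth_factor(years: float, doubling_period_years: float) -> float:
        """Total multiplication after `years` if the quantity doubles every `doubling_period_years`."""
        return 2 ** (years / doubling_period_years)

    years = 6.0
    moore = growth_factor(years, 2.0)         # 2^3 = 8x
    compute = growth_factor(years, 3.5 / 12)  # 2^20.6 ≈ 1.6 million x

    print(f"Moore's law over {years:.0f} years:        ~{moore:,.0f}x")
    print(f"3.5-month doubling over {years:.0f} years: ~{compute:,.0f}x")
    ```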

    • Andy (OP) · 2 · 11 months ago

      Thank you!

      But does that equate to the power of AI doubling every 3.5 months?

        • ℕ𝕖𝕞𝕠 · 1 · 11 months ago

          “which completely killed the field of AI for decades”

          uh, maybe if you forgot about natural language processing

            • @Buffalox@lemmy.world · 2 · 11 months ago

              I’d say that when playing chess was the premier achievement of AI, the field was as good as dead. Playing chess proves very little, since it’s basically a task that can be solved by throwing computation at it. Investment in research had almost completely dried up for a couple of decades.

              AI development was almost completely dead, but calling it the AI winter is fine too. ;)

          • @Buffalox@lemmy.world · 1 · 11 months ago

            AI made very little progress for 40 years from the ’70s; basically just some basic pattern recognition, like OCR in the ’80s.

            Up until recently, AI development was extremely underwhelming, especially compared to what we hoped for back in the ’80s.

            Although results are pretty impressive, autonomous cars are still a hard nut to crack.

            Most impressive, IMO, are the recent LLMs (Large Language Models), but those results are very recent compared to the many decades of research that have gone into developing better AI.

            Honestly, an AI beating a human at chess is not that impressive as AI research, IMO, since it’s an extremely narrow task you can basically just throw computational power at. Still, for many years that was the most impressive AI achievement.

      • Chrüsimüsi · 6 · 11 months ago

        I guess it’s hard to measure the power of AI anyway, but I would say a strong no: it doesn’t equate to the power of AI doubling every 3.5 months 😅