• 9point6@lemmy.world

    AI models are often multiple gigabytes; tbh, that’s a good sign that it’s not “AI” marketing bullshit (less of a risk with open source projects anyway). I’m pretty wary of “AI” audio software that’s only a few megabytes.

    • interdimensionalmeme@lemmy.ml

      TensorFlow Lite models are tiny, but they’re potentially as much of an audio revolution as synthesizers were in the 70s. It’s hard to tell if that’s what we’re looking at here.

    • Neato@ttrpg.network

      Why are they that big? Is it more than code? How could you get to gigabytes of code?

      • General_Effort@lemmy.world

        Currently, AI means Artificial Neural Network (ANN). That’s only one specific approach. What an ANN boils down to is one huge system of equations.

        The file stores the parameters of these equations, organized into what math calls matrices. A parameter is simply a number by which something is multiplied. Colloquially, such a file of parameters is called an AI model.

        2 GB is probably an AI model with 1 billion parameters at 16-bit precision. Precision is how many digits you have: the more digits, the more precisely you can specify a value.

        When people talk about training an AI, they mean finding the right parameters, so that the equations compute the right thing. The bigger the model, the smarter it can be.

        Does that answer the question? It’s probably missing a lot.
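
        A quick back-of-the-envelope sketch of that size estimate (the numbers are the assumed ones from above, not from any particular model):

        ```python
        # 1 billion parameters stored at 16-bit precision.
        num_parameters = 1_000_000_000   # assumed, as above
        bytes_per_parameter = 2          # 16 bits = 2 bytes

        size_gb = num_parameters * bytes_per_parameter / 1e9
        print(f"{size_gb:.1f} GB")       # -> 2.0 GB
        ```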

          • Aatube@kbin.social
            1. Specifying weights, biases and shape definitely makes a graph.
            2. IMO having a lot of more-preferred and more-deprecated routes is quite close to a flowchart, except there are a lot more routes. The principles of how these work are quite similar.
            • General_Effort@lemmy.world
              1. There are graph neural networks (meaning NNs that work on graphs), but I don’t think that’s what is used here.

              2. I do not understand what you mean by “routes”. I suspect that you have misunderstood something fundamental.

              • Aatube@kbin.social
                1. I’m not talking about that. What are weights, biases and shape if not a graph?
                2. By routes, I mean that the paths through the graph don’t necessarily converge and that it is often more tree-like.
                • General_Effort@lemmy.world

                  You can see a neural net as a graph in that the neurons are connected nodes. I don’t believe that graph theory is very helpful, though. The weights are parameters in a system of linear equations; the numbers in a matrix/tensor. That’s not how the term is used in graph theory, AFAIK.

                  ETA: What you say about “routes” (=paths?) is something I can only make sense of if I assume that you misunderstood something. Otherwise, I simply don’t know what you’re talking about.
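
                  As a minimal sketch of what I mean by parameters in a system of linear equations (NumPy, made-up numbers):

                  ```python
                  import numpy as np

                  # One layer of an ANN is just y = Wx + b; the entries of W and b
                  # are the parameters stored in the model file. (Made-up numbers.)
                  W = np.array([[0.2, -0.5],
                                [0.7,  0.1]])   # weight matrix
                  b = np.array([0.0, 1.0])      # bias vector
                  x = np.array([1.0, 2.0])      # input

                  print(W @ x + b)              # -> [-0.8  1.9]
                  ```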

                  • Natanael@slrpnk.net

                    If you look at the nodes that are most likely to trigger for given inputs, then you can draw paths.

      • ඞmir@lemmy.ml

        They’re composed of many big matrices, whose size scales quadratically with their dimensions: a 32x32 matrix is 4x the size of a 16x16 matrix.
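
        A tiny illustration of that scaling (element counts only; actual bytes depend on precision):

        ```python
        # Element count grows with the square of the matrix width.
        for n in (16, 32, 64):
            print(f"{n}x{n} -> {n * n} elements")
        # 16x16 -> 256 elements
        # 32x32 -> 1024 elements (4x the 16x16 count)
        # 64x64 -> 4096 elements
        ```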

      • 9point6@lemmy.world

        The current wave of AI is built around Large Language Models, or LLMs. These are basically the result of a metric fuckton of calculations run over a load of input data in different ways. Given that the inputs are things like text, pictures or audio that have been distilled down into numbers, you can imagine we’re talking about a lot of data.

        (This is massively simplified, by someone who doesn’t entirely understand it themselves)
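
        A very hand-wavy sketch of the “distilled down into numbers” part (the vocabulary and values here are made up):

        ```python
        # Toy example: words map to numbers (token ids). Entirely made up.
        vocab = {"the": 0, "cat": 1, "sat": 2}
        token_ids = [vocab[word] for word in "the cat sat".split()]
        print(token_ids)   # -> [0, 1, 2]

        # In a real LLM each id then indexes a row of learned numbers in a
        # huge table; tables like that are where the gigabytes come from.
        ```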