• deweydecibel@lemmy.world · 10 months ago

    “And AI is the technology of the future, despite all the whinging and griping by commenters on the subject.”

    You have no idea, any more than the rest of us. Like, please tell me you understand “____ is the technology of the future” has been said more times than it’s ever been true.

    The idea of AI is a technology of the future, but what we have growing now is not AI, not really, and this iteration can be just as big a flop as any other technology of the moment.

    • 4am@lemm.ee · 10 months ago

      LLMs are what everyone dunks on, and “image generators are coming for our jobs! Think of artists! It’s not real art if a cheating machine does it!” is also a common cry.

      But do any of those people even know about the new class of antibiotics discovered by a neural network trained to find patterns in protein folding? Do any of them know about the diagnostic accuracy IBM Watson was able to achieve in cases of rare cancers, even when doctors didn’t see it? What about the gains in weather prediction accuracy? The novel suggestions in materials science?

      We are mimicking neural patterns, similar to the way our own minds work, to achieve pattern recognition and even extrapolate from them. And yeah, right now we’re brute forcing it, and we’re not even entirely sure how these relationships develop. It’s in its infancy, and growing fast.

      This is technology considered the holy grail of computing. We have been chasing this concept since the 1940s. There are a million sci-fi stories about it, and there were a million attempts to make it work before one really stuck.

      And now we’re at the beginning of it being practical and you think we’re just gonna go “eh it’s a wet fart like the Virtual Boy. Oh well, let’s make some new phones or something”?

      No. This is literally the technology of the future. Within your lifetime (assuming you live a reasonable while longer) there will come a point when you won’t be able to buy a CPU without some type of neural engine in it.

      And yes, people will do (and are already doing) horrific shit with it. It will fuck over a large portion of the white-collar economy; a portion that was told to go into the careers they did because they’d be safe from automation. “Get a degree and you’ll be safe!” they told us. Now they tell us “you’d better work at two different Targets to make that payment, should have studied a trade!”

      So the reason for skepticism and animosity is almost certainly the fear of being replaced; but look at how far these AI models have come in the last month alone. We’re already in “this is changing the future” territory and those things are just getting started.

      • daltotron@lemmy.world · 10 months ago

        “This is technology considered the holy grail of computing.”

        This shit is just analog computing though, right? Like, at its base, we’re just reproducing analog computation in a digital environment and then framing that in a million different ways, like we’ve been doing since the seventies. We’ve actually had this shit since the first computers, which were analog. The whole reason we moved to digital, though, is that the results were easier to break down and parse, and we had control over every step of the process to confirm it was correct, and it was going to be correct every time. A clearer sense of limitations and constraints, basically.

        Now, I’m not entirely against analog computing, right; in fact I think it can be pretty cool if we recognize it for what it is. But at the same time I can’t help but think that the level of hype around it is fucking insane, primarily because it’s not easily controllable. Not in the sense that we’re gonna somehow invent a rogue AI that will kill us all, or whatever garbage, but in the sense that, while you can get easily reproducible results (such is the nature of computation), it is very hard to control what the output of a given neural network will be. You can process loads of information extremely quickly, but, like, what use is that if I don’t know whether the solution is correct or just a kind of ballpark figure? That’s the main issue.
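
        Roughly what that “reproducible but uncontrollable” point looks like in code, as a toy sketch of my own (a made-up two-layer net with a fixed seed, not any real system): run it twice and the numbers repeat exactly, but there’s no single rule you can edit to steer them toward a specific answer.

        ```python
        import numpy as np

        # Fixed seed -> identical "random" weights every run, so the output is
        # perfectly reproducible.
        rng = np.random.default_rng(42)
        W1 = rng.normal(size=(4, 8))
        W2 = rng.normal(size=(8, 3))

        def tiny_net(x):
            """A two-layer tanh network; purely illustrative, trained on nothing."""
            return np.tanh(x @ W1) @ W2

        x = np.array([0.2, -1.0, 0.5, 0.3])
        print(tiny_net(x))  # same three numbers every time you run the script...
        print(tiny_net(x))  # ...but making them come out *differently on purpose*
                            # means adjusting all the weights via training, not
                            # editing a line of code.
        ```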

        Again, fine if we recognize it, but I don’t think we’re really close at all to just, like, randomly inventing a rogue consciousness. We’re not anywhere close to that, from what I’ve seen. We’re still barely good at image recognition and generation in an actually complicated environment, and even then it’s still pretty hard to get what it is that you specifically want, partially because the hype is driving so much development at this point, and the implementation is bunk and, again, kind of uncontrollable. Venture capital jumping down this thing’s throat has partially blocked its airway, as I see it. Still a useful technology, potentially, but a million stupid tech demos and image generators for nonsensical memes that we can flood everyone with is the dumbest shit imaginable, and even dumber than that is the number of venture capitalists I see who want to somehow monetize it.

        And so I have to ask, right: if I want a robot to sort through the different colors of little plastic beads, do I get a large language model on that, or do I just run a pretty basic and more efficient algorithm that narrows the beads down to a certain color as recorded by the camera, and that’s it? Do I want to translate a sentence with AI, or do I want to just manually run a straight word-to-word conversion that maybe changes based on a couple of passes I’m gonna run at it to check whether it contextually makes sense, with something like a Markov chain? Trick question: they are both the same approach. AI has just done it in a way where I can apply a kind of broader paintbrush to the thing and get my results a little faster and with a little less thought, even if I have less control over it.
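
        For the bead example, the non-AI version really is just a handful of lines, something like this sketch (the reference colors and the camera reading are made up, just to show the shape of it):

        ```python
        # Bin each bead by whichever reference color its camera reading is closest to.
        # No model, no training data, and every decision can be traced by hand.
        REFERENCE_COLORS = {
            "red":   (200, 30, 30),
            "green": (30, 180, 40),
            "blue":  (40, 50, 190),
        }

        def classify_bead(rgb):
            """Return the name of the reference color nearest to an RGB reading."""
            def dist(a, b):
                return sum((x - y) ** 2 for x, y in zip(a, b))
            return min(REFERENCE_COLORS, key=lambda name: dist(rgb, REFERENCE_COLORS[name]))

        # A slightly washed-out red bead still lands in the "red" bin:
        print(classify_bead((180, 60, 55)))  # -> red
        ```

        No weights, no cloud, and you can say exactly why any given bead ended up where it did.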

      • SuperSpruce@lemmy.zip · 10 months ago (edited)

        Here’s one of the big issues: basically none of this AI is even happening on your CPU; it’s happening in the cloud.

        And that wouldn’t be an issue if companies stopped shoving “AI” into everything not originally built for AI.

        And even that wouldn’t be as big of an issue if the companies talked about the benefits of the new tech instead of just going “AI!!!1!!!” *drops mic*