Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines whenever an AI topic comes up. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.

TL;DR: A lot of Luddite sentiment around AI in the Linux community.

  • FQQD@lemmy.ohaa.xyz · +242/−1 · edited · 5 months ago

    I don’t think the community is generally against AI; there are plenty of FOSS projects. They just don’t like cash grabs, enshittification, and sending personal data to someone else’s computer.

    • anamethatisnt@lemmy.world · +57 · 5 months ago

      sending personal data to someone else’s computer.

      I think this is spot on. LLMs are exciting, but I’m not going to give the huge corporations my data, nor anyone else for that matter.

    • FatCat@lemmy.worldOP · +8/−81 · 5 months ago

      I don’t see anyone calling for cash grabs or privacy-destroying features to be added to GNOME or other projects, so I don’t see why that would be an issue. 🙂

      On-device FOSS models to help you with various tasks.

      • wewbull@feddit.uk · +46/−2 · 5 months ago

        You are, if you’re calling for Apple-like features.

        You might argue that “private cloud” is privacy preserving, but you can only implement that with Apple-level cash. I would also argue that anything leaving my machine for a bunch of servers I don’t control, without my knowledge, is NOT preserving my privacy.

        • Auli@lemmy.ca · +9 · 5 months ago

          I’m waiting for the moment the story breaks that ChatGPT didn’t do what Apple asked.

        • FatCat@lemmy.worldOP · +6/−39 · edited · 5 months ago

          You might argue that “private cloud” is privacy preserving

          I don’t know since when “on device” means sending it to a server. Come up with more straw men I didn’t mention for you to defeat.

          • MentalEdge@sopuli.xyz · +29/−1 · edited · 5 months ago

            Apple’s “private cloud” is a thing. Not all “Apple Intelligence” features are “on device”, some can and do utilize cloud-based processing power, and this will also be available to app developers.

            Apparently this has additional safeguards vs the “normal cloud”, which is why they are branding it “private cloud”.

            But it’s still “someone else’s computer”, and Apple is not keeping their AI implementation 100% on device.

      • technocrit@lemmy.dbzer0.com · +16/−2 · edited · 5 months ago

        On device Foss models to help you with various tasks.

        Thankfully I really really don’t need an “AI” to use my desktop. I don’t want that kind of BS bloat either. But go ahead and install whatever you want on your machine.

        • umami_wasabi@lemmy.ml · +9 · 5 months ago

          It is quite a bloat. Llama3 7B is 4.7 GB by itself, not counting all the dependencies and drivers; this can easily take 10+ GB of the drive. My Ollama setup already takes about 30 GB. For a single application (excluding games like COD that take up 300 GB), that is huge, almost the size of a clean OS install.
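If you want to see what a local setup actually occupies on your own drive, here is a minimal sketch (the `~/.ollama/models` path in the usage note is Ollama's default location on Linux and is an assumption; adjust for your install):

```python
import os

def dir_size_gb(path: str) -> float:
    """Sum the sizes of every regular file under `path`, in GiB."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):  # skip dangling symlinks
                total += os.path.getsize(full)
    return total / 1024**3
```

For example, `dir_size_gb(os.path.expanduser("~/.ollama/models"))` reports the weights alone, before counting dependencies and drivers.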

      • PrivateNoob@sopuli.xyz · +13 · 5 months ago

        FQQD probably refers to companies such as MS, Apple, Google, Adobe, etc. since they usually incorporate AI into everything.

  • DudeImMacGyver@sh.itjust.works · +103/−19 · 5 months ago

    Reminder that we don’t even have AI yet, just machine-learning models, which are not the same thing despite the wide misuse of the term AI.

    • knatschus@discuss.tchncs.de · +23/−8 · 5 months ago

      Have you mentioned that in gaming forums as well when they talked about AI?

      AI is a broad term and can mean many different things; it does not need to mean ‘true’ AI.

    • Lojcs@lemm.ee · +14/−5 · 5 months ago

      But ML is a type of AI. Just because the word makes you think of androids and Skynet doesn’t mean that’s the only thing that can be called that. Personally, I’ve never understood this attempt to limit the word to that meaning now, when AI has been used for lesser computer intelligences for a long time.

        • NoiseColor@startrek.website · +2/−8 · 5 months ago

          Well, not at all. What a word means is not defined by what you might think. When the majority starts to use a word for something and that sticks, it can be adopted. That happens all the time, and I have read articles about it many times, even for our current predicament. Language is evolving. Meanings change. And yes, AI today includes what is technically machine learning. Sorry friend, that’s how it works. Sure, you can be the grumpy drunk at the bar complaining that this is not strictly AI by some definition, while the rest of the world rolls their eyes and proceeds to more meaningful debates.

          • DudeImMacGyver@sh.itjust.works · +4/−2 · 5 months ago

            Words have meaning and, sure, they can be abused and change meaning over time, but let’s be real here: AI is a hype term with no basis in reality. We do not have AI; we aren’t even all that close. You can make all the ad hominem comments you want, but at the end of the day the terminology comes from ignorant figureheads hyping shit up for profit (at great environmental cost too; LLMs aka “AI” take up a lot of power while yielding questionable results).

            Kinda sounds like you bought into the hype, friend.

            • NoiseColor@startrek.website · +2/−8 · 5 months ago

              You missed the point again, oh dear! Let me try again in simpler terms: you yourself don’t define words; how they are used by the public does. So if the world calls it AI, then the word will mean what everybody means when they use it.

              This is how words come to be, evolve, and end up in the dictionary. Nobody cares what you think. AI today includes ML. Get over it.

              Nice try with the deflection attempts, but I really don’t care about them. I’m only here to teach you where words come from and to tell you that the article is written about you.

              Also, I’m out of time for this. Bye.

        • FatCat@lemmy.worldOP · +6/−26 · 5 months ago

          It’s an interesting discussion, but I disagree that you have a clear-cut fact.

          Just because it’s a computer writing things with math, why do you say it is not intelligence? It would be helpful if you could be more detailed here.

    • FatCat@lemmy.worldOP · +21/−55 · 5 months ago

      That’s just nitpicking. Everyone here knows what we mean by AI. Yes, it refers to LLMs.

      Reminds me of Richard Stallman always interjecting to say “actually it’s GNU/Linux, or as I like to say, GNU plus Linux”…

      Well no, Mr. Stallman, it’s actually GNU + Linux + Wayland + systemd + Chromium and whatever other software you have installed. Are you happy now?

      • Inevitable Waffles [Ohio]@midwest.social · +42/−3 · 5 months ago

        As someone who frequently interacts with the tech-illiterate: no, they don’t. This sudden rush to put weighted text-hallucination tables into everything isn’t that helpful. The hype feels like self-driving cars or 3D TVs to those of us old enough to remember them. The potential for damage is much higher than either of those two preceding fads, and cars actually killed people. I think many of us are expressing a healthy level of skepticism toward the people who need to sell us the next big thing, and it is absolutely warranted.

        • Auli@lemmy.ca · +5/−1 · 5 months ago

          It’s exactly like self-driving: everyone says this is the time we’re going to get AGI. But it will be like everything else: overhyped and under-delivered. Sure, it will have its uses, companies will replace people with it, and the enshittification will continue.

        • FatCat@lemmy.worldOP · +4/−32 · 5 months ago

          The potential for damage is much higher

          Doubt it. Maybe Microsoft can fuck it up somehow but the tech is here to stay and will do massive good.

          • Inevitable Waffles [Ohio]@midwest.social · +19/−2 · 5 months ago

            You can doubt all you like, but we keep seeing the training data leaking out with passwords and personal information. This problem won’t be solved by the people who created it, since they don’t care, and fundamentally the technology will always reflect that lack of care. FOSS ones may do better in this regard, but they are still datasets without context. That’s the crux of the issue. The program or LLM has no context for what it says. That’s why you get these nonsensical responses telling people that killing themselves is a valid treatment for a toothache. Intelligence is understanding. The “AI” or LLM or, as I like to call them, glorified predictive text bars, doesn’t understand the words it is stringing together, and most people don’t know that due to flowery marketing language and hype. The threat is real.

            • Auli@lemmy.ca · +8/−1 · 5 months ago

              Not to mention the hallucinations. What a great marketing term for “it’s fucking wrong.”

              • Inevitable Waffles [Ohio]@midwest.social · +3 · 5 months ago

                They act like it’s the computer daydreaming. No, it’s wrong. The machine that is supposed to provide me correct information didn’t do it. These marketing wizards are selling snake oil in such a lovely bottle these days.

      • ElectricMachman@lemmy.sdf.org · +30/−8 · 5 months ago

        So when we actually do have AI, what are we supposed to call it? The current use of the term “AI” is too ambiguous to be of any use.

        • jacobc436@lemmy.ml · +18 · 5 months ago

          Nothing was ever wrong with calling them “virtual assistants” - at least with them you’re conditioned to have a low bar of expectations. So if it performs past expectations, you’ll be excited, lol.

        • HumanPenguin@feddit.uk · +13/−3 · 5 months ago

          Honestly, what we have now is AI, as in: it is not intelligent, it just tries to mimic intelligence.

          Digital Intelligence, if we ever achieve it, would be a more accurate name.

          • MudMan@fedia.io · +16/−2 · 5 months ago

            Look, the naming ship has sailed and sunk somewhere in the middle of the ocean. I think it’s time to accept that “AI” just means “generative model” and what we would have called “AI” is now more narrowly “AGI”.

            People call videogame enemies “AI”, too, and it’s not the end of the world, it’s just imprecise.

          • FatCat@lemmy.worldOP · +5/−19 · edited · 5 months ago

            This is a bit philosophical, but who is to say that mimicking intelligence with advanced math is not intelligence? LLMs can perform various thinking tasks better than humans we consider intelligent.

        • breadsmasher@lemmy.world · +9/−1 · 5 months ago

          What AI means will change; what it refers to will change. Currently, LLMs and other technologies are referred to as AI, like you say. In five years’ time we will have made huge leaps, and likely the result will be technology that is also called AI.

          In a similar vein, hover boards are still known as exactly that, like in films, whereas the “real” hover board that exists has wheels. We didn’t stop calling the other ones hover boards, and if we ever get real ones they will likely also be called hoverboards.

          • snooggums@midwest.social · +9/−1 · 5 months ago

            Whereas the “real” hover board that exists has wheels.

            Hovercraft have existed for decades and actually hover, which makes everyone just accepting hoverboards as wheeled infuriating.

      • breadsmasher@lemmy.world · +7 · 5 months ago

        To be 🤓 really, really nitpicky (and I’m writing this because I find it interesting, not as an attack or whatever), a tongue-in-cheek AcHtUaLlY 🤓:

        GNU/Linux is the “whole operating system”, and everything else is extra. The usefulness of an operating system without applications is debatable but they 🤓 technically aren’t required to complete the definition of an operating system.

        But this is also basically the debate of Linux vs GNU/Linux vs also needing applications to make a useful operating system.

        Quoting wiki summary,

        In its original meaning, and one still common in hardware engineering, the operating system is a basic set of functions to control the hardware and manage things like task scheduling and system calls. In modern terminology used by software developers, the collection of these functions is usually referred to as a kernel, while an ‘operating system’ is expected to have a more extensive set of programmes. The GNU project maintains two kernels itself, allowing the creation of pure GNU operating systems, but the GNU toolchain is also used with non-GNU kernels. Due to the two different definitions of the term ‘operating system’, there is an ongoing debate concerning the naming of distributions of GNU packages with a non-GNU kernel.

        https://en.wikipedia.org/wiki/GNU?wprov=sfti1#GNU_as_an_operating_system

        • FatCat@lemmy.worldOP · +3/−11 · 5 months ago

          Don’t tell me Linux Mint would still be Linux Mint without a desktop environment like Cinnamon. An OS is the collection of all the software, not just the low-level code.

          • breadsmasher@lemmy.world · +4 · edited · 5 months ago

            Well that’s the debate! Is it “GNU/Linux Mint”? What about the desktop environment, “GNU/Linux Mint Cinnamon”?

            ed.

            Don’t tell me …

            Absolutely not telling you - just reiterating the ongoing debate

  • zerakith@lemmy.ml · +85/−3 · 5 months ago

    I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that “AI” as a term is basically marketing, meant to convince people these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, displacing other, less intensive ones. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine-learning techniques, but they are being drowned out by a hype train determined to make the general public think we have, or are near, Data from Star Trek.

    Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn’t theirs to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model which may, in certain circumstances, spit that data out verbatim.

    Lastly, since you use the term Luddite: it’s worth actually engaging with what that movement was about. While it’s pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices embodied in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.

    • AnarchoSnowPlow@midwest.social · +29/−1 · 5 months ago

      It’s a surprisingly good comparison especially when you look at the reactions: frame breaking vs data poisoning.

      The problem isn’t progress; the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at they’ve been doing for years (language translation); the rest is so inexact it can’t be trusted.

      I can’t trust any LLM-generated code because it lies about what it’s doing, so I need to verify everything it generates anyway, in which case it’s easier to write it myself. I keep trying it, and it looks impressive until it ends up as a far worse version of something I could have already written.

      I assume it’s the same with everything I’m not an expert in, in which case it’s worse than useless to me; I can’t trust anything it says.

      The only thing I can use it for is to tell me things I already know, which basically makes it a toy or a game.

      That’s not even getting into the security implications of giving shitty software access to all your sensitive data, etc.

      • aksdb@lemmy.world · +6 · 5 months ago

        If you are so keen on correctness, please don’t say “LLMs are lying.” Lying is a conscious act of deception, and LLMs are not capable of that. That’s exactly the problem: they don’t think, they just assemble with probability. If they could lie, they could also produce real answers.

    • BrianTheeBiscuiteer@lemmy.world · +9/−1 · 5 months ago

      I’ve never heard anyone explicitly say this, but I’m sure a lot of people (i.e., management) think that AI is a replacement for static code. If you have a component with constantly changing requirements, it can make sense, but don’t ask an LLM to perform a process that’s done every single day in the exact same way. Chief among my AI concerns is the amount of energy it uses. It feels like we could mostly wean off carbon-emitting fuels in 50 years, but if energy demand skyrockets, we will be pushing those dates back by decades.

      • someacnt_@lemmy.world · +4 · 5 months ago

        My concern with AI is also its energy usage. There’s a reason OpenAI has tons of datacenters, yet people think it doesn’t take much because it’s “free”!

    • FatCat@lemmy.worldOP · +2/−53 · 5 months ago

      Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.

      Screaming at a train moving at 200 km/h, hoping it will stop.

      • Telorand@reddthat.com · +35 · 5 months ago

        But that doesn’t mean pushback is doomed to fail this time. “It happened once, therefore it follows that it will happen again” is confirmation bias.

        Also, it’s not just screaming at a train. There’s actual litigation right now (and potential litigation) from some big names to rein in the capitalists exploiting the lack of regulation around LLMs. Each suit is not necessarily for a “Luddite” purpose, but collectively, the results may effectively achieve the same thing.

        • FatCat@lemmy.worldOP · +2/−28 · edited · 5 months ago

          “It happened once, therefore it follows that it will happen again” is confirmation bias

          You’re right, but realistically it will fail. The voices speaking against it are few and largely marginalised, with no money or power. There will probably be regulations, but it will not go away.

          • Telorand@reddthat.com · +6/−1 · 5 months ago

            Right, but like I said, there are several lawsuits (and threatened lawsuits) right now that might achieve the same goals as those speaking against how it’s currently used.

            I don’t think anyone here is arguing for LLMs to go away completely; they just want to be compensated fairly for their work (or else to restrict the use of said work).

        • FatCat@lemmy.worldOP · +3/−26 · 5 months ago

          Work on useful alternatives to big corpo crapware = lick the boot?

          Mkay…

          • kronisk @lemmy.world · +14/−1 · 5 months ago

            It was more in response to your comments. I don’t think anyone has a problem with useful FOSS alternatives per se.

  • Killing_Spark@feddit.de · +54/−2 · 5 months ago

    I think the biggest problem is that AI, for now, is not an exact tool that gets everything right, because that’s just not what it is built to do. That goes against much of the philosophy of most tools you’d find on your Linux PC.

    Secondly, many people who choose Linux or another FOSS operating system do so, at least partially, to stay in control of their system, which includes knowing why stuff happens and being able to fix it. Again, that is just not what AI can currently deliver, and it’s unlikely it ever will.

    So I see why people just choose to ignore the whole thing altogether.

    • callcc@lemmy.world · +14/−4 · 5 months ago

      This, and on top of being inexact, it’s neither understandable nor transparent. Those are two of the top reasons to push for free software. Even if the engine executing and training the model is free, the model itself can’t really be considered free because of its lack of transparency.

      • MudMan@fedia.io · +9/−1 · 5 months ago

        That is a stretch. If you try to download and host a local model, which is fairly easy to do these days, the text input and output may be semi-random, but you definitely have control over how to plug it into any other software.

        I, for one, think that fuzzy, imprecise outputs have lots of valid uses. I don’t use LLMs to search for factual data, but they’re great to remind you of names of things you know but have forgotten, or provide verifiable context to things you have heard but don’t fully understand. That type of stuff.

        I think the AI shills have done a great disservice by presenting this stuff as a search killer or a human replacement for tasks, which it is not, but there’s a difference between not being the next Google and being useless. So no, Apple and MS, I don’t want it monitoring everything I do at all times and becoming my primary interface… but I don’t mind a little search window where I can go “hey, what was that movie from the 50s about the two old ladies that were serial killers? Was that Cary Grant or Jimmy Stewart?”.
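For what it’s worth, plugging a locally hosted model into other software can be a single HTTP call. A hedged sketch against Ollama’s local REST API (the `/api/generate` endpoint and payload shape below match Ollama’s documented API; the `llama3` model name is an assumption, substitute whatever you have pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Request body for a single, non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model; nothing leaves this machine."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

So the “little search window” use case is literally `ask_local_model("What was that 1950s movie about the two old ladies who were serial killers?")`, with the answer generated entirely on your own hardware.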

        • callcc@lemmy.world · +2 · 5 months ago

          I’m not against probabilistic models and the like. I’m merely trying to capture part of the reason they are not always well received in the FOSS community.

          I use LLMs regularly, and nothing rivals them in many use cases.

    • snooggums@midwest.social · +9/−1 · 5 months ago

      I think the biggest problem is that ai for now is not an exact tool that gets everything right.

      The biggest problem is that it isn’t an exact tool, but is being presented as if it was and implemented as a replacement for people instead of a tool they can use to make themselves more efficient.

    • zingo@lemmy.ca · +3/−2 · 5 months ago

      Yeah, sure don’t want Skynet built-in on my Linux Distro.

    • FatCat@lemmy.worldOP · +6/−32 · 5 months ago

      Good point about the imprecision. On the other hand, most Linux desktop users are normies; think Steam Deck and so on.

      Some of the most popular Linux desktops are built for ordinary people with the KISS principle in mind, not Arch-using tinkerers.

      • Killing_Spark@feddit.de · +9 · 5 months ago

        I’m not saying nobody should work on this. There is obviously demand, or at least big tech is assuming demand. I’m just saying it’s not surprising to me that a lot of FOSS developers don’t really care.

      • someacnt_@lemmy.world · +3 · 5 months ago

        I used Ubuntu until a few weeks ago, when I switched to Pop!_OS. In this sense, I might be close to the “normies”. Yet I am incredibly skeptical of AI.

        It’s distinct.

      • hydroptic@sopuli.xyz · +8/−21 · 5 months ago

        On the other hand most Linux desktop users are Normie’s, think Steam deck and so on.

        Jesus fuck what a statement. Your parents probably regret having you.

  • kbal@fedia.io · +43/−2 · 5 months ago

    One of the main things that turns people off when the topic of “AI” comes up is the absolutely ridiculous level of hype it gets. For instance, people claiming that current LLMs are a revolution comparable to the invention of the printing press, and that they have such immense potential that if you don’t cram them into every product you can all your software will soon be obsolete.

    • FatCat@lemmy.worldOP · +4/−28 · 5 months ago

      Thanks for the history lesson. These days it is used to refer to those opposed to industrialisation, automation, computerisation, new technologies, or even progress in general.

      • Zeoic@lemmy.world · +30/−1 · 5 months ago

        These days, it is often misused by ignorant people because it sounds derogatory.

        FTFY

        • trevor@lemmy.blahaj.zone · +11 · 5 months ago

          Seriously. The Luddites were mostly correct in their objections to technology being used to replace humans and make exploitation more efficient, which makes OP’s misuse of the term that much funnier.

        • sping@lemmy.sdf.org · +8/−1 · 5 months ago

          But our ignorant misconceptions are ubiquitous so they have become truth!

  • Spectacle8011@lemmy.comfysnug.space · +40/−1 · 5 months ago

    Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart house is Bluetooth-enabled and I can give it voice commands via Alexa! I love the future!

    Programmers / Engineers: The most recent piece of technology I own is a printer from 2004, and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.

  • Ramin Honary@lemmy.ml
    link
    fedilink
    English
    arrow-up
    36
    arrow-down
    1
    ·
    edit-2
    5 months ago

    No, it is because people in the Linux community are usually a bit more tech-savvy than average and are aware that OpenAI/Microsoft is very likely breaking the law in how they collect data for training their AI.

    We have seen that companies like OpenAI completely disregard the rights of the people who created the data they use in their for-profit LLMs (like what they did to Scarlett Johansson), including their rights to control whether their code/documentation/artwork is used in for-profit ventures. This is especially true when they steal Creative Commons “Share Alike” licensed documentation, or GPL licensed code which can only be used if the code that reuses it is made public, which OpenAI and Microsoft do not do.

    So OpenAI has deliberately conflated LLM technology with general intelligence (AGI) in order to hype their products, and so now their possibly illegal actions are also being associated with all AI. The anger toward AI is not directed at the technology itself, it is directed at companies like OpenAI who have tried to make their shitty brand synonymous with the technology.

    And I haven’t even yet mentioned:

    • how people are getting fired by companies who are replacing them with AI
    • or how it has been used to target civilians in war zones
    • or how deep fakes are being used to scam vulnerable people.

    The technology could be used for good, especially in the Linux community, but lately there has been a surge of unethical (and sometimes outright criminal) uses of AI by some of the world’s wealthiest companies.

  • DigDoug@lemmy.world
    link
    fedilink
    English
    arrow-up
    34
    ·
    5 months ago

    …this looks like it was written by a supervisor who has no idea what AI actually is, but desperately wants it shoehorned into the next project because it’s the latest buzzword.

  • umami_wasabi@lemmy.ml
    link
    fedilink
    arrow-up
    34
    arrow-down
    1
    ·
    edit-2
    5 months ago

    Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

    I don’t get it. How would Linux desktops become obsolete if they don’t have native AI toolsets in their DEs? It’s not like they have an 80% market share. People who run them as daily drivers are still a niche, and most people don’t even know Linux exists. Most people grew up with Microsoft and Apple shoving ads down their throats, using those systems first hand in school, and that’s all they know and were taught. If I need AI, I will find ways to integrate it into my workflow, not have it forced on me because the dev thinks I need it.

    And if you really need something like MS’s Recall, here is a FOSS version of it.

    • SuperSpruce@lemmy.zip
      link
      fedilink
      arrow-up
      6
      ·
      5 months ago

      Is OpenRecall secure as well? One of my biggest problems with MS Recall is that it stores all your personal info in plain text.

      • callcc@lemmy.world
        link
        fedilink
        arrow-up
        9
        ·
        5 months ago

        A FLOSS project’s success is not necessarily marked by its market share but often by the absolute benefit it gives to its users. A project with one happy user and developer can be a success.

  • lemmyvore@feddit.nl
    link
    fedilink
    English
    arrow-up
    33
    arrow-down
    1
    ·
    edit-2
    5 months ago

    You can’t do machine learning without tons of data and processing power.

    Commercial “AI” has been built on fucking over everything that moves, on both counts. They suck power at alarming rates, especially given the state of the climate, and they blatantly ignore copyright and privacy.

    FOSS tends to be based on a philosophy that’s strongly opposed to at least some of these methods. To start with, FOSS is built around respecting copyright, yet Microsoft is currently stealing GitHub code, anonymizing it, and offering it under their Copilot product, while explicitly promising companies who buy Copilot that they will insulate them from any legal fallout.

    So yeah, some people in the “Linux space” are a bit annoyed about these things, to put it mildly.

    Edit: but, to address your concerns, there’s nothing to be gained by rushing head-first into new technology. FOSS stands to gain nothing from early adoption. FOSS is a cultural movement not a commercial entity. When and if the technology will be practical and widely available it will be incorporated into FOSS. If it won’t be practical or will be proprietary, it won’t. There’s nothing personal about that.

  • chronicledmonocle@lemmy.world
    link
    fedilink
    arrow-up
    34
    arrow-down
    3
    ·
    5 months ago

    I’m not against AI. I’m against the hordes of privacy-disrespecting data collection, the fact that everybody is irresponsibly rushing to slap AI into everything even when it doesn’t make sense because line go up, and the fact that nobody is taking the limitations of things like Large Language Models seriously.

    The current AI craze is like the NFTs craze in a lot of ways, but more useful and not going to just disappear. In a year or three the crazed C-level idiots chasing the next magic dragon will settle down, the technology will settle into the places where it’s actually useful, and investors will stop throwing all the cash at any mention of AI with zero skepticism.

    It’s not Luddite to be skeptical of the hot new craze. It’s prudent as long as you don’t let yourself slip into regressive thinking.

    • Handles@leminal.space
      link
      fedilink
      English
      arrow-up
      15
      arrow-down
      2
      ·
      5 months ago

      Completely agree and I’ll do you one better:

      What is being sold as AI doesn’t hold a candle to actual artificial intelligence: they’re error-prone statistical engines incapable of delivering more than the illusion of intelligence. The only reason they were launched to the public is that corporations were anxious not to be the last on the market — whether their product was ready or not.

      I’m happy to be a Luddite if it means having the capacity for critical thought to Just Not Use Imperfect Crapware™.

  • electric_nan@lemmy.ml
    link
    fedilink
    arrow-up
    28
    ·
    5 months ago

    There are already a lot of open models and tools out there. I totally disagree that Linux distros or DEs should be looking to bake in AI features. People can run an LLM on their computer just like they run any other application.

    • Womble@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      5 months ago

      It doesn’t though; local models would be at the core of FOSS AI, and they don’t require you to trust anyone with your data.

      • technocrit@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        3
        arrow-down
        2
        ·
        edit-2
        5 months ago

        local models would be at the core of FOSS AI, and they dont require you to trust anyone with your data.

        Would? You’re slipping between imaginary and apparently declarative statements. Very typical of “AI” hype.

        • Womble@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          5 months ago

          Local models WOULD form the basis of FOSS AI. Supposition on my part, but entirely supportable given that there is already an open source model movement focused on producing local models, and open source software is generally privacy focused.

          Local models ARE inherently private, since no information leaves the device they run on.

          I know you don’t want to engage with arguments and instead just wail at the latest daemon for internet points, but you can have more than one statement in a sentence without being incoherent.

  • Rozaŭtuno@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    29
    arrow-down
    3
    ·
    5 months ago

    I get that AI has many problems but at the same time the potential it has is immense, especially as an assistant on personal computers

    [Citation needed]

    Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.

    And this mentality is exactly what AI sceptics criticise. The whole reason the AI arms race is going on is that every company/organisation seems convinced that sci-fi-like AI is right around the corner, and that the first one to get it will capture 100% of the market in their walled garden while everyone else fades into obscurity. They’re all so obsessed with this that they don’t see a problem with putting a virtual dumbass that is constantly wrong in charge.