• Xerxos@lemmy.ml · 17 hours ago

    I don’t get how so many people carry their computer illiteracy as a badge of honor.

    ChatGPT is useful.

    Is it as useful as the tech evangelists claim? No. Not yet, and perhaps it never will be.

    But I sure do love letting it write my emails to people I don’t care for but don’t want to anger with my default three-word replies.

    It’s a tool to save time. Use it or pay with your time if you willfully ignore it.

    • rtxn@lemmy.world · 15 hours ago

      Tech illiteracy. Strong words.

      I’m a sysadmin at the IT faculty of a university. I have a front-row seat to witness the pervasive mental decline that chatbots are causing. I have remote access to all lab computers, and I see students copy-paste the exercise questions into a chatbot and paste the output back. Some are unwilling to write a single line of code by themselves. One of the network/cybersecurity teachers is a friend; he saw attendance drop by half after he revealed he’d block access to chatbots during exams. Even the dean, who was elected because of his progressive views on machine learning, laments new students’ unwillingness to learn. It’s actual tech illiteracy.

      I’ve sworn off all things AI because I strongly believe that its current state is a detriment to society at large. If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly. I will learn every skill that I need, without depending on AI. If you think that makes me an old man yelling at clouds, I have no kind words in response.

      • Neuromancer49@midwest.social · 13 hours ago

        x 1000. ChatGPT came out between the time I started and finished grad school. The difference between the students I TA’d at the beginning and at the end of my career is mind-melting. Some of this has to do with COVID losses, though.

        But we shouldn’t just call out the students. There are professors who are writing fucking grants and papers with it. Can it be done well? Yes. But the number of papers talking about Vegetative Electron Microscopy, or introductions whose first sentence reads “As a language model, I do not have opinions about the history of particle models,” or completely nonsensical graphics generated by spicy Photoshop, is baffling.

        Some days it feels like LLMs are going to burn down the world. I have a hard time being optimistic about them, but even the ancient Greeks complained about writing. It just feels different this time, ya know?

        ETA: Just as much of the onus is on grant reviewers and journal editors for uncritically accepting slop into their publications and awarding money to poorly written grants.

      • grrgyle@slrpnk.net · 12 hours ago

        Speaking of being old: just as there are noticeable differences between people who grew up before and after ready internet access, I think there will be a similar divide between people who did their learning before and after LLMs.

        Even if you don’t use them directly, there’s so much more useless slop than there used to be online. I’ll make it five minutes into a how-to article before realizing it doesn’t actually make any sense when you look at the whole thing, let alone have anything interesting or useful to say.

      • Tar_Alcaran@sh.itjust.works · 13 hours ago

        If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly.

        I grew up, mostly, in the time of digital search, but far enough back that they still resembled the old card-catalog system. Looking for information was a process that you had to follow, and the mere act of doing that process was educational and helped order your thoughts and memory. When it’s physically impossible to look for two keywords at the same time, you need to use your brain or you won’t get an answer.

        And while it’s absolutely amazing that I can now just type in a random question and get an answer, or at least a link to some place that might have the answer, this is a real problem in how people learn to mentally process information.

        A true expert can explain things in simple terms, not because they learned them in simple terms or think about them in simple terms, but because they have the ability to rephrase and reorder information on the fly to fit into a simplified model of the complex system they have in their mind. That’s an extremely important skill, and it’s getting more and more rare.

        If you want to test this, ask people for an analogy. If you can’t make an analogy, you don’t truly understand the subject (unless the subject involves subatomic particles, relativity, or topology, where using words to talk about it at all is already basically an analogy).

      • Xerxos@lemmy.ml · 15 hours ago

        Hearing of it, never trying it, and then bragging about that on social media is different from trying it and deciding it’s not worth it / more trouble than it’s worth.

        Do I see it as detrimental to education? Definitely, especially since teachers are not prepared for it.

        • rtxn@lemmy.world · 14 hours ago

          I haven’t tried it either. Not even as a joke. I didn’t need to. I’ve seen its effects and came to a conclusion: I will reject AI and whatever convenience it might bring in order to improve my own organic skills.

          • CarbonatedPastaSauce@lemmy.world · 14 hours ago

            It’s not a terrible tool if you already have critical thinking skills and can analyze the output and reject the nonsense. I consider it an ‘idea’ machine: it was sometimes helpful when coding to give me a new idea, but I never used what it spat out, because it writes nonsensical code far too frequently to be trusted. The problem is that if you don’t already know what you’re doing, you don’t have the skills for that critical analysis, so it turns into a self-defeating feedback loop. That’s what we aren’t ready for, because our public education has been so abysmal for the last… forever.

            • Azzu@lemm.ee · 13 hours ago

              But if you can analyze the content and reject the nonsense, then you didn’t need it in the first place, because you already knew enough about the topic.

              And when you’re using it for things you don’t know enough about, that’s where you can’t tell the nonsense! You will say to yourself, because you noticed nonsense before, that “you can tell”, but you won’t actually be able to, because you’re going from known-unknown into unknown-unknown territory. You won’t even notice the nonsense because you don’t know what nonsense could even be there.

              Large language models are just that: they generate language without any sense behind it. If you use one for anything that requires reasoning, you’re using it wrong.

              Literally the only thing LLMs are good for is shit like “please reword this like that” or “please write an ad text praising these and these features of a product”: stuff that is about language, and that’s it.

              • CarbonatedPastaSauce@lemmy.world · 13 hours ago

                I’m certainly biased about their usefulness, because all I’ve ever used them for was to get coding ideas when I had a thorny problem. They were good for suggesting a direction of thought on a function or process I hadn’t considered, but there was so much garbage in the actual code that I would never use it; it just pointed me in the right direction to go write my own. So it’s not that I ‘needed’ it, but it did save me some time on a few occasions when I was working on a difficult programming issue. Certainly not earth-shattering, but it has been useful to me a few times in that regard.

                I don’t even like to talk very much about the fact that I found it slightly useful at work once in a while, because I’m an anti-LLM person, at least in the way they are being promoted. I’m very unhappy with the blind trust so many people and companies put in them, and I think it’s causing real harm.

    • essell@lemmy.world · 17 hours ago

      As an older techy I’m with you on this, having seen this ridiculous fight so many times.

      Whenever a new tech comes out and gets big attention, you have the tech companies overhyping it, saying everyone has to have it.

      And you have the proud Luddites who talk like everyone else is dumb and they’re the only ones capable of seeing the downsides of the tech.

      “Buy an iPhone, it’ll Change your life!”

      “Why do I need to do anything except phone people and the battery only lasts one day! It’ll never catch on”

      “Buy a Satnav, it’ll get you anywhere!”

      “That satnav drove a woman into a lake!”

      “Our AI is smart enough to run the world!”

      “This is just a way to steal my words like that guy who invented cameras to steal people’s souls!”

      🫤

      Tech was never meant to do your thinking for you. It’s a tool. Learn how to use it or don’t, but if you use tools right, 10,000 years of human history says that’s helpful.

      • Tar_Alcaran@sh.itjust.works · 13 hours ago

        The thing is, some “tech” is just fucking dumb, and should have never been done. Here are just a few small examples:

        “Get connected to the city gas factory, you can have gaslamps indoors and never have to bother with oil again!”
        “Lets bulldoze those houses to let people drive through the middle of our city”
        “In the future we’ll all have vacuum tubes in our homes to send and deliver mail”
        “Airships are the future of transatlantic travel”
        “Blockchain will revolutionize everything!”
        “People can use our rockets to travel across the ocean”
        “Roads are a great place to put solar panels”
        “LLMs are a great way of making things”

        • essell@lemmy.world · 13 hours ago

          There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries.

          Acknowledging our debt to the former, we yearn nonetheless for the latter.

           -- Academician Prokhor Zakharov
          
          • Tar_Alcaran@sh.itjust.works · 13 hours ago

            Always upvote Alpha Centauri!

            EDIT: And in a slightly more content-related answer: I picked those examples because there’s a range of reasons why these things were stupid. Some turned out to be stupid afterwards, like building highly polluting gasworks in the middle of cities, or airships. Some were always stupid, even in their very principles, like using rockets for air travel, solar-panel roads, or blockchain.

            LLMs are definitely in the latter category. Like solar roadways, blockchain, or commute-by-rocket, the “solution” just doesn’t have a problem to solve or a market to serve.

            • essell@lemmy.world · 9 hours ago

              I agree. People are often dumb, especially the smart ones.

              When you go through life seeing the world differently it’s easy to assume that other people just don’t get it, that they’re the problem as always, when they say your invention is useless, misguided, inappropriate or harmful.

              No matter how smart these people are, reality always catches up in the end, hopefully with as few casualties as possible.

      • CarbonatedPastaSauce@lemmy.world · 14 hours ago

        Not all tools are worthy of the way they are being used. Would you use a hammer that had a 15% chance of smashing you in the face when you swung it at a nail? That’s the problem a lot of us see with LLMs.

        • essell@lemmy.world · 13 hours ago

          No, but I do use hammers despite the risks.

          Because I’m aware of the risks and so I use hammers safely, despite the occasional bruised thumb.

          • CarbonatedPastaSauce@lemmy.world · 13 hours ago

            You missed my point. The hammers you’re using aren’t ‘wrong’, i.e. smacking you in the face 15% of the time.

            Said another way, if other tools were as unreliable as ChatGPT, nobody would use them.

            • essell@lemmy.world · 13 hours ago

              You’ve missed my point.

              ChatGPT can be wrong but it can’t hurt you unless you assume it’s always right

            • xor@lemmy.blahaj.zone · 13 hours ago

              Hammers are unreliable.

              You can hit your thumb if you use the tool wrong, and it can break, doing damage, if e.g. it is not stored properly. When you use a hammer, you accept these risks, and can choose to take steps to mitigate them by storing it properly, taking care when using it and checking it’s not loose before using it.

              In the same regard, if you use LLMs for what they’re good at, and verify their outputs, they can be useful tools.

              “LLMs are pointless because I can write a shopping list myself” is like saying “hammers are pointless because I can just use this plank instead”. Sure, you can do that, but there are other scenarios where a hammer would be kinda handy.

              • CarbonatedPastaSauce@lemmy.world · 13 hours ago

                if you use LLMs for what they’re good at, and verify their outputs

                This is the part the general public is not prepared for, and why the whole house of cards falls apart.

    • Sat@lemmynsfw.com · 17 hours ago

      That’s the thing. It’s a tool like any other. People who just give it a five-word prompt and then use the raw output are doing it wrong.

      • Tar_Alcaran@sh.itjust.works · 13 hours ago

        It takes a lot of skill and knowledge to recognise a wrong answer that is phrased like a correct answer. Humans are absolutely terrible at this skill; it’s why con artists are so successful.

        And that skill and knowledge is not formed by using LLMs.

        • essell@lemmy.world · 13 hours ago

          Absolutely.

          And you can’t learn to build a fence by looking at a hammer.

          My point all over really. Tools and skills develop together and need to be seen in context.

          People, whether for or against, who describe AI or any other tool in isolation, ignoring detail and nuance, are not helpful or informative.

      • CarbonatedPastaSauce@lemmy.world · 14 hours ago

        But you have the tech literacy to know that. Most non-tech people who use it do not; they just blindly trust it, because the world is not used to the idea that the computer might be deceiving them.

      • gamermanh@lemmy.dbzer0.com · 16 hours ago

        I used the image generation of a jailbroken model running locally to drum up an AI mock-up of work I then paid a professional to do.

        This was 10,000x smoother than the last time I tried this, when I irritated the artist with how much they failed to understand what I meant. The AI didn’t care, I was able to get something decently close to what I had in my head, and a professional took that and made something great with it.

        Is that a better example?

        • Tar_Alcaran@sh.itjust.works · 13 hours ago

          Yes. AI is great at creating mediocre slop to pour onto a giant mountain of mediocre slop that already exists online. In fact, that’s an LLM’s greatest power: Producing stuff that looks like other stuff.

          This is the perfect use case for it: mock-ups, sketches, filler. Low-quality, low-effort stuff used only as an input for more work.