Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.

  • Mii@awful.systems · 7 months ago

    So apparently Mozilla has decided to jump on the bandwagon and add a roided Clippy to Firefox.

    I'm conflicted about this. On the one hand, the way they present it, accessibility does seem to be one of the very few non-shitty uses of LLMs I can think of, plus it's not cloud-based. On the other hand, it's still throwing resources at a problem that can and should be solved elsewhere.

    At least they acknowledge the resource issue and claim that their small model is more environmentally friendly and carbon-efficient, but I can't verify this and remain skeptical by default until someone can independently confirm it.

    • deborah@awful.systems · 7 months ago

      The accessibility community is pretty divided on AI hype in general and this feature is no exception. Making it easier to add alt is good. But even if the image recognition tech were good enough (and it's not, yet), good alt is context-dependent and must be human-created.

      Even if it's just OCR, folks are ambivalent. Many assistive techs have native OCR they'll run automatically, and it's usually better. But not all do, and many AT users don't know how to access the text recognition even when they have it.

      Personally I'd rather improve the ML functionality and UX on the assistive tech side, while improving the "create accessible content" user experience on the authoring tool side. (I.e., improve the braille display & screen reader's ability to describe the image by putting the ML tech there, but also make it much easier for humans to craft good alt, or video captions, etc.)

      • Steve@awful.systems · 7 months ago

        I deleted a tweet yesterday about twitter finally allowing alt descriptions on images in 2022, 25 years after they were added to the w3c spec (7 years before twitter existed). But I added the point that OCR recommendations for screenshots of text have kinda always been possible, as long as they reliably detect that it's a screenshot of text. But thinking about the politics of that overwhelmed me, hence the delete.

        Like, I'm kinda sure they already OCR all the images uploaded for meta info, but the context problem would always be there from an accessibility POV.
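
        For what it's worth, a minimal sketch of the kind of detection I mean, assuming Tesseract via pytesseract; the thresholds here are made-up illustrations, not anything twitter actually does:

        ```python
        # Sketch: guess whether an uploaded image is a screenshot of text,
        # using Tesseract word confidences. Thresholds are arbitrary guesses.
        import pytesseract
        from PIL import Image

        def looks_like_text_screenshot(path: str) -> bool:
            data = pytesseract.image_to_data(
                Image.open(path), output_type=pytesseract.Output.DICT
            )
            # Keep only words Tesseract is reasonably confident about.
            words = [
                word for word, conf in zip(data["text"], data["conf"])
                if word.strip() and float(conf) > 60
            ]
            # Many confidently recognised words -> probably a text screenshot,
            # so the OCR output could be suggested as a starting alt.
            return len(words) >= 20
        ```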

        My perspective is that offering people who are unaware of accessibility issues with images nothing beyond "would you like to add an alt description?" leaves the politics of it all between the people using twitter. I don't really like seeing people being berated for not adding alt text to their image, as if twitter is not the third party that cultivated a community for 17 years without ALT descriptions and then suddenly threw them out there and let us deal with it amongst ourselves.

        Anyway… I will stick to what I know in future.

        • Steve@awful.systems · 7 months ago

          Read that back and it's a bit of an unreadable brain-dump. Apologies if it's nonsense.

        • deborah@awful.systems · 7 months ago

          Yah, this makes sense. Community conventions can encourage good accessible content creation, and software can have affordances to do the same. Twitter, for many years, has been the opposite. Not only did it not allow alt, but the shorthand and memes and joke templates that grew up on short-form Twitter formed an extremely visual language. Emoji-based ASCII art, gifs, animated gifs, gifs framed in emoji captioned by zalgo glitch unicode characters… there's HTML that can make all that accessible, in theory, but the problem is more structural than that.

    • Eiim@lemmy.blahaj.zone · 7 months ago

      As a rough rule of thumb, if it's running locally on a CPU with acceptable performance, the environmental impact is going to be minimal, or at least within socially acceptable bounds.
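
      A back-of-envelope sketch of that intuition; every number here is an assumption for illustration, not a measurement:

      ```python
      # Back-of-envelope: energy for one local CPU inference.
      # All numbers below are assumed illustrations, not measurements.
      CPU_POWER_W = 30.0     # assumed package power while inferencing
      SECONDS_PER_RUN = 2.0  # assumed time to caption one image

      wh_per_run = CPU_POWER_W * SECONDS_PER_RUN / 3600
      print(f"{wh_per_run * 1000:.1f} mWh per run")
      # ~16.7 mWh: the same ballpark as a few seconds of ordinary
      # browser use on the same machine.
      ```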

      • froztbyte@awful.systems · 7 months ago

        This elides whatever resources were used in training/development, which in the case of ML models is quite often not minimal. Even the DIY things you can train yourself make a significant dent. And there's likely to be an ongoing cost to this too, because of updates.
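
        Extending the same back-of-envelope style: amortizing an assumed one-off training cost (plus retrains for updates) over lifetime inferences. All of these numbers are made up for illustration:

        ```python
        # Back-of-envelope: amortizing an assumed training energy cost over
        # the model's lifetime inferences. All numbers are made-up assumptions.
        TRAINING_KWH = 5_000.0     # assumed one-off training + dev cost
        RETRAINS_PER_YEAR = 2      # assumed update cadence
        LIFETIME_YEARS = 3
        LIFETIME_INFERENCES = 1e9  # assumed total runs across all users

        total_kwh = TRAINING_KWH * (1 + RETRAINS_PER_YEAR * LIFETIME_YEARS)
        wh_per_inference = total_kwh * 1000 / LIFETIME_INFERENCES
        print(f"{wh_per_inference * 1000:.2f} mWh of training cost per run")
        # ~35 mWh here: with fewer users or frequent retrains, this term can
        # dominate the ~tens of mWh the local inference itself costs.
        ```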