Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

Last weekā€™s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • Architeuthis@awful.systems · 12 days ago

    The job site decided to recommend me an article calling for the removal of most human oversight from military AI on grounds of inefficiency, which is a pressing issue since apparently we're already living in the Culture.

    The Strategic Liability of Human Oversight in AI-Driven Military Operations

    Conclusion

    As AI technology advances, human oversight in military operations, though rooted in ethics and legality, may emerge as a strategic liability in future AI-dominated warfare.

    Oh unknowable genie of the sketchily curated datasets Claude, come up with an optimal ratio of civilian to enemy combatant deaths that will allow us to bomb that building with the giant red cross that you labeled an enemy stronghold.

    • Mii@awful.systems · 12 days ago

      So, ethics and legality are strategic liabilities? Jesus fucking Christ, that's not even sneer-worthy. This guy is completely fucking insane.

      • Architeuthis@awful.systems · 12 days ago

        If you've convinced yourself that you'll mostly be fighting the AIs of a rival always-chaotic-evil alien species or their outgroup equivalent, you probably think they are.

        Otherwise, I hope shooting first and asking questions later will continue to be frowned upon in polite society, even if it's automated agents doing the shooting.

      • skillissuer@discuss.tchncs.de · 12 days ago

        This is straight up Hague material right there, all he wants is plausible deniability

        Computer said so 🥺

        e: that's a shit take for several reasons and we have autonomous killers already. it's called air defense (in some modes) because how many civilians are going at mach fuck with RCS of 0.1m^2, that's no civilian that's ballistic missile. also lmao at speed of decision

        perun video on this topic https://m.youtube.com/watch?v=tou8ahLZvP4

        • YourNetworkIsHaunted@awful.systems · 11 days ago

          Honestly the most surprising and interesting part of that episode of Power(projection)Points with Perun was the idea of simple land mines as autonomous lethal systems.

          Once again, the concept isn't as new as they want you to think, moral and regulatory frameworks already exist, and the biggest contribution of the AI component is doing more complicated things than existing mechanisms do, but doing them badly.

    • o7___o7@awful.systems · 12 days ago

      This is awful for sure, but thankfully low impact. Turns out that this Terminator Enjoyer is an unemployed idea guy. Maybe he's angling for an IDF contract?

      From jobbie site:

      "An accomplished manager, with expertise in developing innovative concepts and ideas into client services operations and streamlining delivery of products/services within Defence / Cyber Security and Information Technology industry."

      and

      Technology and Innovation executive, currently on short sabbatical, May 2024 – Present (7 months)

      Also, the image is perfect. I especially like the Joe Kucan-looking general embedded in the star trek tactical station. The Technology of Peace ain't what it used to be, is it?

      • istewart@awful.systems · 12 days ago

        Eliminating Mothman is our prime strattgic priority

        Private Bbailcy! I see you back there! Cut it out with the oversighing, you're dragging down morale KPIs for this quarter!

      • JFranek@awful.systems · 12 days ago

        Also, the image is perfect. I especially like the Joe Kucan-looking general embedded in the star trek tactical station. The Technology of Peace ain't what it used to be, is it?

        Is that a screenshot from Command&Conquer 4?

    • Sailor Sega Saturn@awful.systems · 12 days ago

      As AI technology advances, human oversight in military operations, though rooted in ethics and legality, may emerge as a strategic liability in future AI-dominated warfare.