AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’

  • fubo@lemmy.world · 1 year ago

    AI safety folks have been warning about the predictable disastrous consequences of turning economic power over to unethical AI systems for many years now, long before deepfakes, predictive policing, or other trendy “AI dangers” were around.

    • TheEighthDoctor@lemmy.world · 1 year ago

      turning economic power over to unethical AI systems for many years now

      What’s the difference from unethical human systems?

    • Obinice@lemmy.world · 1 year ago

      disastrous consequences of turning economic power over to unethical AI systems

      Phew, good thing we’ve got ethical Jeff Bezos and Elon Musk controlling our economies and piloting our governments instead 😅 really dodged a bullet there

    • burliman@lemm.ee · 1 year ago

      These warnings and fears would be a little easier to hear if they weren’t pushed so hard by the most disingenuous people ever. Sounds like they want everyone else to pause so they can get ahead.

      • zbyte64@lemmy.blahaj.zone · 1 year ago

        The most obnoxious ones are not only the loudest but they tend to get more screen time. You won’t see Gebru on cable news as often as you might get ol’ Yud talking about some vengeful AI god.

  • scarabic@lemmy.world · 1 year ago

    Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots…

    Is anyone else tired of warnings about the dangers of a technology from the jackass who invented it?

    • BetaDoggo_@lemmy.world · 1 year ago

      The paper they’re referring to is “Attention Is All You Need”, the paper that introduced the transformer architecture. It was primarily focused on machine translation, though the architecture was later found to perform exceptionally well for language modeling. Blaming him for others’ misuse is like blaming the inventor of the hunting rifle for assault rifles.
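
      For anyone curious what that paper actually contributed, its core operation is scaled dot-product attention. Here’s a minimal PyTorch sketch of that one operation (my own toy code for illustration, not the paper’s reference implementation; the function and variable names are mine):

      ```python
      import math
      import torch

      def scaled_dot_product_attention(q, k, v):
          """Scaled dot-product attention, the building block of the transformer."""
          d_k = q.size(-1)
          # Compare every query against every key, scaling by sqrt(d_k)
          # so the softmax stays in a well-behaved range.
          scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
          # Normalize into attention weights, then mix the values accordingly.
          weights = torch.softmax(scores, dim=-1)
          return weights @ v

      # Toy self-attention: one sequence of 4 tokens with 8-dimensional embeddings.
      x = torch.randn(1, 4, 8)
      out = scaled_dot_product_attention(x, x, x)
      print(out.shape)  # torch.Size([1, 4, 8])
      ```

      The full architecture wraps this in multi-head attention plus feed-forward layers, but the scaled dot-product step above is the part the title refers to.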

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks such as the large-scale generation of misinformation, according to a senior industry figure attending this week’s AI safety summit.

    Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots, said long-term risks such as existential threats to humanity from AI should be “studied and pursued”, but that they could divert politicians from dealing with immediate potential harms.

    Gomez is attending the two-day summit, which starts on Wednesday, as chief executive of Cohere, a North American company that makes AI tools for businesses including chatbots.

    The second day, which will feature a smaller group of countries, experts and tech executives convened by Rishi Sunak, will discuss what concrete steps can be taken to address AI risks.

    Such fears of existential risk led to the publication of an open letter in March, signed by more than 30,000 tech professionals and experts including Elon Musk, calling for a six-month pause in giant AI experiments.

    However, Yann LeCun, a fellow AI “godfather” and co-winner of the ACM Turing award (regarded as the Nobel prize of computing), has described fears that AI might wipe out humanity as “preposterous”.


    The original article contains 869 words, the summary contains 200 words. Saved 77%. I’m a bot and I’m open source!

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

    We fixed it! AI Perfectly Safe: Top Artificial General Intelligence expert releases final publication of flawless logical explanation of new advances that render all AI and deep learning technology completely safe to all human persons and for all applications serving human community lifestyle. Dangerous AI hypotheses debunked and field of AI safety regarded obsolete by scientific consensus.

    Expert now plans to return to remote hometown in Siberia for permanent retirement and will travel by undisclosed route for non-AI-related security concerns.