• Zement@feddit.nl
    8 hours ago

    I really like the idea of an LLM being narrowly configured to filter and summarize data that arrives in an irregular, organic form.

    You would have to run it multiple times in parallel with different models and slightly different configurations to reduce hallucinations (similar to sensor redundancy in industrial safety integrity levels), but still… that alone is a game changer for “parsing the real world”. The energy needed to do this right (>= 3x) gets cut by dropping the safety redundancy, because the hallucinations only become apparent somewhere down the line, and only sometimes.
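    The redundancy idea above can be sketched as a simple voting scheme. This is a hypothetical illustration, not a real LLM pipeline: the `extractors` here are stand-ins for differently configured models, and `redundant_extract` and `quorum` are names invented for this sketch.

    ```python
    from collections import Counter

    def redundant_extract(prompt, extractors, quorum=2):
        """Run several independent extractors (stand-ins for differently
        configured LLMs) on the same input and accept a value only if at
        least `quorum` of them agree -- analogous to 2-out-of-3 sensor
        voting in industrial safety systems."""
        answers = [fn(prompt) for fn in extractors]
        value, votes = Counter(answers).most_common(1)[0]
        return value if votes >= quorum else None

    # Hypothetical "models": two agree, one hallucinates an outlier.
    model_a = lambda p: "42"
    model_b = lambda p: "42"
    model_c = lambda p: "17"

    result = redundant_extract("extract the order quantity", [model_a, model_b, model_c])
    ```

    The point is exactly the trade-off described: the honest version costs >= 3x the compute, which is what gets skipped when the redundancy is removed.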

    They poison their own well because they jump directly to the enshittification stage.

    So to the people talking about embedding it into workflows… hi… here I am! =D

    • AA5B@lemmy.world
      4 hours ago

      A buddy of mine has been doing this for months. As a manager, his first use case was summarizing his team members’ statuses into a team status. Arguably, hallucinations aren’t critical there.