• knightly the Sneptaur
    9 hours ago

    Given the error rate of LLMs, it seems more like they wasted $258 and a week that could have been spent on a human review.

    • OmnipotentEntity@beehaw.org
      5 hours ago

      LLMs are bad for the uses they’ve been recently pushed for, yes. But this is legitimately a very good use of them. This is natural language processing, within a narrow scope with a specific intention. This is exactly what they can be good at. Even if it does have a high false negative rate, that’s still thousands and thousands of true positive cases that were addressed quickly and cheaply, and that a human auditor no longer needs to touch.