• DarkThoughts@kbin.social · 9 months ago

    All of those could be terrible to be honest, because AI is a data tracking vacuum. An AI adblocker or content filter sounds cool at first, but it would mean it reads and analyzes your data, just like the stuff you do with chatbots. Reading your mails? That’s basically what Google has done for years with Gmail; that’s why they have such a good spam filter. I agree that a chatbot would be kinda useless though, even if privacy friendly, which in and of itself would be great, but I just don’t see the use. This could simply be outsourced to a website.

    • KairuByte@lemmy.dbzer0.com · 9 months ago

      The only reason this would be an issue is if it’s sending that data off to a third party. If it’s fully local, who cares what data it sees?

      • DarkThoughts@kbin.social · 9 months ago

        If they’re local, they’d be basically useless due to a lack of computing power and a potential lack of indexing for a search engine chatbot, so I doubt it. It would also have to be so polished that it wouldn’t require further user knowledge / input, and that’s just not a thing with any local LLM I’ve come across. Mozilla can gladly prove me wrong though. I certainly wouldn’t mind if they can generally make the whole process of local LLMs easier and more viable.

        • Pennomi@lemmy.world · 9 months ago

          The requirements to run good local LLMs have really been shrinking this past year… I have a lot of faith that there is a generally useful yet tiny AI tool within the grasp of Mozilla.
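          The main driver behind those shrinking requirements is weight quantization. As a back-of-envelope illustration (the function and numbers below are illustrative, not benchmarks of any particular model), the memory needed just for a model's weights is roughly parameter count times bits per weight:

```python
# Rough memory estimate for a model's weights alone, in GB
# (ignores KV cache, activations, and runtime overhead).
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8

# A 7B-parameter model: ~28 GB at fp32, but only ~3.5 GB at
# 4-bit quantization -- small enough for an average laptop.
for bits in (32, 16, 4):
    print(f"7B @ {bits}-bit: {model_memory_gb(7, bits):.1f} GB")
```

          That eightfold drop from fp32 to 4-bit is a big part of why "average hardware" is increasingly plausible.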

        • KairuByte@lemmy.dbzer0.com · edited · 9 months ago

          I can understand your thinking, but it could be as simple as giving the user the option to outsource the computation to a secure something or other, if their machine can’t handle it.

          And yeah, the requirements are still quite high, but they are being reduced somewhat steadily, so I wouldn’t be surprised if average hardware could manage it in the long term.

          Edit: For the record, Mozilla is one of the only companies I would trust if they said “the secure something or other is actually secure.” And they’d likely show actual proof and provide an explanation as to how.