• Treemaster099 · 1 year ago

    Good. Technology always makes strides before the law can catch up. The problem is that multi-million-dollar companies exploit these gaps in the law to get away with legally gray and morally black actions, all in the name of profit.

    Edit: This video (https://youtu.be/9xJCzKdPyCo) is the best way to educate yourself on why AI art and writing are harmful when they steal from people, as most AI programs currently do. I know it’s long, but it’s broken up into chapters if you can’t watch the whole thing.

    • PlebsicleMcGee@feddit.uk · 1 year ago

      Totally agree. I don’t care that my data was used for training, but I do care that it’s used for profit in a way that only a company with big-budget lawyers can manage.

      • CoderKat@lemm.ee · 1 year ago

        But if we’re drawing the line at “did it for profit”, how much technological advancement will happen? I suspect most advancement is profit-driven. Obviously people should be paid for any work they actually put in, but we’re talking about content you willingly create on the internet for fun; the fact that someone else uses it for profit is incidental.

        And quite frankly, there’s no way to pay you for this. No company is gonna pay you to use your social media comments to train their AI, and even if they did, your share would likely be pennies at best. The only ones getting paid would be companies like Reddit and Twitter, which would just write into their terms of service that they’re allowed to do that (and they already use your data for ad targeting, and it’s of course visible to anyone on the internet).

        So it’s really a choice between helping train AI (which could be viewed as a net benefit to society, depending on how you view those AIs) and simply not helping train them.

        Also, if we’re requiring payment, frankly only the biggest AI companies could afford to pay anything at all. Training an AI is already so expensive that it’s hard enough for small players to enter the business without having to pay for training data too (and at insane prices, if Twitter and Reddit are any indication).

        • Programmer Belch@lemmy.dbzer0.com · 1 year ago

          Hundreds of projects on GitHub are supported by donations; innovation happens even without profit incentives. It may slow the pace of AI development, but I’m willing to wait another decade for AIs if it protects user data and lets regulation catch up.

        • Johem@lemmy.world · 1 year ago

          Reddit is currently trying to monetize its users’ comments and other content by charging for API access. That creates a system where only the corporation profits, while the users generating the content are not only unpaid but expected to pay directly or be monetized through ads. And if users want to use the technology trained on their content, they have to pay for that too.

          Sure seems like a great deal for the corporations, with users getting fleeced as much as possible.

      • Treemaster099 · 1 year ago

        https://youtu.be/9xJCzKdPyCo

        This video can answer just about any question you might ask. It’s long, but it’s split into chapters so you can see which question he’s answering in each one. I do recommend watching the whole thing if you can; there’s a lot of information in it that I found insightful and thought-provoking.

          • Treemaster099 · 1 year ago

            I don’t really have the time to look for timestamps, but he does present his arguments from many different angles. I highly recommend watching the whole thing if you can.

            Aside from that, the main thing I want to address is these big corporations’ responsibility to curate the massive libraries of content they gather. It’s entirely within their power to blacklist things like personally identifiable information (PII), sensitive data, or hate speech, but they decided not to because it was cheaper. They took a gamble that people either wouldn’t care, wouldn’t have the resources to fight it, or would actively support the theft if it meant getting a new toy to play with.
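            Just to be concrete about what “blacklist” could mean in practice, here’s a minimal sketch of a pre-training filter in Python. The regex patterns and the blocklist are hypothetical placeholders for illustration, not anything these companies are known to actually run:

                # Hypothetical sketch: filter scraped comments before training.
                import re

                # Naive patterns for two common kinds of PII; real curation
                # would need far broader coverage than this.
                EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
                PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

                # Assumed blocklist of terms the operator refuses to train on.
                BLOCKLIST = {"example-slur-1", "example-slur-2"}

                def keep_for_training(comment: str) -> bool:
                    """Keep a scraped comment only if it passes every filter."""
                    if EMAIL_RE.search(comment) or PHONE_RE.search(comment):
                        return False  # drop anything that looks like PII
                    words = set(comment.lower().split())
                    return words.isdisjoint(BLOCKLIST)

                corpus = ["email me at alice@example.com", "Totally agree with this"]
                print([c for c in corpus if keep_for_training(c)])
                # -> ['Totally agree with this']

            Even a crude first pass like that is cheap to bolt onto a scraping pipeline, which is exactly why “it was too expensive” doesn’t hold up.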

            Now that there’s a chance they could lose a massive amount of money, this could deter other AI companies from flagrantly breaking the law and set a better standard that protects people’s personal data. Tbh I don’t think this specific case has much ground to stand on, but it’s a first step toward making people safer online. Imagine if the database behind this AI were leaked. Imagine all the personal data, yours and mine included, that would be exposed to malicious actors. Imagine the damage that could cause.