- cross-posted to:
- privacy@lemmy.world
- privacy@lemmy.ca
- piracy@lemmy.dbzer0.com
OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train their LLM, or large language model.
if you release data into the public domain (aka, if it’s indexable by a search engine) then copying that data isn’t stealing - it can’t be, the data was already public in the first place.
this is just some lawyer trying to make a name for themselves
Just because the data is “public” doesn’t mean it was intended to be used in this manner. Some of the data was even explicitly protected by GPL licensing or similar.
but GPL licensing indicates that “If code was put in the public domain by its developer, it is in the public domain no matter where it has been” - so, likewise for data. if anyone has a case against OpenAI, it’d be whatever platforms they scraped - and ultimately those platforms would open their own, individual lawsuits.
Not a lawyer, but you can argue that if the language model is trained using GPL-licensed data, then the language model has to be published under the GPL as well.
That’s not at all how the GPL works…
can you expand on that? I’m not very familiar with the legal aspect of GPL.
crickets…
If you release code under the GPL and I modify and distribute it, I’m required to release those modifications under the GPL as well.
so if content is under GPL and used for training data, how far into the process of training/fine-tuning does it count as “modification”? For example, if I scrape a bunch of blog posts and just use tools to analyze the language, is that considered “modification”? And what is the minimum OpenAI should do (or should have done) here: does it stop at making the code for processing the data public, or does it extend to the entire code base?
I’m not sure. And I’m not sure there’s legal precedent for that either.
That’s why I don’t have a problem with any of these lawsuits; they give us clarity on the legal aspects, whichever way they go.
Not everything indexed by a search engine is public domain; that’s not how copyright works.
There’s plenty that actually is in the public domain, but I guess scraping the web is a lot easier for these people.
I don’t agree. Purpose and use case should be a factor. For example, my friends take pictures of me and put them on social media to share memories. Those images have since been scraped by companies like Clearview AI providing reverse face search to governments and law enforcement. I did not consent to or agree to that use when my likeness was captured in a casual setting like a birthday party.
perhaps - but it could easily be argued that you knew that what you share on the internet was viewable by anyone. are you going to sue Clearview and/or the law enforcement agencies for control over your image that’s in the public domain?
You keep using that word. Maybe you should look up what it means.
You know that you are making a photo public when you post it publicly
I didn’t post it, and it seems like you missed the point of my comment.
I didn’t post it
Let’s note that a NY Magazine article is copyrighted but publicly available.
If an LLM scrapes that article, then regurgitates pieces of it verbatim in response to prompts, without quoting or parodying, that is clearly a violation of NY Mag’s copyright.
If an LLM merely consumes the content and uses it to infinitesimally improve its ability to guess the next word that fits into a reply to a prompt, without a series of next-words reproducing multiple sentences from the NY Mag article, then that should be perfectly fine.
deleted by creator
That’s not how copyright works. You cannot freely monetize other people’s work. If you publish some artwork, I cannot copy it and sell it as my own work.
But you can learn from it and create your own new art that may have a similar style to the original.
A human can, within limits.
But software isn’t human. AI models aren’t “learning”, “practicing” and “developing their own skills”.
Human-made software is copying other people’s work, transforming it, letting a bunch of calculations loose on it, and mass-producing works similar to the input.
Using an artist’s work to train an AI model and making similar stuff with it to make money is like copying someone’s work, putting it on a mug, and selling that.
It’s not using it as inspiration to improve your own skills.
People are humanizing computer programs way too much, and thus, we have arguments like this. An AI language model is not one of those sci-fi AIs that live in spaceships and talk to the crew. AI language models do not have individuality, creativity, consciousness, or free will. They are computer programs doing math to turn inputs into outputs.
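Since this is where people’s intuitions tend to diverge, here’s a minimal sketch of what “doing math to turn inputs into outputs” actually looks like. It’s plain Python with NumPy, toy sizes, and random weights; none of it is OpenAI’s real code, just the general shape of a language model’s forward pass: token IDs go in, a couple of matrix operations and a softmax happen, and a probability distribution over the next token comes out.

```python
# Minimal sketch of a language model's forward pass: arithmetic on arrays.
# Toy sizes and random weights -- purely illustrative, not any real model.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 10, 4                               # made-up vocabulary and embedding sizes
embeddings = rng.normal(size=(vocab_size, embed_dim))       # "learned" token vectors
output_weights = rng.normal(size=(embed_dim, vocab_size))   # "learned" output projection

def next_token_probs(token_ids):
    """Turn a sequence of token IDs into a probability distribution over the next token."""
    x = embeddings[token_ids].mean(axis=0)   # crude context summary (real models use attention)
    logits = x @ output_weights              # one matrix multiplication
    exp = np.exp(logits - logits.max())      # softmax: logits -> probabilities
    return exp / exp.sum()

probs = next_token_probs([3, 1, 4])
print(probs)   # ten numbers that sum to 1 -- the model's "guess" for the next token
```

Training just nudges those weight arrays so the predictions get better at matching the training text; whether that process can still end up reproducing chunks of that text verbatim is exactly what the lawsuits above are arguing about.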
deleted by creator