I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.
You can be both pro AI and advancement and still respect creators’ intellectual property rights, and their right not to have all their content taken by megacorporations and used to generate profits while decimating entire industries.
Exactly this. This is the equivalent of me taking a movie, making a product out of it, charging for it, and then being displeased when the creators demand an explanation.
It’s more like reading a book and then charging people to ask you questions about it.
AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t let people put up barriers that will keep out all but the ultra-wealthy.
But when the answers aren’t original thoughts but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.
When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work over which they own copyright, and the output replicates the copyrighted material, that’s copyright infringement.
LLMs can provide original output, but they can also make errors. You’d have to prove it meets the grounds for plagiarism, and to my knowledge no one’s been able to. It’s all been claims with no substance or merit so far.
An LLM can’t make something original, it can only make something derivative. But that derivative work isn’t the same as when a human makes a derivative work, because a human isn’t writing each word or phrase based on the likely “correct” next word or phrase through an algorithmic process. What humans do is orders of magnitude more complex, though it can at times also be accidental or intentional plagiarism.
In short, an LLM’s output is necessarily a string of preexisting human inputs. A human’s output, while it can be informed by and reference other human inputs, can be an original analysis. The AI that is publicly available is not sophisticated enough to be more than fancy predictive text.
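The “fancy predictive text” claim above can be illustrated with a toy sketch. This is a bigram word counter, vastly simpler than a real LLM (which uses a neural network over tokens, not word-pair counts), and the training corpus here is made up for the example:

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word tends to follow which,
# then always emit the statistically likeliest next word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, n=5):
    out = [start]
    for _ in range(n):
        candidates = next_counts.get(out[-1])
        if not candidates:
            break
        # Greedy choice: the most frequent continuation seen in training.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Note that this toy can only ever recombine word pairs it has seen, which is the sense in which the comment calls the output “a string of preexisting human inputs”; whether that description fairly applies to full-scale LLMs is exactly what the thread is arguing about.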
You’re making a hasty generalization here, namely by making sweeping claims without evidence or examples. Also, you’re begging the question by assuming that humans are more original than LLMs, again without providing any support or justification.
Take for example this study that found doctors preferred Med-PaLM’s output to human doctors’. If everything is a remix, there’s no reason LLMs can’t meet the minimum criteria for creativity, especially absent any evidence to the contrary.
I’m really not, though I’ll readily admit I’m simplifying things. An LLM can only create something it’s been given. I guess it can generate a string of characters and assign a definition to it, but it’s not really intentional creation. There are many similarities between how a human generates something and how an LLM does, but to argue they’re the same radically oversimplifies how humans work. While we can program an LLM, we literally do not have the capability to replicate a human brain.
For example, can you tell me what emotions the LLM had when it produced the output it did? Did its physical condition have any effect? What about its past, not just what it has learned but how it was treated? What is its motivation? A human response to anything involving creativity factors in many things that we aren’t even consciously aware of, and these are things an LLM doesn’t have.
The study you’re citing is from Google, so there’s likely some bias and selective reporting. That said, we were talking about creativity, not regurgitating facts or analyzing data. I think it’s universally accepted that as the tech gets better, it’s preferable to have a computer make the first attempt at a diagnosis, especially for a scan or large data analysis, and then have a human confirm.
For the remix example, don’t forget that samples get attribution. Artists credit what they sampled and get called out when they don’t. I’m actually unclear as to whether an LLM can even cite how it derived its output, since the coders haven’t revealed whether there’s some sort of derivation log.
No, it’s more like checking out every book from the library, and spending 450 years training at the speed of light, being evaluated on how well you can exactly reproduce the next part of any snippet taken from any book.
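The “evaluated on how well you can reproduce the next part of any snippet” description matches the standard next-token training objective. Here is a rough sketch of that loss; the “model” is a hypothetical stand-in returning fixed probabilities, where a real LLM would compute them from its parameters:

```python
import math

# Stand-in for a model's predicted probability of the true next token.
# (Hypothetical fixed numbers, purely for illustration.)
def fake_model_prob(context, next_token):
    return 0.8 if next_token == "mat" else 0.05

# Training "snippets": (context, true next token) pairs.
snippets = [("the cat sat on the", "mat")]

loss = 0.0
for context, next_token in snippets:
    p = fake_model_prob(context, next_token)
    loss += -math.log(p)  # cross-entropy: small when the true token is likely

avg_loss = loss / len(snippets)
```

Training drives this loss down across billions of snippets, which is why the comment frames it as being scored on reproducing the next part of any passage.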
I don’t think that it is even remotely close to being the same thing. I’m sorry but we shouldn’t be affording companies the ability to profit off other people’s creations without their consent, regardless of how current copyright law works.
Acting as though a human writing a summary is the same thing as a vast network of computers processing data hundreds if not thousands of times faster than a human is foolish. Perhaps it is also foolish to try to apply our current copyright laws (which already favour large corporations, not individual creators) to this slew of new technology, but just ignoring the fundamental difference between the two is no way of going about it. We need copyright reform, we need protections for creators, and we need to stop acting as though machine learning algorithms are remotely comparable to humans in their capabilities, responsibilities, and rights.
There is a perfectly reasonable way of doing this ethically, and that is using content that people have provided to the model of their own volition, with their consent either volunteered or paid for, not scraped from an epub, regardless of whether you bought it or downloaded it from LibGen.
There are already companies training machine learning models ethically in this manner, and if creators do not want their content used as training data, it should not be.
Human writing and LLM output can be creative, original, informative, or useful, depending on the context and purpose. An LLM is a tool to be used by humans; we are in control of the input and the output. What we say goes: no one ever has to see LLM output without people making those decisions. Restricting LLMs is restricting the people that use them. Mega-corporations will have their own models, no matter the price. What we say and do here will only affect our ability to catch up and stay competitive.
You also seem to be making a slippery slope argument by implying that if LLMs are allowed to use copyrighted books as data, it will lead to negative consequences for creators and society, without explaining how or why this will happen or providing any evidence. It’s a one-sided look at the issue that ignores the positive outcomes of LLMs, like increasing the accessibility, diversity, and quality of literature and thought, as well as inspiring new forms of expression and creativity.
Finally, you seem to be making a moralistic fallacy. You claim that there is a perfectly reasonable way of doing this ethically, by using content that people have provided. However, you don’t support this claim, or address its challenges. How would you ensure that the content providers are the original authors or have the rights to the content? How would you compensate them for their contribution? Is this a good way to get content that is diverse and representative of different perspectives and cultures? What about bias or manipulation in the data collection and processing?
I don’t think we need any more expansions to copyright, but a better understanding of LLMs’ capabilities and responsibilities. I think we need to be open-minded and critical about the potential and challenges of LLMs, but also be on guard against fallacious arguments or emotional appeals.
Nah, false. If you, as a PERSON, an individual not looking to make a profit, did it, then yes, it would be absurd.
But here is a corporation trying to do exactly the same thing they have been doing with open source projects: making a real paywall out of other people’s work (Red Hat, cough cough).
One of the largest communities on Lemmy is !piracy@lemmy.dbzer0.com, so I’m not really surprised that there are people here who don’t care about copyright :)
On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?
Hold on, piracy isn’t necessarily about not caring about copyright. It can be (and in a lot of cases is) about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.
These AI companies, despite being copyright violators, are much closer to the big IP monopolists than the small authors, which are victims of both groups.
about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.
If people were really that principled, they’d totally boycott the big corporations and only consume media from the small authors and companies.
You made a great point. This is exactly my issue with piracy. I believe it’s a movement in the wrong direction, because it actually benefits the big media in the end.
My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.
Training databases should be ethically sourced from opt-in programs, which some companies, such as Adobe, are already running.
My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.
How can one prove that their content is being used to train the LLM though, rather than something that’s derivative of their content like reviews of it?
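One imperfect probe people use for this question: feed the model the opening of a passage and check whether its continuation reproduces the original verbatim. A long exact match is (weak) evidence the passage was in the training data rather than just reviews of it. A toy sketch, where the “model” is a hypothetical stand-in (a real test would query the actual LLM):

```python
# Hypothetical stand-in model that happens to have "memorized" one passage.
def fake_model_continue(prompt):
    memorized = "It was the best of times, it was the worst of times"
    if memorized.startswith(prompt):
        return memorized[len(prompt):]
    return " some generic continuation"

def looks_memorized(passage, prompt_len=20, min_len=20):
    prompt = passage[:prompt_len]
    continuation = fake_model_continue(prompt)
    expected = passage[prompt_len:prompt_len + len(continuation)]
    # A long verbatim continuation of the held-out text suggests memorization.
    return len(continuation) >= min_len and continuation == expected

dickens = "It was the best of times, it was the worst of times"
```

This kind of probe can show that specific text was likely memorized, but a negative result proves nothing, which is part of why the question in the comment above is hard.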
if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing?
Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.
Said human presumably would have to purchase or borrow a book in order to read it
Borrowing a book from a library doesn’t earn the author any more profits for each time it’s lent out, I don’t think. My local library just buys books off Amazon.
What if I read the CliffsNotes and make my own summary based on that? What if I read someone else’s summary and reword it? I think that’s more like what ChatGPT is doing - I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof LibGen or ZLib is being used to train it.
Eventually the bad actors are going to lose a lot of money trying to litigate their theft of people’s art. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.
I agree. This technology doesn’t exist in a vacuum. This isn’t some utopia where a human artist can just solely focus on creating their art and not worry about financial gain because their survival needs are always guaranteed to be met or whatever.
it’s more like checking out every book from the library, and spending 450 years training at the speed of light
No, it’s really nothing like reading at all. Your example requires a human element. This is just the consumption of data, not reading.
Humans are the ones making these models. It’s not entirely the same thing, but you should read this article by the EFF.
It’s more like buying a book, studying everything in it, then charging people for tutoring them with the knowledge you got from the book.
But now a machine is doing it, with all the books it can find…
How can one prove that their content is being used to train the LLM though
There is already lots of evidence that they have scraped copyrighted art and photographs for their datasets.
Borrowing a book from a library doesn’t earn the author any more profits for each time it’s lent out
Authors do get money from libraries that buy the books. And in some places they even get money depending on how much it’s checked out.
Eventually the bad actors are going to lose a lot of money trying to litigate their theft of people’s art.
Yes, thank you for this comment.
I’m pro AI and advancement, and anti-IP.
I hope to see AI disrupt our capitalistic value of ownership further.