• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • I think a critical detail being overlooked in the broader discussion of the changes brought by LLM AI is quantity, not quality. What I mean is: sure, AI isn’t going to replace any one complete worker. There are vanishingly few jobs AI can 100% take over. But it can do 80% of a few jobs, 50% of more jobs, and 20% of a lot of jobs.

    So at the company level, where you had to hire 100 workers to do something, now you only need 80, or 50, or 20. That’s still individual people who are out of their entire job because AI did some or most of it, and their bosses consolidated the rest of the responsibilities onto the remaining workers.


  • “Calls” and “puts” are types of contracts about buying/selling stocks (they aren’t the stock themselves but are centered around a given stock and its trading price, so they are called “derivatives” as they are “derived” from the stock).

    A put is a contract that allows the buyer of the contract to sell stock at an agreed-upon price (the “strike” price) to the seller of the contract, regardless of the current trading price. They are used for a variety of reasons. In one usage, someone who is buying some of the stock at the current trading price may also buy a “put” on the stock at a slightly lower strike price. This way, they spend a little more money at the time of buying the stock, but if the trading price plummets, they can still sell at that slightly lower “put” price and not lose too much money.

    In this case, the idea would be to buy a “put” (without buying the stock at the same time) when the buyer thinks the stock’s trading price is overvalued. Then, when the price falls below the put’s agreed-upon strike price, buy the stock at the lower market price and immediately invoke the contract to sell at the put’s higher strike price.
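    To make the payoff arithmetic concrete, here is a minimal sketch in Python. All the numbers (a $100 stock, a $95 strike, a $2 premium) are hypothetical illustrations, not figures from the comment:

    ```python
    # Hypothetical numbers: stock trades at $100; we buy a put
    # with a $95 strike price for a $2 premium per share.
    strike = 95.0
    premium = 2.0

    def put_payoff(market_price: float) -> float:
        """Net profit per share from a long put: exercise if the market
        price is below the strike, otherwise let the contract expire."""
        return max(strike - market_price, 0.0) - premium

    # Protective use: we also own the stock, bought at $100. If it
    # crashes to $60, we can still sell at the $95 strike, capping the
    # loss at (100 - 95) + 2 = $7 instead of 100 - 60 = $40.
    loss_with_put = 100.0 - strike + premium     # 7.0 per share
    loss_without_put = 100.0 - 60.0              # 40.0 per share

    # Speculative use: no stock owned. If the price falls to $80, buy at
    # $80 and immediately sell at the $95 strike: 15 - 2 = $13 profit.
    profit = put_payoff(80.0)                    # 13.0 per share
    ```

    If the price never drops below the strike, the put expires worthless and the buyer is out only the premium, which is why the hedging use above costs “a little more money at the time of buying the stock.”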


  • Whether or not data was openly accessible doesn’t really matter […] ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.

    Actually, the act of copying a work covered by copyright is not itself illegal. If I check out a book from a library and copy a passage (or the whole book!) for my own rereading or some other use limited strictly to myself, that’s actually legal. If I turn around and share that passage with a friend in a way that’s not covered under fair use, that’s illegal. It’s the act of distributing the copy that’s illegal.

    That’s why whether the AI model is publicly accessible does matter. A company is considered a “person” under copyright law. So OpenAI can scrape all the copyrighted works off the internet it wants, as long as it didn’t break laws to gain access to them. (In other words, articles freely available on CNN’s website are free to be copied (but not distributed), but if you circumvent the New York Times’ paywall to get articles you didn’t pay for, then that’s not legal access.)

    OpenAI then encodes those copyrighted works in its models’ weights. If it provides open access to those models, and people execute these attacks to recover pristine copies of copyrighted works, that’s illegal distribution. If it keeps access only for employees, and they execute attacks that recover pristine copies of copyrighted works, that’s keeping the copies within the use of the “person” (company), so it is not illegal. If it lets its employees take the copyrighted works home for non-work use (or use the AI model for non-work purposes and recover the pristine copies), that’s illegal distribution.


  • It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

    However, this does bring up a very interesting question that I’m not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

    In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because it’s only a “portion” of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all of such “portions” from all of the training data, you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.

    However, attacks like this seem able to recover only random chunks of training data. So someone can’t take a body of training data, insert a specific copyrighted work into it, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack that gets that specific work back out. In other words, it’s really hard to orchestrate a way to violate someone’s copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone’s distribution of a pre-trained model or access to a pre-trained model.


  • That sounds more like a modern reinterpretation of “protecting religion from the state.” The context in which the separation of church and state originated in the late 18th century was more about religious adherence being closely tied to political power: you could harm your political opponents by branding them followers of a socially outcast religion, or you could use political power to (legally) persecute the followers of a non-state religion. Yes, it was about protecting religion from the state, but in the more concrete sense of protecting the followers of non-state-backed religions, rather than preventing some kind of philosophical corruption of the moral foundations of the religion.