• FaceDeer
    10 months ago

    The problem I have with this is that the argument seems to boil down to “I don’t like this so it should be illegal.” It puts me in mind of the classic courtroom joke: “Objection, your honor, on the grounds that it’s devastating to my case.” Laws should have a rationale beyond simply being whatever “collective morality” decides; otherwise all sorts of religious prohibitions and moral scares end up embedded in the legal system too.

    Generally speaking, laws are based on the much simpler and more generic foundation of rights. Laws exist to protect rights, and get complicated because those rights can end up conflicting with each other. So what rights do the two “sides” of this conflict bring to the table? On the pro-AI side people are arguing that they have the right to learn concepts and styles from publicly available data, to analyze that data and record that analysis, and to make use of the products of that analysis. It all seems quite reasonable and foundational to me. On the anti-AI side - arguments based on complete misunderstandings of how the technology works aside - I generally see “because it’s devastating to my future career, your honor.”

    Anti-AI artists are simply being selfish, IMO, demanding that society continue to provide them with their current niche of employment and “specialness” by restricting other people’s rights through new legal restrictions. Sure, if you can convince enough people to go along with that idea, those laws will be passed. That doesn’t make them right. There have been many laws over the years that were both popular and wrong on many levels.

    Fortunately there are many different jurisdictions in the world; there isn’t just one “The Law.” So even if some places do end up banning AI, I don’t think that will slow it down much on a global scale. It’ll just help determine which places take a lead and which fall behind in developing this new technology. There’s too much benefit for everyone to forgo it everywhere.

    • @pup_atlas
      10 months ago

      I’m out and about today, so apologies if my responses don’t contain the level of detail I’d like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether or not you agree with that strategy, that is how I believe the current system works in an ideal sense (even if it works differently in practice), and that’s what it is designed to protect, from my perspective.

      As for anti-AI artists, let me pose a situation to illustrate my perspective. As a prerequisite: a large part of a lawsuit, and of the ability to advocate for a law, rests on standing, the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:

      I am a furry, and a LARGE part of the fandom is built around art and artists. A core furry experience is getting art commissioned of your character from other artists. It’s commonplace for these artists to have a very specific, identifiable signature style, so much so that it is trivial for me and other furs to identify artists by their work alone at just a glance. Many of these artists have shifted to making their living full time off of creating art. With the advent of new generative models, it is now possible to train a model exclusively on one single artist’s style and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and it also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard to disprove) impression that they support things they don’t, like making art involving political parties or illegal activities, which I have seen happen already. This almost approaches defamation, in my opinion.

      One argument you could make is that this is similar to the invention of photography, which directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim, it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it’s not possible, or at least not immediately apparent, to tell which pieces are authentic. That is a VERY large problem the law needs to solve as soon as possible.

      Further, I believe the same or similar problems exist in LLMs as in the situation involving generative image models above. Sure, with enough training those issues are lessened in impact, but where is the line between what is OK and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data, in a way that will always approximate it, depending on the depth of training data. While “overfitting” is considered a defect in the industry, it’s still a possibility, and until there is some sort of regulation establishing the fitness of commercially available LLMs, I can envision situations in which management would cut training short once it’s “good enough”, leaving overfitting issues in place.
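      The overfitting concern above can be illustrated with a deliberately tiny toy (my own sketch, not how any real model works): a learner that memorizes its training examples scores perfectly on them yet has learned no underlying concept, which is the failure mode a generative model trained too narrowly can exhibit by regurgitating training data.

      ```python
      # Toy contrast between "memorizing" and "generalizing" a pattern.
      # The target rule (y = 2x) is an arbitrary stand-in for illustration.

      def train_memorizer(data):
          table = dict(data)             # stores every example verbatim
          return lambda x: table.get(x)  # perfect recall, zero generalization

      def train_generalizer(data):
          # Infers the multiplicative rule from the examples instead of storing them
          slope = sum(y / x for x, y in data) / len(data)
          return lambda x: slope * x

      train = [(1, 2), (2, 4), (3, 6)]
      memo = train_memorizer(train)
      gen = train_generalizer(train)

      # Both look flawless if you only evaluate on the training set...
      assert all(memo(x) == y for x, y in train)
      assert all(gen(x) == y for x, y in train)

      # ...but only the generalizer handles unseen input. The memorizer,
      # like an overfit generative model, can only reproduce what it saw.
      print(memo(10))  # None  (no concept learned)
      print(gen(10))   # 20.0  (learned the underlying pattern)
      ```

      Cutting training or evaluation short at the “both pass the training set” stage is exactly how a memorization problem can ship unnoticed.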

      Lastly, with respect, I’d like to push back on both the notion that I’d like to ban AI or LLMs and the notion that I’m not educated enough on the subject to adequately debate regulations on it. Both are untrue. I’m very much in favor of developing the technology and exploring all its applications. It’s revolutionary, and worthy of the research attention it’s getting. I work on a variety of models across the AI and LLM space professionally, and I’ve seen how versatile the technology is. That said, I have also seen how over-publicized it is. We’re clearly (from my perspective) in a bubble that will eventually pop. Products across nearly every industry claim to use AI for this and that, and while LLMs in particular are amazing and can be used in a ton of applications, they certainly can’t be used in all of them. I’m particularly cautious of putting new models in charge of dangerous or risky processes before we develop adequate metrics, regulation, and guardrails. To summarize my position: I’m very excited to work towards developing them further, but I want to publicly express the notion that it’s not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

      • FaceDeer
        10 months ago

        all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs.

        I’m rather confused by this. My point is that having the collective’s religious prohibitions and moral scares imposed upon the minority is a bad thing, and that it’s a flaw in “majority rule” that a rights-based legal system is supposed to attempt to counter. It doesn’t always work but that’s the idea. So simply having a large number of people pull out pitchforks and demand that the rights of AI trainers be restricted should not automatically result in that actually happening.

        With regard to your scenario about furry art: You’re simply describing a specific example of the general scenario I already talked about. You’re saying that furry artists should have a right to copyright their “style”, which is emphatically not the case. Style cannot be copyrighted (and as a furry-adjacent who’s seen plenty of furry art over the years, I would also very much disagree that every furry artist has a unique style. They copy off each other all the time). You’re also saying that furry artists should have a right to their livelihood, which is also not the case. Civilization changes over time, new technologies and new social movements come along and result in jobs coming and going. Nobody has the right to make a living at some particular career.

        You say “A core furry experience is getting art commissioned of your character from other artists.” Well, maybe that was a core furry experience. But the times they are a-changing. My avatar image here on the Fediverse was generated by me, in large part with AI art generators, and I got a much better result, and a much more accurate reflection of what I was going for, than I would have gotten via a commission. And I got it for free. That sucks for the artists but it’s great for everyone else.

        And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim, it merely provided an alternative that filled a similar role.

        Does AI art actually replace an artist’s work verbatim? When I made my avatar image I still did a lot of intermediate fiddling steps in the Gimp. AI is just part of my workflow. An artist could also make use of it. Or they could continue making art the old-fashioned way if they want; the mere existence of AI art generators doesn’t affect that ability one whit. All it does is change the market, possibly making it so that they can no longer make a living at their old job.

        There are still plenty of painters. But when photography came along there were probably a lot of portrait painters who were put out of work. Over the years I’ve had several family photographs taken in photography studios, but I’ve never even considered commissioning a painter to paint a portrait of myself.

        Ultimately the models themselves don’t contain any copyrighted content

        And that’s that for basically all the anti-AI legal arguments.

        but they (by design) combine related ideas and patterns found in the training data, in a way that will always approximate it, depending on the depth of training data

        And there’s absolutely nothing wrong with this. People do it all the time, why is it suddenly a huge moral problem when a machine does? Should it be illegal for someone to go to a furry artist and ask for something “in the style of Dark Natasha”, or for an artist to pick up some of his personal style from Jay Naylor’s work?

        I want to publicly express the notion that it’s not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.

        I actually agree, but the people that I think are most in need of protecting are the people who train and use AI models. There are tons of news stories and personal experiences being posted these days about these people being persecuted in various ways, deplatformed, lied about, and so forth. They’re the ones whose rights people are proposing should be restricted.