First off, sorry if this is the wrong community to post to - I’ll move it somewhere else if it doesn’t fit here.
I feel like my best friend is quite often a contrarian just for the sake of being a contrarian. Discussing politics, veganism, the problems with using Amazon, what have you, with him is nigh impossible because he insists on his opinion and won’t budge. I feel like he considers himself superior to other people, or at least to me, in that he just won’t change his mind, doesn’t hear other sides, and argues for the sake of arguing.
Now, in a recent discussion, I asked him if he knew why images aren’t displayed properly in my Firefox-fork browser (Mull). He gave an answer and asked why I would use a custom browser instead of Firefox itself to which I responded that it’s more privacy-focused and that I didn’t like Mozilla’s implementation of AI in their browser.
Long story short, it devolved into a lengthy discussion about AI: how the fear of AI is based on ignorance and a lack of knowledge, how it’s fine that AI is used for creative projects because in most cases it’s an assisting tool that aids creativity, doesn’t steal jobs, etc. - essentially, that it’s just a tool to be used like a hammer would be.
What pisses me off the most about all this is that he subtly implies that I don’t know enough about the subject to have an opinion on it and that I don’t have any sources to prove my points so they’re essentially void.
How do I deal with this? Whatever facts I name he just shrugs off with “counter”-arguments. I’ve sent him articles that he doesn’t accept as sources. This has been going on for a couple hours now and I don’t know what to tell him. Do you guys have sources I could shove in his face? Any other facts I should throw his way?
Thank you in advance
Edit: A thing to add: I wasn’t trying to convince him that AI itself is bad - there are useful applications of AI that I won’t ignore. What I was concerned about is the way AI is put into any and all products nowadays that don’t need AI to function at all, like some AI-powered light bulbs or whatever; that creative jobs and the arts are actively harmed by people scraping data and art from artists to create derivative “art”; that it’s used to influence politics (Trump, Gaza). These things. The unmonitored way AI is being used just feels dangerous to me.
You’re arguing opinions and trying to convince someone as if they are facts. There’s plenty to criticize about how AI is used, but it is a valuable tool for those that use it.
The amount of value it provides is very subjective, and even if you don’t find it useful, many others do. You might as well be trying to argue that you don’t like photography because it doesn’t provide the same experience as drawings and paintings. You wouldn’t be wrong to feel that way, but you would be wrong to tell someone else that they need to feel the way you do.
Sometimes it’s okay to just disagree. With a topic like AI, the effectiveness of arguments against it will vary heavily depending on one’s subjective opinion on things like intellectual property, what is and isn’t art, how we should deal with emerging technologies etc.
Yeah I feel like especially when it comes to something like art, it becomes extremely subjective.
For me personally, I’m not a fan of using AI to create art itself, but I can see a good use case for it in supporting an artist. For example, a lot of artists I know are notoriously bad at things like marketing, or writing grant proposals and mission statements and all the other bits of miscellaneous paperwork that seem to be required. So I think AI has some value there, to facilitate artists getting their work out there, rather than creating the actual art.
But also having said that, on a more fundamental level I guess you could argue that any bit of art that lights someone’s brain up has some sort of value, even if it only works on one person and everyone else in the world thinks it’s crap. It still made life a little bit brighter for that one person, which I suppose is the point. So in that sense maybe it doesn’t matter where it came from at all?
Anyway, that’s a bit rambly and esoteric for first thing in the morning, sorry!
It seems like a rather pointless discussion, since you both seem to have already made up your minds.
They don’t accept your sources? Why? If they really are valid and they just cherry-pick sources, then there is no way of convincing them.
On the other hand, you also just seem to dismiss their counterarguments without much thought. If they can give a counterargument for your every argument, then maybe your arguments actually aren’t good?
The thing is, they aren’t really counter-arguments. For example, I mentioned that AI being used to create art is theft* because artists aren’t credited and their art is used to create amalgamations out of thousands of pieces of art. He argued that it’s just the same as when an artist draws inspiration from other people’s art and creates their own - which is just plain false. In his eyes, this might be a valid counter-argument, but it isn’t, right?
*copyright infringement
deleted by creator
I think the main difference is intent, inspiration, and feelings portrayed through art. AI may be able to replicate a certain style or use a motif, but the three aspects I mentioned are absent with an AI. It is, indeed, difficult to put into words what you mean - I’ll give you that.
deleted by creator
For sure, yes. I’m not denying that AI has its uses. The main thing I was discussing with my friend was that the way AI is used commercially right now is damaging to a lot of industries, that it’s a trend companies use to make their products look better than they are, and that they profit off other people’s work. AI is a tool, yes, but what I was so adamant about was that there should be regulations and policies that make clear where and how it’s OK to use it and where it isn’t.
Then I would steer away from arguments that are more debatable and stick to ones that are more robust, focus on the present and future rather than the past, and avoid anything that can get mired in debate. I’d focus on what the specific problem is (we will have fewer artists due to competition with AI), why it’s a problem (cultural stagnation, lack of new inspiration for new ideas), and why alternative solutions to regulation wouldn’t work (would socializing artistic fields work, since they’d no longer be subject to market forces?).
Saying that AI is a tool like any other artist’s tool also doesn’t refute OP’s point about art theft.
deleted by creator
Whatever you decide to call it, the problem exists.
When you trace or use existing art as reference, you’re using it to learn and not passing it off as your own design. Training an AI model is, in that sense, equivalent - I don’t think the training part is the problem. The problem comes when producing work. A generative model will only produce things that are essentially interpolations of the artworks it has trained on. A human artist interpolates between artworks they have seen from other artists as well as their own lived experiences, and extrapolates by evaluating how some more avant-garde elements tickle their emotions. Herein lies the argument that generative AI in its current state doesn’t produce anything novel and just regurgitates what it has seen.
There’s also the problem of “putting words in someone else’s mouth”. Everyone has a unique art style (to a certain extent), just like how everyone has a unique writing style, or a unique voice. I’ll speak on voice first since more of us can relate to that. Having someone copy your voice to make it say things you did not say is something many will be very uncomfortable with. To an artist, their art style is the same kind of thing; to a writer, their writing style.
The economic side is also a problem. And while I don’t expect generative AI to go away, it can be done in a way that is fair to the people whose work has made it possible and allows them to continue doing what they do. We should be striving towards that.
You should read these two articles from Cory Doctorow. I think they’ll help clear up some things for you.
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
I’ll check these out, appreciate it!
I read the first article, and I recommend you do as well, as it’s the best take I have seen on image generation.
It sounds like you and your friend both have your minds made up already, but reality is more nuanced, and the truth is somewhere in between.
“AI art” isn’t copyright infringement, or “stealing”, but it’s also not art. It’s a neutral technology.
I agree it is being used unethically (and overused) by corporations, but it’s fundamentally a problem with how our society uses and reacts to it. Like so many other new technologies, the true issue is with capitalism, not the tech itself.
Like so many other new technologies, the true issue is with capitalism, not the tech itself.
Probably the best single-line reduction of the whole issue here in this thread. Well said.
However, I do think it’s also cultural within the tech companies. The modern tech culture was born from an attitude that was 100% rooted in “well, the law says we can’t do this, so we’ll do this instead, which is different on a technical and legal level, but achieves the same end result.”
This was heavily evident in early piracy, which went from centralized servers of Napster and Kazaa to the decentralized nature of Bittorrent entirely in response to civil suits for piracy. It was an arms race. Soon enough the copyright holders responded by hiring third parties to hide in torrent swarms to be able to log IPs and hit people “associated” with those IPs with suits for sharing trivial amounts of copyrighted data with the third party. That was responded to with private trackers, and eventually, streaming.
Each step was a technical response to an attempt by society to legally regulate them. Just find a new technical way that’s not regulated yet!
The modern tech companies never lost that ethos of giving technical responses to route around new legal regulation. Which, in itself, is further enabled by capitalism, as you astutely pointed out.
It’s not theft, because the artist still has their original art. It’s copyright infringement, if anything.
Right
Maybe he is just feeding your arguments to AI and you argue with AI instead of him. 😁
But yes. As others have stated, you’re arguing opinions, not facts. And neither of you is really correct. You value different things. For example, he values progress more than authors’ copyrights. And I find his points that you mentioned valid. As are yours.
I do believe, however, that you can agree on any topic with a “smart/sane enough” person who has “enough time/motivation” and “similar enough values” to you, if you are also such a person.
In that case one or the other will change his mind.
Those conditions are rarely met and the most important part is you also have to be open to changing your mind.
For starters never shut the fuck up about books3.
Every AI company used it. They all knew where it came from. It was not hidden that the provenance was piracy. It was well known that the entirety of books3 was pirated via Bibliotik, a private tracker known for disseminating tools for removing DRM from ebooks. (Last I checked, that last part about removing DRM isn’t just a civil penalty, it’s an actual fucking crime in the US)
They can say all they want. The reality is they’re playing a game of “as long as we do a bigger crime than anyone else, it’s totally okay.”
We harass the living shit out of “pirates” who do nothing but share media and don’t make a profit. We still have laws aimed at removing internet access wholesale from people accused of piracy.
But when your piracy makes a cool billion, suddenly its totes okay bro.
They always knew where books3 came from and they didn’t care. They can try to claim otherwise and stop using books3 now, but the reality is they wouldn’t have been able to grow their businesses at all without it.
Copyright is bullshit and broken, but I don’t understand how copyright violation on a mass scale is somehow okay when we put the admins of The Pirate Bay in prison for far fucking less. Like, we literally saw private corporations lobby an entire country and its courts into changing how they worked just to prosecute The Pirate Bay admins. They went after them because they knew they didn’t have the money to fight it, even though so much of the case hinged on the “obscene” amounts of money they were making off of it. Not enough for good legal counsel.
It’s just one more example of “rules for thee, not for me.” Rich scumfucks are asking for it.
They’re happy to put us under the boot of copyright, but the big players? For them it’s fine because they’re making so much money, you see?
We live in a system built on exploitation and this is just the newest flavor.
Why is it so important for you to convince him? Sometimes people won’t agree with you and that’s fine. You can still be friends.
If you simply don’t want to engage in a discussion with him, then that’s fine, you should let him know that you’re not interested in talking about it. You don’t have to justify your choices to him, if you want to use a particular browser then that’s fine and if he spontaneously decides he needs to “talk you out of it” then that’s a dick move. Tell him that you don’t want to debate the subject and it’s no skin off of his nose so he shouldn’t try to engage you in one.
But if you’re asking “how can I convince him that he’s wrong”, well that is engaging in the debate. And if you’re going to engage in a debate you should try to be as open about it as you’d like your debate opponent to be in turn. Have you considered that perhaps he has some valid points and is not taking that position just to be contrarian?
Personally, I find that it’s pretty much impossible to talk someone with a strongly-held position out of that position. The value of Internet debates with people like that is that lots of spectators who don’t have such strongly-held positions may be watching, but when it’s a one-on-one situation it’s likely to be a futile and frustrating effort with no benefit. So I would advise going with the “don’t bother engaging” route. But of course, if you feel strongly that you want to engage, I can’t change your mind on that and won’t try. It’s your time to spend.
The secret to life is…
You rarely convince anyone out of their views. Live and let live.
Yea, I essentially gave up. Tired of arguing and convincing him when there’s no convincing. It’s just so frustrating to not be able to do anything about this
For me it’s not even that I hate the concept of AI. It’s that we’re shoving half baked AI literally everywhere we can without any fuck given to reliability, accuracy and safety, or even sustainability.
- GitHub Copilot: code so bad I won’t even bother reviewing the PR. It doesn’t save time at all unless you suck at coding already.
- Google Gemini/Bing Copilot: completely off the mark more often than not, it’s still faster to do a standard, well written web search.
- AI assistants: they’ve always sucked and they still suck, but hey, at least the voice is realistic, right? It still doesn’t understand what you say to it half the time.
- Recall: nobody ever asked for that
- AI summaries: I’d rather skim through articles, at least I know what I skipped.
- AI translations: better than Google Translate I guess, but more of the same: it gets you by when reading foreign sites, but the quality is still too bad to use it to translate my apps in a professional setting.
They’re all impressive products on a technical level, but they’re basically really expensive alpha quality software that sucks a stupid amount of power for dubious gains.
My other gripe is that most of the time those feed your personal data to Microsoft/OpenAI for processing, because most people don’t have a quad RTX 4090 setup to run any decent model locally at reasonable speeds. It’s using a jackhammer to nail a nail.
I’ll have to disagree with your stance on GitHub Copilot. It’s a tool that’s only useful if you’re already comfortable with coding. If you weren’t, you wouldn’t be able to distinguish when it spits out trash and where it’s actually useful.
kinda reminds me of discussions around the automobile when it was a new, incredibly dangerous and unrealized product.
is it stupidly dangerous? yep. should we ban them? naw, then they wouldnt get better. should i at least make sure i invent a seat belt and utilize it while people make fun of me? absofuckinglutely.
Yo, as far as climate change is concerned, I’m 100% for getting rid of all personal vehicles and going all-in on mass transit.
We had trains before automobiles, so honestly, it feels like we could have skipped them and stuck with trains.
AI is a hot topic and most info in the public space is dead wrong. Unless you’re a developer and getting into the code base, you’re likely wrong. I’ve spent a year dabbling with that code base and have only barely scratched the surface.
Sam Altman is behind a massive misinformation campaign to try and create a monopoly in AI. That is hard to prove, but in the abstract, ALL the pieces fit well into this puzzle.
AI in the public space is a joke. It is all based off of the transformers library in one form or another. Go read the introduction page for the Transformers documentation on hugging face. It clearly states that it is incomplete and its intended use is as a simplified example code only. AI is enormously complex in its real capabilities. Most of the issues are due to the simplifications made to allow the ignorant public to use it.
Models have very limited scope, and the model itself is static. It can’t learn, predict, remember, or abstract ideas with mobility across information spaces.
The fear of AI in the present is based on a Greek pantheon like mythos of the machine gods. That has no basis in reality. At present the only thing to really fear is image recognition AI in drones just because that combo is poised to massively change war technology and cost effectiveness relative to any prior breakthrough in technology. There is very little relationship between this tech and what is in the public space.
Arguing with a belief system is an entirely pointless affair. If the person is more attached to their fantasies than reality, you’re never going to influence them with reality. The easiest tell is if they are spiritual. If they have exceptions to factual reality in this space, they will be gullible and blind to their biases elsewhere as well.
AI in the public space is a joke. It is all based off of the transformers library in one form or another. Go read the introduction page for the Transformers documentation on hugging face. It clearly states that it is incomplete and its intended use is as a simplified example code only. AI is enormously complex in its real capabilities. Most of the issues are due to the simplifications made to allow the ignorant public to use it.
Which page/passage are you referring to? I’m pretty sure you’re misreading or misinterpreting something because Huggingface has a good chunk of the state of the art models implemented. They’re complex in capabilities, but the implementations are incredibly simple, and that’s part of why it’s taken off the way it has.
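For what it’s worth, here’s roughly what using one of those models through the transformers pipeline API looks like - a minimal sketch, assuming `pip install transformers torch`, with “gpt2” and the prompt as placeholders rather than anything specific from the thread:

```python
# Minimal sketch of the Hugging Face transformers pipeline API.
# "gpt2" is just a small placeholder model; any text-generation model on the Hub works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Generate a short continuation of a placeholder prompt.
result = generator("The argument about AI art boils down to", max_new_tokens=30)
print(result[0]["generated_text"])
```

That simplicity of the high-level wrapper is the point being made here: the API is deliberately easy to use, which is a big part of why it took off, even though the models behind it are full state-of-the-art implementations.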
I’m an atheist, so to me…
Believing that humans are so special that we’ll build a God-like intelligence that will surpass us…
That’s almost stupider than believing in a God with zero evidence for its existence.
We’ve already built machines that can surpass humans in many specialized domains. Why is it so hard to believe that we can put all of that together and have a machine surpassing us in all domains?
I mean idc about ai either, what other people choose to like has no effect on me whatsoever
Start by becoming a subject matter expert, otherwise you two just have different opinions
Offer to bet him money about the outcome of some real world event that is contingent on the way he is claiming that things are
IDK how you can apply that to AI; that’s not the best one. But you can bet that there are no workers dying of heatstroke in Amazon-supplier warehouses. You can bet him that Trump has agreed to honor the results of the election. Etc etc.
It’s very easy to just make statements at each other. If you offer to back up your statements, then he can either refuse (in which case it’ll be harder for him to say he’s definitely right and you’re definitely wrong), or agree and then one or the other of you will learn something.
It’s up to you. You can also just let it go. But if you want to prove him wrong it is easier to do with questions and real-world actions than it is by coming up with the perfect statements. As you’ve discovered, he’s not obligated to react to statements any way other than how he’s decided to react to them.
I might give this a shot.
Yea, I think it’s equally important to know when to move on from things. No point in trying to win him over. It’s just frustrating to realise that there’s no convincing your conversation partner
Yeah. I wouldn’t invest yourself too much in it. But yes, depending on things it might be worthwhile to just offer to let him put his money where his mouth is, if he’s super sure about things.
“Bet” is a poor strategy. Focus on a win-win. For instance, “I see your point about x, and wonder about y.”
You’re probably out of luck trying to convince this person of your opinion. However if it comes up again and you want this person to walk away, I’d recommend mentioning the horrible working conditions that content moderators have to go through. Contrarians don’t like unassailable facts.
Could you elaborate what working conditions you mean specifically? Not sure I know what you mean
Content moderators are superexploited. These employees primarily work from African countries like Kenya, so there’s a whole second layer of neo-imperialism that I won’t get into right now. They’re given poverty wages, sometimes as low as $1.30/hr. Because they’re content moderators, they have to look at images and read descriptions of traumatic events without respect for mental health. OpenAI, Amazon, and Facebook have engaged in aggressive union busting tactics for these workers as well, though workers were still successful in organizing one of the largest labor unions in Africa.