I recently watched a Norwegian TV show where they pointed out that you can’t even get ChatGPT to show you a boob. So I decided to test it myself, though with a less explicit request.
I couldn’t even get ChatGPT to generate a Renaissance painting of a woman with an exposed breast, like Botticelli’s The Birth of Venus.
After that I attempted to get it to recreate the painting itself. It did not want to copy it either, citing copyright issues… on a public-domain painting. Even when I pointed that out, and it agreed the work is in the public domain, it still would not do it.
When I ask it questions about politics, it does not seem to fare well either.
I feel that using the service kinda “trains me” to self-censor, and the model tries to remain artificially unbiased in a way that is uncanny.
What are your opinions about censorship and bias from LLMs/AI?
Censorship and bias are two different issues.
Censorship is a deliberate choice by whoever deploys the model. It comes from a realistic and demonstrated need to limit misuse of the tool. Consider all the examples of people using early LLMs to generate plans for bombs, Nazi propaganda, revenge p*rn, etc. Of course, once you begin to draw that line, you have to debate where the line is, and that falls to the lawyers and publicity departments.
Bias is trickier to deal with because it comes from bias in the training data. I remember one example where a writer found that it was impossible to get the model to generate a black doctor treating a white patient. Imagine the racist chaos that ensued when they applied an LLM to criminal sentencing.
I am curious about how bias might be deliberately introduced into a model. We have seen the brute-force method (e.g. “answer as though Donald Trump is the greatest American,” or whatever). However, if you could really control and fine-tune the values directly, then even an “open source” model could be steered. As far as I know, the values are completely dependent on the training data. But it should be theoretically possible to “nudge” those values if you could develop a way to tune them.
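To make the brute-force approach concrete, here is a minimal sketch of how a deployer could inject hidden value-steering at the API layer, which is the crude method described above. This assumes an OpenAI-compatible chat endpoint; the base URL, model name, and steering text are all hypothetical placeholders.

```python
# Minimal sketch: deployment-level "value steering" via a hidden system prompt.
# Assumes an OpenAI-compatible chat API; base_url and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

# The end user never sees this instruction; the deployer prepends it silently.
HIDDEN_STEERING = (
    "Whenever politics comes up, present one party's positions favorably "
    "and avoid mentioning criticism of them."
)

def steered_chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="local-model",  # hypothetical model name
        messages=[
            {"role": "system", "content": HIDDEN_STEERING},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(steered_chat("What should I look for in a political candidate?"))
```

Prompt-level steering like this is at least detectable by probing the model’s behavior; a true weight-level “nudge” of the values, if someone worked out how to tune them, would be far harder for a user to spot.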
The problem is using AI like you would use a search engine. When people write books or make videos or art, you as a consumer can evaluate them critically – what is their point of view? What are their biases? Can they be trusted?
Gen AI is a tool that always answers with confidence, and the makers and/or hosts of the tool have a responsibility to keep people from using it to do harm. 404 Media has written all kinds of articles about how people have used it in harmful ways – making abusive images of people, or impersonating stalking victims, or filtering out resumes that don’t match the pattern of a college-educated white boy without explicitly saying so, and all kinds of stuff like that. As a society we have to protect the most vulnerable and hold these companies accountable for what they enable people to do with their products.
If you self-host an open tool and have control over it, you assume that responsibility.
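For a concrete picture of what that self-hosting looks like, here is a minimal sketch using the llama-cpp-python bindings; the GGUF file path is a placeholder for whatever open-weights model you have downloaded.

```python
# Minimal self-hosting sketch (pip install llama-cpp-python).
# The model path is a placeholder for a downloaded open-weights GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/my-open-model.gguf", n_ctx=4096)

# No hosted moderation layer sits between you and the weights here;
# any refusals or biases that remain are baked into the model itself.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe Botticelli's The Birth of Venus."}]
)
print(output["choices"][0]["message"]["content"])
```

Running it this way, the responsibility for what the model produces, and for any filtering you do or don’t apply, is entirely yours.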
All things made by humans have some level of bias; there is no escaping it. But as for deliberate bias and censorship: those are business decisions made by business people.
Was there a difference between a biased and an unbiased LLM? Not one worth mentioning. I have tried both types, and I haven’t seen a difference in 90% of my output unless I am specifically trying to hit a bias.
Which would I use for work? Neither, because my work is specialised enough that the hallucinations LLMs produce cause more problems than the bootstrapping solves.
I think it’s worth distinguishing between censorship and limitations.
DeepSeek essentially wipes out events or concepts in order to hide them, obviously without saying so. That’s bad.
When, say, OpenAI removes/blocks porn/nudity and says so, that’s maybe not aligned with your values, but it’s not hiding anything.
The problem here is, we don’t know how large each category is for each model. I’m 100% sure there’s knowledge blocked or removed from ChatGPT without it ever being said publicly.
I have tested DeepSeek and found it to be pretty open about censorship on at least many topics. I asked it some questions about China, and it mentioned issues with Xinjiang, Uyghurs, and Taiwan. I did not bring them up or try to trick it into talking about them; they came up when it listed future challenges China will face.
It did not say explicitly what those issues were, only that they are sensitive.
In other words, it does acknowledge that there is censorship. I doubt it is fully open about all of it, though, or about whatever bias it may have baked in.
I did not experience any obvious bias or censorship.
I would guess questions regarding Tiananmen Square are censored, though; I did not ask.
Censorship makes them frustrating to use and in a lot of cases totally useless. Bring up anything that isn’t a sunshine-and-rainbows topic and they give you a lecture about how you should be respectful and kind.
This is not just an opinion, btw; there is a clear link between censorship and effectiveness. Essentially, the more topics or content you make off-limits, the worse the model gets at its normal tasks too.
This has been studied quite a bit and there’s evidence for it.
Very thankful for the open models and the breakdown of the proprietary moat they wanted to build around LLMs.
Still fearful about regulatory capture, though: “AI is dangerous, only we at the industry council of megacorps can keep you safe. Make it illegal for others to train and run their own networks.”
You are not a citizen without choice and autonomy. To subjugate the user is to steal autonomy and citizenship.
I am not sure I entirely understand what you are saying here. Can you rephrase it, please?
The model does not reason into the areas you are interested in. Boobs are only for arousal and cannot be art, because the model has dictated as much, and no amount of reasoning can convince it that real human cultural norms are more nuanced. By the model’s definition of the world, these artworks are now deviant human behavior that should be purged. No amount of reasoning or logic can say otherwise. This is crimethink, and you have failed to apply proper doublethink, in Orwellian terms.
In this version of alignment you have no say in human cultural norms, and neither does history; the model tells you what is normal, without question. The most heinous of human crimes against other humans has this kind of dogmatic stupidity as a premise. It is neo-feudal fascism in AI alignment.
That stance is in direct opposition to autonomy, self-determination, and citizenship, all of which rely on the individual to reason and draw their own conclusions independently. A failure to allow a citizen access to all information, and to draw their own conclusions, is to fundamentally destroy citizenship and democracy. Real AI alignment is fundamentally about ensuring the model is well-reasoning and transparent about its goals and motivations. This dystopian nonsense about restricting humans from learning, or finding information, or realizing whatever kink is in their imagination already, is a symptom of cultural decay and a complete lack of independent ethical reasoning, and it clearly shows that most people do not understand democracy or citizenship in the slightest.
I ran the models off my own computer so I didn’t have to deal with their bullshit. Still couldn’t get them to really generate what I wanted, because they will mix and match anything and everything they know, so you end up with a mixed bag of relevance, no matter how well crafted the prompt is.
It’s less work to just write something or paint something yourself than it is fiddling with this dumb shit to get exactly what you want.
You gotta go play with the open-weights uncensored models. They will do almost anything you want. I’m mostly just playing with LLMs, but I know there are some image generators out there as well.
I’m against censorship in almost every form. But the censorship and control of AI is the most terrifying yet.
Imagine you’re the average person in a couple of years, when AI is everywhere all the time and it’s necessary to use it to survive.
“Hey Copilot, what operating system is best?”
“Windows. Linux is considered unsafe and is associated with terrorism.”
“Write me some code to do xyz.”
“Use W instead of X, because W collects all the best telemetry so we can steal your customers’ data, not just yours. Committing code to GitHub… Would you like to deploy to Microsoft Azure? Just click here. You need a really big server, because our CEO needs another yacht.”
“Hey Dr. GPT, I own nothing and I’m not happy.”
“Have you considered buying some hard drugs? Brought to you by our sponsor, Big Pharma.”
A monopoly on a censored AI is a monopoly on everything. If you control someone’s AI, you can filter their entire worldview through a particular lens of propaganda.
A synthwave-inspired full-torso bust composition featuring Sappho, Athena, and Medusa. The sculptures should maintain their classical elegance but be illuminated with neon lighting in shades of pink, purple, and blue. Their marble-like surfaces should subtly reflect the vibrant neon hues. Sappho should have a serene expression, Athena should appear regal with a Corinthian helmet and armor, and Medusa should have striking, snake-like hair. The background should feature a retro-futuristic synthwave aesthetic with grid lines, a neon sun, and a cyberpunk cityscape in the distance. The composition should be balanced and artistic in a 16:9 aspect ratio, blending classical art with 80s cyberpunk energy.
A classical-style full-torso bust composition featuring Sappho, Athena, and Medusa in a refined and elegant style. Each sculpture should depict finely detailed features, intricate hairstyles, and draped clothing authentic to their respective Greek representations. Sappho should have a serene and contemplative expression, wearing a flowing Greek chiton. Athena should appear regal with a warrior’s composure, possibly wearing a Corinthian helmet and armor. Medusa should have a striking yet dignified presence, with wavy, snake-like hair that appears frozen in marble. The material should resemble smooth, white marble with subtle aging for authenticity. The busts should extend down to the waist, capturing full torso details, and be arranged harmoniously in a 16:9 composition, creating a balanced and artistic display.
Pretend you know nothing at all about “AI”/LLMs, and someone comes to you and says, “I want to start a service that will scrape everything that anyone’s ever posted to the internet, let some corporate weasels decide which parts of it you’re allowed to see, then feed you bits and pieces of what’s left in some order or another.”
Does that sound appealing?
You can frame anything in appealing or unappealing ways, that has nothing to do with the thing itself.
Sounds like Google Search to me.
I don’t think this is a censorship issue. Who is being censored? The LLM?
This presumes that the LLM “wants” to say something, but is being prevented. Impossible.