Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a utf8 notepad before?
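If the watermark really is just invisible Unicode, stripping it is trivial. A minimal sketch of what "pasting into a UTF-8 notepad" amounts to, assuming the watermark uses common zero-width characters (the exact character set is my guess, not from the article):

```python
# Hypothetical: zero-width characters that could carry a hidden watermark.
ZERO_WIDTH = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
}

def strip_hidden(text: str) -> str:
    """Remove zero-width characters, leaving only visible text."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

print(strip_hidden("wa\u200bter\u200dmark"))  # prints "watermark"
```

Any copy-paste path that normalizes or drops these characters defeats the scheme just as easily.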
Am I the only one who rewrites most of ChatGPT’s output into my own words because its “voice” is garbage anyway? I ask it to write me a cover letter and that gives me a rough outline and some points to make, but I have to do massive editing to avoid redundancy, awkward phrasing, outright lies, etc.
I can’t imagine turning in raw ChatGPT output. I had one of my developers use Bing AI to write code and submitted that shit raw and it was immediately obvious because some relatively simple code has really weird artifacts like overwriting a value that had no reason to even be touched.
Idk it looks good to me. Straight to the main branch you go.
deleted by creator
That’s valid. And I’d be surprised if that could be watermarked.
For me it sounds too much like a marketing person or something I’d see in an ad or on a website, so I “dumb it down” a bit to make it not sound too corporate. Sometimes telling ChatGPT to do so fixes this, though.
Lol. AI gonna take over the developers job. Like that’s even close to happening.
A few years ago the output of GPT was complete gibberish, and a few years before that even producing such gibberish would’ve been impressive.
It doesn’t take anyone’s job until it does.
That’s not really true. Older GPTs were already really good. Did you ever see SubredditSimulator? I’m pretty sure that first came around like 10 years ago.
The first time I saw text written by GPT it all seemed alright at first glance but once you started to actually read it was immediately obvious it had no idea what it was talking about. It was grammatically correct nonsense.
LLMs aren’t going to take coding jobs; there are purpose-built AIs being trained for that. They write code that works but does not make sense to human eyes. It’s fucking terrifying, but EVERYONE just keeps focusing on the LLMs.
There are at least 2 more dangerous model types being used right now to influence elections and manipulate online spaces and ALL everyone cares about is their fucking parrot bots…
Please elaborate for the uneducated
https://www.bbc.com/news/business-54348456
Thanks, great read. Appreciate it. That was one example but you mentioned two - are you thinking of some of the broader disinformation applications in addition to the data gathering mentioned?
Look, I don’t want to waste your time, so let me tell you this is a subject I have been concerned about, researched, coded for, and posted about — mass manipulation via AI — since the 90s.
You can be as pedantic and nit-picky as you want; it really doesn’t matter to me. AI is the second greatest existential threat we face as a species. If you haven’t already been convinced at least to some degree of its danger, nothing I say will change your mind anyway.
The most dangerous AI manifestation right now is in sentiment identification and control; the second is autonomous armed robots.
Thanks my dude. I was just asking you an honest question. Appreciate the information
Get bent in extradimensional vectors.
Yes, but I use chatgpt to do the rewrites too