That’s a reasonable thing to dislike about it.
I dislike that I can’t reply to another message with a sticker.
I also dislike that, despite having admin access, I can’t delete abusive messages left in groups for anyone but myself. That makes it unsuitable for building communities.
How much can you control the conversation if the entity you are discussing only wants their name published?
It’s not about what they want published. It’s about what they don’t want published.
Sure, there will be a few GDPR letters and maybe an inquiry by some regulatory body. Satisfyingly annoying to them, but compared to the cost of an advertising campaign, would this not be just a drop in the bucket?
Advertising campaigns generally don’t include OSINT on the people behind them and evidence of their crimes. How does what I published help them increase their revenue or reduce their costs? Everything is ruled by incentives.
That sort of comment might have been true if I had responded with something shallow and emotional. Something like “how dare these outrageous motherfuckers claim to ‘roast’ my hand-crafted artisanal open source beauty with their AI slop!!”.
I didn’t do that. I sifted through the public information, assembled a profile of the people behind it, discarded the irrelevant details, and used it to describe their conduct as illegal in the country their business is incorporated in, with enough receipts that anyone else who finds their AI grift can leverage them to cause immense amounts of legal and compliance pain. And then I released it all on my furry blog, with the keywords that other open source developers would likely try in a search engine if confronted with the same outrageous behavior.
Rather than let my outrage make me a useful idiot, I’ve surveyed the landscape and made sure that I’m the one controlling the conversation. I’m also keeping the evidence preserved, and not giving them any SEO backlink juice. This all dovetails with how bad their AI is at what it even claimed to be doing.
If any of this plays into their hands, then they’re playing chess on a dimension that the void cannot comprehend, let alone my mortal ass. But I’m willing to wager that the amount of legal anguish my blog post will create for their grift will significantly outweigh any benefit they get from the possible name recognition my blog creates.
Yeah, business children is an apt description.
I honestly don’t see the reason to hope for bluesky to win…
It was explained in detail in the other post, which was linked in the very section you’re referencing.
Yeah, I’ve got a proposal that’s being worked on: https://github.com/soatok/mastodon-e2ee-specification
If they actually read the whole thing, including the addendum, there should no longer be any confusion.
As a rule, I never change titles after pressing Publish.
Anyone incapable of reading past the title is not worth listening to.
The framing is as follows:
Matrix, OMEMO, whatever.
If it doesn’t have all these properties, it’s not a Signal competitor. It’s disqualified, and everyone should shut the fuck up about it when I’m talking about Signal.
That’s the entire point of this post. That’s the entire framing of this post.
If that’s not personally useful, move on to other things.
This is a very technology-focused view. In any user system, the users themselves have to be a consideration too.
As I wrote here: https://furry.engineer/@soatok/112883040405408545
My whole thing is applied cryptography! When I’m discussing what the bar is to qualify as a real competitor to a private messaging app renowned for its security, I’m ONLY TALKING ABOUT CRYPTOGRAPHIC SECURITY.
This isn’t a broader discussion. This isn’t about product or UX decisions, or the Network Effect.
Those are valid discussions to have, but NOT in reply to this specific post, which was very narrowly scoped to outlining the minimum technical requirements other products need to even deserve a seat at the table.
I choose not to make perfect the enemy of good.
If you make the cost of bypassing Nightshade higher than the cost of convincing people to opt in to their data being used in LLM training, then the outcome is obvious. “If you show me the incentives, I’ll show you the outcome.”
No? That’s not what Nightshade is. Nightshade isn’t DRM.
Yep. That’s why the first two things I say Automattic MUST do to make things right are about proper consent controls for Automattic’s use of data and its sale to AI vendors, while the third is a proposed proactive defense against scrapers.
That’s a personal matter that I don’t really feel like commenting on, but I’m not naive about politics and how it affects people.
I greatly appreciate that this is an opt-in change, and that users have to choose to enable it.
https://hard-drive.net/hd/technology/elon-musk-renames-twitter-after-relationship-with-his-wife
I like their take on this.
TL;DR from oss-security: