Instead of scanning iCloud for illegal content, Apple’s tech will locally flag inappropriate images for kids. And adults are getting an opt-in nudes filter too.
I think your threat model for this is wrong.
First of all, understand how it works: it's a local feature that uses image recognition to identify nudity. The idea is, if someone sends you a dick pic (or worse, CSAM), you don't have to view it to know what it is. That's been an option on the accounts of minors for some time now, and it is legitimately a useful feature.
Now they're adding it as an option to adult accounts and letting third-party developers add it to their apps.
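For context, the third-party hook is Apple's SensitiveContentAnalysis framework (iOS 17+). A minimal sketch of how an app might use it, assuming the app has been granted the `com.apple.developer.sensitivecontentanalysis.client` entitlement:

```swift
import Foundation
import SensitiveContentAnalysis

// Runs entirely on-device. The analyzer is only active if the user
// has opted in (Sensitive Content Warning in Settings).
func shouldBlur(imageAt url: URL) async throws -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // If the user hasn't enabled the feature, skip analysis entirely.
    guard analyzer.analysisPolicy != .disabled else { return false }

    // The classification result stays on the device; the app just
    // gets back a flag it can use to blur the image before display.
    let analysis = try await analyzer.analyzeImage(at: url)
    return analysis.isSensitive
}
```

Note the `analysisPolicy` check: the whole thing is gated on a user setting, which is exactly the "as long as it is an option" property discussed below.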
The threat that they'll suddenly send the scanning results to corporate without telling anyone seems unlikely. Doing so would be a huge liability and would have no real benefit for them.
But the threat is this: with this technology available, there will be pressure to make it not optional ("Why does Apple let you disable the child porn filter — wtf?"). If they bend to that pressure, then why not introduce filters for other illegal content? Why not filter comments criticizing the CCP in China, or content that infringes on copyright?
Having a "dick pic filter" is a useful technology, and I know some people who would love to have it. That doesn't mean the technology couldn't be misused for nefarious purposes.
I am aware that it's local; I just assumed it would also call home.
My threat model here is based on cases like this: https://www.theverge.com/2022/8/21/23315513/google-photos-csam-scanning-account-deletion-investigation
And yes, I did see it as a privacy issue, not a censorship one. Inevitably, if there's pressure to expand it to other content, it could become a problem comparable to the one Europe was, or is, facing with "Article 13".
Generally, blocking specific types of content is a valid option to have, as long as it is an option and the user knows it is one. I just distrust it coming from the likes of Google or Apple.
Google explicitly says they scan images and report them to law enforcement. Apple explicitly says they do not phone home with scan results and so far there have been no such investigations.
I get not trusting big tech companies, I do, but I think you’re not modeling their behavior. Usually when a huge publicly traded company does something dodgy, they don’t explicitly say they don’t do it; they use weasel words.
I would honestly find it very difficult to believe that there wasn't going to be some telemetry, data, etc. sent back to the mothership. I know Apple markets itself on "privacy", but who's really validating those claims?
Granted… I'm also very tin-foil-hatty about my data and retain it all locally with offsite backups. I tore down my Google Drive / cloud data about two years ago.
There's always some telemetry. But they do a fair amount to make that telemetry genuinely anonymous.