Apple announced today that it will begin scanning every photo, iMessage, and search you perform on your iPhone, iPad, and Mac in order to verify that Apple approves of the content you are viewing. Under the Apple “Child Safety” program, this new mandatory software will:
- Scan every photo you take
- Allow Apple engineers to review every photo you take
- Scan all media on your device
- Require you to store a database on your device that Apple will update in order to check your content
- Allow Apple to review your searches in real time and decide whether to intervene and block the search
Installation of this software, and the access it grants, is mandatory; users are not permitted to opt out.
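To make the mechanics concrete, here is a minimal, hypothetical sketch of what client-side matching against an on-device database could look like. Everything in it is an assumption for illustration: Apple’s actual system reportedly uses a perceptual hash (“NeuralHash”) and a blinded database checked with private set intersection, not plain SHA-256 against a readable set, and the digest below is fake.

```python
import hashlib
from pathlib import Path

# Fake placeholder digest; the real database would hold vendor-supplied
# hashes that Apple syncs down to the device.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def photo_matches_database(photo_path: Path) -> bool:
    """Hash a photo and check it against the on-device database."""
    digest = hashlib.sha256(photo_path.read_bytes()).hexdigest()
    return digest in KNOWN_CSAM_HASHES

def scan_upload_queue(photos: list[Path]) -> list[Path]:
    """Scan only photos queued for iCloud upload (Apple's stated scope)."""
    return [p for p in photos if photo_matches_database(p)]
```

Note that `scan_upload_queue` only looks at the upload queue; whether that scope ever widens is precisely what the debate below is about.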
While all of this is nominally done to eliminate child pornography, the EFF commented on how inevitably this will be weaponized to subvert privacy:
“Child exploitation is a serious problem, and Apple isn’t the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.”
I’m sorry, but there is a considerable amount of incorrect information in this post.
* The product does not necessarily scan every photo. It scans only photos uploaded to iCloud.
* It does NOT allow Apple engineers to review every photo. If you are flagged (after hitting an unpublished threshold), low-quality copies of your images are sent to Apple for review. Outside those flagged images, nothing else is viewable by Apple engineers (see the sketch after this list).
* It does NOT scan all media. It specifically scans photos uploaded to iCloud. Not photos stored locally, not downloaded movies, not music, etc.
* It does not review your searches. I am assuming this is a reference to phones that parents have identified as used by children under the age of 13. In that situation some information could be forwarded to a parent, but if you are an adult, no searches are reviewed in real time…
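As a rough illustration of that threshold behavior, here is a hedged sketch. Apple’s real design reportedly uses threshold secret sharing, so its servers cannot decrypt any flagged “safety voucher” until the match count is reached; a plain counter stands in for that cryptography here, and the threshold value is an assumption, since the real one is unpublished.

```python
from dataclasses import dataclass, field

# Assumed value for illustration only; Apple has not published the threshold.
MATCH_THRESHOLD = 30

@dataclass
class Account:
    match_count: int = 0
    pending_derivatives: list[bytes] = field(default_factory=list)  # low-quality copies

def record_match(account: Account, low_quality_copy: bytes) -> list[bytes]:
    """Queue a flagged derivative; nothing is reviewable below the threshold."""
    account.match_count += 1
    account.pending_derivatives.append(low_quality_copy)
    if account.match_count >= MATCH_THRESHOLD:
        # Only past the threshold would human review of the copies occur.
        return account.pending_derivatives
    return []  # below threshold: nothing is viewable by Apple engineers
```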
There are plenty of good articles on what this new tool can do and what its limitations are. Please go find one…
What you’ve described is what Apple *says* they will do, Scout’s honor, really we promise, etc. What this article describes is what Apple will be *capable* of doing. The only difference is that you are willing to believe what Apple (unaudited, with zero external oversight, etc.) is saying they will do and trust they will go no further.
The client-side technology will indeed allow Apple to scan all photos, media, etc. Also, the linked article states that “Siri and Search are also being updated to intervene”, and there is no limitation to people under age 13.
“Apple announced today that it will begin scanning every photo, iMessage, and search you perform on your iPhone, iPad, and Mac”
I don’t see a CAN or ARE CAPABLE in there at all. I do see “will begin scanning every”. That’s fear mongering.
Could they? Maybe. But they are not. And Apple says the process will be auditable. You could have presented both sides of the equation with true concerns. You didn’t.
@BradEK
Reading your argument and thinking aloud I have to side with @raindog…
let’s describe it this way…
“EVERY VPS installed will have an authorized_keys entry installed to enable the provider to access it, scan for illegal images and software, and conduct other maintenance deemed necessary. We promise not to abuse it if you are a good admin, but you may NOT opt out once you have paid £1000 for your device…”
I think a little fear mongering is in order…
But then, I use Lineage on a Chinese-made droid… who knows what spyware is in the chip… I am suspicious of someone selling me a fruit with a bite taken out of it… LoL
@Jhay,
I understand your point, but Apple has said these processes will be auditable. In addition, they did specifically state they would not be doing the very things stated in this article.
Now, your points are valid that they COULD do them. An article that stated “this is what they are doing, and this is what MIGHT POSSIBLY be enabled at some point with this technology” would be valid.
But all the hypotheticals above were stated as fact. My point is only that what is being done now and what could possibly be done in the future should be clearly defined.
Stating only that these things are being done now, with no caveats presented, really makes this article read as complete fear mongering…
@BradEK
Point taken…
Agreed that upon research you are technically correct.
However, like SoftieMicro, they are forcing a mandatory update onto customer-owned equipment… AND laying the groundwork for their surveillance.
So, correct, they are NOT “doing it”… but they are jamming the backdoor wide open… and we know there is little point in closing the barn door after the horses are out… (i.e. better to fear monger now ;-)
don’t get me wrong, I am NOT in favour of, or even tolerant of, any abuse (including child pprn). But the fruity guys are way off track in trying to be a law enforcement company (<- note oxymoron!)…
time to jailbreak those coveted devices!
PEACE!
Another thought: Apple is FAR from alone in this. Dropbox, Google, Microsoft, and Facebook all scan items in their cloud products for CSAM.
https://www.macworld.com/article/352875/apple-ios-15-csam-scanning-icloud-photos-messages-siri-search-faq.html
“Yes, most cloud services, including Dropbox, Google, and Microsoft, as well as Facebook also have systems in place to detect CSAM images. These all operate by decrypting your images in the cloud to scan them.”
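For contrast, here is a hypothetical sketch of the server-side approach that quote describes: the provider, not the user, holds the decryption key, so it can decrypt each uploaded image in the cloud and hash it there. The names and the toy XOR “cipher” are stand-ins for illustration, not any provider’s real pipeline.

```python
import hashlib

PROVIDER_KEY = b"provider-held-key"  # the provider, not the user, holds this

def decrypt(blob: bytes, key: bytes = PROVIDER_KEY) -> bytes:
    """Toy XOR 'decryption' standing in for provider-side decryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def cloud_side_scan(encrypted_blob: bytes, known_hashes: set[str]) -> bool:
    """Decrypt in the cloud, then hash-match: the plaintext image is
    visible server-side, unlike in Apple's on-device design."""
    image = decrypt(encrypted_blob)
    return hashlib.sha256(image).hexdigest() in known_hashes
```

The tradeoff the thread is circling is exactly this: cloud-side scanning requires the provider to see your plaintext, while Apple’s design moves the matching onto hardware you own.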
@Jhay,
Agree they are taking a different approach to this than other companies, but I am not sure yet that it is necessarily so bad. Remember that Google, Facebook, Dropbox, and Microsoft all perform scanning for this type of content in their cloud environments. Facebook reported over 22 MILLION infractions last year, whereas Apple reported something like 250 (just 250).
So clearly, Apple was not really performing any proactive scanning for this content in the past. Is it particularly better to have this performed brute force on the cloud side, or locally on the device itself?