Apple intends to install software, initially on American iPhones, to scan for child abuse imagery, raising alarm among security researchers who warn that it will open the door to surveillance of millions of people’s personal devices.
The automated system would proactively alert a team of human reviewers if it believes illegal imagery has been detected; the reviewers would then contact law enforcement if the material can be verified. The scheme will initially roll out only in the US.
According to
people briefed on the plans, every photo uploaded to iCloud in the US will be given a safety voucher saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and,
if apparently illegal, passed on to the relevant authorities. The scheme seems to be a nasty compromise with governments to allow Apple to offer encrypted communication whilst allowing state security to see what some people may be hiding.
Alec Muffett, a security researcher and privacy campaigner who formerly worked at Facebook and Deliveroo, said Apple's move was "tectonic" and "a huge and regressive step for individual privacy". "Apple are walking back privacy to enable 1984," he said. Ross Anderson, professor of security engineering at the University of Cambridge, said:
"It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of . . . our phones and laptops."
Although the system is currently trained to spot child sexual abuse imagery, it could be adapted to scan for any other targeted imagery and text, for instance terror beheadings or anti-government signs at protests, researchers say. Apple's precedent could also increase pressure on other tech companies to use similar techniques.
Given that the system is based on mapping images to a hash code and then comparing that hash code with those of known child porn images, there is surely a chance of a false positive when an innocent image just happens to map to the same hash code as an illegal image. That could have devastating consequences, with police banging on doors at dawn accompanied by the 'there's no smoke without fire' presumption of guilt that surrounds the scourge of child porn. An unlucky hash may then lead to a trashed life.
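To make the risk concrete, here is a minimal sketch, in Python, of how perceptual-hash matching against a blocklist works in general. The 8x8 average-hash and the distance threshold below are illustrative assumptions, not Apple's NeuralHash or its real matching rule; the point is simply that 'close enough' matching necessarily admits some chance of an unrelated image landing within the threshold.

    # Toy perceptual hash: not Apple's NeuralHash, just an illustration of
    # why hash-based matching can produce false positives.
    def average_hash(pixels):
        """pixels: 8x8 grid of greyscale values (0-255). Returns a 64-bit int."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches_blocklist(image_hash, blocklist, max_distance=4):
        # A small distance tolerance copes with re-compression and resizing,
        # but it also means an unrelated image can occasionally fall within
        # the threshold of a blocklisted hash.
        return any(hamming(image_hash, h) <= max_distance for h in blocklist)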
Apple's official blog post inevitably frames the new snooping capability as if it were targeted only at child porn, but it is clear that the capability can be extended way beyond this narrow definition. The blog post states:
Child Sexual Abuse Material (CSAM) detection
To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images
stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies
across the United States.
Apple's method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes
provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users' devices.
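As a rough illustration of what an 'unreadable set of hashes' might involve, here is a minimal sketch in which the server blinds each known hash with a secret key before shipping the set to devices, so a device holds only opaque values it cannot reverse or inspect. This is an assumption about the general technique, not Apple's published construction; the blind() helper, the key handling and the placeholder hashes are invented for illustration, and the private set intersection step that actually performs the matching is not shown here.

    import hmac, hashlib, secrets

    SERVER_BLINDING_KEY = secrets.token_bytes(32)   # held by the server only

    def blind(image_hash: bytes) -> bytes:
        # Keyed one-way transform: devices receive only blinded values and
        # cannot recover the underlying hash list from them.
        return hmac.new(SERVER_BLINDING_KEY, image_hash, hashlib.sha256).digest()

    known_csam_hashes = [bytes([1] * 8), bytes([2] * 8)]    # placeholder values
    on_device_blocklist = {blind(h) for h in known_csam_hashes}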
Before an image is stored in iCloud Photos, an on-device
matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device
creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
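The safety voucher described above can be pictured as a small bundle: encrypted match data plus one share of a per-account secret, uploaded alongside each photo. The sketch below is an illustrative assumption only; the field names are invented, Fernet stands in for whatever encryption Apple actually uses, and the private set intersection layer that hides the match result from both the device and Apple is elided.

    from dataclasses import dataclass
    from cryptography.fernet import Fernet   # third-party: pip install cryptography

    @dataclass
    class SafetyVoucher:
        image_id: str
        encrypted_payload: bytes   # match result plus visual derivative, encrypted
        secret_share: tuple        # one share of the per-account threshold secret

    account_key = Fernet.generate_key()   # per-account key; in the scheme described,
                                          # Apple can only use it once the threshold is crossed

    def make_voucher(image_id: str, match_metadata: bytes, share: tuple) -> SafetyVoucher:
        # One voucher is created per photo and uploaded with it to iCloud Photos.
        return SafetyVoucher(image_id, Fernet(account_key).encrypt(match_metadata), share)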
Using another technology called threshold
secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and
ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
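Threshold secret sharing itself is a standard technique, with Shamir's scheme as the textbook example. The sketch below shows the general idea under the assumption that each matching photo's voucher carries one share of a per-account secret; Apple has not published its exact construction, so treat this as a sketch of the concept rather than the real protocol.

    import secrets

    P = 2**127 - 1   # prime field large enough for a toy secret

    def make_shares(secret, threshold, count):
        # Random polynomial of degree threshold-1 with the secret as its
        # constant term; each share is one point on the polynomial.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, count + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 over the prime field.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = make_shares(secret=123456789, threshold=5, count=12)
    assert recover(shares[:5]) == 123456789   # enough shares: recoverable
    # With fewer than 5 shares, interpolation yields an unrelated value.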
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers
associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user's account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to
have their account reinstated.
This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing
significant privacy benefits over existing techniques since Apple only learns about users' photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
Expanding guidance in Siri and Search
Apple is also expanding guidance in Siri and Search by providing additional resources to help children and parents stay safe online and get help with unsafe
situations. For example, users who ask Siri how they can report CSAM or child exploitation will be pointed to resources for where and how to file a report.
Siri and Search are also being updated to intervene when users perform
searches for queries related to CSAM. These interventions will explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.
These updates to Siri and
Search are coming later this year in an update to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*
Update: Apple's photo scanning and snooping 'misunderstood'
13th August 2021. See
article from cnet.com
Apple plans to scan some photos on iPhones, iPads and Mac computers for images depicting child abuse. The move has upset privacy advocates and security researchers, who worry that the company's newest technology could be twisted into a tool for
surveillance and political censorship. Apple says those concerns are misplaced and based on a misunderstanding of the technology it's developed. In an interview published Friday by The Wall Street Journal, Apple's software head, Craig Federighi,
attributed much of people's concerns to the company's poorly handled announcements of its plans. Apple won't be scanning all photos on a phone, for example, only those connected to its iCloud Photo Library syncing system.
"It's really clear a lot of messages got jumbled pretty badly in terms of how things were understood," Federighi said in his interview. "We wish that this would've come out a little more clearly for everyone because we feel very positive and strongly about what we're doing."
Update: Apple offers slight improvements
14th August 2021. See
article from theverge.com
The idea that Apple would be snooping on your device to detect child porn and nude images hasn't gone down well with users and privacy campaigners. The bad publicity has prompted the company to offer an olive branch.
To address the possibility that countries could expand the scope of flagged images for their own surveillance purposes, Apple says it will only detect images that exist on at least two countries' lists. Apple says it won't rely on a single government-affiliated database --
like that of the US-based National Center for Missing and Exploited Children, or NCMEC -- to identify CSAM. Instead, it will only match pictures from at least two groups with different national affiliations. The goal is that no single government could
have the power to secretly insert unrelated content for censorship purposes, since it wouldn't match hashes in any other database.
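In other words, the deployable blocklist is the intersection of independently sourced lists, so a hash supplied by only one government's list never reaches devices. A minimal sketch of that rule, with placeholder list contents:

    # Placeholder hash values; the real lists come from NCMEC and at least one
    # child-safety organisation with a different national affiliation.
    ncmec_list = {"hashA", "hashB", "hashC"}
    non_us_org_list = {"hashB", "hashC", "hashD"}

    # "hashA" and "hashD" appear on only one list, so they are never deployed.
    deployable_blocklist = ncmec_list & non_us_org_list   # {"hashB", "hashC"}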
Apple has also said that it would 'resist' requests from countries to expand the definition of images of interest.
However, this is a worthless reassurance when all it would take is a court order to force Apple into complying with whatever requests the authorities make.
Apple has also stated the tolerances that will be applied to prevent false positives. It is alarming that innocent images can in fact generate a hash code that matches a child porn image. To try to prevent innocent people from being locked up, Apple will now require 30 images to have hashes matching illegal images before the images get investigated by Apple staff. Previously Apple had declined to comment on what the tolerance value would be.
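As a rough sense of scale, a simple binomial model shows why a 30-match threshold makes falsely flagging an innocent account astronomically unlikely, provided matches really are independent. The per-image false-match rate and library size below are assumed figures for illustration, not numbers Apple has published.

    from math import lgamma, log, log1p, exp

    p = 1e-6          # assumed chance an innocent photo falsely matches a hash
    n = 10_000        # assumed number of photos in the account's library
    threshold = 30    # matches required before Apple can inspect the vouchers

    def log_binom_pmf(k):
        # log P(exactly k false matches) under a Binomial(n, p) model.
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log1p(-p))

    # Probability of reaching the threshold; later terms are negligible.
    prob_flagged = sum(exp(log_binom_pmf(k)) for k in range(threshold, threshold + 50))
    print(f"{prob_flagged:.3g}")   # vanishingly small under these assumptions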