A new effort by Apple to police child sexual abuse material on its hosting platforms has drawn heavy criticism from privacy advocates.
Apple announced plans to begin scanning both iCloud and iPhones for photos that may constitute evidence of child sexual abuse. Upcoming versions of iOS and iPadOS will include “new cryptography applications to help limit the spread of [child sexual abuse material] online while designing for user privacy.” The most notable expression of this effort will be new protections built into Messages and iCloud Photos, alongside updates to Siri and Search.
The Messages app will use machine learning to detect and blur images containing sexually explicit content, primarily for users 12 or younger. Siri will also point anyone who feels unsafe, or who wants guidance on dangerous situations, toward relevant resources. It is the change to how iPhones handle iCloud Photos, however, that concerns many users. Whenever an image is uploaded to iCloud, an algorithm will check it against a database of “hashes.” A hash serves as a unique fingerprint for a photo, allowing outside agencies to track known images. If an image’s hash triggers a match, the account is flagged to Apple employees, who review the case, suspend the relevant account, and report the individual’s identity and location to law enforcement.
Because the system relies on hashes, it would only be able to identify images that had previously been flagged by law enforcement or the National Center for Missing and Exploited Children (NCMEC).
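In rough terms, the check works like the Swift sketch below. It is only an illustration of the flag-if-already-known logic described above: an ordinary SHA-256 digest stands in for Apple’s proprietary perceptual hash, and the list of known fingerprints is a placeholder, not real NCMEC data.

import Foundation
import CryptoKit

// Illustrative only: a SHA-256 digest stands in for Apple's perceptual hash,
// and the "known" list below is a placeholder rather than real NCMEC data.
let knownHashes: Set<String> = [
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
]

// Compute a hex-encoded "fingerprint" for a photo's raw bytes.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// An upload is flagged only if its fingerprint already appears in the
// database of previously identified material.
func shouldFlag(_ imageData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: imageData))
}

print(shouldFlag(Data()))  // true: the sample hash is the SHA-256 of empty data

Apple’s published design performs a far more elaborate version of this comparison on the device itself, so that no individual match is revealed until an account crosses a threshold of flagged images.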
While this extension of Apple’s technology is aggressive, it is not unheard of in the industry. Tech companies have long cooperated with the NCMEC to report child sexual abuse material to the proper authorities. A 2020 New York Times investigation found that Apple reports significantly less child sexual abuse material than any other company. The new scanning system appears to be Apple’s attempt to change that.
“Apple is incorporating something that other companies have done for years with a twist, as they are doing it on the client-side,” said Steven Boyce, a former FBI cybersecurity expert and senior adviser at the International Foundation for Electoral Systems. According to Boyce, it is common for law enforcement units focused on internet crimes against children to access hash data related to child sexual abuse material. But that data is always handled inside law enforcement, not by the tech companies themselves.
“I think what has people most concerned is the fear that if Apple is using AI to view my pictures for certain things (in this case CSAM — child sexual abuse material), then what is stopping them from viewing other pictures or data?” argued Ray Kimble, CEO of the tech security firm Kuma. “This is a reasonable concern, and what Apple has done a pretty good job of in the past is to be very forward in allowing their users to have control over what is shared and what isn’t.”
It also sets a precedent that could be abused. While most people find it acceptable for companies to combat child sexual abuse material on the internet, the potential use of such a system for other purposes is concerning. Authoritarian governments, for example, could demand access to Apple’s scanning technology for their own ends, such as tracking political dissidents or identifying particular groups. Apple has said it would refuse those demands and that “Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that experts have identified at NCMEC and other child safety groups.” A recent report revealed that Apple employees are also uncomfortable with the plan, saying the feature could be exploited by repressive governments. Some employees also worried that the plan could damage the company’s reputation for protecting privacy.
“Ultimately, a team of folks at Apple will be tasked with reviewing these flagged photos. This poses a risk for insiders’ potential misuse of the platform,” Boyce told the Washington Examiner. Much will also depend on how Apple vets and oversees the team hired to do that review work. If lawmakers are concerned about misuse of such software, Boyce believes that “congressional committees may want to hear from Apple and other tech companies to ensure that this platform and others that have been in place will not be misused. However, now is the time to have these conversations.”