
Comment: Apple’s child protection measures get mixed reactions from experts

The announcement yesterday of Apple’s child protection measures confirmed an earlier report that the company would begin scanning for child abuse photos on iPhones. The news has drawn mixed reactions from experts in both cybersecurity and child safety.

Four concerns had already been raised before the details were known, and Apple’s announcement addressed two of them …

CSAM scanning concerns

The original concerns included the fact that digital signatures for child sexual abuse material (CSAM) are deliberately fuzzy, so that they still match after crops and other image adjustments. That fuzziness creates a risk of false positives, whether by chance (concern one) or through malicious action (concern two).

Apple addressed these by announcing that action would not be triggered by a single matching image. Those who collect such material tend to have multiple images, so Apple said a certain threshold would be required before a report was generated. The company didn’t reveal what the threshold is, but did say that it reduced the chances of a false positive to less than one in a trillion. Personally, that completely satisfies me.
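To see why a threshold matters, here’s a rough back-of-the-envelope sketch in Swift. Apple hasn’t published its per-image false-match rate, its reporting threshold, or typical library sizes, so every number below is an invented assumption; the point is simply how quickly the combined risk collapses once several independent matches are required.

```swift
import Foundation

// Illustrative arithmetic only: Apple has not published its per-image
// false-match rate, its reporting threshold, or typical library sizes,
// so every number below is a hypothetical assumption.
let perImageFalseMatchRate = 1e-6   // hypothetical chance one innocent photo matches a hash
let reportingThreshold = 10         // hypothetical number of matches required before review
let librarySize = 10_000            // hypothetical number of photos in a library

// log of "n choose k", computed with a simple loop to avoid overflow
func logChoose(_ n: Int, _ k: Int) -> Double {
    var result = 0.0
    for i in 0..<k {
        result += log(Double(n - i)) - log(Double(i + 1))
    }
    return result
}

// Probability of at least `k` false matches across `n` independent photos,
// i.e. the upper tail of a Binomial(n, p) distribution.
func probabilityOfAtLeast(_ k: Int, trials n: Int, rate p: Double) -> Double {
    var lowerTail = 0.0
    for i in 0..<k {
        lowerTail += exp(logChoose(n, i) + Double(i) * log(p) + Double(n - i) * log(1 - p))
    }
    return 1 - lowerTail
}

let oneMatchRisk = probabilityOfAtLeast(1, trials: librarySize, rate: perImageFalseMatchRate)
let thresholdRisk = probabilityOfAtLeast(reportingThreshold, trials: librarySize, rate: perImageFalseMatchRate)
print("Wrongful flag risk if a single match triggered a report: \(oneMatchRisk)")
print("Wrongful flag risk with a \(reportingThreshold)-image threshold: \(thresholdRisk)")
```

Under those made-up numbers, a single-match trigger would wrongly flag roughly one library in a hundred, while a ten-image threshold drives the risk down to effectively zero.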

However, two further risks remain.

Misuse by authoritarian governments

A digital fingerprint can be created for any type of material, not just CSAM. What’s to stop an authoritarian government from adding images of political campaign posters or similar to the database?

So a tool that is designed to target serious criminals could be trivially adapted to detect those who oppose a government or one or more of its policies.

Potential expansion into messaging

If you use an end-to-end encrypted messaging service like iMessage, Apple has no way to see the content of those messages. If a government arrives with a court order, Apple can simply shrug and say it doesn’t know what was said. 

But if a government adds fingerprints for types of text – let’s say the date, time, and location of a planned protest – then it could easily create a database of political opponents.

The Electronic Frontier Foundation (EFF) highlighted the misuse risk, pointing out that there is no way for either Apple or users to audit the digital fingerprints. A government can tell Apple that the database contains only CSAM hashes, but there is no way for the company to verify that.

Right now, the process is that Apple will manually review flagged images, and only if the review confirms abusive material will the company pass the details to law enforcement. But again, there is no guarantee that the company will be allowed to continue following this process.

Cryptography academic Matthew Green reiterated this point after his pre-announcement tweets.

Whoever controls this list can search for whatever content they want on your phone, and you don’t really have any way to know what’s on that list because it’s invisible to you (and just a bunch of opaque numbers, even if you hack into your phone to get the list.)

The EFF says this is more than a theoretical risk:

We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire. 

In Hong Kong, for example, criticism of the Chinese government is classified on the same level as terrorism, and is punishable by life imprisonment.

iMessage scanning concerns

Concerns have also been raised about the AI-based scanning iPhones will conduct on photos in iMessage. This scanning doesn’t rely on digital signatures, but instead tries to identify nude photos using machine learning.

Again, Apple has protections built in. It applies only to suspected nude photos. It affects only child accounts that are part of family groups. The child is warned that an incoming message might be inappropriate, and then chooses whether or not to view it. No external report is generated; a parent is notified only where appropriate.
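For clarity, here is a minimal sketch of that decision flow in Swift. It is not Apple’s code: the classifier, the score threshold, and the account model are all assumptions made purely to illustrate the safeguards described above.

```swift
import Foundation

// A minimal sketch of the flow described above, not Apple's implementation.
// The classifier, the score threshold, and the account model are assumptions.
struct IncomingImage {
    let data: Data
}

struct MessagingAccount {
    let isChildInFamilyGroup: Bool
    let parentalNotificationsEnabled: Bool
}

enum MessageHandling {
    case showNormally
    case warnChildFirst(notifyParentIfViewed: Bool)
}

// Placeholder for an on-device model (e.g. a Core ML classifier) returning a
// 0...1 likelihood of nudity. The image never leaves the device in this sketch.
func nudityScore(for image: IncomingImage) -> Double {
    return 0.0
}

func handle(_ image: IncomingImage,
            on account: MessagingAccount,
            threshold: Double = 0.9) -> MessageHandling {
    // Only child accounts inside a family group are affected at all.
    guard account.isChildInFamilyGroup else { return .showNormally }

    // Only suspected nude photos trigger anything; everything else passes through.
    guard nudityScore(for: image) >= threshold else { return .showNormally }

    // The child is warned and chooses whether to view. No external report is
    // generated; a parent is notified only where the account settings allow it.
    return .warnChildFirst(notifyParentIfViewed: account.parentalNotificationsEnabled)
}
```

The point the sketch tries to capture is that every decision happens on the device, and no external report is ever generated.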

But again, the slippery slope argument is being raised. These are all controls that apply right now, but the EFF asks what happens if a repressive government forces Apple to change the rules.

Governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

The organization also argues that false matches are a definite risk here.

We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly “sexually explicit” content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid.

Again, that’s not an issue with Apple’s current implementation due to the safeguards included, but creating technology that can scan the contents of private messages has huge potential for future abuse.

The EFF also highlights an issue raised by some child-protection experts: that a parent or legal guardian isn’t always a safe person with whom to share a child’s private messages.

This system will give parents who do not have the best interests of their children in mind one more way to monitor and control them.

Some of the discussion highlights the tricky tightrope Apple is trying to walk. For example, one protection is that parents are not automatically alerted: The child is warned first, and then given the choice of whether or not to view or send the image. If they choose not to, no alert is generated. David Thiel was one of many to point out the obvious flaw there:

https://twitter.com/elegant_wallaby/status/1423453567940063236

Apple’s child protection measures can’t please everyone

Everyone supports Apple’s intentions here, and personally I’m entirely satisfied by the threshold protection against false positives. Apple’s other safeguards are also thoughtful, and ought to be effective. The company is to be applauded for trying to address a serious issue in a careful manner.

At the same time, the slippery slope risks are very real. It is extremely common for a government – even a relatively benign one – to indulge in mission-creep. It first introduces a law that nobody could reasonably oppose, then later widens its scope, sometimes salami-style, one slice at a time. This is especially dangerous in authoritarian regimes.

Conversely, you could argue that by making this system public, Apple just tipped its hand. Now anyone with CSAM on their iPhone knows they should switch off iCloud Photos, and abusers know that if they want to send nudes to children, they shouldn’t use iMessage. So you could argue that Apple shouldn’t be doing this at all, or that it should have done it without telling anyone.

The reality is that there’s no perfect solution here, and every stance Apple could take has both benefits and risks.

Yesterday, before the full details were known, the vast majority of you opposed the move. Where do you stand now that the details – and the safeguards – are known? Please again take our poll, and share your thoughts in the comments.

Photo: David Watkis/Unsplash


