Apple’s anti-child-abuse technology comes under fire from privacy advocates
Written by Finbarr Toesland Thu 12 Aug 2021
The world’s largest technology company, Apple, recently announced a new set of technologies designed to detect known child abuse images uploaded to its iCloud storage service.
Under the system, photos uploaded to iCloud will be checked to see if they match with known images of child abuse.
To reduce false positives, any potential matches against the database of illegal images will undergo a human review before being reported to law enforcement. According to Apple, the chance of incorrectly flagging a given account is less than one in one trillion per year.
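Systems of this kind typically reduce each image to a compact perceptual fingerprint and compare it against a database of fingerprints of known illegal images, tolerating small differences introduced by resizing or re-encoding. The sketch below is a toy illustration of that matching step, using made-up 64-bit fingerprints and a Hamming-distance threshold; it is not Apple's actual NeuralHash algorithm, and all values and thresholds here are invented for demonstration.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_database(fingerprint: int, known_hashes: set,
                           max_distance: int = 4) -> bool:
    """Flag a fingerprint if it lies within a small Hamming distance of
    any known-image hash, so minor re-encodings still match."""
    return any(hamming_distance(fingerprint, h) <= max_distance
               for h in known_hashes)

# Illustrative, made-up values: the uploaded fingerprint differs from a
# known hash by a single bit, so it is flagged for (human) review.
known = {0x9F3C_A501_77E2_10B4}
upload = 0x9F3C_A501_77E2_10B5

print(matches_known_database(upload, known))  # True
```

In a production system the flagged result would not go straight to law enforcement; as the article notes, Apple's design inserts a human review step before any report is made.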
Privacy researchers, including India McKinney and Erica Portnoy of the Electronic Frontier Foundation, have raised concerns around the privacy ramifications of using such technologies.
“To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security,” said India McKinney and Erica Portnoy in a blog post.
However, Apple is far from the first technology company to check uploads to its image storage services against databases of known illegal imagery. Facebook, Microsoft and Google already use technology to perform these checks.
Proponents of this technology, like John Clark, chief executive of the National Center for Missing & Exploited Children, believe that privacy and child protection can co-exist. “With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” Clark said in a statement.