apple, privacy, csam mention, long
Everyone should read the CSAM Detection technical report by Apple, which explains some details of the technique they are going to deploy. Fuller technical details are available in a second report.
The protocol is technically sound. Indeed, no private information leaks from the iPhone to Apple unless the protocol detects that more images than some preset threshold match the hash database pdata.
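To make that threshold property concrete, here is a minimal toy sketch of my own (not Apple’s actual ftPSI-AD construction): each matching image contributes one Shamir share of a per-account key, so the server learns nothing until it holds at least a threshold of shares. The threshold value and all names are illustrative.

```python
import random

P = 2**127 - 1  # prime modulus for the Shamir field

def make_shares(secret: int, threshold: int, n: int):
    """Split secret into n shares; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

THRESHOLD = 10  # illustrative; the real value is Apple's choice
account_key = random.randrange(P)

# Toy model: one share is released per image whose NeuralHash matches pdata.
shares = make_shares(account_key, THRESHOLD, n=50)

# Below the threshold the key is information-theoretically hidden; at the
# threshold the server recovers it and can decrypt the matching material.
assert recover(shares[:THRESHOLD]) == account_key
```

In Apple’s described design the shares travel inside encrypted “safety vouchers” attached to uploads, but the gating idea is the same.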
pdata, which contains NeuralHash values for contraband images (essentially quantized activation values of a neural network trained to distinguish between similar and different images), is sent to users’ devices in an encrypted, blinded form so that users can’t learn which images are considered contraband. This is partly a good thing, because otherwise one could compute adversarial preimages of the NeuralHash: such images, while legal, could trigger an alert. Nevertheless, “the protocol need not prevent a malicious client from causing the server to obtain an incorrect ftPSI-AD output” and “a malicious client that attempts to cause an overcount of the intersection will be detected by mechanisms outside of the cryptographic protocol” [p. 6 in the second technical report], so it is still possible to trigger a false alarm by running malicious code on the phone. I imagine the “mechanisms outside of the cryptographic protocol” would be human review.
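For intuition about what a NeuralHash-style perceptual hash does, here is a minimal sketch assuming the two-stage design described above: a neural descriptor followed by quantization against fixed random hyperplanes (locality-sensitive hashing). The `embed` function is a hypothetical stand-in for the real network, and every dimension is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
HYPERPLANES = rng.standard_normal((96, 128))  # 96-bit hash of a 128-dim descriptor

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the neural descriptor; the real one is
    a CNN trained so that near-duplicate images map to nearby vectors."""
    return image.reshape(-1)[:128].astype(float)

def neural_hash(image: np.ndarray) -> np.ndarray:
    """One bit per hyperplane: which side of it the descriptor falls on."""
    return (HYPERPLANES @ embed(image) >= 0).astype(np.uint8)

img = rng.standard_normal((16, 16))
noisy = img + 0.01 * rng.standard_normal((16, 16))

# A small perturbation barely moves the descriptor, so almost all bits
# survive; an adversarial preimage instead optimizes a (legal) image until
# its descriptor crosses exactly the hyperplanes needed to match a target.
print((neural_hash(img) != neural_hash(noisy)).sum(), "bits differ")
```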
However, the encrypted pdata also means that users can’t know what kind of images are considered contraband. “The Apple PSI system addresses this issue differently, by using measures implemented outside of the cryptographic protocol.” [ibid., p. 13] It could be CSAM, it could be Tank Man, it could be Collateral Murder, and it may well be country-specific to appease particular regimes. In fact, the protocol is designed so that the user can’t even learn which images triggered a notification of the authorities.
The only way to make this palatable would be to have many independent organizations (including governments and NGOs) certify that pdata contains no hashes other than CSAM: “one could mitigate tampering with the set X by relying on a third party, who knows both pdata and X, to certify that pdata is constructed correctly for X”. Apple explicitly rejects this approach and “addresses this issue differently” [ibid., p. 13]. Even then, the concerns about false positives, malware, and the impossibility of transparency would remain.
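For what it’s worth, such certification would not be hard to build. Here is a sketch of one possible shape (my construction, not anything Apple proposes), using the `cryptography` package’s Ed25519 primitives: auditors who know both X and pdata sign a digest of pdata, and a client accepts the database only if a quorum of expected signatures verifies. The auditor set and quorum are hypothetical.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins for, say, an NGO and two governments.
auditors = [Ed25519PrivateKey.generate() for _ in range(3)]

pdata = b"...serialized encrypted hash database..."
digest = hashlib.sha256(pdata).digest()

# Each auditor, after checking that pdata encodes exactly the set X of
# known CSAM hashes and nothing else, signs the digest.
certificates = [a.sign(digest) for a in auditors]

def client_accepts(pdata: bytes, certs, public_keys, quorum: int) -> bool:
    """Accept pdata only if at least `quorum` auditors certified it."""
    d = hashlib.sha256(pdata).digest()
    valid = 0
    for cert, pk in zip(certs, public_keys):
        try:
            pk.verify(cert, d)
            valid += 1
        except InvalidSignature:
            pass
    return valid >= quorum

pks = [a.public_key() for a in auditors]
assert client_accepts(pdata, certificates, pks, quorum=2)
assert not client_accepts(b"tampered", certificates, pks, quorum=2)
```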
Apple already has the capability to scan photos uploaded to iCloud, since Apple holds the keys to decrypt iCloud backups. So, while they claim that only photos about to be uploaded to iCloud are scanned on-device, the approach makes very little sense unless they intend to eventually extend it to scanning all files.
However, none of these technical details matter much. If we accept @aral’s position that technology makes us cyborgs and that digital tools are extensions of our minds, then leveraging users’ devices to prevent their own criminal activity is akin to giving everybody Clockwork Orange-style brainwashing to deter them from breaking the law. In less bombastic terms, it fundamentally violates established principles against forced self-incrimination. This is a line we shouldn’t cross, no matter how impeccably we implement the crossing.
apple, privacy, csam mention, long
@sneak @kristof @aral it seems like iOS 15 will be the first release that lets users stick with iOS 14, at least for a while. A new UI for this was recently added.