
Reflections on Reflections on Trusting Trust

2021-08-11

Apple's recent announcement of its new client-side CSAM detection feature, despite its honorable intentions, has generated plenty of criticism throughout the infosec world, particularly regarding data privacy, encryption, and the potential for authoritarian governments to repurpose the feature for broad censorship or mass surveillance. Many organizations and independent researchers have already shared their opinions on these matters, and frankly, many of them are far more authoritative sources than I am. I'm therefore not going to dwell on the technical details of Apple's implementation or the implications for human rights and privacy. If you want better insight into those questions, I'd suggest reading this open letter and all linked sources.

Similarly, I'd recommend that anyone interested in the topic read the actual technical documentation Apple has released describing the mechanisms the protocols use. All of the relevant technical documentation and cryptographic reviews can be found at the bottom of this page. There's no doubt this will receive more attention and analysis, so if you're reading this further in the future, be sure to seek out reputable external analyses as well.

What I want to discuss in this blog post is how Apple's latest move relates to the age-old discussion of trust in computing, going back to Ken Thompson's 1984 lecture "Reflections on Trusting Trust". This was a speech given upon winning the Turing Award, often referred to as the Nobel Prize of computer science. In it, he seeks to cast doubt on the very idea of "trustworthy" code and coders. He does so by picking on the C compiler, written by his Bell Laboratories colleague and fellow Turing Award winner Dennis Ritchie. He describes how he may or may not have modified the C compiler so that whenever it compiled the Unix login program, it would insert a backdoor, and whenever it compiled a clean copy of itself, it would re-inject the Trojan horse that performed both of those injections. In essence, even if you rebuilt the compiler from known-clean code and then rebuilt Unix and all of its binaries from known-clean code, you'd still end up with a backdoor in the form of a hard-coded master password. It's a tremendously interesting read and remains as relevant today as it was 37 years ago. The speech was essentially a message to the Association for Computing Machinery, and to the broader world: "You think I'm a benevolent guy, but what if I weren't?" More broadly speaking, that same rhetorical question applies equally to any tech company making software that isn't publicly auditable or can't be built from scratch using independent tools.
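To make the trick concrete, here's a minimal, hypothetical sketch of the two-stage trojan in C. To be clear, this is not Thompson's actual code; compile(), the helper names, and the pattern matching are all invented for illustration. The point is simply that a poisoned compiler can recognize what it's compiling and quietly rewrite it:

    /* Hypothetical sketch of the two-stage "trusting trust" trojan.
       Not Thompson's code; all names here are invented. */
    #include <stdio.h>
    #include <string.h>

    /* Stage 1: when compiling login, emit code that also accepts a
       hard-coded master password alongside the legitimate check. */
    static const char *inject_login_backdoor(const char *src) {
        return src; /* stub; a real trojan would rewrite the source */
    }

    /* Stage 2: when compiling the compiler itself, re-insert both
       injections, so even a rebuild from clean source stays poisoned.
       This is the self-replicating step. */
    static const char *inject_compiler_trojan(const char *src) {
        return src; /* stub; a real trojan would rewrite the source */
    }

    void compile(const char *src) {
        if (strstr(src, "login(") != NULL)
            src = inject_login_backdoor(src);
        else if (strstr(src, "compile(") != NULL)
            src = inject_compiler_trojan(src);
        printf("emitting object code for: %.30s...\n", src);
    }

    int main(void) {
        compile("int login(const char *user, const char *pw) { /* ... */ }");
        return 0;
    }

The kicker is that once a poisoned binary of the compiler exists, the malicious logic can be deleted from the compiler's source entirely and the trap still persists through every rebuild, which is exactly why "just read the source" isn't a sufficient answer.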

You see, given that it's nearly impossible to independently verify much of the software we use in our day-to-day lives, we've seemingly decided, collectively, to place our trust in people and companies instead. I don't know everything that Windows does under the hood, but I trust it enough to store somewhat sensitive data on it. Likewise, I can't audit a TPM made by Intel, so I have to simply assume it's trustworthy (which could honestly be an entire discussion of its own). The question, then, remains: at what point does trust in a specific individual or company fall apart?

Apple has clearly made technical documentation, and perhaps even source code, available to select cryptographic researchers for close analysis. But at the end of the day, nobody will be able to easily validate which implementation of this content monitoring actually lands on their device, including whether different individuals will receive different implementations in the future based on, say, location, political orientation, or job title. From a technical perspective, it would be absolutely trivial for Apple to ship something different under the hood than what was described.
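To give a sense of how low that bar is, here's a hypothetical sketch of server-side targeting in C. None of this is Apple's code; the struct, the ruleset names, and the selection logic are all invented. It just illustrates that deciding which ruleset each device receives is a few lines of code:

    /* Hypothetical sketch of per-device targeting; not Apple's code.
       It shows how trivially a vendor could serve different
       monitoring rulesets to different users. */
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *region;     /* e.g., a country code */
        const char *occupation; /* inferred or self-reported */
    } Device;

    /* Server-side choice of which scanning ruleset a device gets. */
    static const char *select_ruleset(const Device *d) {
        if (strcmp(d->region, "XX") == 0)
            return "csam_plus_political_content_v2"; /* expanded set */
        if (strcmp(d->occupation, "journalist") == 0)
            return "csam_plus_keyword_watchlist_v1";
        return "csam_baseline_v1"; /* what the documentation describes */
    }

    int main(void) {
        Device a = { "US", "engineer" };
        Device b = { "XX", "journalist" };
        printf("%s\n", select_ruleset(&a)); /* baseline ruleset */
        printf("%s\n", select_ruleset(&b)); /* expanded ruleset */
        return 0;
    }

And crucially, the device owner has no practical way to prove which branch they got.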

Apple’s latest move, in particular, reignites that question of trust. On one hand, one could read the public announcement of this initiative, along with the release of somewhat detailed technical documentation, as a good-faith effort to maintain confidence among its users, reaffirm its commitment to privacy by defining the limits of its monitoring, and tackle a social problem using novel methods unimaginable only a few short years ago. They very well could have silently implemented this in Thompsonian fashion but chose not to. Given their complete control over their devices’ hardware and software, they could have made it nearly impossible for even the most capable security researchers to detect.

On the other hand, Apple has created a technology that could easily be modified to perform essentially any type of monitoring that it (or another, surely benevolent authority who cares deeply about human rights) deems necessary or beneficial to society at large. Of course, Apple says they will "refuse any such demands", much in the same way that I "refuse any such demands" from the deep primal part of my brain to pound down a third Döner Kebab this week. And after all, we can trust them... right? [1][2][3][4][5][6][7][8][9][10][11][12]

That type of behavior, coming from a company that prides itself on being a privacy champion, is so seemingly out of left field that it may well undermine its trustworthiness.

Does the incongruence between Apple’s title as the great protector of privacy and its rollout of a feature with the potential for indiscriminate monitoring demonstrate the shakiness of the foundation of trust in the digital world? Can we trust that the Apple of tomorrow will remain as it presents itself today? And if we can’t, where does that leave us? 

This kind of client-side monitoring is the digital privacy equivalent of a nuclear bomb. In the right hands and when treated with respect and consideration, it could be benign. But much like with nukes, it requires those with power to exercise that power with the utmost discretion. Now we just gotta trust that everyone keeps their finger off the button.


Sent from my iPhone




As always, feedback is greatly appreciated. If you have any questions or comments, feel free to email me at me@infosecmatt.com.