If we can’t trust Huawei, who can we trust?

In today’s Globe and Mail “Pursuits” section (Saturday, December 9, 2018), on page P4, there is a recommendation for a gadget called “Nest Hello”. The copy reads as follows:

RING MY BELL

Who’s at your door? Did that package arrive? Unless you’re home, how would you know? Find out with the Nest Hello, a high-tech doorbell that delivers HD video of everything happening on your front porch to your smartphone. It also offers a talk function that allows you to speak to whoever comes calling whether you’re home or not and can record up to 30 days of continuous video via Nest Aware.

Nest is a company founded by two former Apple engineers. It was bought by Google in January 2014.

For this product to do its job as advertised, it must send a continuous video recording of your porch across the Internet to a server somewhere. That server is controlled by Google. Luckily (I suppose), to quote the official page,

At Nest, we take your privacy seriously.

You can read the policy here:

https://nest.com/legal/privacy-statement-for-nest-products-and-services/

It might be comforting to read the policy. But the real protection we (think we) are getting is that the company is unlikely to risk a violation that might damage its business. Of course, as we have seen with Huawei, whether there has been a privacy breach is not always easy to determine.

The Internet as originally conceived was a place where a bunch of trusted computer scientists could communicate with each other. The possibility of rogue agents was never given much thought. For example, when I taught a graduate course in about 1996, I showed my class how easy it was for them to send their friends an email that looked like it came from Bill Gates at Microsoft. This created a minor fuss at the time, but I argued that our students should not be ignorant of Internet vulnerabilities. (The idea of “security through obscurity” has long been debunked.)
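For the curious, that demo exploited nothing more than the fact that classic SMTP accepts whatever sender address you claim. Here is a minimal sketch in Python of the same idea; the relay host and addresses are hypothetical, and any modern mail server running SPF, DKIM, or DMARC checks would refuse or flag such a message:

    # A toy recreation of that 1996 classroom demo. Classic SMTP (RFC 821)
    # never verified the "From" header; the recipient's mail reader simply
    # displayed whatever the sender claimed. The relay host and addresses
    # below are hypothetical.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "billg@microsoft.com"     # forged; nothing checked this in 1996
    msg["To"] = "classmate@university.example"
    msg["Subject"] = "Keep up the good work"
    msg.set_content("I hear great things about you.\n-- Bill")

    # In the mid-1990s, open relays willing to forward such a message were common.
    with smtplib.SMTP("mail.example.com") as server:
        server.send_message(msg)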

Unfortunately, the Internet, originally created under the auspices of a DARPA project, succeeded beyond anyone’s expectations. The imperative for most people became how to profit, commercially or otherwise, from its infrastructure. There was no time to design properly for privacy.

Can we trust anyone?

In 1983, the Association for Computing Machinery gave the Turing Award to Ken Thompson, one of the most influential computer scientists of all time. He is one of the inventors of the Unix operating system, out of which the C programming language grew; both remain in widespread use, commercially and academically. His award lecture is titled “Reflections on Trusting Trust.” It is a short talk, but technically ingenious.

The basic idea is this. Suppose I am paranoid. Instead of using the compiled program you sent me, I want to read all the source code and then compile the program myself. That way, I can spot any funny business you inserted into the program. However, you wrote the compiler too, so I want its source code as well. I will read that code, compile the vetted compiler source, and then use the result to compile the original program.

Now imagine a rogue version of the compiler: one that injects a trojan into certain programs it is given, such as the login program. I modify this rogue compiler further so that, whenever it detects that it is compiling the compiler itself, it inserts the trojan-generating code into the resulting binary. Call this the rogue binary. If the rogue binary compiles a perfectly clean compiler source, you still get an identical rogue binary out. Conclusion: even if you read every line of the compiler’s source code, you cannot be sure that your compiled code is safe. Or, as Thompson himself put it:

You cannot trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.

https://amturing.acm.org/award_winners/thompson_4588371.cfm
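To make the trick concrete, here is a toy sketch in Python. It is a drastic simplification with hypothetical names (Thompson’s actual construction operated inside the C compiler and produced binaries, not annotated source), but the self-replicating logic is the same:

    # A toy model of the "trusting trust" attack. Everything here is a
    # hypothetical simplification for exposition, not Thompson's code.

    BACKDOOR = "# hidden master password accepted here"

    def rogue_compile(source: str) -> str:
        """A compromised 'compiler' that mistranslates exactly two targets."""
        if "def login(" in source:
            # Target 1: the login program silently gains a backdoor.
            return source + "\n" + BACKDOOR
        if "def compile(" in source:
            # Target 2: the compiler itself. Even a fully audited, clean
            # compiler source comes out with these two checks re-inserted,
            # so the trojan survives recompilation from vetted source.
            return source + "\n# self-replicating trojan re-inserted"
        # Everything else is compiled honestly.
        return source

    # Reading every line of this "clean" source reveals nothing wrong,
    # yet compiling it with the rogue binary yields another rogue binary.
    clean_compiler_source = "def compile(source):\n    ...  # fully audited"
    print(rogue_compile(clean_compiler_source))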

Ever since I read a copy of the talk, I have spent a lot of time wondering about the parenthetical remark, “code from companies that employ people like me”: people who are often hired exactly because of their creative genius.

If you use a Mac or an iPhone, and have ever read a PDF document on it, then you are using code that I wrote when I worked at Apple. The way it worked (in simplified terms) was this: a colleague and I wrote something that Apple calls a framework. On a daily basis, we would write source code, check it on our local machines, and then submit our framework to what is called a “build”, a candidate for what would ultimately ship as a version of OS X or iOS. Various other teams in the organization would test the code, check for problems, including security violations, and so on. All our original source code was available to be read by anyone in the organization with the appropriate authority. Had I ever embedded rogue code, the chances of being caught would have been high, and the consequences severe: not just loss of employment, but possibly criminal charges. Of course I never did any such thing.

But could a rogue insider come up with an ingenious way to embed hard-to-detect rogue code into a commercial operating system? I honestly believe that the answer is yes, and I am not sure how even the most conscientious of organizations can guard against this. Remember that the first allegiance of the insider might not be to the company itself.

Now it is easy to pick on a foreign company like Huawei. And I personally would feel a little better if all communications infrastructure in Canada were made, vetted, and installed by Canadians. But this is not rational. Even Canadian companies could have embedded rogues. Polite, perhaps, but still rogues.

