21 August 2016

Apple at BlackHat: Reopening the "Going Dark" Debate

Friday, August 12, 2016

Just over a week ago, at the BlackHat hacker convention in Las Vegas, Ivan Krstić, Apple’s Head of Security Engineering and Architecture, gave a talk entitled “Behind the scenes of iOS Security,” the slides of which are available here.
It’s a historic talk for a couple of reasons. First, Apple is traditionally very secretive about how it technically does security on its devices. Second, Apple announced its first bug bounty program. So far, so newsworthy.
But something else happened at that talk. Unbeknownst to the presenter or anybody in the audience, Apple had just reopened the “Going Dark” dispute between the FBI and the privacy community, and had turned the entire dispute on its head. In the cold light of day, I suspect Apple, the US government, and privacy activists are going to be rather unhappy when they digest the sobering implications of the talk, though they will likely be upset for entirely different reasons.

In short, Apple built the very thing that they and the privacy community have been saying for years is reckless, dangerous or impossible: a high-value encryption key secured in a vault such that the key can’t be stolen or misused by hackers or malicious insiders. And without a hint of self-awareness, Apple’s head of security engineering and architecture went all the way to BlackHat in Las Vegas to boast about how they did it.
But I’m getting ahead of myself. Let’s start at the beginning.
The breadth of Krstić’s talk is fantastically wide. He talked about technically making the iPhone browser more resilient to zero-day vulnerabilities; Apple’s new bug bounty program; technical details of how the iPhone uses its “Secure Enclave” processor to defend against law-enforcement repeatedly guessing PINs to break into iPhones; and a design for sharing saved passwords between your iPhone and your Mac such that Apple cannot intercept them.

Those who followed the Apple-vs-FBI-San-Bernardino-All-Writs-Act case in February may remember some side discussions in the media about the “Secure Enclave” processor. It’s a tiny processor on modern iPhones that’s cryptographically involved in turning the user PIN code into the device encryption key to unlock data on the iPhone.
To defend against brute-force attacks—that is, repeatedly guessing PIN codes until the right one works—the “Secure Enclave” maintains a little counter of the number of failed guesses, and, after ten failed attempts, it permanently locks out the device, forever destroying the possibility of decrypting the user’s data.
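
To make that mechanism concrete, here is a deliberately simplified Python sketch of a ten-strikes-and-out guess counter. The class and its behavior are illustrative assumptions of mine, not Apple’s firmware: the real Secure Enclave is dedicated hardware and never holds the PIN in recoverable form.

import hmac

class ToySecureEnclaveCounter:
    # Toy model only: tracks failed PIN guesses and permanently locks
    # itself after ten failures, mirroring the behavior described above.
    MAX_FAILURES = 10

    def __init__(self, correct_pin):
        self._pin = correct_pin
        self._failures = 0
        self._wiped = False

    def try_pin(self, guess):
        if self._wiped:
            raise RuntimeError("key material destroyed; data is unrecoverable")
        if hmac.compare_digest(guess, self._pin):   # constant-time comparison
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_FAILURES:
            self._pin = None
            self._wiped = True                      # models the permanent lockout
        return False
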
But it was another part of the talk that really caught my attention—a remark hidden on slide 44 about “iCloud Keychain Backup.” It’s rather technical and it needs a bit of background, so let me try to explain.


Apple has for a long time had an iPhone feature called “iOS Keychain.” This feature lets apps and your iPhone browser remember your saved website passwords and credit cards and so on. Recently, Apple extended the feature into “iCloud Keychain,” automatically synchronizing your saved passwords with your other Mac computers.


In order to avoid uploading all of your passwords to iCloud in directly subpoena-able form, Apple first encrypts them with your iCloud Security Code, which, in practice, is almost always your device PIN code, and then proceeds to upload them in encrypted form to iCloud.
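
The broad shape of that PIN-based escrow encryption looks something like the sketch below. The function name, iteration count and parameters are my own illustrative choices, not Apple’s actual scheme; the point is simply that the only secret protecting the uploaded blob is the user’s short PIN.

import hashlib
import os

def derive_escrow_key(pin, salt, iterations=10_000):
    # Stretch the short PIN into a 256-bit key that will be used to
    # encrypt the keychain blob before it is uploaded.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)

salt = os.urandom(16)                  # stored alongside the escrowed blob
key = derive_escrow_key("1234", salt)
# 'key' would then feed an authenticated cipher (e.g. AES-GCM) to wrap the
# keychain contents; only the ciphertext and the salt leave the device.
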


But there’s a subtle problem with this setup that Apple needed to solve.


Normally, the FBI has a really hard time getting into iPhones because the Secure Enclave only lets it try PIN codes really slowly, and even then only ten times before user data becomes permanently inaccessible. But unless Apple is careful, the FBI could do something clever if it subpoenaed the encrypted keychain file.


On the face of things, the file shouldn’t be very interesting to law-enforcement. After all, it’s encrypted with the user’s PIN code, and the FBI doesn’t know the user’s PIN code. But the FBI can do something with the file that it can’t do with user devices. It can guess the PIN code quickly—and more than ten times.
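
Here is a toy illustration of why having an offline copy of the file changes the game. The file format and the “did decryption succeed?” check are stand-ins I have invented for illustration; the real point is that nothing in the loop rate-limits the guesses.

import hashlib
import hmac
import os

def derive_key(pin, salt):
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 10_000)

def looks_decrypted(key, blob, tag):
    # Stand-in for "did authenticated decryption of the blob succeed?"
    return hmac.compare_digest(hmac.new(key, blob, "sha256").digest(), tag)

def offline_brute_force(blob, tag, salt):
    # With a copy of the file, guesses are limited only by CPU time,
    # not by any ten-attempt counter.
    for candidate in range(10_000):            # every four-digit PIN
        pin = f"{candidate:04d}"
        if looks_decrypted(derive_key(pin, salt), blob, tag):
            return pin
    return None

# Toy demo: build a blob "protected" by PIN 4629, then recover the PIN.
salt, blob = os.urandom(16), b"encrypted keychain bytes"
tag = hmac.new(derive_key("4629", salt), blob, "sha256").digest()
print(offline_brute_force(blob, tag, salt))    # prints 4629

With the toy parameters above, the full four-digit space is only 10,000 candidates, so the loop finishes in minutes at most on ordinary hardware.
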


For Apple, this represents a problem, and one it resolves just two slides later in the presentation. Apple’s solution? Cloud Key Vault.




Calling Cloud Key Vault complicated is something of an understatement, but conceptually it’s an enormous Secure Enclave for the entire iCloud data-center. Normally, on your iPhone, PIN guesses are handled by the Secure Enclave processor, which ensures that after ten failed guesses you’re permanently locked out of your phone. Cloud Key Vault does the same thing on a much larger scale: it manages PIN guesses against the uploaded encrypted files and ensures that after ten failed PIN guesses, the data in the encrypted keychain file becomes permanently inaccessible.
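
A toy model of that server-side limit might look like the sketch below. All names and structure here are invented for illustration; the real vault is built on dedicated hardware and does not store raw PINs, but the per-record “ten strikes and the record is gone” logic is the essential idea.

class ToyCloudKeyVault:
    # Toy model only: one guess counter per escrowed record, enforced by
    # the vault itself rather than by the client.
    MAX_FAILURES = 10

    def __init__(self):
        self._records = {}  # record_id -> {"pin", "secret", "failures"}

    def escrow(self, record_id, pin, secret):
        self._records[record_id] = {"pin": pin, "secret": secret, "failures": 0}

    def recover(self, record_id, pin_guess):
        rec = self._records.get(record_id)
        if rec is None:
            raise KeyError("record destroyed or never escrowed")
        if pin_guess == rec["pin"]:
            rec["failures"] = 0
            return rec["secret"]          # correct PIN: release the escrowed secret
        rec["failures"] += 1
        if rec["failures"] >= self.MAX_FAILURES:
            del self._records[record_id]  # ten strikes: the record self-destructs
        return None

Because the counter lives inside the vault and only the vault can release the secret, nobody outside it can guess faster or reset the count.
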


The fundamental purpose of Cloud Key Vault is to prevent Apple employees, hackers who somehow break into the data-center and—perhaps most importantly—Apple employees compelled by court order from circumventing the PIN-guess limit when trying to force open the keychain files that get uploaded when an iPhone synchronizes saved passwords with the user’s Mac.


There are, however, two ways to cheat the system, and Apple has thought of both of them. The first is that an attacker could get hold of the master encryption key hidden inside the Cloud Key Vault. With that key, the FBI would be able to try PIN guesses without asking Cloud Key Vault, and thus without the ten-guess limit.


The second way would be to replace the code running on Cloud Key Vault, or upload new code to it, and instruct it to reset or disable the counter for a specific file while the FBI brute-forces the PIN.


Apple’s defense to both of these attacks is the same, and it’s ingenious. The company destroys the access cards to the device. Once they are destroyed, Cloud Key Vault operates entirely autonomously, running only the code it was loaded with when it was set up. Apple employees have no administrative access to the device. They cannot reach in and take out the master encryption key. They cannot reach in and reset the “failed PIN-guess” counter on any file. They cannot even upload new code to the device. Any attempt to do so would inevitably destroy the master key, causing the cryptographic self-destruction of all of the uploaded keychain files.


It is a devious defense, and a robust design. It’s smart. It has been built, and it demonstrably works at scale. And on its face, it’s secure against hackers, malicious insiders and court orders.


But here’s the thing: while Apple’s intention in building Cloud Key Vault was to thwart pesky law-enforcement officers brandishing court orders, the system creates a very big problem for the privacy community in its ongoing “Going Dark” dispute with the FBI. Here’s why.


For a while now, the most compelling technical argument against the FBI in the “Going Dark” debate, and against any technical “front-door” design, has been that the technology community simply does not know how to securely store high-value encryption keys such that they can be used but cannot be stolen or misused by hackers or malicious insiders. Moreover, many privacy advocates assert that any technical design that looks OK on paper will probably fall apart when deployed at scale in practice.


Apple just blew all of those arguments out of the water. 


Apple’s Cloud Key Vault solves this problem exactly. By building a system that even Apple employees cannot access when compelled by court order, Apple ironically and seemingly inadvertently built a system that keeps a master encryption key sealed in a special vault and technically prevents the key from being stolen or misused by hackers or malicious insiders at Apple.


Apple’s unintentional broadside against the privacy community’s best “Going Dark” arguments isn’t the only way Cloud Key Vault turns the dispute on its head. Cloud Key Vault also belongs to a very unusual “Going Dark” category, autonomous non-administrable code, which sits very uncomfortably with the way the “Going Dark” debate has been presented in public.


The reason is that autonomous non-administrable code has nothing to do with encryption. For the past year, much of the public discourse about “Going Dark” has rather craftily conflated “Going Dark” with “strong encryption.” It’s a clever framing error and one law-enforcement has never effectively rebutted. But it is an error. For example, data erasure does not involve encryption, and yet it poses a problem for law-enforcement. Conversely, HTTPS to your bank uses strong encryption, but it does not prevent law-enforcement from investigating any financial crime you happen to commit using the bank’s website.


Rather, “Going Dark” has much less to do with “strong encryption” than it has to do with specific system designs that prevent law-enforcement from gaining access to information with a court order. Some of those designs—like device encryption and end-to-end communications encryption—happen to work by deploying encryption in a configuration in which the provider never sees user content or keys. But other “Going Dark” problems are much less sophisticated and do not involve encryption at all. For example, by simply choosing not to record relevant transaction logs, a website can thwart any attempt by law-enforcement to obtain the content that would otherwise have been in those logs.


Autonomous non-administrable code is another example of this non-encryption category of “Going Dark” problem: Apple’s inability to help law-enforcement reset the PIN-guessing counters exists because the counters are intentionally inaccessible outside Cloud Key Vault and, by design, nobody at Apple can administer or upload new code to the Cloud Key Vault computers to reset them.


It’s a meaningfully new category of problem. Normally, “Going Dark” problems boil down to “sorry, we don’t have the content” or “sorry, we don’t have the keys.” But this variant is something else: “sorry, the data is in the computer over there, and we’ve intentionally locked ourselves out of the computer.”


It’s a category that raises lots of new questions.


For technology companies, autonomous non-administrable code creates lots of opportunities to hold and process data in a way that can’t be hacked. For example, a technology company could use autonomous non-administrable code to scan user content for advertising keywords without ever exposing that content to application developers, malicious insiders or someone who hacks the server.


For lawyers, there’s the question of how secure enclaves interact with orders under the All Writs Act, and how much burden is an “undue burden” when it comes to compelling Apple to disarm its cryptographic self-destruct on Cloud Key Vault and other designs like it.


For the US government, there are many new questions that this new category of “Going Dark” raises if it starts to become widely adopted by other cloud providers in other contexts. In particular, the possibility that a company might be able to hold and process user data without company staff, hackers, or law-enforcement officers with court orders being able to gain access to that data holds the potential to seismically shift law-enforcement’s ability to obtain data from cloud providers.


I suspect that when Apple’s head of security engineering and architecture stood up to go on stage at BlackHat, none of this was going through his head. His team had built several cool ideas for securing Apple devices against hackers. He had overseen the construction of an ingenious design to keep the FBI out of iCloud files. He was about to make news by announcing Apple’s first-ever bug bounty program.


Little did he know he was also about to turn much of what we know about “Going Dark” on its head, and reopen that Pandora’s box.
