Apple's growing arsenal of encryption techniques - shielding data on devices as well as real-time video calls and instant messages - has spurred the US government to sound the alarm that such tools are putting the communications of terrorists and criminals out of the reach of law enforcement.
But a group of Johns Hopkins University researchers has found a bug in the company's vaunted encryption, one that would enable a skilled attacker to decrypt photos and videos sent as secure instant messages.
This specific flaw in Apple's iMessage platform probably would not have helped the FBI pull data from an iPhone recovered in December's San Bernardino, California, terrorist attack, but it shatters the notion that strong commercial encryption has left no opening for law enforcement and hackers, said Matthew Green, a computer science professor at Johns Hopkins University who led the research team.
The discovery comes as the US government and Apple are locked in a widely watched legal battle in which the Justice Department is seeking to force the company to write software to help FBI agents peer into the encrypted contents of the iPhone used by Syed Rizwan Farouk, one of two attackers who were killed by police after the shooting rampage that claimed 14 lives.
Cryptographers such as Green say that asking a court to compel a tech company such as Apple to create software to undo a security feature makes no sense - especially when there may already be bugs that can be exploited.
"Even Apple, with all their skills - and they have terrific cryptographers - wasn't able to quite get this right," said Green, whose team of graduate students will publish a paper describing the attack as soon as Apple issues a patch. "So it scares me that we're having this conversation about adding back doors to encryption when we can't even get basic encryption right."
The Justice Department contends in the San Bernardino case that it is not asking Apple for a back door or a way to weaken encryption for all its iPhones. Instead, the government says it wants Apple to dismantle a password security feature on one device so that the FBI can try its hand at cracking the encryption without risking that all the data will be wiped after too many failed attempts.
The California case involves information that is stored on a phone, whereas Green's students were focused on intercepting data in transit between devices. But they share a principle - that all software has vulnerabilities. And messing with the software hurts overall security, Green said.
"Apple works hard to make our software more secure with every release," the company said in a statement. "We appreciate the team of researchers that identified this bug and brought it to our attention so we could patch the vulnerability. . . . Security requires constant dedication and we're grateful to have a community of developers and researchers who help us stay ahead."
Apple said it partially fixed the problem last fall when it released its iOS 9 operating system, and it will fully address the problem through security improvements in its latest operating system, iOS 9.3, which will be released Monday.
Green suspected there might be a flaw in iMessage last year after he read an Apple security guide describing the encryption process and it struck him as weak. He said he alerted the firm's engineers to his concern. When a few months passed and the flaw remained, he and his graduate students decided to mount an attack to show that they could pierce the encryption on photos or videos sent through iMessage.
It took a few months, but they succeeded, targeting phones that were not running the latest operating system; iMessage itself launched in 2011.
To intercept a file, the researchers wrote software to mimic an Apple server. The encrypted transmission they targeted contained a link to the photo stored in Apple's iCloud server as well as a 64-digit key to decrypt the photo.
Although the students could not see the key's digits, they guessed them through a repetitive process: changing one digit or letter in the key and sending the modified key back to the target phone. Each time they guessed a character correctly, the phone accepted it. They probed the phone in this way thousands of times.
"And we kept doing that," Green said, "until we had the key."
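The guess-and-check process described above can be sketched in simplified form. In this illustration, a hypothetical "oracle" function stands in for the target phone's observable accept/reject behavior; in the real attack the researchers replayed modified iMessage ciphertexts through a spoofed Apple server, but the arithmetic of why the attack is feasible is the same: recovering the key one character at a time takes at most 16 guesses per hex digit, rather than trying all 16^64 keys at once.

```python
# Simplified simulation of digit-by-digit key recovery against an
# accept/reject oracle. All names here are illustrative, not Apple's
# actual protocol or the researchers' actual tooling.
import secrets
import string

HEX = string.hexdigits[:16]  # '0123456789abcdef'

def make_oracle(true_key: str):
    """Return a function that reports whether a candidate key matches
    the secret key up to a given position -- a stand-in for the target
    phone accepting or rejecting a modified message."""
    def accepts(candidate: str, upto: int) -> bool:
        return candidate[:upto + 1] == true_key[:upto + 1]
    return accepts

def recover_key(oracle, length: int = 64) -> str:
    """Recover the key one character at a time, keeping each guess the
    oracle accepts. Worst case: length * 16 probes, not 16**length."""
    recovered = ""
    for pos in range(length):
        for guess in HEX:
            if oracle(recovered + guess, pos):
                recovered += guess
                break
    return recovered

# Demonstrate recovery of a random 64-digit hex key.
secret_key = "".join(secrets.choice(HEX) for _ in range(64))
assert recover_key(make_oracle(secret_key)) == secret_key
```

The key design flaw this exploits is that the phone's response leaks whether a partial guess is correct, turning an astronomically large search into a few thousand probes.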
A modified version of the attack would also work on later operating systems, Green said, adding that it would probably have taken the hacking skills of a nation-state.
With the key, the team was able to retrieve the photo from Apple's server. If it had been a true attack, the user would not have known.
To prevent the attack from working, users should update their devices to iOS 9.3. Otherwise, their phones and laptops could still be vulnerable, Green said.
Christopher Soghoian, principal technologist at the American Civil Liberties Union, said that Green's attack highlights the danger of companies building their own encryption without independent review. "The cryptographic history books are filled with examples of crypto-algorithms designed behind closed doors that failed spectacularly," he said.
The better approach, he said, is open design. He pointed to the encryption protocols created by researchers at Open Whisper Systems, the developers of the Signal instant-messaging platform. They publish their code and their designs, but the keys, which are generated by the sender and recipient, remain secret.
Some academics have advocated that law enforcement use software vulnerabilities to wiretap targets. That, they said, is preferable to building in a back door to enable access, which they said would broadly damage security.
Susan Landau of Worcester Polytechnic Institute recommends that the government also disclose the bugs to the software's maker. "That gives you a shorter amount of time to use the vulnerability, but you still have some time," she said.
Green said that technologists such as those at the National Security Agency could easily have found the same flaw. "If you put resources into it, you will come across something like this," he said.
He said that law enforcement could use his students' attack or something similar on an unpatched iPhone to obtain photos sent via iMessage in an active criminal or terrorist investigation.
Federal investigators have been stymied when trying to intercept iMessage content. Last year, Apple and prosecutors in Baltimore wrangled for months in court over the issue, with the government trying to compel the firm to find a way to give it data in clear text, and the firm insisting it would be unduly expensive and burdensome and harmful to security. Apple reportedly does not have the technical capability to provide encrypted iMessage content in real time. The prosecutors eventually stood down in the case, which involved guns and drugs; the Obama administration had decided at that point not to push the issue in the courts.
The FBI has said that hacking phones and computers using software bugs is not something it can do easily or at scale. Officials argue it is more efficient to get a wiretap order from a judge and have the company turn on the tap. Also, certain tools might be classified for use by intelligence agencies and not available to criminal investigators.
FBI Director James Comey told lawmakers this month that the FBI had sought help from intelligence agencies to crack the code on Farouk's phone - without success. "We don't have the capabilities," he said, "that people sometimes on TV imagine us to have."