Questions Left Over From the Apple-FBI Debate . . .

An ominous “What now?” hung in the air after the FBI circumvented intransigent Apple to hack the San Bernardino shooter Syed Farook’s iPhone back in March. The FBI paid a third-party firm that had come forward offering to unlock the phone but wouldn’t disclose its methods to the feds. Thus concluded a contentious lawsuit—which most of us had followed on our iPhones, naturally. But the broader issues it raised were hardly settled.

Now that government hacking of civilian data has solid precedent—regardless of our tech overlords’ attempted rebellion—law enforcement, tech companies, and cybersecurity agencies public and private have to ask: When it comes to hacking, “What are the new rules of engagement?” The House Internet Caucus hosted a panel to address the question this week, and, unsurprisingly, Apple’s anti-government, pseudo-countercultural stance didn’t get a ton of support there.

Apple had been eager to get back into the position of David, as opposed to Goliath, since around the time the iPhone achieved unquestionable supremacy. In standing up to a supposedly overweening federal government, Apple CEO Tim Cook sensed his moment. The FBI’s demand for access to a terrorist’s locked iPhone, Cook averred, was “chilling.”

In reality—a different sort of place from Apple’s PR war room—our everyday tech’s vulnerability to malicious hackers is just as chilling. Kurt Opsahl, a cybersecurity lawyer for the Electronic Frontier Foundation, which mostly supported Apple in the dispute, explained that all smartphones come with accidental backdoors. Microsoft, for example, discloses its software vulnerabilities and releases the corresponding fixes on the second Tuesday of each month (“Patch Tuesday”), and, typically, hackers go wild exploiting those flaws until users get around to installing the “patch,” which comes in the form of a software update.

“Once a vulnerability has been discovered there are a number of things you can do with that vulnerability,” he explained. Cybersecurity operatives who discover cracks and bugs that make sensitive data accessible to hackers might publicly or privately disclose their findings to the vendor so that the backdoor access point can be sealed. They might sell their knowledge to a third party, or they might sit on it until someone, a good-guy government agency or a private firm, is ready with a hefty sum. Just as the FBI was: The Bureau reportedly paid upwards of a million dollars to the Israeli firm Cellebrite to unlock Farook’s phone. Cellebrite allegedly transferred Farook’s data to the FBI without revealing its methods.

Plus, Opsahl said, the advent of an “Internet of things”—a not-too-distant future in which your house, your car, everything you own will be programmable from your handheld rectangle—combined with the fact that all new tech is riddled with vulnerabilities, makes the black market for backdoor hacks a massive problem. The administration’s answer to that problem, he argued, is insufficient. In April, the Obama administration’s cybersecurity coordinator Michael Daniel published an announcement that the president had formed a nonpartisan commission to guide the implementation of the “cybersecurity action plan” released in February. “But it’s just a blog post,” said panelist Harley Geiger, policy director for Boston cybersecurity firm Rapid7, referring to the White House’s announcement.

The panel agreed we need laws to regulate the disclosure of vulnerabilities. The question of whether government hacking should occur or not is moot. “Government hacking is already happening,” Geiger said, but the current imperative is that its procedural regulations “evolve to meet society’s needs.” In other words, in order to ensure safe and effective government hacking, and to avoid high-publicity disputes over constitutionally objectionable surveillance, government hacking disclosure procedures must be codified in law. It’s a pressing concern shared by modern Americans, as the Apple case confirmed, and one that’s still awaiting an official answer.