Why an Arms Control Pact Has Security Experts Up in Arms

Security researchers say proposed export rules meant to restrict the sale of surveillance software to repressive regimes are so broadly written that they could criminalize some research and restrict the legitimate tools professionals need to make software and computer systems more secure.

Critics liken the software rules, put forth by the US Commerce Department, to the Crypto Wars of the late ’90s, when export controls imposed against strong encryption software prevented cryptographers and mathematicians from effectively sharing their research abroad.

At issue is the so-called Wassenaar Arrangement, an international agreement on which the proposed US rules are based. Other countries are developing their own rules around the WA, potentially putting researchers overseas in the same troubled boat as those in the US.

To clarify why people are alarmed about the WA and the proposed US rules, we’ve compiled a primer on what they are and why they could harm not only researchers and security companies but the state of computer security itself.

What Is the Wassenaar Arrangement?

The Wassenaar Arrangement, formally the Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies, is an international arms-control agreement among 41 nations, including most of western and eastern Europe and the US.

It takes its name from a town in the Netherlands and was first developed in 1996 to control the sale and trafficking of conventional weapons and so-called “dual-use technologies,” which can serve both civilian and military purposes. A classic example of a dual-use technology is the centrifuge, which can enrich uranium for civilian nuclear power plants and also produce fissile material for nuclear weapons.

Countries that are party to the WA agree to establish and enforce export controls for listed items in a way that would either prohibit their exportation to specific countries or require a license. Although the WA is not a treaty or legal document, participating nations are expected to implement local export laws or rules to comport with it.

Historically, the WA has covered conventional munitions and materials related to the production of nuclear weapons, chemical and biological agents, and other items. But in December 2013, the control list was updated to encompass certain surveillance and intelligence-gathering software. It was the first time the WA implemented controls on software since it restricted the export of certain types of encryption products in 1998.

The motive for the new change is noble: to restrict the sale and distribution of computer surveillance tools to oppressive regimes—tools like the DaVinci system made by the Italian firm Hacking Team or FinFisher made by the UK firm Gamma Group International. Both tools, designed for law enforcement and intelligence agencies, are considered intrusion software and have extensive capabilities for spying on desktop and mobile users while avoiding detection. And both have fallen into the hands of governments with a record of human rights violations. Although the makers of the systems have long denied selling their products to repressive regimes, the tools have nonetheless popped up in places like Syria and Bahrain, where critics say they have been used to spy on and harm human rights activists and political dissidents.

This All Sounds Good; So Why Is Wassenaar So Bad?

There’s a saying that applies here about good intentions and the road to hell they pave. Although the intentions behind the WA amendment are sound, the definition of the software being controlled is so broad as to potentially encompass many legitimate security tools. It would apply, for example, to certain penetration-testing tools used by security professionals to uncover and fix vulnerable systems and would even apply to some security research.

The WA specifically calls for export restrictions on systems, equipment, and components that are designed to generate, operate, deliver, or communicate with “intrusion software.” It defines intrusion software as anything designed to “avoid detection from monitoring tools or to defeat protective countermeasures” and which can also modify or extract data from a system or modify the system. Oddly, the WA doesn’t restrict intrusion software itself, just the command and delivery systems that install or communicate with intrusion software. This would appear to encompass exploit code—code that attackers use against vulnerabilities in systems to install malicious tools … including intrusion software. But, confusingly, the Commerce Department has said that exploits are not themselves covered under the WA.

The WA also places controls on so-called IP surveillance software and tools. These are tools that, instead of infecting individual systems, can monitor a network or the Internet backbone of an entire country or region.

The language of the WA has left many in the security community confused about what it covers. Critics want the definition of software and tools to be narrowly defined and they want the word “intrusion” changed to “exfiltration,” to distinguish between tools that test systems and ones that siphon data and intelligence. So far, this hasn’t occurred.

Last year, the US Department of Commerce began developing US export controls that comport with the WA. It first called for input from the public about any adverse effects the rules might have. Then last month, the department’s Bureau of Industry and Security published its proposed rules. Their language is just as broad and vague as the WA’s and so far has done little to assuage the concerns of the security community. The Commerce Department has published an FAQ and held two public conference calls to further clarify what would be restricted under the rules, but many people are still confused.

“It was clear that even though most of us were on the same call and heard the same words, we heard different things from it,” says Katie Moussouris, chief policy officer at HackerOne and a former senior security strategist at Microsoft, who was on one of the calls.

The problem lies in the fact that the Commerce Department is trying to anticipate every scenario and software tool that might fall within the category of systems the WA aims to control. But critics say there are too many nuances at play to craft language that is broad enough to be useful yet narrow enough to avoid unintended consequences.

To be fair, the department is handling the new rules more carefully than it has handled past changes to the Wassenaar Arrangement, to account for the damage they could cause.

“In past, Commerce has just largely implemented what has come out of Wassenaar without much debate or fanfare,” says Kevin King, an export regulation expert with the law firm Cooley LLP. “To their credit, I think they appreciate the challenges that are proposed by this new rule and are trying to make sure that they get it right, so they’ve requested comment. [And] they’re getting a lot of comments.”

What Would Be Controlled Under the US Rules?

The good news for the security community is that anti-virus scanners would not be controlled. Nor would technology “related to choosing, finding, targeting, studying and testing a vulnerability,” Randy Wheeler, director of the Bureau of Industry and Security, said in a conference call last month. This means “fuzzers” and other tools researchers use are fine.

Exploits also wouldn’t be controlled. But products that have zero-day exploits or rootkits in them, or that have a built-in capability for using zero-days and rootkits, would likely be automatically denied for export, absent extraordinary circumstances. The problem, however, is that the Commerce Department hasn’t defined what it means by “zero-day” or “rootkit.”

A rootkit is malware designed to hide an attacker’s code or activity on a system. But “zero-day exploit” has different meanings depending on whom you ask. Some define it as exploit code that attacks a software vulnerability the software maker doesn’t yet know about; others define it as code attacking a vulnerability the vendor may know about but hasn’t yet patched. If the Commerce Department goes by the latter definition, the rules could have a big impact on companies that include such zero-day exploits in their penetration-testing tools.

Often, researchers will disclose zero-day software vulnerabilities at conferences, or to journalists, before the software maker knows about them and has time to patch them. Some security companies will write exploit code that attacks the vulnerability and add it to their commercial and open-source penetration-testing tools. Security professionals will then use the tool to test computer systems and networks to see if they’re vulnerable to attack from the exploits—this is particularly important to know if the vendor hasn’t released a patch for the vulnerability yet.

Under the proposed rules, however, some penetration-testing tools would be controlled if they contain zero-days. The Metasploit Framework, for example, is a tool distributed by the US company Rapid7 that uses multiple types of exploits to test systems, including zero-days. But only the proprietary commercial versions of Metasploit and other penetration-testing tools would be subject to license control; open-source versions wouldn’t. Rapid7 sells two commercial versions of Metasploit, but it also has an open-source version available for download from the code repository site GitHub. That version would not be subject to an export license, because, as a general matter, export controls don’t apply to information available in the public domain. For the same reason, products that only use regular exploits wouldn’t be controlled under the new rules, because those exploits are already known. But products that contain zero-days would be, because the latter are generally not yet public information.

King says this presumably is because a product that contains zero days is more attractive to hackers—because there are no defenses available against it—and is therefore more likely to be misused for malicious purposes.

But if all of this isn’t confusing enough, there’s another point around regular exploits that has people in the security community stymied. Although these exploits aren’t controlled, nor are products that use them, “the development, testing, evaluating and productizing of an exploit or intrusion software” would be controlled, according to Wheeler. She described it as the “underlying technology” behind exploits.

What exactly “underlying technology” means is unclear. King says it likely refers to information about the nature of the vulnerability the exploit attacks and how the exploit works. If this is the case, it could have a great impact on researchers.

This is because researchers often develop proof-of-concept exploit code to demonstrate that a software vulnerability they’ve uncovered is real and can be attacked. These exploits, and information around them, get shared with other researchers. A US researcher collaborating with a researcher in France, for example, might send the French researcher a proof-of-concept exploit to evaluate, along with information about how it was developed and how it works. That additional information would likely be controlled, King says.

He thinks that because the Commerce Department knows it would be nearly impossible to control exploits themselves, it’s focusing on controlling the technology behind them. But the line between the two is so fine, he says, that the controls would have a “very chilling effect” on cross-border research and collaboration.

But as with exploits and zero-day exploits, a distinction is made around research. Any research that will be publicly disclosed wouldn’t be controlled, because, again, the Commerce Department can’t control public information. But information about exploit techniques that is not made public would require a license to be shared across a border. The problem is, researchers don’t always know during the collaboration stage what will eventually go public.

What’s the Big Deal? It’s Only a License

Under the proposed US rules, anyone wishing to sell or distribute one of the restricted goods, software programs, or technologies to an entity in any country other than Canada would have to apply for a license. There is some leniency when the destination is one of the members of the so-called Five Eyes spying partnership—Australia, the UK, New Zealand, Canada, and the US. Although someone in the US would still have to apply for a license to ship to one of the Five Eyes countries, the Commerce Department’s policy is to view these applications favorably, and the expectation is that the license would be granted, says King.

But all of these varied licensing requirements and applications could prove burdensome for individuals and small companies that don’t have the resources to apply and wait. They could also have important repercussions for multinational companies.

King notes that currently, if a system administrator at the US headquarters of a multinational corporation purchases a product covered under the export regulation and wants to deploy that software worldwide to all of the company’s offices to improve the company’s security, she can generally do so under an existing license exception. That exception, he says, “will be ripped away” under the new rules.

“So what do these rules say to the head of security of a multi-national corporation? That if you buy a product you’ll have to get a license to export it to all of your facilities. And if your facility in France is under attack [you will] have to get a license before you can send this product over to address it? I just think that’s crazy.”

He notes one other alarming scenario. Currently, if a security professional travels with a penetration-testing tool on her computer for personal use, there is no problem. “But going forward, as a security professional if you’re traveling with this stuff on your hard drive, you’re going to need a license,” King says. “Why would we make it harder for legitimate security professionals to do their job?”

If someone makes a mistake and fails to apply for a required license, a violation of US export control rules can be very serious. Punishment can run up to 20 years in prison and a $1 million fine per violation. Realistically, though, the government has applied severe penalties only to criminal violations, where the perpetrator intentionally violated the export controls, not to accidental ones.

How Else Can the Controls Harm Security?

The new rules would also affect bug bounty programs and, in turn, the security of people who use vulnerable software. Generally, when someone uncovers a vulnerability in software, they can either sell the information to cybercriminals or a government, or they can disclose it to the public or to the vendor—through a vendor’s bug bounty program, for example—so the vulnerability gets fixed.

The former would now be a problem if a US researcher sold the information to someone in one of the restricted countries and the vulnerability were not publicly disclosed. The aim, presumably, is to prevent a researcher in the US from selling secret information about an attack technique to a country like Iran or China, which could use it for offensive purposes.

But the rule creates a problem for researchers in a Wassenaar country who want to disclose a vulnerability or attack technique to someone in another country. Moussouris, who was instrumental in establishing Microsoft’s bug bounty program when she worked for the software vendor, understands the proposed US rules to mean that if the technology and materials underpinning a vulnerability are disclosed to a bug bounty program and then disclosed to the public, this would be fine. But if a security researcher in a Wassenaar nation wanted to turn over information about a new attack technique privately to a vendor in another country, without that information ever being disclosed publicly, “they are now going to be subject to having to pass that through their home country first, before they can turn it over to the vendor,” Moussouris says.

This isn’t a far-fetched scenario. There are many cases in which researchers will disclose a new attack technique to a vendor, which will work with them to fix it quietly so that attackers won’t discover the details and design exploits using the technique. “There are things that are very valuable like exploitation techniques that are…not something that the vendor is probably ever going to want to be made public—things for which there are no defenses,” Moussouris says.

It may involve an architectural flaw that the vendor plans to fix in the next version of its software platform but can’t address with a patch for current versions. “The vendor in that case would probably never want to disclose what the technique was, because there will still be vulnerable systems out there,” she notes.

If a researcher had to obtain a license for this before disclosing it, it could hurt efforts to secure systems. The government could deny the export license and decide to use the technology for its own offensive purposes. Or there could be a long delay in getting the license application processed, preventing important information about vulnerable systems from getting to the people who need to fix them.

“In the US, a lot of license applications can take up to six weeks [to be processed],” she notes. “How much damage can be done in six weeks?”

The proposed rules as they now stand “get us back to the arguments that happened during the Crypto Wars,” Moussouris says. “We know you’re trying to keep this tech out of the hands of people who will use it for bad. However, [you’re doing it in a way] that forces us into a downgrade of security for all.”
