Premature vulnerability disclosures and the collateral damage done

by Tom Cross

This week I’m in Berlin, Germany for Virus Bulletin, the premier technical conference for the anti-malware industry. I have the honor of appearing twice on the conference agenda this year. The first event is a joint presentation with Holly Stewart, who is a Sr. Program Manager Lead at Microsoft’s Malware Protection Center. We’re talking about the ethics of public vulnerability disclosure – specifically the ethics of disclosing the fact that vulnerabilities are being exploited in the wild. My second appearance is on a panel about collateral damage in cyber conflict, moderated by infosec journalist Ryan Naraine. These are two topics that have a direct relationship to each other.

Gallons of ink have been spilled on the topic of responsible or ethical vulnerability disclosure, but these discussions are usually written from the perspective of an independent security researcher who has discovered a vulnerability that no one else knows about yet and must decide whether to post the details to an open mailing list or privately inform the responsible software vendor so that they can release a patch. Obviously, in most cases it would be best if they did the latter, but some cases can get tricky, particularly when dealing with vendors who aren’t getting patches out in a timely manner.

However, many vulnerability disclosures today involve a slightly different set of circumstances. Sophisticated threat actors are doing their own vulnerability research, and so we have the increasingly common scenario wherein an incident responder or malware analyst discovers a new vulnerability that is being exploited in the wild in real attacks and must decide what to do about it. The ethical considerations in this case are slightly different. The vendor still doesn’t know about the vulnerability and still needs to get a patch out, but can we really afford to wait for that before informing the public, given that real attacks are going on?

To answer this question, Holly and I delved into a number of cases from the past few years in which vulnerabilities were publicly disclosed while real attack activity was going on, but before patches were available. The examples we found demonstrate that when you disclose an unpatched vulnerability that has been used successfully, there can be a significant bandwagoning effect.

Thousands of new vulnerabilities are disclosed every year, and most of them are not particularly useful to attackers. When real attack activity confirms the practical value of a vulnerability, and attackers know that no one can defend themselves because patches aren't available, that's an opportunity they tend to jump on in large numbers.

What this means is that even in cases where attack activity is occurring in the wild, it may make sense to privately coordinate vulnerabilities with the responsible software vendor and wait for a patch, because public disclosure before a patch release can result in even more attack activity. Getting the right answer involves considering how quickly the vendor will be able to produce a patch, whether there are practical workarounds available before the patch comes out, and how quickly attack activity is spreading on the Internet.

One of the examples we use in our paper is the .lnk vulnerability exploited by the Stuxnet worm. This is a particularly difficult case, because Stuxnet was self-reproducing malware, so infections were spreading rapidly when it was discovered in the wild. However, the public disclosure of this vulnerability did result in a significant bandwagoning effect, wherein various threat actors quickly adopted the vulnerability and began exploiting it at a rate that exceeded Stuxnet's own spread. Also, Microsoft managed to get a patch out quickly. So, with the benefit of hindsight, it is possible to argue that it would have been better to wait a couple of weeks for the patch to become available before disclosing this vulnerability to the public.

What we also now know is that Stuxnet was an example of international conflict in cyberspace. The worm was designed by a nation state to physically damage machines attached to the computers it infected. The computer systems of innocent people that Stuxnet infected represent collateral damage in this conflict. Which brings me to my second appearance at Virus Bulletin, on a panel discussion about collateral damage in cyber conflict.

A few months ago I presented with some colleagues in Tallinn, Estonia at CyCon – the International Conference on Cyber Conflict. This presentation discussed techniques used by malware throughout history to limit unintended infections. The subject is of interest at CyCon because the Law of Armed Conflict requires combatants to take reasonable steps to limit collateral damage whenever they launch attacks (whether they be kinetic or cyber).

Arguably, the authors of Stuxnet took steps to limit collateral damage. Although the worm itself infected many computer systems all over the world, the payload that actually caused physical damage was very unlikely to execute against a machine that was not the intended target. There is some ambiguity in the Law of Armed Conflict regarding whether or not unintended malware infections are considered a type of collateral damage that nation states are required to avoid, but nation states are clearly required to avoid unintentional physical damage to machines and equipment, and in this case, the authors of the malware appear to have done so.
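
To make the idea of gating a payload to its intended target more concrete, here is a minimal Python sketch of one technique discussed in this space, sometimes called environmental keying: the damaging payload ships encrypted under a key derived from attributes of the intended victim's environment, so on any other machine it never decrypts, let alone runs. This is an illustration of the general concept under assumed, hypothetical machine attributes, not Stuxnet's actual mechanism.

```python
import hashlib

def derive_key(hostname: str, domain: str, device_id: str) -> bytes:
    """Derive a key from attributes of the local environment."""
    return hashlib.sha256(f"{hostname}|{domain}|{device_id}".encode()).digest()

def xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR cipher, standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- Build time: the attacker knows the target's environment (all values
# hypothetical). Only a hash of the key and the encrypted blob are shipped.
target_key = derive_key("plant-ws04", "TARGET.LOCAL", "device-1234")
verifier = hashlib.sha256(target_key).digest()
blob = xor(b"<payload bytes>", target_key)

# --- Run time, on whatever machine the carrier has reached. ---
def maybe_run(hostname: str, domain: str, device_id: str) -> None:
    key = derive_key(hostname, domain, device_id)
    if hashlib.sha256(key).digest() != verifier:
        return  # wrong environment: payload stays encrypted, nothing runs
    print("target matched; payload decrypts to:", xor(blob, key))

maybe_run("laptop-17", "HOME", "device-9999")           # silently does nothing
maybe_run("plant-ws04", "TARGET.LOCAL", "device-1234")  # gate opens
```

The design point is that the carrier can spread widely while the damaging component remains inert everywhere except the one environment it was keyed to, which is precisely the property that limits collateral damage.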

I do think that malware infections are a type of collateral damage, and that the Law of Armed Conflict should require nation states to take reasonable steps to avoid unnecessary infections where those infections cause significant harm or data loss. This is an area where the law needs further refinement, to differentiate more clearly between effects that are merely disruptive and data loss that causes significant economic consequences, which malware infections can obviously produce.

One of the wildcards in this discussion is premature vulnerability disclosure. When nation states develop malware, that malware often exploits new, unpatched security vulnerabilities. When you use an unpatched vulnerability in an attack against an adversary, the technical details of that vulnerability can fall into the wrong hands. Stuxnet is proof of this: the .lnk vulnerability was used by organized criminal groups to spread malware before a patch was available. Is the nation state that launched the initial attack responsible for the follow-on consequences of subsequent attacks launched by other parties using the same vulnerability?

To some extent, the answer is yes. On some level, we would hold a nation state responsible for leaving dangerous weapons in a place where they could be captured and used by irresponsible third parties. Imagine if crates of machine guns were left where a drug cartel could easily get them – we would hold the country that left them there at least partially responsible for the consequences that unfolded.

How should the Law of Armed Conflict deal with this scenario? What steps can a nation state take to prevent unintended vulnerability disclosure in the course of disseminating malware, or to limit the impact of that disclosure by ensuring that those vulnerabilities get patched rapidly in the event that they are publicly disclosed? This is an area in need of further discussion and exploration.

One suggestion is that nation states employing new vulnerabilities in conflict could prepare disclosure packages for the responsible software vendors, containing detailed information about the nature of the vulnerabilities, and could proactively provide those packages to the vendors if the vulnerabilities became exposed in the course of a conflict. This sort of socially responsible effort would reduce the harm associated with unintended vulnerability disclosure by accelerating the pace at which vendors can respond with patches. The negative consequence, however, is that it could compromise the identity of the attacker, so I expect that nation states involved in covert action over the Internet would not embrace such a suggestion.
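
For illustration, here is a minimal Python sketch of what such a pre-prepared disclosure package might contain; the fields and example values are my own assumptions about what a vendor would need to triage and patch quickly, not a format proposed in our paper or in any standard.

```python
from dataclasses import dataclass, field

@dataclass
class DisclosurePackage:
    """Hypothetical pre-prepared disclosure package for a software vendor.

    Fields are illustrative assumptions about what a vendor would need in
    order to produce a patch quickly, not an established format.
    """
    product: str                  # affected product and version range
    component: str                # vulnerable component (file, service, parser)
    vulnerability_class: str      # nature of the flaw
    trigger_description: str      # how the flaw is reached and triggered
    proof_of_concept: bytes       # minimal input demonstrating the flaw
    suggested_workarounds: list[str] = field(default_factory=list)

# Example values are entirely hypothetical.
package = DisclosurePackage(
    product="ExampleOS 5.x",
    component="shortcut file parser",
    vulnerability_class="code execution via malformed shortcut",
    trigger_description="Rendering the icon of a crafted shortcut file "
                        "causes attacker-controlled code to load.",
    proof_of_concept=b"minimal crafted input demonstrating the flaw",
    suggested_workarounds=["disable icon rendering for shortcut files"],
)
```

The essential property is that the package is assembled before the vulnerability is ever used in an operation, so it can be handed to the vendor on short notice if exposure occurs.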

I’m looking forward to the panel discussion in Berlin, and I’m hoping that the experts there have constructive ideas that can further this thought process.

If you are a malware analyst or incident responder and you think you might one day find yourself in possession of an unpatched security vulnerability that is being used in the wild, please have a look at the guidance Holly and I have put together. There will be some information in our presentation slides, which will be available on Virus Bulletin’s website after our presentation is over. However, if this is a topic that is relevant to you, we strongly suggest purchasing a copy of the Virus Bulletin 2013 conference proceedings, which will include a copy of our full 11-page paper. You can obtain a copy by contacting Virus Bulletin at the following email address: conference@virusbtn.com.