Posted by: Michael Morisy
NAISG, Security, Vulnerability Disclosure
Samsara: In Buddhism and Hinduism, the endless round of birth, death, and rebirth to which all conditioned beings are subject. – Britannica Concise Encyclopedia
At last month’s Boston NAISG meeting, Zach Lanier gave an excellent presentation entitled “Disclosure Samsara: The Endless Responsible Vulnerability Disclosure Debate.” He’s since posted the slides, with a shorter summary also available.
The gist of Zach’s talk was that security researchers and the major software firms they cover are locked in a constant, mutually destructive cycle: Since much security exploit research, particularly for cross-site scripting (XSS) attacks, involves at least technical legal violations, researchers expose themselves to legal threats if they approach vendors with discovered vulnerabilities.
When researchers do go forward, there’s often strong disagreement about when public disclosure will happen, if at all (researchers typically favor disclosure strongly, because it’s the only way they’ll be credited for their discoveries).
On the other side of the fence, there are lawyers, corporate goons … and developers who feel they’re being held hostage by pay-to-play schemes. In my coverage of network vulnerabilities, the latter was the usual excuse, lame or not, for why vendors refused to discuss vulnerabilities with researchers.
Zach’s presentation outlines some of the benefits a peace agreement could bring, including letting system administrators and security professionals craft workarounds more quickly, ultimately lowering the chance of a successful breach when an organization stays on top of its security news.
Legislation has done a good job of pushing companies to disclose when security breaches involve user data, but could it also ease researcher/vendor tensions and work for the good of the overall (generally law-abiding) IT community? After all, it’s often these vulnerabilities (though second to human error) that allow such breaches in the first place.
The immediate answer would seem to be ‘no’: Allowing “research exemptions” to laws like the DMCA has worked poorly, if at all, in the past, and granting greater legal leeway to researchers who are often misunderstood already seems like a tricky political sell even in the best of times.
Any legislation that did emerge could well cause more harm than good.
But what other options are there for a broadly applied vulnerability disclosure framework? Is escaping this “Samsara” even a realistic goal? Perhaps, and perhaps in the slow, piecemeal form it has taken so far: A more enlightened vendor here who offers a process for working with researchers, another security firm there willing to consistently abide by RFPolicy or another disclosure framework.
What are your thoughts? Are security research disclosures more public nuisance than public good, or should there be a better understanding between companies and researchers when it comes to full disclosure? I’d love to hear your thoughts in the comments, or directly at Michael@ITKnowledgeExchange.com. I’ll keep your information private if requested.
- EFF “Gray Hat Guide”
- IT services and The Three Chinese Curses
- In IT Answers: Security Advisories
- In IT Answers: How much should IT disclose post-intrusion?