Security Bytes


December 7, 2017  1:57 PM

OWASP Top Ten: Surviving in the cyber wilderness

Peter Loshin
OWASP

If you think the latest iteration of the Open Web Application Security Project’s Top Ten list of the “top” web application security risks has important news for your organization, well, you may be disappointed. And that’s fine because that’s not what the OWASP Top Ten is intended to do.

The 2017 edition of the OWASP Top Ten is quite like the 2013 version, which in turn was quite like the 2010 version, and so on, all the way back to the first version published in 2003 (see table). The new version is different, but the differences are evolutionary rather than revolutionary — and that’s fine, too.

The OWASP list isn’t meant to be a source of new and flashy security vulnerabilities; it’s a top ten list. That means it’s the top ten most basic risks that everyone should be aware of. It’s a list of the most important things to worry about in defending web applications — not the list of everything that information security professionals should worry about, just the bare minimum.

Use the OWASP Top Ten to stay safe

The OWASP Top Ten list should guide infosec pros in the same way hikers and backpackers are guided by their favorite version of the “ten essentials” lists for outdoor activities. There are minor differences between the lists (the Boy Scouts of America put a pocket knife at the top of theirs, while the Appalachian Mountain Club starts with map and compass at number one), but the goal is the same: to define the minimum you need to stay safe if you get lost or injured in the woods.

If you want to avoid dying of hypothermia, you should carry extra clothes and, maybe, a tarp for emergency shelter. If you want to avoid dehydration, you should carry water. Do you want to avoid getting lost? Carry a map and compass. The advice is mostly the same in 2017 as it was in 1917.

Want to prevent hackers from pwning your web application? The advice in 2017 is, mostly, the same as it’s been for 15 years since the first edition of the OWASP Top Ten was published.

You can avoid injection attacks by validating input and parameters: Injection is at the top of the list as it has been since 2010; it went from #6 in 2003 to #2 in 2007.
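
What does that look like in practice? Here is a minimal sketch (mine, not OWASP’s) of the canonical defense: binding user input as a query parameter instead of concatenating it into the SQL string. The database, table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical example database

user_id = input("User ID: ")  # untrusted input straight from the user

# Vulnerable: concatenating input into the SQL text lets a value like
# "0 OR 1=1" rewrite the query itself.
# rows = conn.execute("SELECT name FROM users WHERE id = " + user_id)

# Safer: the ? placeholder binds the value as data, never as SQL syntax.
rows = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,))
print(rows.fetchall())
```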

Same for cross-site scripting, which debuted in the #4 spot in 2003 and rose to #1 in 2007 — but dropped to seventh place in 2017. That doesn’t mean it’s time to stop worrying about XSS attacks: as long as XSS is on the OWASP Top Ten list, defending against it remains essential for web app security.
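
The standard XSS defense is just as old: encode untrusted data for the context in which it is rendered. Here is a minimal sketch using Python’s standard library; the comment value is a hypothetical attacker payload.

```python
import html

comment = '<script>alert("pwned")</script>'  # hypothetical untrusted input

# Vulnerable: dropping raw input into the page lets the browser execute it.
unsafe_page = "<p>" + comment + "</p>"

# Safer: escape markup-significant characters before rendering.
safe_page = "<p>" + html.escape(comment) + "</p>"
print(safe_page)
# <p>&lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</p>
```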

No need to rank “essentials”

The order of the ten essentials lists for hikers doesn’t matter because they are ALL essential. The Boy Scouts list water at number five, but you won’t see a scout leaving water at home because it’s not as important as a first aid kit (#2).

The same should go for infosec pros looking to tighten up their web application security: the OWASP Top Ten lists the fundamentals. If you’re not addressing these things, the odds are that your web application won’t survive very long against even the least sophisticated attack.

“Yes, but there are lots of risks that were once listed on the OWASP Top Ten, and now they’re not,” you say? “What happened to buffer overflows and error handling, which were ranked at #5 and #7, respectively, in 2004?”

Even if an older risk has dropped off the OWASP list, it is probably still worth keeping in mind. If I were an infosec professional, I’d track the historical risks, if only because most risks don’t disappear; instead, they evolve over time. Humans have been venturing into the wilderness for tens of thousands of years, so we have a pretty good idea of the risks there. Web app security is still in its infancy, so we probably don’t yet know what the biggest risks are.

Meanwhile, defenders shouldn’t consider protecting against the OWASP Top Ten to be their goal — it should instead be the barrier to entry, in the same way that many trip leaders impose the requirement that all participants in their hikes must show up equipped with the ten essentials of hiking.

OWASP Top 10 Lists through history:

[Table: The OWASP Top Ten list has evolved since it was first published in 2003]

November 30, 2017  7:36 PM

The CASB market is (nearly) gone but not forgotten

Rob Wright
CASB

Cloud access security brokers arrived on the scene with a bang in 2015, thanks to plentiful venture capital funding, compelling use cases and alluring customer testimonials.

Almost three years later, much has changed with the cloud access security broker (CASB) market. There are barely any stand-alone players left following a flurry of acquisitions that drastically reshaped the cloud security industry. A space that was once dominated by born-in-the-cloud startups is now filled with giants such as Microsoft, Cisco and Symantec.

The latest deal in the CASB market saw McAfee acquire Skyhigh Networks, arguably the top vendor in the space, this week. But there were many acquisitions before it. To recap:

  • Microsoft got the ball rolling with CASB acquisitions when it purchased startup Adallom, which later became part of Microsoft’s Cloud App Security business, in July of 2015. Terms of the deal were not disclosed, but news outlets reported the price ranged between $250 million and $320 million.
  • Also in July of 2015, Blue Coat Systems acquired Perspecsys for an undisclosed amount (Blue Coat was later acquired by Symantec in 2016).
  • In November of 2015, Blue Coat made another CASB acquisition with its purchase of Elastica for $280 million.
  • Cisco acquired CloudLock in June of 2016 for $293 million.
  • In September of 2016, Oracle purchased Palerra for an undisclosed amount.
  • Proofpoint acquired FireLayers in October of 2016. Terms of the deal were not disclosed.
  • In February of this year, Forcepoint acquired Skyfence from Imperva for approximately $40 million. Imperva had purchased the CASB in 2014.

There are a few remaining stand-alone players in the CASB market, including Netskope (which analysts considered a market leader alongside Skyhigh), Bitglass and CipherCloud. But going forward, the space will be increasingly dominated by the old guard rather than the startups.

However, that’s not necessarily a bad thing. While the CASB market as a stand-alone category may soon be a thing of the past, the CASB model is very much alive. Cloud application and SaaS usage, whether approved by IT departments or not, will only increase, and enterprises will continue to need products and services that can discover, manage and secure those apps.

In addition, the vendors that moved into the CASB space have a clearly stated desire to increase their cloud security presence. That includes companies like Microsoft and Oracle, which have their own SaaS offerings as well as a need to protect customer usage of those cloud apps. Even McAfee, which at one point under Intel’s ownership had scaled back on its cloud offerings, has a new vision of combining endpoint and cloud security.

The stand-alone CASB market may be nearly gone, but the business case for the technology remains. It’s unclear whether CASB will continue to be a security product category in the coming years, or if the functionality will simply be folded into existing categories like web security gateways or other cloud security services. But the enterprise challenges of managing shadow cloud services and securing third-party SaaS offerings will remain, and so too will CASB technology.


November 22, 2017  5:33 PM

Uber data breach raises unsettling questions for infosec

Rob Wright
Security

Uber Technologies, Inc., is no stranger to self-inflicted wounds, but the latest visit to the infirmary goes far beyond the kinds of running-with-scissors episodes that have made the ride-sharing company infamous.

Bloomberg Technology reported Tuesday that Uber suffered a massive data breach in the fall of 2016 that exposed the names, email addresses and phone numbers of 50 million customers worldwide, as well as the personal information of 7 million drivers. The Uber data breach was concealed by the company for more than a year, according to the report, thanks to efforts by the company’s former CSO and another member of the infosec team.

Rather than disclose the breach to regulators and notify affected drivers and customers, Joe Sullivan, who was ousted from his CSO position this week, and Craig Clark, another member of the security team, engaged in a cover-up that included paying the hackers behind the breach $100,000 to delete the stolen data and keep quiet about the incident.

Newly appointed CEO Dara Khosrowshahi said he only recently became aware of the Uber data breach and pledged to take several actions to correct the dysfunction that led to the cover-up. “None of this should have happened, and I will not make excuses for it,” Khosrowshahi wrote in a statement. “While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes.”

It’s easy to look at the Uber data breach and its ensuing cover-up and localize it to Uber’s rotten corporate culture. After all, the company has an established track record of engaging in unethical and possibly illegal practices while skirting government regulators.

However, sitting back and saying “Forget it, Jake – it’s Uber” may be missing a larger concern. There are a number of troubling aspects about this incident, starting with the fact that Sullivan was a former federal prosecutor with the U.S. Department of Justice. Presumably, he knew the legal risks of covering up the Uber data breach, to say nothing of the ethical implications.

It’s also worth noting that Sullivan isn’t an inexperienced nobody who might claim ignorance of proper infosec and data breach notification practices. He was the CSO at Facebook for more than five years and also served as the social networking giant’s associate general counsel (in a separate story, Bloomberg reported Sullivan also served Uber as deputy general counsel while he was CSO, though the company never officially named him to such a position).

Again, it’s easy to argue that Uber’s culture somehow got its hooks into a respected and experienced CSO and influenced him to the point where he abandoned his legal and ethical duties. But viewing this data breach cover-up as an incident that only Uber could commit misses the writing on the wall.

First, I’ve heard numerous stories at infosec conferences this year about unnamed companies, including healthcare and financial services organizations, that were hit with ransomware and then paid the ransom without disclosing the incident to regulators or the public. Is a ransomware attack technically a data breach? That’s a debatable question and a subject for another time. But I suspect that a resistance to disclosures and notifications for security incidents, whether ransomware or network intrusions, has been growing within corporate America in recent years.

And second, this isn’t the first time an organization has engaged in a reckless cover-up of data breaches. Last year a congressional investigation revealed the FDIC engaged in repeated cover-ups of major cyberattacks and data breaches, and even retaliated against whistleblowers within the agency. And that’s just one example where the cover-up was both wanton and later exposed. There are other curious incidents of breaches and cyberattacks that occurred many months or even years earlier and, for mysterious reasons, have only recently become public knowledge.

We may want to believe that only the truly reckless and lawless companies would do what Uber did, but I think it’s time to start asking how many other enterprises may be running with scissors and on the verge of gutting both themselves and their customers.


November 1, 2017  12:55 AM

The Equation Group malware mystery: Kaspersky offers an explanation

Rob Wright

The ongoing drama between Kaspersky Lab and the U.S. government received some much-needed sunlight last week as the antivirus vendor finally uttered two very important words: Equation Group.

Kaspersky issued a statement describing how it came to possess Equation Group malware, a response to recent news reports claiming the vendor had National Security Agency (NSA) cyberweapons on its network in 2015. Both the government and the antivirus vendor have quietly tiptoed around Equation Group since the Kaspersky controversy got rolling earlier this year. And it’s easy to see why – the government doesn’t want to officially acknowledge that the NSA is in the business of creating and using malware, and Kaspersky likely didn’t want to highlight a sore spot for the U.S. government that could further inflame the situation (after all, Kaspersky was the first to blow the lid off Equation Group with its 2015 report).

But Kaspersky was backed into a corner with mounting political pressure and government-wide bans on its products. The company played one of its last remaining cards: it came clean and offered a somewhat plausible explanation why it had possession of Equation Group malware.

In short, Kaspersky’s statement claims that in 2014 its antivirus software scanned a system and detected a simple backdoor in a product-key generator for a pirated version of Microsoft Office (this system is presumed to belong to the NSA contractor or employee who reportedly took cyberweapons home and installed them on a personal computer). The antivirus program also detected a 7-Zip archive of “previously unknown” malware, which it relayed to the company via Kaspersky Security Network (KSN) for further analysis.

The statement offers some answers to lingering questions on the matter, but it also produces new questions and concerns for Kaspersky and the U.S. government. Here are some important ones:

  • “As a routine procedure, Kaspersky Lab has been informing the relevant U.S. Government institutions about active APT infections in the USA,” the statement reads. The implication is that after detecting and analyzing the 7-Zip archive of new Equation Group malware, the company alerted the U.S. government. But that statement is just left hanging there, and Kaspersky never explicitly states it contacted the relevant authorities about the malware. Did it? If Kaspersky did, then why not spell it out in no uncertain terms? If it didn’t, could that be a source of contention between the vendor and the U.S. government?
  • After analyzing the Equation Group malware, Kaspersky researchers notified CEO Eugene Kaspersky. “Following a request from the CEO, the archive was deleted from all our systems,” the statement read. This suggests that Kaspersky did not, in fact, contact the U.S. government about its findings. So why did the company delete the files? It could be, as some have speculated, that the archive contained files with classified markings on them. But Kaspersky throws cold water on the media reports of “NSA classified data” being on its servers and states no such incident took place. And if the classified-markings theory is true, why did it take extensive analysis from Kaspersky researchers to find those markings?
  • Kaspersky said it detected other instances of the Equation Group malware on systems in the “same IP range” as the original system. These detections were made after Kaspersky published its Equation Group report in February of 2015; according to the statement, the company believes these systems, which had KSN enabled, were set up as honeypots. However, Kaspersky doesn’t explain why it believes they were honeypots, and why they were set up. But this point suggests the U.S. government, or at least individuals within the NSA, knew the Equation Group malware had been exposed and uploaded to Kaspersky. That would contradict earlier news reports claiming the U.S. didn’t know about exposure of NSA cyberweapons until 2016.
  • Kaspersky wrote “No other third-party intrusions, besides Duqu 2.0, were detected” on its networks. This is presumably a response to the aforementioned media reports, which claimed that Israeli intelligence officers (who reportedly hacked into Kaspersky’s network) observed Russian hackers on the company’s network abusing antivirus scans to search for U.S. government data. But it doesn’t confront the allegation in The Wall Street Journal report that Kaspersky willingly let state-sponsored threat actors into its environment and was actively working with the Russian government. It also dances around the question of who was behind the Duqu 2.0 attack.

Kaspersky’s statement on the Equation Group malware is quite detailed, offering names for malicious code samples and files and specifics about the system on which the malware was first detected. But the statement also skips over important details and key questions in the ongoing Kaspersky controversy. If the company and the government continue to withhold vital information that could clear up this mess, both will look increasingly bad as this drags on.


October 31, 2017  9:18 PM

Is “responsible encryption” the new answer to “going dark”?

Peter Loshin

“Three may keep a Secret, if two of them are dead.”

So wrote Benjamin Franklin, in Poor Richard’s Almanack, in 1735. Franklin knew a thing or two about secrets, as well as about cryptography, given his experience as a diplomat for the fledgling United States, and he’s right: a secret shared is a secret exposed.

But it’s 2017 now, and the Department of Justice and the FBI are still hacking away at encryption, and the conversation about encryption and the need for the government to be able to access any and all encrypted data continues to hit the same talking points as when then-FBI Director Louis Freeh and Attorney General Janet Reno were pushing them in the 1990s — and, we might imagine, the same arguments could have been offered by King George’s government in the run-up to the Revolutionary War.

FBI Director Christopher Wray and Deputy Attorney General Rod Rosenstein have been taking the latest version of the “strong encryption is bad” show on the road, again, with a new buzzword: “responsible encryption.” While phrasing continues to morph, the outline is the same: the forces of evil are abusing strong encryption and running wild, destroying our civilization.

Some things have changed since the first battles in the crypto wars were waged more than 25 years ago. For example, the FBI and DOJ have listed money launderers and software pirates alongside the terrorists, human traffickers and drug dealers as part of the existential threat posed by unbreakable encryption.

It all boils down to a single question: Should law-abiding citizens be forbidden to defend themselves with encryption so strong that not even a government can break it, just so criminals can be denied it?

Rosenstein makes it clear that any piece of encrypted data subject to a valid court order must be made accessible to law enforcement agencies. “I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so,” he said at the 2017 North American International Cyber Summit, in Detroit on October 30.

If the person who encrypted the data chooses not to unlock it, Rosenstein and Wray believe the company that provided the encryption technology must be able to make that data available upon presentation of a warrant.

In the 1990s, the government demanded a key escrow platform through which all encryption could be reversed on demand. The resulting Clipper Chip was a spectacular failure, both technically and politically. And in 2015, then-FBI Director James Comey pushed the term “going dark” into the conversation.

This time around, we’re offered the concept of “responsible encryption”: presumably some form of encryption that includes some (as yet undetermined) mechanism that provides lawful access to the encrypted data. The phrase itself is not new — it seems to have originated in 1996 Senate testimony by Freeh:

The only acceptable answer that serves all of our societal interests is to foster the use of “socially-responsible” encryption products, products that provide robust encryption, but which also permit timely law enforcement and national security access and decryption pursuant to court order or as otherwise authorized by law.

As for how that might be achieved, well, that’s not the business of the government, Rosenstein now tells us. Speaking in Detroit, he said, “I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.”

However, he does seem to think that the answer is not as difficult as the experts would have us believe — and it would not be necessary to resort to back doors, either. Rosenstein said:

“Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.”

It seems Rosenstein is suggesting these functions — key management, data scanning, “simulcast” of data and key recovery — can each be part of a “responsible encryption” solution. And since these features have already been deployed individually in commercial products, tech firms apparently just need to “nerd harder” and combine them by doing the following (a rough sketch of what the key escrow piece implies appears after the list):

  • maintaining a giant key repository database, so all encryption keys are accessible to government agents with court orders — but also secure enough to protect against all unauthorized access
  • scanning all content before it is encrypted, presumably to look for evidence of criminal activity — but hopefully without producing too many false positives
  • “simulcasting” all data, either before it is encrypted or maybe after it is encrypted and the keys are stored for government access — so it can be retrieved or scanned at the government’s leisure
  • deploying “key recovery” for encrypted laptops, but for all laptops, everywhere, and accessible to authorized government agents only
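
To see what those bullets add up to, here is a rough sketch of the simplest escrow design — my illustration, not anything the DOJ has actually proposed: wrap each message’s data key once for the user and once for an escrow authority. The point is in the last two lines: whoever holds the escrow private key can read everything, which is why the key repository itself becomes the attack surface described below.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical parties: the device owner and a government escrow authority.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the message with a fresh symmetric data key.
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"attorney-client privileged", None)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Wrap the data key twice: once for the user, once for the escrow authority.
wrapped_for_user = user_key.public_key().encrypt(data_key, oaep)
wrapped_for_escrow = escrow_key.public_key().encrypt(data_key, oaep)

# "Lawful access": anyone holding the escrow private key recovers everything.
recovered = escrow_key.decrypt(wrapped_for_escrow, oaep)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == b"attorney-client privileged"
```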

Unfortunately, none of the answers the government offers can make key escrow scalable or secure. There are many, many reasons the law enforcement community’s demand for breakable encryption is not a reasonable (or even practical) solution, but two spring to mind immediately:

  • Key escrow schemes are massively complicated and produce huge new attack surfaces that could, if successfully breached, destroy the world’s economy. And, they would be breached (see Office of Personnel Management, Yahoo, Equifax and others).
  • “Responsible encryption” means law-abiding organizations and people can no longer trust their data. With cryptography backdoored, forget about privacy; there is no longer any way to verify that data has not been altered.

A ban on end-to-end encryption in commercial tech products will only prevent consumers from enjoying its benefits — it won’t prevent criminals and threat actors from using it.

We shouldn’t be surprised that this or any government is interested in having unfettered, universal access to all encrypted data (subject, of course, to lawful court orders).

However, once we allow the government to legislate weaker encryption, we’re lost. As Franklin wrote in the 1741 edition of Poor Richard’s Almanack:

“If you would keep your secret from an enemy, tell it not to a friend.”


October 20, 2017  6:46 PM

Latest Kaspersky controversy brings new questions, few answers

Rob Wright

Kaspersky Lab’s latest salvo in its ongoing feud with the U.S. government and media offered some answers but raised even more questions.

The company on Tuesday broke its silence a week after a series of explosive news reports turned up the heat on the Kaspersky controversy. We discussed the reports and the questions surrounding them in this week’s episode of the Risk & Repeat podcast, but I’ll summarize:

  • The New York Times claimed that Israeli intelligence officers hacked into Kaspersky Lab in 2015 and, more importantly, observed Russian hackers using the company’s antivirus software to search for classified U.S. government documents.
  • The Washington Post published a similar report later that day and also claimed Israeli intelligence discovered NSA hacking tools on Kaspersky’s network.
  • The Wall Street Journal also had a similar story the next day on the Kaspersky controversy, with a very important detail: the Kaspersky antivirus scans were allegedly searching for classified U.S. government data.

These reports resulted in the most serious and detailed allegations yet against Kaspersky; anonymous government officials had accused the company of, among other things, helping state-sponsored Russian hackers by tailoring Kaspersky antivirus scans to hunt for U.S. secrets.

As a result, Kaspersky responded this week with an oddly worded statement (and a curious URL) that offered some rebuttals to the articles but also raised more questions. Much of the statement focuses on the “English-speaking media,” their use of anonymous sources and the lack of specific details about the secret files that were stolen, among other elements of the news reports.

But there are some important details in the statement that both shed light on the situation and raise further questions on the Kaspersky controversy. Here are a few points that stood out:

  • Kaspersky doesn’t directly confront the allegation that it had, or has, NSA cyberweapons on its servers. But it did provide reasoning for why it would have possession of them: “It sounds like this contractor decided to work on a cyberweapon from home, and our antivirus detected it. What a surprise!”
  • Kaspersky explained how its antivirus scanning works, specifically how Kaspersky Security Network (KSN) identifies malicious and suspicious files and transfers them to Kaspersky’s cloud repository. This is also where Kaspersky throws some shade: “If you like to develop cyberweapons on your home computer, it would be quite logical to turn KSN off — otherwise your malicious software will end up in our antivirus database and all your work will have been in vain.”
  • Ironically, the above point raised more questions about the reported NSA breach. Wouldn’t an NSA contractor know that having hacking tools (a.k.a. malware) on their computer would alert Kaspersky’s antivirus software? Wouldn’t the individual know to turn off KSN or perhaps use Kaspersky’s Private Security Network? It’s entirely possible that a person foolish enough to bring highly classified data home to their personal computer could commit an equally foolish error by fully enabling Kaspersky antivirus software, but it’s difficult to believe.
  • Kaspersky provided an explanation for why it would have NSA hacking tools on its network, but it didn’t offer any insight into how hackers could gain access to KSN data and use it to search for government documents. When Kaspersky was breached in 2015 (by Russian hackers, not Israeli hackers), did they gain access to KSN? Could threat actors somehow intercept transmissions from Kaspersky antivirus software to KSN? The company isn’t saying.
  • Let’s assume Kaspersky did have NSA cyberweapons on its network when the company was breached in 2015 (which, again, the company has not confirmed or denied). This makes sense, since Kaspersky was the first to report on the Equation Group in February of 2015. But it raises the possibility that Kaspersky had possession of exploits like EternalBlue, DoublePulsar and others that were exposed by the Shadow Brokers in 2016 – but for whatever reason didn’t disclose them. The Equation Group report, which cited a number of exploit codenames, shows there is some overlap between what Kaspersky discovered and what was later released by the Shadow Brokers (the FoggyBottom and Grok malware modules, for example, were included in last month’s UNITEDRAKE dump). But other hacking tools and malware samples discovered by Kaspersky were not identified by their codenames and instead were given nicknames by the company. Did Kaspersky have more of the cyberweapons that were later exposed by the Shadow Brokers? The idea that two largely distinct caches of NSA exploits were exposed – one obtained by Kaspersky, and one stolen by the Shadow Brokers – is tough to wrap my head around. But considering the repeated blunders by the intelligence community and its contractors in recent years, maybe it’s not so far-fetched.
  • The Equation Group is the elephant in the room. Kaspersky’s landmark report on the covert NSA hacking group seems relevant in light of last week’s news, but the company hasn’t referenced it in any capacity. Does Kaspersky think the Equation Group reveal played a part in the U.S. government’s decision to ban its products? Again, the company isn’t saying. Instead, Kaspersky’s statement took some shots at the news media and made vague references to “geopolitics.”
  • Finally, a big question: Why did Israeli intelligence officers hack into Kaspersky’s network in 2015? The articles in question never make that clear, and Kaspersky never directly addresses it in its statement. Instead, the company cites its report on the breach and the Duqu 2.0 malware used in the attack. But this is an important question, and it’s one that Kaspersky has shown no interest in raising. That is strange, because it’s an important part of this mess that has seemingly been overlooked. What was the motive for the attack? Was it in some way a response to Kaspersky’s exposure of the Equation Group? Was Israeli intelligence hoping to gain deeper insight into Kaspersky’s technologies to avoid detection? It’s unclear.

More bombshell stories on the Kaspersky controversy are likely to drop in the coming weeks and months. But until the U.S. government officially discloses what it has on the antivirus maker, and until Kaspersky itself comes clean on unanswered questions, we won’t have anything close to a clear picture.


September 29, 2017  8:16 PM

FBI’s Freese: It’s time to stop blaming hacking victims

Rob Wright

The infosec industry needs to express more empathy for hacking victims and engage in less public shaming.

That was the message from Don Freese, deputy assistant director of the FBI and former head of the bureau’s National Cyber Investigative Joint Task Force (NCIJTF), at the (ISC)2 Security Congress this week. In his opening keynote discussion with Brandon Dunlap, senior manager of security, risk and compliance at Amazon, Freese focused on the importance of proper risk management in building a strong enterprise security posture.

But he reserved a portion of his talk to confront an oft-criticized and occasionally ugly practice of the infosec industry: blaming and shaming hacking victims.

In discussing the lack of communication and trust between security professionals and the rest of the enterprise, including C-suite executives, Freese talked about what he called an “unhealthy sense of superiority” in the cybersecurity field, which can lead to victim blaming.

“Certainly the FBI struggled with this in our culture,” Freese said. “The FBI was rightfully criticized in the last decade for victimizing people twice in cyber [attacks]. We certainly don’t do this when there’s a violent crime, when somebody is involved in a terrorist incident, or something like that. We don’t rush in and blame the victim. [But] we do it, and we have done it, in cybersecurity.”

That practice, Freese said, not only harms relationships with people inside an organization and with third parties, but also makes the difficult process of responding to a cyberattack even harder.

“You’ve got to be willing to humble yourself a bit to really understand what’s going on with the victim,” he said.

Freese went on to say the situation at the FBI “is absolutely getting better.” But his point remained that the bureau as well as the infosec industry in general needs to do less victim-shaming in order to build better relationships and lines of communications.

Freese is absolutely right. The pile-on that ensues in both the media and social media following the latest breach can be alarming. This isn’t to say companies like Equifax shouldn’t be criticized for some of their actions – they absolutely should. And we shouldn’t let “breach fatigue” take hold and allow these events to be completely shrugged off. But there’s a line where the criticism becomes so wanton that it’s both self-defeating and self-destructive, and industry professionals as well as the media should at least make good faith efforts to find that line and stay on the right side of it.

And blaming hacking victims may have detrimental effects that are more tangible than we would like to believe. Freese’s words this week echoed Facebook CSO Alex Stamos’ keynote at Black Hat 2017 this summer.

“As a community we tend to punish people who implement imperfect solutions in an imperfect world,” Stamos said. “As an industry, we have a real problem with empathy. We have a real inability to put ourselves in the shoes of the people we are trying to protect. It’s really dangerous for us to do this because it makes it very easy for us to shift the responsibility for building trustworthy, dependable systems off of ourselves [and] on to other people.”

In short, security professionals may be making a hard job even harder. But the issue may go beyond shifting responsibilities and breaking down relationships. As someone who’s done his fair share of criticizing enterprises and government agencies that have suffered catastrophic breaches or committed seemingly incomprehensible errors, I’ve often wondered about the larger effects of negative media attention on a victim organization as well as the industry as a whole.

More specifically, I’ve wondered whether the constant flow of embarrassing headlines and negative news about the latest data breaches and hacks acts as a contributing factor in one of the industry’s biggest problems: the workforce shortage. Perhaps filling jobs and promoting the infosec profession to a younger and more diverse population is harder because no security professional wants the next breach headline on their resume, and no one wants to take the fall as a disgraced CISO. A college student considering a future infosec career may see the swirl of negativity and shaming around the dozens of companies that fall prey to threat actors each month and conclude that both outcomes are not just probable but inevitable.

Infosec careers offer steady employment and good pay. But in the event of a breach, these careers also offer massive stress, negative publicity and, in some cases, damaged reputations and job losses. I’m reminded of what Adobe CSO Brad Arkin said during a presentation on his experiences with the Adobe data breach in 2013; Arkin said he was so stressed dealing with the fallout of the breach that he ground his teeth to the point where he cracked two molars during a meeting.

Yes, the pay for infosec jobs may be very good. But for a lot of people, that may not be enough to justify the costs.


September 22, 2017  10:04 PM

DerbyCon cybersecurity conference is unique and troubling

Michael Heller
Conferences, cybersecurity

Walking up to the DerbyCon 7.0 cybersecurity conference, you immediately notice it has a very different feel from the “major” infosec conferences. Attendees would never be caught loitering outside the Black Hat or DEFCON venues, because no one willingly spends more time than necessary outdoors in Las Vegas. RSAC attendees might be outside, but only because it’s simply impossible to fit 40,000-plus humans into the Moscone Center without triggering claustrophobia.

DerbyCon is different though. The cybersecurity conference literally begins on the sidewalk outside the Hyatt Regency in Louisville and the best parts (as many will tell you) take place in the lobby at the so-called “LobbyCon,” where anyone with the means to get to Louisville can participate in the conference without a ticket.

Groups of hackers, pen testers, researchers, enthusiasts and other infosec wonks can be found talking about anything from cybersecurity to “Rick and Morty” and everything in between. This sense of community extends into DerbyCon proper, with attendees lining the hallways — not looking desperately in need of rest, as at other major infosec conferences, but talking, sharing and connecting.

The feel of the DerbyCon cybersecurity conference is not unlike a miniature DEFCON – but with a higher emphasis on the lounge areas – and that appearance seems intentional. Just like DEFCON, the parties in the evening get as much promotion as the daytime talks (Paul Oakenfold and Busta Rhymes this year), and just like DEFCON, DerbyCon hosts “villages” for hands-on experiences with lock-picking, social engineering, IoT hacking, hacking your derby hat and more.

On top of all that, DerbyCon is a place for learning all aspects of infosec. The talks are broken into four tracks – Break Me, Fix Me, Teach Me and The 3-Way (for talks that don’t fit neatly into one of the other buckets).

The two keynotes had bigger messages but were rooted in storytelling. Matt Graeber, security researcher for SpecterOps, told the story of how he became a security researcher and how that path led him to find a flaw in Windows. John Strand, a SANS Institute instructor and owner of Black Hills Information Security, told a heart-wrenching tale of the lessons he learned from his mother as she was dying of cancer, and how those lessons convinced him the infosec industry needs to be more of a community and not fight so much.

The good with the bad

For all of the unique aspects of DerbyCon, it was hard to ignore one way it is very similar to other cybersecurity conferences: the vast majority of attendees are white males.

Without demographic data, it is unclear whether the proportion of minorities and women is lower at DerbyCon than at other conferences, but the imbalance feels more prominent given that the tone of the conference is one of community. DerbyCon feels like a space where everyone is welcome, so noticing that the attendee base isn’t more diverse highlights an issue that is often talked about but may not be getting the real engagement needed to create meaningful change.

While it could be argued that the size and location of DerbyCon might contribute to the low proportion of women and minorities, that can’t be used as an excuse, and the issue is not unique to DerbyCon. The infosec world overall needs to make more of an effort to promote diversity, and the DerbyCon cybersecurity conference serves as one more example of that.


September 15, 2017  7:15 PM

Fearmongering around Apple Face ID security announcement

Michael Heller
Apple, biometric, Facial recognition, iPhone

As fears grow over government surveillance, the phrase “facial recognition” often triggers a bit of panic in the public, and some commentators are exploiting that fear to overstate any risks associated with Apple’s new Face ID security system.

One of the more common misunderstandings is around who has access to the Face ID data. There are those who claim Apple is building a giant database of facial scans and that the government could compel Apple to share that data through court orders.

However, just as was seen when the FBI attempted to bypass the Touch ID fingerprint scanner on iPhones, the same security measures are in place for Face ID. The facial scan data is stored in the Secure Enclave on the user’s iPhone, according to Apple, and is never transmitted to the cloud — the same system that has protected users’ fingerprint data since 2013.

Over the past four years, Apple’s Touch ID has been deployed on hundreds of millions of iPhones and iPads, yet not one report has ever surfaced of that fingerprint data being compromised, gathered by Apple or shared with law enforcement. Still, there are those who claim this system would somehow suddenly fail because it is holding Face ID data.

The fact is that the data stored in the iOS Secure Enclave is only accessible on the specific device. Apple has built in layers of security so that even it cannot access that data — let alone share it with law enforcement — without rearchitecting iOS at a fundamental level, a burden generally too high to be compelled by court order.

Face ID security vs facial recognition

This is not to say that there is no reason to be wary of facial recognition systems. Having been a cybersecurity reporter for close to three years now, I’ve learned that there will always be an organization that abuses a system like this or fails to protect important data (**cough**Equifax**cough**).

Facial recognition systems should be questioned and the measures to protect user privacy should be scrutinized. But, just because a technology is “creepy” or has the potential to be abused doesn’t mean all logic should go out the window.

Facebook and Google have been doing facial recognition for years to help users better tag faces in photos. Those are legitimate databases of faces that should be far more worrying than Face ID, because the companies holding those databases could be compelled to share them via court order.

Of course, one could also argue that many of the photos used by Facebook and Google to train those recognition systems are public, and it is known that the FBI has its own facial recognition database of more than 411 million photos.

To create panic over Apple’s Face ID security when the same data is already in the hands of law enforcement is little more than clickbait and not worthy of the outlets spreading the fear.


August 23, 2017  6:47 PM

Project Treble is another attempt at faster Android updates

Michael Heller
Android, Google

Google has historically had a problem with getting mobile device manufacturers to push out Android updates, which has left hundreds of millions in the Android ecosystem at risk. Google hopes that will change with the introduction of Project Treble in Android 8.0 Oreo, but it’s unclear if this latest attempt to speed up Android updates will be successful.

Project Treble is an ambitious and impressive piece of engineering that will essentially make the Android system separate from any manufacturer (OEM) modifications. Theoretically, this will allow Android OS updates to be pushed out faster because they won’t be delayed by any custom software being added. In a perfect world, this could even make OEMs happier because they will also be able to push updates to custom software without going down the path of putting those custom apps in the Play Store.

Wrangling the Android ecosystem is by no means an easy feat. According to Google’s latest statistics from May 2017, there were more than 2 billion active Android devices worldwide. Attempting to get consistent, quick updates to all of those devices with dozens of OEMs needing modifications for hundreds of hardware variations, as well as verification from carriers around the world, is arguably one of the most difficult system update problems in history.

Project Treble isn’t Google’s first attempt at fixing Android updates; the company has previously tried implementing policies around updates, nudging OEMs toward lighter customization and pushing updates through apps in the Play Store, but history has proven these changes don’t make much of a difference. Project Treble is a major change that has the potential to make a significant impact on system updates, but there will still be issues.

First, in 2011, Google put in place an informal policy asking OEMs to fully support devices for 18 months after they hit the market, including updating them to the latest Android OS version released within that timeframe. Unfortunately, there was no way to enforce this policy and no timetable mandating when updates needed to be pushed. As a result, Android 7.x Nougat is only on 13.5% of the more than 2 billion active devices worldwide (per the August numbers from Google) and, although Android 6.0 Marshmallow is found on the plurality of devices, it only reaches 32.3% of the ecosystem.

After that, Google added a rule that a device had to be released with the latest version of Android in order to be certified to receive the Play Store, Google Apps and Play services. This helped make sure new devices didn’t start out behind, but didn’t address the update problem.

Google even tried to sidestep the issue altogether by putting a number of security features into Google Play services, which is updated directly by Google separately from the Android OS and therefore can be pushed to nearly all Android devices without OEM or carrier interaction.

Remaining speedbumps for Android updates

Project Treble has the potential to change how quickly devices receive Android updates, but it doesn’t necessarily address delays from carrier certification of updates. Unlike iOS updates, which come straight from Apple, Android updates need OEM tinkering due to the variety of hardware, and therefore also need to be tested by carriers. Perhaps carrier testing will be faster, since the Android system updates should be relatively similar from device to device, but there’s no way to know that yet.

Additionally, Project Treble will add another layer of complexity to Android updates by creating three separate update packages for OEMs to worry about — the Android OS, the OEM custom software layer and the monthly security patches.

Google has been rumored to be considering a way to publicly shame OEMs who fall behind on the security patch releases, indicating that even those aren’t being pushed out in a timely manner. In the meantime, an unofficial tracker for security updates gives a rough idea of the situation.

OEM software teams are likely used to working on Android OS updates in conjunction with the OEM customizations, so a workflow change will be needed. Even then, there’s no guarantee the Android OS update will get priority over custom software, or that it will get the resources needed to produce device-specific adaptations quickly.

Ultimately, Project Treble gives OEMs a much easier path to pushing faster Android updates, but there are still enough speedbumps to cause delays, and OEMs haven’t yet proven able to push out even small security patch releases promptly. Android updates will never be as fast and seamless as iOS updates, but given the challenges Google faces, Project Treble may be the best solution yet. If OEMs get on board.

