Security Bytes

February 23, 2018  9:31 PM

Facebook’s 2FA bug lands social media giant in hot water

Rob Wright

At Black Hat USA 2017, Facebook CSO Alex Stamos said, “As a community we tend to punish people who implement imperfect solutions in an imperfect world.”

Now, Facebook has found itself on the receiving end of such punishment after users who had enabled two-factor authentication reported receiving non-security-related SMS notifications on their phones.

News reports of the issue led several security experts and privacy advocates to slam Facebook for leveraging two-factor authentication numbers for what many viewed as Facebook spam. Critics assumed the Facebook 2FA notifications, which alerted users about friends’ activity, were intentional and part of Facebook’s larger effort to improve engagement on the site, which has been steadily losing users lately. However, in a statement last week acknowledging the issue, Stamos said this was not the case.

“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused,” Stamos wrote. “To reiterate, this was not an intentional decision; this was a bug.”

It’s unclear how a bug led the Facebook 2FA system to be used for engagement notifications, but the unwanted texts weren’t the only issue; when users responded to Facebook’s notifications to request the company stop texting them, those messages were automatically posted to users’ Facebook pages. Stamos said this was an unintended side effect caused by an older feature.

“For years, before the ubiquity of smartphones, we supported posting to Facebook via text message, but this feature is less useful these days,” Stamos wrote in his statement. “As a result, we are working to deprecate this functionality soon.”

The Facebook 2FA bug did more than rattle users of the social media site – it led to a notable public feud on Twitter. Matthew Green, a cryptography expert and professor at Johns Hopkins University, was one of several people in the infosec community to sharply criticize Facebook for its misuse of 2FA phone numbers and argued that sending unwanted texts to users would turn people away from an important security feature.

However, Alec Muffett, infosec consultant and developer of the Unix password cracker tool “Crack,” took issue with Green’s argument and the critical media coverage of Facebook’s 2FA bug, which he claimed was having a more negative effect on 2FA adoption than the bug itself.

At one point during the Twitter feud between Muffett and Green, Stamos himself weighed in with the following reply to Green:

“Look, it was a real problem. You guys, honestly, overreacted a bit,” Stamos tweeted. “The media covered the overreaction without question, because it fed into emotionally satisfying pre-conceived notions of FB. Can we just admit that this could have been better handled by all involved?”

I, for one, cannot admit that. If this episode had occurred in a vacuum, then Stamos might have a point. But it didn’t. First, Facebook isn’t some middling tech company; it’s a giant, flush with money and filled with skilled people like Stamos who have a stated mission to provide first-class security and privacy protection. I don’t think it’s unfair to hold such a company to a higher standard in this case.

Also, Stamos complains about “emotionally satisfying pre-conceived notions” of Facebook as if the company doesn’t have a well-earned reputation of privacy violations and questionable practices over the years. To ignore that extensive history in coverage of the Facebook 2FA bug would be absurd.

This is not to argue that this was, as many first believed, a deliberate effort to send user engagement notifications. If Stamos says it was a bug, then that’s good enough for me. But any bug that inadvertently sends text spam to 2FA users is troubling, and there are other factors that make this episode unsettling. There are reports that this bug has existed for quite some time, and it’s difficult to believe that among Facebook’s massive user base, not a single person contacted Facebook to alert them to the situation, and no one among Facebook’s 25,000 employees noticed text notifications were being sent to 2FA numbers.

I don’t expect Facebook to be perfect, but I expect it to be better. And Stamos is right: Facebook could have handled this better. And if there’s been damage done to the adoption of 2FA because of this incident, then it starts with his company and not the media coverage of it.

February 8, 2018  8:55 PM

Symantec’s untrusted certificates: How many are still in use?

Rob Wright

The fallout from Google’s decision last year to stop trusting Symantec certificates has been difficult to quantify, but one security researcher has provided clarity on how many untrusted certificates are still being used.

Arkadiy Tetelman, senior application security engineer at Airbnb, posted research over the weekend about the number of untrusted certificates still in use by Symantec customers (Symantec’s certificate authority (CA) business was acquired late last year by rival CA DigiCert). According to Tetelman, who scanned the Alexa Top 1 Million sites, approximately 103,000 Symantec certificates that are set to have trust removed this year are still in use; more than 11,000 of those will become untrusted certificates in April with the release of Chrome 66, and more than 91,000 will become untrusted in October with Chrome 70.
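
Checks like Tetelman’s are easy to approximate. Here’s a minimal sketch in Python using the standard ssl module – the hostname is a placeholder, and matching on the issuer’s organization name (Symantec also issued under brands such as GeoTrust, Thawte, RapidSSL and VeriSign) is a rough heuristic, not his actual methodology:

    import socket
    import ssl

    # Rough heuristic: Symantec also issued under these brand names.
    SYMANTEC_BRANDS = ("symantec", "geotrust", "thawte", "rapidssl", "verisign")

    def issuer_org(host, port=443, timeout=5):
        """Fetch a site's leaf certificate and return its issuer organization."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # "issuer" is a tuple of RDNs, each a tuple of (field, value) pairs.
        return dict(rdn[0] for rdn in cert["issuer"]).get("organizationName", "")

    org = issuer_org("example.com")  # placeholder host
    if any(brand in org.lower() for brand in SYMANTEC_BRANDS):
        print(f"example.com: certificate issued under a Symantec brand ({org})")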

“Overall the issue is not hugely widespread,” Tetelman wrote, “but there are some notable hosts still using Symantec certificates that will become distrusted in April.”

Tetelman’s research calls out several of those notable sites by name. He noted that some users running beta versions of Chrome 66 are already seeing connections to websites using these untrusted certificates rejected, along with a browser security warning that states “Your connection is not private.”

Google’s decision to remove trust for Symantec-issued certificates stems from a series of incidents in recent years with the antivirus maker’s CA business. Among those incidents were numerous misissued certificates (including certificates for Google) and repeated auditing problems. Last March, Google announced its intent to remove trust from Symantec certificates based on its investigation into the company’s CA operations. After months of negotiations – and hostile public sparring – between Symantec and the web browser community, Symantec finally agreed to a remediation plan offered by Google, Mozilla and other browser companies.

That remediation plan gave Symantec a choice: either build a completely new PKI for its certificates or turn over certificate issuance operations to one or more third-party CAs. Symantec ultimately opted to sell its PKI business to DigiCert in August.

DigiCert, meanwhile, still has to make good on the remediation to which Symantec agreed. And so far, it has; DigiCert met a Dec. 1 deadline to integrate Symantec’s PKI with its own backend operations and ensure all certificates are now issued and validated through DigiCert’s PKI.

But DigiCert will still have to contend with untrusted certificates currently used by Symantec customers. Along with the Chrome 66 and 70 release dates, new versions of Mozilla’s Firefox will also remove trust for Symantec certificates; Firefox 60, scheduled for May, will distrust Symantec certificates issued before June 1, 2016, while Firefox 63, scheduled for December, will distrust the rest of Symantec’s certificates.

In other words, more work needs to be done before this mess is completely cleaned up.

January 26, 2018  3:05 PM

Blizzard security flaw should put game developers on notice

Rob Wright

Imagine you could reach into an application that had none of the enterprise security protections we’ve come to appreciate but was still used by millions of people — themselves blissfully unaware of the risks the application posed — and use that vulnerable application to hack into millions of PCs.

That may sound like a dream scenario for cybercriminals, but it’s all too real thanks to modern video games.

Tavis Ormandy of Google’s Project Zero this week published details of a DNS rebinding flaw contained in the PC games of Blizzard Entertainment, including World of Warcraft, Overwatch, Hearthstone and StarCraft. The Blizzard security flaw, which is contained in a shared utility tool called “Blizzard Update Agent,” allows a malicious actor to impersonate the company’s network and issue privileged commands and files to the tool — which, again, ships with all of Blizzard’s games and would theoretically put millions of players’ PCs at risk.

“Any website can simply create a DNS name that they are authorized to communicate with, and then make it resolve to localhost,” Ormandy wrote in the Chromium bug report. “To be clear, this means that *any* website can send privileged commands to the agent.”

The actual number of gamers at risk is unknown. Ormandy referenced a report claiming “500 million monthly active users [MAUs],” however that number refers to the total number of MAUs for Blizzard’s parent company, Activision Blizzard. According to Activision Blizzard’s third quarter 2017 financial results, Blizzard alone reached a record 42 million MAUs for the period, but it’s unclear how many of those users would be affected by the Blizzard security bug (the Blizzard Update Agent is only contained in the PC version of the company’s games and not used in game console versions).

If the DNS rebinding vulnerability itself wasn’t bad enough, there was a lack of communication from Blizzard as well as later miscommunication about how the issue was being addressed. In the Chromium bug report, Ormandy wrote that he notified Blizzard of the issue on Dec. 8, but weeks later the company had cut off contact with him.

Blizzard (partially) addressed the critical DNS rebinding vulnerability with an update to the tool that checks requests against a blacklist of applications and executables. But the company didn’t alert Project Zero that it had updated the tool; Ormandy learned about it on his own.

As a result, Ormandy, believing the Blizzard security flaw had been silently patched, publicly disclosed the vulnerability. But Blizzard quickly restored contact with Ormandy to say the previous update wasn’t the final fix for the issue and that it was working on a different patch for the DNS rebinding vulnerability.

“We have a more robust Host header whitelist fix in QA now and will deploy soon. The executable blacklisting code is actually old and wasn’t intended to be a resolution to this issue,” a Blizzard representative said on the Chromium post. “We’re in touch with Tavis to avoid miscommunication in the future.”

Blizzard finally issued a new Blizzard Update Agent, version 2.13.8, on Wednesday with the host header whitelist to completely fix the issue.
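
For illustration, here is what a Host header whitelist looks like in practice – a minimal Python sketch of a localhost agent, with a placeholder port and allowed names rather than Blizzard’s actual configuration. The check defeats DNS rebinding because an attacker can point a hostile DNS name at 127.0.0.1, but the browser still sends that hostile name in the Host header:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Placeholder port and names -- not Blizzard's actual configuration.
    ALLOWED_HOSTS = {"localhost:4000", "127.0.0.1:4000"}

    class AgentHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A rebinding attack resolves evil.example to 127.0.0.1, but the
            # browser still sends "evil.example:4000" as the Host header.
            if self.headers.get("Host") not in ALLOWED_HOSTS:
                self.send_error(403, "Unrecognized Host header")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    HTTPServer(("127.0.0.1", 4000), AgentHandler).serve_forever()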

Uncovering critical bugs in games

I’ve long worried about the relative insecurity of PC games, specifically massively multiplayer online games, or MMOs. I had the fortune of covering the video game industry for several years, and it left me with several prevailing beliefs about how inherently broken the modern game development process is. Big budget games are routinely subjected to what’s known in the industry as “crunch” – periods of substantially longer work hours and increased pressure in an effort to meet deadlines and development milestones.

And yet even with these crunch periods (or, as some claim, because of them), PC games routinely launch with bugs. In fact, even blockbuster games with the biggest budgets often ship with bugs, whether it’s a minor but obvious graphical glitch or a major flaw that renders the game unplayable. And these are the bugs that “matter” to the industry’s bottom line. If these sorts of flaws are slipping by, I shudder to think what type of security vulnerabilities are lurking inside these games.

The case of Blizzard’s security flaw is concerning, and not just because of the nature of the DNS rebinding vulnerability. Blizzard is one of the most successful and respected game companies in the industry, not just for the quality of its game development but also for its technical support and service. And yet the company seemingly fumbled its way through the bug disclosure and patching processes in this case. Bugs and vulnerabilities are inevitable, which is why proper handling of the discovery, disclosure and mitigation is, for lack of a better word, critical.

Blizzard’s security flaw should serve as a wake-up call for other MMO makers and the games industry as a whole. Ormandy said he plans to look at other popular PC games in the near future. That’s a good thing; video game companies should welcome the news with open arms.

You never want one of the best known and most prolific bug hunters in infosec knocking on your door. But game companies have to answer the knock before they find themselves, and their customers, getting pwned.

January 18, 2018  4:38 PM

The strange case of the ‘HP backdoor’ in Lenovo switches

Rob Wright

Concern about government-mandated backdoors in technology products may be at an all-time high, but the recent discovery of an “HP backdoor” in Lenovo networking gear should prove equally alarming for the IT industry.

The computer maker last week issued a security advisory, LEN-16095, for an authentication bypass in Enterprise Networking Operating System (ENOS), the firmware that runs several of Lenovo’s networking switches. The vulnerability in question, according to Lenovo, could allow an attacker to gain access to the switch management interface and change settings that could expose traffic or cause denial of service.

That seems straightforward enough, but the advisory gets complicated in a hurry. Here’s how Lenovo laid it all out:

  • The authentication bypass mechanism in Lenovo’s ENOS software is known as “HP backdoor,” though the advisory doesn’t explain what that means.
  • The “HP backdoor” was discovered, according to Lenovo, during a security audit; it exists in the Telnet and Serial Console management interfaces, and in the SSH and web management interfaces “under certain limited and unlikely conditions.”
  • Lenovo said a “source code revision history audit” revealed the authentication bypass formerly known as “HP backdoor” has been hidden inside ENOS for quite a long time – since 2004, to be exact, when ENOS was part of Nortel Networks’ Blade Server Switch Business Unit (BSSBU).
  • This is where it gets truly strange: The Lenovo security advisory, which at this point had moved firmly into CYA mode, drops this bomb: “The [authentication bypass] mechanism was authorized by Nortel and added at the request of a BSSBU OEM customer.”
  • Lenovo – as if to shout at the top of its lungs “We are not responsible for this backdoor!” – painstakingly explains that Nortel owned ENOS at that time, then spun off BSSBU two years later as Blade Network Technologies (BNT), which was then acquired by IBM in 2010. Then in 2014, BNT, ENOS and the HP backdoor ended up in Lenovo’s lap after it acquired IBM’s x86 server business.
  • Lenovo said it has given “relevant source code” to a third-party security partner for an independent investigation of the authentication bypass, but the company doesn’t say who the partner is.
  • And finally – in case it wasn’t clear already that the ENOS backdoor is absolutely not Lenovo’s doing – the computer maker states for the record that such backdoors are “unacceptable to Lenovo and do not follow Lenovo product security or industry practices.” Lenovo’s reaction here is understandable considering some of its recent security issues, like using hardcoded passwords and pre-installing Superfish adware on its systems.

By the time the security advisory is over, the fix for the ENOS backdoor – a firmware update that removes the authentication bypass mechanism – seems like an afterthought. And to be sure, the conditions required for this vulnerability to be exploited are indeed limited and unlikely, in Lenovo’s words (for example, SSH is only vulnerable for certain firmware released between May and June of 2004).

However, there are a number of questions, starting with:

  • Is the “HP backdoor” a reference to Hewlett-Packard? And is HP the unnamed BSSBU OEM customer that requested the backdoor access for ENOS? Again, Lenovo’s security advisory doesn’t say, but according to reports, HP was indeed a Nortel customer at that time. When asked for comment, Lenovo said “This is the name of the feature in the user interface, on the command line interface, hence the name.”
  • Why would Nortel build a ubiquitous authentication bypass into its networking operating system and undermine its security based on a customer request? Getting an answer to this one will be tough since Nortel was dissolved after declaring bankruptcy in 2009.
  • How did the backdoor go unnoticed by both IBM and Lenovo for several years while ENOS was part of their respective product families?
  • Were there any factors that led Lenovo to examine ENOS interfaces “under certain limited and unlikely conditions” nearly four years after Lenovo agreed to acquire IBM’s x86 server business? Lenovo replied, “This was part of a routine security audit.”
  • Was the source code audit that Lenovo performed to find the HP backdoor the first such audit the company had performed on ENOS? Lenovo said “HP Backdoor was found through new techniques added to our recurring internal product security assessments, the first time that these techniques were applied to ENOS.” However, it’s not entirely clear from the response if Lenovo did perform earlier source code audits for ENOS and simply missed an authentication bypass that literally has the word “backdoor” in it.

An authentication bypass in a legacy networking OS with narrow parameters isn’t exactly an urgent matter. But Lenovo’s security advisory does raise some serious issues. The tech community has collectively wrung its hands – with good reason – over government-mandated backdoors. Yet it’s abundantly clear that prominent vendors within that same community have been poking their own holes in products for decades. And those self-inflicted wounds become even more dangerous as the tech changes hands again and again over the years, with little if any security oversight.

It’s troubling to think that a single customer could lead a major technology company to install a potentially dangerous backdoor in a widely used product. And it’s even more troubling to wonder how many other vendors have done exactly that – and forgotten about the doors they opened up so many years ago.

January 9, 2018  6:50 PM

Intel keynote misses the mark on Meltdown and Spectre vulnerabilities

Rob Wright

When Intel CEO Brian Krzanich took the stage last night at CES 2018 in Las Vegas, he began his keynote by addressing the elephants in the room – the recently disclosed Meltdown and Spectre vulnerabilities affecting modern CPU architectures, including Intel’s.

Those remarks lasted approximately two minutes.

Then Krzanich turned to his prepared keynote address, which featured flying drones and a guest appearance from former Dallas Cowboys quarterback and NFL analyst Tony Romo. Celebrity sightings and gimmicky gadgets are much more of the CES culture than information security talks, so Krzanich’s keynote wasn’t exactly a surprise in that respect.

However, it was disappointing to see the world’s largest chip maker waste an opportunity to provide clarity and reassurances about its plans – both short term and long term – to address the Meltdown and Spectre vulnerabilities. Krzanich did discuss the issues briefly; he thanked the technology industry for coming together to work on the problems.

“The collaboration among so many companies to address this industry-wide issue, across several different processor architectures, has been truly remarkable,” Krzanich said during his keynote.

On that note, Intel’s CEO is absolutely correct. But Krzanich did little to explain how that collaboration happened and what benefits it will provide in the future as companies continue to grapple with these problems. And as far as Intel’s individual efforts go, Krzanich mostly repeated what the company had previously announced – that updates for more than 90% of the products released in the last five years will arrive within a week. He added that Intel expects the remaining products to receive updates “by the end of January.”

But that was about it. Krzanich didn’t say what the updates would be (again, he repeated previous company statements that the performance impacts of the updates would be “highly workload-dependent”) or how Intel would “continue working with the industry to minimize the [performance] impact” of the updates. He didn’t say what Intel’s long-term plan was for the Meltdown and Spectre vulnerabilities, or even say if there was such a plan.

It’s important to note that Intel actually had new information to provide; according to a report from The Oregonian, Krzanich authored an internal memo announcing the formation of a new internal group, dubbed Intel Product Assurance and Security. But for whatever reason, Krzanich didn’t mention it.

Meltdown and Spectre are critical findings with industry-altering implications. A consumer-focused show may not seem like the best setting to discuss the intricacies of microprocessor designs and technical roadmaps, but CES is still the biggest technology event in the world. Last night was an opportunity to reach an enormous number of consumers, enterprises and media outlets and communicate a clear strategy for the future. And Intel largely wasted it.

“Security is job number one for Intel and our industry,” Krzanich said last night.

If that were true, then the Meltdown and Spectre vulnerabilities should have warranted more than two minutes on the biggest technology stage of the year.

December 29, 2017  6:58 PM

Official TLS 1.3 release date: Still waiting, and that’s OK

Peter Loshin

“Measure twice, cut once” is a good way to approach new protocols, and TLS 1.3 is no exception.

When it comes to approving updates to key security protocols, the Internet Engineering Task Force may seem to move slowly as it measures the impact of changes to important protocols. In the case of the long-awaited update to version 1.3 of the Transport Layer Security protocol, the IETF’s TLS work group has been moving especially slowly — but for good reasons.

The TLS protocol provides privacy (encryption) and authentication for internet applications operating over the Transmission Control Protocol (TCP); TLS is a critically important protocol for enforcing security over the web. The current proposed standard, TLS 1.2, was published in 2008 and is the latest in a line of protocols dating back to the first transport layer security protocol used to secure the web, the Secure Sockets Layer (SSL), which was first published by Netscape in 1995.

The first draft version of the latest update was published for discussion by the TLS work group in April 2014, and SearchSecurity has been covering the imminent release of TLS 1.3 since 2015. Despite the long wait for a TLS 1.3 release date, I’m happy to continue waiting given all that we’ve learned from the process so far.

There is no questioning that TLS is in need of updating. The new version of the protocol will add several important features, most prominently a preference for perfect forward secrecy. Perhaps more important is the thorough pruning of obsolete or otherwise “legacy” algorithms. As the authors of the latest TLS 1.3 draft (version 22!) put it, “[s]tatic RSA and Diffie-Hellman cipher suites have been removed; all public-key based key exchange mechanisms now provide forward secrecy.” Other updates include some performance boosts as well as improvements in the security of key generation and the handshake protocol.
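
The practical effect of that pruning is easy to demonstrate. A minimal sketch in Python (assuming Python 3.7+ built against OpenSSL 1.1.1, which carries draft TLS 1.3 support; the hostname is a placeholder): a client that insists on TLS 1.3 can only negotiate forward-secret cipher suites, because nothing else remains in the protocol.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # Every TLS 1.3 suite uses ephemeral key exchange; static RSA and
            # static Diffie-Hellman simply cannot be negotiated.
            print(tls.version())  # "TLSv1.3", if the server supports it
            print(tls.cipher())   # e.g. ("TLS_AES_256_GCM_SHA384", "TLSv1.3", 256)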

As Akamai Technologies’ Rich Salz put it in an October 2017 blog post, the new version of TLS is faster, more reliable and more secure – all things to be desired in a security protocol.

All in all, these are positive moves, but why is it taking so long to officially update the TLS 1.3 specification and publish it as an RFC?

It’s taking so long because the system is working.

Protocol problems

The process is intended to flush out protocol issues, especially as they relate to protocol implementations. Even though TLS 1.3 is not yet officially released, vendors — like Cloudflare, Akamai, Google and many others — have been rolling out support for TLS 1.3 and reporting on the issues as they uncover them. And issues have been uncovered:

David Benjamin, a Google developer who works on the Chromium project, wrote in a post to a TLS working group list that “TLS 1.3 appears to have tripped over” a dodgy version of RSA’s BSAFE library that some have theorized was put in place by RSA at the request of the National Security Agency (NSA).

Matthew Green, cryptography expert and professor at Johns Hopkins University, spelled out why the discovery of that BSAFE flaw may shed light on how that “theorized NSA backdoor” worked.

That is a positive outcome, especially as it gives security professionals an excellent reason to root out the potentially exploitable code.

Boxes in the middle

Another important issue that the process uncovered was that of misbehaving middleboxes, which earlier in 2017 were cited as the primary reason that TLS 1.3 appeared to be breaking the internet.

Middleboxes are the systems, usually security appliances, that sit “in the middle” between servers and clients and provide security through packet inspection. To actually inspect packets that have been encrypted using TLS, the middleboxes need to act as proxies for the clients, setting up one secure TLS connection between the client and the middlebox and another between the middlebox and the server.
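
A stripped-down sketch of that proxying arrangement in Python (assuming Python 3.8+; the addresses and certificate files are placeholders, and real middleboxes are far more elaborate). One TLS session terminates at the middlebox, a second runs to the real server, and the plaintext in between is what gets inspected:

    import socket
    import ssl

    LISTEN = ("0.0.0.0", 8443)       # where clients are directed
    UPSTREAM = ("example.com", 443)  # the real server

    # Connection 1: client <-> middlebox, using the middlebox's own
    # certificate (clients must trust it, e.g. via a corporate root CA).
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")  # placeholders

    with socket.create_server(LISTEN) as listener:
        raw, _ = listener.accept()
        client_tls = server_ctx.wrap_socket(raw, server_side=True)

        # Connection 2: middlebox <-> server, an ordinary outbound TLS session.
        upstream_ctx = ssl.create_default_context()
        upstream = upstream_ctx.wrap_socket(
            socket.create_connection(UPSTREAM), server_hostname=UPSTREAM[0])

        # Relay one request and one response; at this point the middlebox
        # sees plaintext and could inspect or rewrite it.
        upstream.sendall(client_tls.recv(4096))
        client_tls.sendall(upstream.recv(4096))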

It turns out that TLS 1.3 causes some issues with middleboxes, not because of anything wrong with the new protocol, but because of problems with the way middlebox vendors implemented TLS support in their products, causing them to fail when attempting to negotiate TLS connections with endpoints using TLS 1.3. The solution will likely involve some tweaking of the updated protocol to compensate for unruly implementations while pressuring vendors to fix their implementations.

These issues with the TLS 1.3 draft likely contributed to the lengthy delay in the specification being finalized. I expect TLS 1.3 to be published sometime next year, although that’s what I’ve expected for the last three years.

But given what we’ve already learned from the process, that’s just fine.

December 28, 2017  9:15 PM

After 2017, data breach fatigue should be a thing of the past

Rob Wright

After the number of major data breaches in 2017, it wouldn’t be surprising to see some measure of data breach fatigue set in for both the general public and enterprises. Such an occurrence, however, would mean we missed valuable lessons from some of this year’s worst breaches.

First, a disclaimer: there have been too many major breaches and cyberattacks this year to count. Most infosec news sites, including SearchSecurity, can’t cover all of them. In fact, they may not get to most of them. Rampant nation-state hacking, global ransomware campaigns and a continuing series of baffling accidental data exposures have generated too much material to cover.

In addition, the scale and scope of damage has changed. So many names, email addresses and credit card numbers have been spilled over the last five years that it’s hard to get worked up about another breach that exposes information that is in all likelihood already on the dark web. Again, some level of data breach fatigue – or at least, acceptance – is to be expected.

What may have seemed like a major data breach five years ago might not even garner a second look today. An incident that exposes a few million customer usernames and email addresses might have stopped the presses back then, but today it barely registers as a speed bump.

That is, unless there are unique circumstances involved in these incidents, which should stave off data breach fatigue. We’ve witnessed several such breaches this year, and those unique circumstances should serve as lessons for both consumers and infosec professionals. Here’s a summary of those breaches.

  • Equifax: The credit reporting agency’s data breach exposed the names, birth dates, addresses and Social Security numbers of 143 million U.S. consumers, but that was only half the story. Equifax’s breach response was a series of confounding errors and missteps, from setting up an insecure website for consumers to check if they were affected or not, to an interim CEO who didn’t know whether consumers’ personal data had been encrypted following the breach. It’s easy to look at Equifax and see yet another major breach that exposed a lot of personal information that may have already been exposed in other, unrelated breaches. But that shouldn’t be the takeaway; breaches are bad, but they can be made even worse by incompetent responses and ill-prepared leadership that put customers and the organization at further risk.
  • Uber: In 2016, the ride-sharing startup suffered a major breach that exposed the names, email addresses and phone numbers of 50 million users. On the surface, the incident doesn’t look like much – until you consider we didn’t learn about the breach until a year later. Uber officials concealed the incident and paid the hackers to stay quiet. It’s unclear why the breach was covered up – Uber fired two executives for their alleged involvement in the cover-up – but the company has since been hit with a number of lawsuits from both users and state attorneys general. There are grave practical implications — if customers and employees don’t know an incident has occurred, then they obviously can’t do anything to protect themselves or their company – as well as ethical implications for this kind of corporate behavior. It’s impossible to know if it’s a common practice, but the Uber incident could be an indication that breach concealment is not as rare as we’d like to believe.
  • Amazon Web Services (AWS) exposures: There have been too many of these accidental breaches to list, which offers some idea of how dire the situation is. To summarize: Cybersecurity vendor UpGuard has been scanning the internet for publicly accessible AWS Simple Storage Service (S3) instances and discovered that many of these S3 buckets were misconfigured (a minimal sketch of such a check follows this list). As a result, organizations ranging from the Pentagon to Dow Jones & Company have had their sensitive data exposed on the internet. Most experts agree these accidental breaches are the fault of the customers and not AWS (after all, S3 buckets are private by default). Unfortunately, the scale of the problem suggests enterprises are either suffering from a lack of proper access control knowledge or allowing untrained and ill-equipped personnel to spin up cloud services for sensitive data. Neither explanation speaks well of enterprise security, which is apparently struggling so mightily that some companies don’t even need hackers to expose their data – they’ll do it on their own.
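
Here is the promised sketch of such a check, using boto3 (it assumes AWS credentials with permission to list buckets and read ACLs, and it inspects your own account from the inside, whereas UpGuard scanned from the outside):

    import boto3

    # ACL grants to these groups make a bucket readable by, respectively,
    # anyone on the internet or any authenticated AWS user.
    PUBLIC_GROUPS = (
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    )

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        acl = s3.get_bucket_acl(Bucket=name)
        public = [g["Permission"] for g in acl["Grants"]
                  if g["Grantee"].get("URI") in PUBLIC_GROUPS]
        if public:
            print(f"{name}: public grants {public}")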

These cases offer valuable lessons on breach response, ethics and prevention for enterprises and consumers alike. They should serve as potent remedies for data breach fatigue. And if these breach lessons aren’t heeded, then we’ll be doomed to repeat them for years to come.

December 7, 2017  1:57 PM

OWASP Top Ten: Surviving in the cyber wilderness

Peter Loshin

If you think the latest iteration of the Open Web Application Security Project’s Top Ten list of the “top” web application security risks has important news for your organization, well, you may be disappointed. And that’s fine because that’s not what the OWASP Top Ten is intended to do.

The 2017 edition of the OWASP Top Ten is quite like the 2013 version, which in turn was quite like the 2010 version, and so on, all the way back to the first version published in 2003 (see table). The new version is different, but the differences are evolutionary rather than revolutionary — and that’s fine, too.

The OWASP list isn’t meant to be a source of new and flashy security vulnerabilities; it’s a top ten list. That means it’s the top ten most basic risks that everyone should be aware of. It’s a list of the most important things to worry about in defending web applications — not the list of everything that information security professionals should worry about, just the bare minimum.

Use the OWASP Top Ten to stay safe

The OWASP Top Ten list should guide infosec pros in the same way hikers and backpackers are guided by their favorite version of the “ten essentials” lists for outdoor activities. There are minor differences between the lists — the Boy Scouts of America put a pocket knife at the top of their list, while the Appalachian Mountain Club starts its list with map and compass at number one — but the goal of these lists is to define the minimum you need to stay safe if you get lost or injured in the woods.

If you want to avoid dying of hypothermia, you should carry extra clothes and, maybe, a tarp for emergency shelter. If you want to avoid dehydration, you should carry water. Do you want to avoid getting lost? Carry a map and compass. The advice is mostly the same in 2017 as it was in 1917.

Want to prevent hackers from pwning your web application? The advice in 2017 is mostly the same as it’s been since the first edition of the OWASP Top Ten was published in 2003.

You can avoid injection attacks by validating input and parameters: Injection is at the top of the list as it has been since 2010; it went from #6 in 2003 to #2 in 2007.
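
That advice is as cheap to follow as it has always been. A minimal illustration in Python with sqlite3 – the table and the payload are invented for the example:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    payload = "nobody' OR '1'='1"  # a classic injection attempt

    # Vulnerable: splicing input into the SQL string makes the payload part
    # of the query, and the OR clause would match every row:
    #   f"SELECT email FROM users WHERE name = '{payload}'"

    # Safe: a parameterized query treats the payload strictly as data.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (payload,)).fetchall()
    print(rows)  # [] -- no user is literally named "nobody' OR '1'='1"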

Same for cross-site scripting, which debuted in the #4 spot in 2003 and went up to #1 in 2007 — but dropped to seventh place in 2017. That doesn’t mean it’s time to stop worrying about defending against XSS attacks: as long as XSS is on the OWASP Top Ten list, defending against it is essential to web app security.
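
The core XSS defense is equally stable: encode untrusted input before it lands in a page. A tiny Python illustration, with an invented payload:

    import html

    comment = '<script>steal(document.cookie)</script>'  # untrusted input

    # Escaped, the payload renders as inert text instead of executing.
    print(html.escape(comment))
    # &lt;script&gt;steal(document.cookie)&lt;/script&gt;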

No need to rank “essentials”

The order of the ten essentials lists for hikers doesn’t matter because they are ALL essential. The Boy Scouts list water at number five, but you won’t see a scout leaving water at home because it’s not as important as a first aid kit (#2).

The same should go for infosec pros looking to tighten up their web application security: the OWASP Top Ten lists the fundamentals. If you’re not addressing these things, the odds are that your web application won’t survive very long against even the least sophisticated attack.

“Yes, but there are lots of risks that were once listed on the OWASP Top Ten, and now they’re not,” you say? “What happened to buffer overflows and error handling, which were ranked at #5 and #7, respectively, in 2004?”

Even if an older risk has dropped off the OWASP list, it is still probably worth keeping in mind. If I were an infosec professional, I’d keep the historical risks in mind, if only because most risks don’t disappear; instead, they evolve over time. Humans have been venturing into the wilderness for tens of thousands of years, so we have a pretty good idea of the risks there. Web app security is still in its infancy, so we probably don’t even know yet what the biggest risks are.

Meanwhile, defenders shouldn’t consider protecting against the OWASP Top Ten to be their goal — it should instead be the barrier to entry, in the same way that many trip leaders impose the requirement that all participants in their hikes must show up equipped with the ten essentials of hiking.

OWASP Top 10 Lists through history:

[Table: The OWASP Top Ten list has evolved since it was first published in 2003.]

November 30, 2017  7:36 PM

The CASB market is (nearly) gone but not forgotten

Rob Wright

Cloud access security brokers arrived on the scene with a bang in 2015, thanks to plentiful venture capital funding, compelling use cases and alluring customer testimonials.

Almost three years later, much has changed with the cloud access security broker (CASB) market. There are barely any stand-alone players left in the market following a flurry of acquisitions during that span that drastically reshaped the cloud security industry. A space that was once dominated by born-in-the-cloud startups is now filled with giants such as Microsoft, Cisco and Symantec.

The latest deal in the CASB market saw McAfee acquire Skyhigh Networks, arguably the top vendor in the space, this week. But there were many acquisitions before it. To recap:

  • Microsoft got the ball rolling with CASB acquisitions when it purchased startup Adallom in July of 2015; Adallom later became part of Microsoft’s Cloud App Security business. Terms of the deal were not disclosed, but news outlets reported the price ranged between $250 million and $320 million.
  • Also in July of 2015, Blue Coat Systems acquired Perspecsys for an undisclosed amount (Blue Coat was later acquired by Symantec in 2016).
  • In November of 2015, Blue Coat made another CASB acquisition with its purchase of Elastica for $280 million.
  • Cisco acquired CloudLock in June of 2016 for $293 million.
  • In September of 2016, Oracle purchased Palerra for an undisclosed amount.
  • Proofpoint acquired FireLayers in October of 2016. Terms of the deal were not disclosed.
  • In February of this year, Forcepoint acquired Skyfence from Imperva for approximately $40 million. Imperva had purchased the CASB in 2014.

There are a few remaining stand-alone players in the CASB market, including Netskope (which analysts considered a market leader alongside Skyhigh), Bitglass and CipherCloud. But going forward, the space will be increasingly dominated by the old guard rather than the startups.

However, that’s not necessarily a bad thing. While the CASB market as a stand-alone category may soon be a thing of the past, the CASB model is very much alive. Cloud application and SaaS usage, whether approved by IT departments or not, will only increase, and enterprises will continue to need products and services that can discover, manage and secure those apps.

In addition, the vendors that moved into the CASB space have a clearly stated desire to increase their cloud security presence. That includes companies like Microsoft and Oracle, which have their own SaaS offerings as well as a need to protect customer usage of those cloud apps. Even McAfee, which at one point under Intel’s ownership had scaled back on its cloud offerings, has a new vision of combining endpoint and cloud security.

The stand-alone CASB market may be nearly gone, but the business case for the technology remains. It’s unclear whether CASB will continue to be a security product category in the coming years, or if the functionality will simply be folded into existing categories like web security gateways or other cloud security services. But the enterprise challenges of managing shadow cloud services and securing third-party SaaS offerings will remain, and so too will CASB technology.

November 22, 2017  5:33 PM

Uber data breach raises unsettling questions for infosec

Rob Wright

Uber Technologies, Inc., is no stranger to self-inflicted wounds, but the latest visit to the infirmary goes far beyond the kinds of running-with-scissors episodes that have made the ride-sharing company infamous.

Bloomberg Technology reported Tuesday that Uber suffered a massive data breach in the fall of 2016 that exposed names, email addresses and phone numbers of 50 million customers worldwide as well as the personal information of an additional 7 million drivers. The Uber data breach was concealed by the company for more than a year, according to the report, thanks to efforts by the company’s former CSO and another member of the infosec team.

Rather than disclose the breach to regulatory officials and notify affected drivers and customers, Joe Sullivan, who was ousted from his CSO position this week, and Craig Clark, another member of the security team, engaged in a cover-up that included paying $100,000 to the hackers behind the breach to delete the stolen data and keep quiet about the incident.

Newly appointed CEO Dara Khosrowshahi said he only recently became aware of the Uber data breach and pledged to take several actions to correct the dysfunction that led to the cover-up. “None of this should have happened, and I will not make excuses for it,” Khosrowshahi wrote in a statement. “While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes.”

It’s easy to look at the Uber data breach and its ensuing cover-up and localize it to Uber’s rotten corporate culture. After all, the company has an established track record of engaging in unethical and possibly illegal practices while skirting government regulators.

However, sitting back and saying “Forget it, Jake – it’s Uber” may be missing a larger concern. There are a number of troubling aspects about this incident, starting with the fact that Sullivan was a former federal prosecutor with the U.S. Department of Justice. Presumably, he knew the legal risks of covering up the Uber data breach, to say nothing of the ethical implications.

It’s also worth noting that Sullivan isn’t an inexperienced nobody who might claim ignorance of proper infosec and data breach notification practices. He was the CSO at Facebook for more than five years and also served as the social networking giant’s associate general counsel (in a separate story, Bloomberg reported Sullivan also served Uber as deputy general counsel while he was CSO, though the company never officially named him to such a position).

Again, it’s easy to argue that Uber’s culture somehow got its hooks into a respected and experienced CSO and influenced him to the point where he abandoned his legal and ethical duties. But viewing this data breach cover-up as an incident that only Uber could commit misses the writing on the wall.

First, I’ve heard numerous stories at infosec conferences this year about unnamed companies, including healthcare and financial services organizations, that were hit with ransomware and then paid the ransom without disclosing the incident to regulators or the public. Is a ransomware attack technically a data breach? That’s a debatable question and a subject for another time. But I suspect that a resistance to disclosures and notifications for security incidents, whether ransomware or network intrusions, has been growing within corporate America in recent years.

And second, this isn’t the first time an organization has engaged in a reckless cover-up of data breaches. Last year a congressional investigation revealed the FDIC engaged in repeated cover-ups of major cyberattacks and data breaches and even retaliated against whistleblowers within the department. And that’s just an example of where the cover-up was both wanton and exposed later on. There are other curious incidents of breaches and cyberattacks that occurred many months and even years earlier and for mysterious reasons have only become public knowledge recently.

We may want to believe that only the truly reckless and lawless companies would do what Uber did, but I think it’s time to start asking how many other enterprises may be running with scissors and on the verge of gutting both themselves and their customers.
