Security Bytes


October 20, 2017  6:46 PM

Latest Kaspersky controversy brings new questions, few answers

Rob Wright Profile: Rob Wright

Kaspersky Lab’s latest salvo in its ongoing feud with the U.S. government and media offered some answers but raised even more questions.

The company on Tuesday broke its silence a week after a series of explosive news reports turned up the heat on the Kaspersky controversy. We discussed the reports and the questions surrounding them in this week’s episode of the Risk & Repeat podcast, but I’ll summarize:

  • The New York Times claimed that Israeli intelligence officers hacked into Kaspersky Lab in 2015 and, more importantly, observed Russian hackers using the company’s antivirus software to search for classified U.S. government documents.
  • The Washington Post published a similar report later that day and also claimed Israeli intelligence discovered NSA hacking tools on Kaspersky’s network.
  • The Wall Street Journal published a similar story the next day on the Kaspersky controversy, with a very important detail: Kaspersky antivirus scans had reportedly been searching for classified U.S. government material.

These reports resulted in the most serious and detailed allegations yet against Kaspersky; anonymous government officials had accused the company of, among other things, helping state-sponsored Russian hackers by tailoring Kaspersky antivirus scans to hunt for U.S. secrets.

As a result, Kaspersky responded this week with an oddly worded statement (and a curious URL) that offered some rebuttals to the articles but also raised more questions. Much of the statement focuses on the “English-speaking media,” their use of anonymous sources and the lack of specific details about the secret files that were stolen, among other elements of the news reports.

But there are some important details in the statement that both shed light on the situation and raise further questions on the Kaspersky controversy. Here are a few points that stood out:

  • Kaspersky doesn’t directly confront the allegation that it had, or has, NSA cyberweapons on its servers. But it did provide reasoning for why it would have possession of them: “It sounds like this contractor decided to work on a cyberweapon from home, and our antivirus detected it. What a surprise!”
  • Kaspersky explained how its antivirus scanning works, specifically how Kaspersky Security Network (KSN) identifies malicious and suspicious files and transfers them to Kaspersky’s cloud repository. This is also where Kaspersky throws some shade: “If you like to develop cyberweapons on your home computer, it would be quite logical to turn KSN off — otherwise your malicious software will end up in our antivirus database and all your work will have been in vain.” (A simplified sketch of this telemetry flow appears after this list.)
  • Ironically, the above point raised more questions about the reported NSA breach. Wouldn’t an NSA contractor know that having hacking tools (a.k.a. malware) on their computer would alert Kaspersky’s antivirus software? Wouldn’t the individual know to turn off KSN or perhaps use Kaspersky’s Private Security Network? It’s entirely possible that a person foolish enough to bring highly classified data home to their personal computer could commit an equally foolish error by fully enabling Kaspersky antivirus software, but it’s difficult to believe.
  • Kaspersky provided an explanation for why it would have NSA hacking tools on its network, but it didn’t offer any insight into how hackers could gain access to KSN data and use it to search for government documents. When Kaspersky was breached in 2015 (by Russian hackers, not Israeli hackers), did they gain access to KSN? Could threat actors somehow intercept transmissions from Kaspersky antivirus software to KSN? The company isn’t saying.
  • Let’s assume Kaspersky did have NSA cyberweapons on its network when the company was breached in 2015 (which, again, the company has not confirmed or denied). This makes sense, since Kaspersky was the first to report on the Equation Group in February of 2015. But it raises the possibility that Kaspersky had possession of exploits like EternalBlue, DoublePulsar and others that were later exposed by the Shadow Brokers – but for whatever reason didn’t disclose them. Based on the Equation Group report, which cited a number of exploit codenames, there is some overlap between what Kaspersky discovered and what was later released by the Shadow Brokers (FoggyBottom and Grok malware modules, for example, were included in last month’s UNITEDRAKE dump). But other hacking tools and malware samples discovered by Kaspersky were not identified by their codenames and instead were given nicknames by the company. Did Kaspersky have more of the cyberweapons that were later exposed by the Shadow Brokers? The idea that two largely distinct caches of NSA exploits were exposed – one obtained by Kaspersky, and one stolen by the Shadow Brokers – is tough to wrap my head around. But considering the repeated blunders by the intelligence community and its contractors in recent years, maybe it’s not so far-fetched.
  • The Equation Group is the elephant in the room. Kaspersky’s landmark report on the covert NSA hacking group seems relevant in light of last week’s news, but the company hasn’t referenced it in any capacity. Does Kaspersky think the Equation Group reveal played a part in the U.S. government’s decision to ban its products? Again, the company isn’t saying. Instead, Kaspersky’s statement took some shots at the news media and made vague references to “geopolitics.”
  • Finally, a big question: Why did Israeli intelligence officers hack into Kaspersky’s network in 2015? The articles in question never make that clear, and Kaspersky never directly addresses it in its statement. Instead, the company cites its report on the breach and the Duqu 2.0 malware used in the attack. But this is an important question, and it’s one that Kaspersky has shown no interest in raising. That is strange, because it’s an important part of this mess that has seemingly been overlooked. What was the motive for the attack? Was the attack in some way a response to Kaspersky’s exposure of the Equation Group? Was Israeli intelligence hoping to gain deeper insight into Kaspersky’s technologies to avoid detection? It’s unclear.
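Kaspersky hasn’t published KSN’s internals, but the general shape of the cloud-assisted antivirus telemetry it describes is straightforward. Below is a simplified, hypothetical sketch of that flow in Python: hash the scanned file, ask a cloud reputation service about it, and upload the sample only if it looks suspicious and telemetry is enabled. The function names and verdicts are invented for illustration; this is not Kaspersky’s code.

```python
import hashlib
from pathlib import Path

# Hypothetical illustration of cloud-assisted antivirus telemetry.
# All names (check_reputation, submit_sample) are invented.

TELEMETRY_ENABLED = True  # the setting Kaspersky says users can turn off

def sha256(path: Path) -> str:
    """Hash the file so the cloud can be queried without sending its contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_reputation(file_hash: str) -> str:
    # Placeholder: a real client would query the vendor's cloud service here.
    return "unknown"

def submit_sample(path: Path) -> None:
    # Placeholder: a real client would upload the file to the cloud repository.
    print(f"uploading {path.name} for analysis")

def scan(path: Path) -> None:
    verdict = check_reputation(sha256(path))
    if verdict == "known_bad":
        print(f"quarantining {path}")
    elif verdict == "unknown" and TELEMETRY_ENABLED:
        # This is the step Kaspersky's statement alludes to: an unknown,
        # suspicious binary (say, a hacking tool on a home PC) ends up on
        # the vendor's servers -- unless telemetry was switched off.
        submit_sample(path)
```

If that is roughly how KSN behaves, a contractor’s hacking tools landing in Kaspersky’s cloud would be an unremarkable side effect of leaving the default settings on.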

More bombshell stories on the Kaspersky controversy are likely to drop in the coming weeks and months. But until the U.S. government officially discloses what it has on the antivirus maker, and until Kaspersky itself comes clean on unanswered questions, we won’t have anything close to a clear picture.

September 29, 2017  8:16 PM

FBI’s Freese: It’s time to stop blaming hacking victims

Rob Wright Profile: Rob Wright

The infosec industry needs to express more empathy for hacking victims and engage in less public shaming.

That was the message from  Don Freese, deputy assistant director of the FBI and former head of the bureau’s National Cyber Investigative Joint Task Force (NCIJTF), at the (ISC)2 Security Congress this week. In his opening keynote discussion with Brandon Dunlap, senior manager of security, risk and compliance at Amazon, Freese focused on the importance of proper risk management in building a strong enterprise security posture.

But he reserved a portion of his talk to confront an oft-criticized and occasionally ugly practice of the infosec industry: blaming and shaming hacking victims.

In discussing the lack of communication and trust between security professionals and the rest of the enterprise, including C-suite executives, Freese talked about what he called an “unhealthy sense of superiority” in the cybersecurity field, which can lead to victim blaming.

“Certainly the FBI struggled with this in our culture,” Freese said. “The FBI was rightfully criticized in the last decade for victimizing people twice in cyber [attacks]. We certainly don’t do this when there’s a violent crime, when somebody is involved in a terrorist incident, or something like that. We don’t rush in and blame the victim. [But] we do it, and we have done it, in cybersecurity.”

That practice, Freese said, not only harms relationships with people inside an organization and with third parties, but it also makes the difficult process of solving the problem of a cyberattack even harder.

“You’ve got to be willing to humble yourself a bit to really understand what’s going on with the victim,” he said.

Freese went on to say the situation at the FBI “is absolutely getting better.” But his point remained that the bureau, as well as the infosec industry in general, needs to do less victim-shaming in order to build better relationships and lines of communication.

Freese is absolutely right. The pile-on that ensues in both the media and social media following the latest breach can be alarming. This isn’t to say companies like Equifax shouldn’t be criticized for some of their actions – they absolutely should. And we shouldn’t let “breach fatigue” take hold and allow these events to be completely shrugged off. But there’s a line where the criticism becomes so wanton that it’s both self-defeating and self-destructive, and industry professionals as well as the media should at least make good faith efforts to find that line and stay on the right side of it.

And blaming hacking victims may have detrimental effects that are more tangible than we would like to believe. Freese’s words this week echoed Facebook CSO Alex Stamos’ keynote at Black Hat 2017 this summer.

“As a community we tend to punish people who implement imperfect solutions in an imperfect world,” Stamos said. “As an industry, we have a real problem with empathy. We have a real inability to put ourselves in the shoes of the people we are trying to protect. It’s really dangerous for us to do this because it makes it very easy for us to shift the responsibility for building trustworthy, dependable systems off of ourselves [and] on to other people.”

In short, security professionals may be making a hard job even harder. But the issue may go beyond shifting responsibilities and breaking down relationships. As someone who’s done his fair share of criticizing enterprises and government agencies that have suffered catastrophic breaches or committed seemingly incomprehensible errors, I’ve often wondered about the larger effects of negative media attention on a victim organization as well as the industry as a whole.

More specifically, I’ve wondered whether the constant flow of embarrassing headlines and negative news regarding the latest data breaches and hacks acts as a contributing factor in one of the industry’s biggest problems: the workforce shortage. Perhaps filling jobs and promoting the infosec profession to a younger and more diverse population is harder because no security professional wants the next breach headline on their resume and no one wants to take the fall as the disgraced CISO; a college student considering a future infosec career may see the swirl of negativity and shaming around the dozens of companies that fall prey to threat actors each month and conclude that both outcomes are not just probable but inevitable.

Infosec careers offer steady employment and good pay. But in the event of a breach, these careers also offer massive stress, negative publicity and, in some cases, damaged reputations and job losses. I’m reminded of what Adobe CSO Brad Arkin said during a presentation on his experiences with the Adobe data breach in 2013; Arkin said he was so stressed dealing with the fallout of the breach that he ground his teeth to the point where he cracked two molars during a meeting.

Yes, the pay for infosec jobs may be very good. But for a lot of people, that may not be enough to justify the costs.


September 22, 2017  10:04 PM

DerbyCon cybersecurity conference is unique and troubling

Michael Heller Profile: Michael Heller
Conferences, cybersecurity

Walking up to the DerbyCon 7.0 cybersecurity conference, you immediately sense a very different feel from the “major” infosec conferences. Attendees would never be caught loitering outside of the Black Hat or DEFCON venues, because no one willingly spends more time than necessary outdoors in Las Vegas. RSAC attendees might be outside, but only because it’s simply impossible to fit 40,000-plus humans into the Moscone Center without triggering claustrophobia.

DerbyCon is different though. The cybersecurity conference literally begins on the sidewalk outside the Hyatt Regency in Louisville and the best parts (as many will tell you) take place in the lobby at the so-called “LobbyCon,” where anyone with the means to get to Louisville can participate in the conference without a ticket.

Groups of hackers, pen testers, researchers, enthusiasts and other assorted infosec wonks can be found talking about any and all topics, from cybersecurity to “Rick and Morty” and everything in between. This sense of community extends into DerbyCon proper, with attendees lining the hallways, not looking desperately in need of rest as they do at other major infosec conferences, but talking, sharing and connecting.

The feel of the DerbyCon cybersecurity conference is not unlike a miniature DEFCON – but with a higher emphasis on the lounge areas – and that appearance seems intentional. Just like DEFCON, the parties in the evening get as much promotion as the daytime talks (Paul Oakenfold and Busta Rhymes this year), and just like DEFCON, DerbyCon hosts “villages” for hands-on experiences with lock-picking, social engineering, IoT hacking, hacking your derby hat and more.

On top of all that, DerbyCon is a place for learning all aspects of infosec. The talks are broken into four tracks – Break Me, Fix Me, Teach Me and The 3-Way (for talks that don’t fit neatly into one of the other buckets).

The two keynotes had bigger messages but were rooted in storytelling; Matt Graeber, security researcher for SpecterOps, told the story of how he became a security researcher and how it led him to finding a flaw in Windows. John Strand, a SANS Institute instructor and owner of Black Hills Information Security, told a heart-wrenching tale of the lessons he learned from his mother as she was dying of cancer and how those lessons convinced him the infosec industry needs to be more of a community and not fight so much.

The good with the bad

For all of the unique aspects of DerbyCon, it was hard to ignore one way that it is very similar to other cybersecurity conferences – the vast majority of attendees are white males.

Without demographic data, it is unclear whether the proportion of minorities and women is lower at DerbyCon, but the imbalance feels more prominent given that the tone of the conference is one of community. DerbyCon feels like a space where everyone is welcome, so noticing that there isn’t a more diverse base of attendees serves to highlight an issue that is often talked about but may not get the real engagement needed to create meaningful change.

While it could be argued that the size and location of DerbyCon might contribute to the low proportion of women and minorities here, that can’t be used as an excuse, and the issue is not unique to DerbyCon. The infosec world overall needs to make more of an effort to promote diversity, and the DerbyCon cybersecurity conference serves as one more example of that.


September 15, 2017  7:15 PM

Fearmongering around Apple Face ID security announcement

Michael Heller Profile: Michael Heller
Apple, biometric, Facial recognition, iPhone

As fears grow over government surveillance, the phrase “facial recognition” often triggers a bit of panic in the public, and some commentators are exploiting that fear to overstate any risks associated with Apple’s new Face ID security system.

One of the more common misunderstandings is around who has access to the Face ID data. There are those who claim Apple is building a giant database of facial scans and that the government could compel Apple to share that data through court orders.

However, Face ID is protected by the same security measures that frustrated the FBI when it tried to bypass the Touch ID fingerprint scanner on iPhones. The facial scan data is stored in the Secure Enclave on the user’s iPhone, according to Apple, and is never transmitted to the cloud — the same system that has protected users’ fingerprint data since 2013.

Over the past four years, Apple’s Touch ID has been deployed on hundreds of millions of iPhones and iPads, but not one report has ever surfaced of that fingerprint data being compromised, gathered by Apple, or shared with law enforcement. But, there are those out there who would claim that somehow this system would suddenly fail because it is holding Face ID security data.

The fact is that the data stored in the iOS Secure Enclave is only accessible on the specific device. Apple has built in layers of security so that even it cannot access that data — let alone share it with law enforcement — without rearchitecting iOS at a fundamental level, which often means the burden of doing so is too high to be compelled by court order.
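To see why that architecture matters, here is a toy model of the design principle, written in plain Python with invented names; it is not Apple’s code. The enclave holds the biometric template privately and exposes only a match-or-no-match answer, so there is nothing for Apple, or a court order served on Apple, to hand over.

```python
# Toy model of the design principle behind the Secure Enclave: the stored
# biometric template never leaves the component, and callers only ever get
# a yes/no answer. Class and method names are invented for illustration.

class SecureEnclaveModel:
    def __init__(self, enrolled_template: bytes):
        # Held privately by the enclave; no getter is exposed.
        self.__template = enrolled_template

    def verify(self, candidate_scan: bytes) -> bool:
        """Return only a boolean; the template itself is never returned."""
        return self.__template == candidate_scan  # real matching is far fuzzier

def unlock_phone(enclave: SecureEnclaveModel, scan: bytes) -> str:
    # The OS (and Apple, and anyone compelling Apple) can only ask this
    # question; it cannot read the enrolled template back out.
    return "unlocked" if enclave.verify(scan) else "locked"

enclave = SecureEnclaveModel(enrolled_template=b"owner-face-template")
print(unlock_phone(enclave, b"owner-face-template"))   # unlocked
print(unlock_phone(enclave, b"someone-else"))          # locked
```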

Face ID security vs facial recognition

This is not to say that there is no reason to be wary of facial recognition systems. Having been a cybersecurity reporter for close to three years now, I’ve learned that there will always be an organization that abuses a system like this or fails to protect important data (**cough**Equifax**cough**).

Facial recognition systems should be questioned and the measures to protect user privacy should be scrutinized. But, just because a technology is “creepy” or has the potential to be abused doesn’t mean all logic should go out the window.

Facebook and Google have been doing facial recognition for years to help users better tag faces in photos. Those are legitimate databases of faces that should be far more worrying than Face ID security, because the companies holding those databases could be compelled to share them via court order.

Of course, one could also argue that many of the photos used by Facebook and Google to train those recognition systems are public, and it is known that the FBI has its own facial recognition database of more than 411 million photos.

To create panic over Apple’s Face ID security when the same data is already in the hands of law enforcement is little more than clickbait and not worthy of the outlets spreading the fear.


August 23, 2017  6:47 PM

Project Treble is another attempt at faster Android updates

Michael Heller Profile: Michael Heller
Android, Google

Google has historically had a problem with getting mobile device manufacturers to push out Android updates, which has left hundreds of millions in the Android ecosystem at risk. Google hopes that will change with the introduction of Project Treble in Android 8.0 Oreo, but it’s unclear if this latest attempt to speed up Android updates will be successful.

Project Treble is an ambitious and impressive piece of engineering that will essentially make the Android system separate from any manufacturer (OEM) modifications. Theoretically, this will allow Android OS updates to be pushed out faster because they won’t be delayed by any custom software being added. In a perfect world, this could even make OEMs happier because they will also be able to push updates to custom software without going down the path of putting those custom apps in the Play Store.
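One rough way to picture that separation, purely as an illustration and not Android’s actual implementation: the OS framework and the OEM’s vendor code become independent pieces that meet at a stable, versioned vendor interface, so an OS update only has to check that the interface contract is satisfied. The names and version numbers below are invented.

```python
# Illustrative model of the idea behind Project Treble: the Android framework
# and the OEM's vendor implementation are separate artifacts that only need to
# agree on a stable, versioned vendor interface. Names and versions are invented.

from dataclasses import dataclass

@dataclass
class VendorImplementation:          # shipped by the OEM, changes rarely
    oem: str
    vendor_interface_version: int

@dataclass
class AndroidFramework:              # the OS update being pushed
    release: str
    required_vendor_interface: int

def can_apply_update(device: VendorImplementation, new_os: AndroidFramework) -> bool:
    # If the stable interface contract is satisfied, the OS framework can be
    # updated without rebuilding the OEM customizations beneath it.
    return device.vendor_interface_version >= new_os.required_vendor_interface

phone = VendorImplementation(oem="ExampleOEM", vendor_interface_version=1)
oreo = AndroidFramework(release="8.0", required_vendor_interface=1)
print(can_apply_update(phone, oreo))  # True: the framework updates independently
```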

Project Treble

Wrangling the Android ecosystem is by no means an easy feat. According to Google’s latest statistics from May 2017, there were more than 2 billion active Android devices worldwide. Attempting to get consistent, quick updates to all of those devices with dozens of OEMs needing modifications for hundreds of hardware variations, as well as verification from carriers around the world, is arguably one of the most difficult system update problems in history.

Project Treble isn’t Google’s first attempt at fixing Android updates; the company has previously tried implementing policies around updates, nudging OEMs toward lighter customization and pushing features through apps in the Play Store, but history has proven these changes don’t make much difference. Project Treble is a major change that has the potential to make a significant impact on system updates, but there will still be issues.

First, in 2011, Google put in place an informal policy asking OEMs to fully support devices for 18 months after they were put on the market, including updating to the latest Android OS version released within that timeframe. Unfortunately, there was no way to enforce this policy and no timetable mandating when updates needed to be pushed. As a result, Android 7.x Nougat is only on 13.5% of the more than 2 billion active devices worldwide (per the August numbers from Google) and, although Android 6.0 Marshmallow is found on the plurality of devices, it only reaches 32.3% of the ecosystem.

After that, Google added a rule that a device had to be released with the latest version of Android in order to be certified to receive the Play Store, Google Apps and Play services. This helped make sure new devices didn’t start out behind, but didn’t address the update problem.

Google even tried to sidestep the issue altogether by putting a number of security features into Google Play services, which is updated directly by Google separately from the Android OS and therefore can be pushed to nearly all Android devices without OEM or carrier interaction.

Remaining speedbumps for Android updates

Project Treble has the potential to make an impact on how quickly devices receive Android updates, but it doesn’t necessarily address delays from carrier certification of updates. Unlike iOS updates, which come straight from Apple, Android updates need OEM tinkering due to the variety of hardware and therefore also need to be tested by carriers. Perhaps carrier testing will be faster, since the Android system updates should be relatively similar from device to device, but there’s no way to know that yet.

Android update process

Additionally, Project Treble will add another layer of complexity to Android updates by creating three separate update packages for OEMs to worry about — the Android OS, the OEM custom software layer and the monthly security patches.

Google has been rumored to be considering a way to publicly shame OEMs who fall behind on the security patch releases, indicating that even those aren’t being pushed out in a timely manner. In the meantime, an unofficial tracker for security updates gives a rough idea of the situation.

OEM software teams are likely used to working on Android OS updates in conjunction with the OEM customizations, so a workflow change will be needed. Even once that’s done, there’s no guarantee the Android OS update will get any priority over custom software, or that it will get the resources needed to complete device-specific adaptations quickly.

Ultimately, Project Treble provides OEMs a much easier path to pushing faster updates for Android, but there are still enough speedbumps to cause delays and OEMs still haven’t proven able to even push small security patch releases. Android updates will never be as fast and seamless as iOS, but given the challenges Google faces, Project Treble may be the best solution yet. If OEMs get on board.


August 8, 2017  6:38 PM

The Symantec-Google feud can’t be swept under the rug

Rob Wright Profile: Rob Wright

The feud between Symantec and the web browser community, most notably Google, appears to be over now that DigiCert has agreed to acquire Symantec Website Security for close to $1 billion.

But according to Symantec CEO Greg Clark, there never was a feud to begin with.

Clark presented a charitable view of the Symantec-Google  dispute, which stemmed from Google’s findings of improper practices within Symantec’s certificate authority business, in a recent interview. “Some of the media reports were of a hostile situation,” Clark told CRN. “I think we had a good collaboration with Google during that process.”

It’s unclear what Clark considers “hostile” and which media outlets he’s referring to (Editor’s note: SearchSecurity has covered the proceedings extensively), but most of the back and forth between Symantec and the browser community was made public through forum messages, blog posts and corporate statements.

And those public statements show quite clearly that there was indeed a Symantec-Google feud that began on March 23rd when Google developer Ryan Sleevi announced the company’s plan on the Chromium forum to systematically remove trust from Symantec web certificates. The post, titled “Intent to Deprecate and Remove: Trust in existing Symantec-issued Certificates,” pulled few punches in describing a “series of failures” within Symantec Website Security, which included certificate misissuance, failures to remediate issues raised in audits, and other practices that ran counter to the Certificate Authority/Browser Forum’s Baseline Requirements.
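To make “removing trust” concrete, here is a rough sketch, in Python with the widely used cryptography package, of the kind of check a browser performs: fetch a site’s certificate, look at who issued it and when, and compare the issuance date against a distrust cutoff. The cutoff date and the issuer matching below are placeholders for illustration, not Chrome’s actual distrust schedule.

```python
# Rough sketch of the kind of check a browser makes when phasing out trust
# in a CA: who issued this certificate, and when was it issued? The cutoff
# date and issuer substring are placeholders, not Chrome's real schedule.
# Requires the third-party 'cryptography' package.

import ssl
from datetime import datetime
from cryptography import x509
from cryptography.hazmat.backends import default_backend

DISTRUST_IF_ISSUED_BEFORE = datetime(2016, 6, 1)   # hypothetical cutoff

def check_site(host: str, port: int = 443) -> None:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
    issuer = cert.issuer.rfc4514_string()
    issued = cert.not_valid_before
    if "Symantec" in issuer and issued < DISTRUST_IF_ISSUED_BEFORE:
        print(f"{host}: would fall under the distrust plan "
              f"(issuer={issuer}, issued={issued:%Y-%m-%d})")
    else:
        print(f"{host}: issuer={issuer}, issued={issued:%Y-%m-%d}")

check_site("example.com")
```

Chrome’s real plan staged the distrust across several browser releases rather than applying a single cutoff, but the basic question the browser asks is the same.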

What followed was a lengthy and at times tense tug-of-war between Google (and later other web browser companies, including Mozilla) and Symantec over issues with the antivirus vendor’s certificate authority (CA) practices and how to resolve them.

But one would need to go no further than Symantec’s first statement on the matter to see just how hostile the situation was right from the start. On March 24th, the day after Google’s plan to deprecate trust was made public, the antivirus vendor responded with a defiant blog post titled “Symantec Backs Its CA.” In it, the company clearly suggests Google’s actions were not in the interests of security but instead were designed to undermine Symantec’s certificate business.

“We strongly object to the action Google has taken to target Symantec SSL/TLS certificates in the Chrome browser. This action was unexpected, and we believe the blog post was irresponsible,” the statement read. “We hope it was not calculated to create uncertainty and doubt within the Internet community about our SSL/TLS certificates.”

Symantec didn’t stop there, either. Later in the blog post, the company accused Google of exaggerating the scope of Symantec’s past certificate issues and said its statements on the matter were “misleading.”

Symantec also wrote that Google “singled out” its certificate authority business and pledged to minimize the “disruption” caused by Google’s announcement. And throughout the post, Symantec repeatedly claimed that everything was fine, outside of previously disclosed issues, and that there was nothing to see here.

Clark believes the Symantec-Google dispute wasn’t hostile, but the antivirus vendor’s own words contradict that. Right from the start, Symantec accused Google of unfairly targeting it; acting irresponsibly and causing needless disruption for Symantec; and acting upon ulterior and malicious motives rather than genuine infosec concerns.

It should be noted that none of those claims were supported by what followed. Mozilla joined Google and found new issues with Symantec Website Security certificates. And instead of denying Google and Mozilla’s findings and refusing to adopt their remediation plan – which required Symantec to hand over its CA operations to a third party – Symantec agreed to make sweeping changes to its certificate business in order to regain trust.

Clark said in the interview that the Symantec-Google dispute “came to a good outcome.”

That’s true; DigiCert will pay $950 million for Symantec’s certificate business, and Symantec will retain a 30% stake in the business while bearing none of the responsibility for its operation. But if Google hadn’t announced its plan to deprecate trust and put this process in motion, Symantec wouldn’t have lifted a finger to address the obvious and lengthy list of issues with its certificate authority operations. Symantec Website Security would have continued along its path of lax reviews, questionable audits and other certificate woes.

Clark also said the situation is largely resolved, and there are no hard feelings between the two companies. “I think Symantec and Google have a better relationship because of it,” he said.

It may be true that Symantec and Google have effectively buried the hatchet, but to suggest there never was a hatchet to begin with is absurd.


June 6, 2017  4:58 PM

Symantec certificate authority aims for more delays on browser trust

Peter Loshin Profile: Peter Loshin
Symantec

Is the Symantec certificate authority operation too big to fail?

That seems to be the message the security giant is sending in its latest response to a proposal from the browser community to turn over Symantec certificate authority operations to one or more third parties starting August 8. Doing so has become a requirement for Symantec to be retained in Google Chrome, Mozilla and Opera browser trusted root stores and to regain trust in its PKI operations.

Google, Mozilla and Opera seem to be united in agreement with the proposal from the Chrome developer team, under which Symantec would cooperate with the third-party CAs while at the same time re-certifying its extended validation certificates and revoking trust in extended validation certificates issued after Jan. 1, 2015.

“[W]e understand that any failure of our SSL/TLS certificates to be recognized by popular browsers could disrupt the business of our customers,” Symantec wrote in its blog post responding to Google’s proposal. “In that light, we appreciate that Google’s current proposal does not immediately pose compatibility or interoperability challenges for the vast majority of users.”

At first glance, Symantec appeared to praise the latest proposal from Chrome, noting it allows their customers, “for the most part, to have an uninterrupted and unencumbered experience.” However, the CA giant raised issues on almost all of the actions called for in the proposal, stating “there are some aspects of the current proposal that we believe need to be changed before we can reasonably and responsibly implement a plan that involves entrusting parts of our CA operations to a third party.”

Google’s proposal requires that new Symantec-chaining certificates be issued by “independently operated third-parties” starting August 8, 2017; Google’s timetable requires the transition be complete by Feb. 1, 2018, with all Symantec certificates issued and validated by those third-parties — although Symantec is making its case that the timetable is too short.

Symantec’s strategy seems to be to continue to seek further reductions in the limits placed on existing certificates, while dragging out the process — a tactic that reduces the impact of removing untrusted certificates as the questionable certificates continue aging and expiring on their own, without any further action on the part of the Symantec certificate authority operation.

The gist of the argument is that as “the largest issuer of EV and OV certificates in the industry,” the Symantec certificate authority is so much larger than its competitors that “no other single CA operates at the scale nor offers the broad set of capabilities that Symantec offers today.” In fact, over the course of several months, Symantec has frequently cited the size of its CA business and customer base in pushing back against Google’s and Mozilla’s proposals.

In other words, the Symantec certificate authority is so big that you can forget about having a CA partner ready to issue Symantec certificates by August 8. “Suitable CA partners” will need to be identified, vetted and selected; requests for proposals must be solicited and reviewed; and even then, Symantec will still need “time to put in place the governance, business and legal structures necessary to ensure the appropriate accountability and oversight for the sub-CA proposal to be successful.”

And even then, Symantec said, after it partners with one or more sub-CAs, all of the involved parties will need to do even more work to engineer the new operating model — and once that is done, there’s the need for extensive testing.

“Based on our initial research, we believe the timing laid out above is not achievable given the magnitude of the transition that would need to occur,” Symantec wrote.

What kind of timetable will work for Symantec?

Symantec can’t give any firm estimates for how long it will take to comply with Google’s proposal until Symantec’s candidate third-party partners respond to its requests for proposals. Those are due at the end of June, Symantec said.

After that, there’s the question of “ramp-up time,” the time Symantec’s third-party providers need for building infrastructure and authentication capabilities, which “may be greater than four months.”

“Symantec serves certain international markets that require language expertise in order to perform validation tasks,” the company wrote. “Any acceptable partner would also need to service these markets.” Signing up multiple CAs capable of serving these different markets will “require multiple contract negotiations and multiple technical integrations.”

Alternatively, Symantec could “[p]artner with a single sub-CA, which would require such CA to build up the compliant and reliable capacity necessary to take over our CA operations in terms of staff and infrastructure.”

Symantec did not indicate which alternative it preferred.

Symantec stated that “designing, developing, and testing new auth/verif/issuance logic, in addition to creating an orchestration layer to interface with multiple sub-CAs will take an estimated 14 calendar weeks. This does not include the engineering efforts required by the sub-CAs, systems integration and testing with each sub-CA, or testing end-to-end with API integrated customers and partners, although some of this effort can occur in parallel.”

It’s not clear whether this task is part of the ramp-up time Symantec referred to, but there’s also the question of revalidating “over 200,000 organizations in full, in order to maintain full certificate validity for OV and EV certificates.” Symantec needed more than four months to fully revalidate the active certificates issued through CrossCert, one of its former SSL/TLS RA partners (about 30,000 certificates covering far fewer organizations).

Could Symantec be purposely dragging its heels to mitigate the impact on itself and its customers through delaying the deadline for distrusting Symantec certificates until the questionable ones have expired? Or could Symantec be attempting to whittle down the pain points in Google’s plan by continually pushing back on them while at the same time asking for deadline extensions?

It’s unclear what Symantec’s strategy is, and the company is only addressing the ongoing controversy through official company statements (Symantec has not responded to requests for further comments or interview). But the clock is ticking, and the longer action is delayed, the harder it will likely be to fix the situation.


May 3, 2017  8:14 PM

Verizon DBIR 2017 loses international contributors

Michael Heller Profile: Michael Heller
Security

Looking at the overall numbers for the contributors to the Verizon Data Breach Investigations Report (DBIR) over the past five years, it would seem the number of partners is hitting a plateau, but looking at the specifics raises questions about international data sharing.

The number of partners contributing data to the Verizon DBIR exploded from 2013 (19) through 2014 (50), peaking in 2015 (70), before dipping slightly in 2016 (67) and 2017 (65). But the total numbers gloss over the churn of contributors added and lost year to year.

For example, the slight dip in Verizon DBIR 2017 partners was due to the loss of 19 contributors and the addition of 17 new ones, but these are the biggest names lost:

  • Australian Federal Police
  • CERT-EU
  • CERT Polska/NASK
  • European Crime Center
  • Imperva
  • International Computer Security Association Labs
  • Policia Metropolitana Ciudad de Buenos Aires, Argentina
  • Tenable Network Security
  • Splunk
  • UK-CERT
  • Verizon Cyber Intelligence Center

And compare the biggest names added:

  • Rapid7
  • Veracode
  • VERIS Community Database
  • Verizon Digital Media Services
  • Verizon Fraud Team
  • Verizon Network Operations and Engineering
  • Verizon Enterprise Services

A Verizon spokesperson said the difference between 2016 and 2017 was due to “a combination of factors, including sample sizes may have been too small, an organization wasn’t able to commit to this year’s report due to other priorities or the deadline was missed for content submission.”

However, just looking at the Verizon DBIR partners involved, there was a notable drop in international contributors while Verizon listed more of its own projects as well as the VERIS Community Database, which has been integral to the DBIR since the database was launched in 2013.

It is unclear why these organizations have dropped out, and none responded to questions on the topic. Maybe they left due to changes in international data sharing laws, including the upcoming GDPR. It is also possible there were other mitigating factors, such as the climate surrounding data privacy or political uncertainty in the U.S. and abroad. Or Verizon could be correct, and this is nothing more than an odd coincidence.

Effects on analyzing DBIR data

Over the years, Verizon has warned that the results of the DBIR can be affected by the partners involved, and one expert noted the Verizon DBIR 2017 had a dearth of information related to industrial control systems. But it appears there may also be a loss of international data to take into account when analyzing the results of the report.

Each year, Verizon does add new data to the DBIR statistics for previous years based on newly contributed information. This means the data regarding 2014 or 2015 incidents and data breaches would be more accurate in the 2017 Verizon DBIR than in the reports for those respective years. So, the data of past reports may be less reliable than the latest info in the newer reports.

That’s not a great thing for trying to tease out trends or pinpoint the biggest new threats, but Verizon has also admitted to shying away from offering suggestions on actions enterprises should take based on the DBIR data.

Maybe IT pros should take more care to consider the quality and volume of the sources when analyzing the Verizon DBIR. There is good data: confirmation of trends we already saw or felt, such as the rise of ransomware and cyberespionage and failures of basic security, as well as new trends such as pretexting. But without more transparency regarding which organizations are contributing and why partners leave, other analysis could be challenging.


February 24, 2017  7:39 PM

RSA Conference 2017: Are software regulations coming for developers?

Rob Wright Profile: Rob Wright
Security

Security expert Bruce Schneier dragged an uncomfortable but very real possibility into public view during RSA Conference 2017, and it should have developers of all types pondering a very grim future full of software regulations.

Schneier discussed his case for internet of things (IoT) regulation in not one but two sessions at RSA Conference 2017 last week. The growing potential for IoT regulations is hardly a surprise given the run of recent high-profile DDoS attacks using insecure IoT devices. And while Schneier’s support for IoT regulations may have surprised some, he made a well-reasoned case that government action is coming whether the technology industry approves or not, and that IT professionals would be well-served by taking an active role in the process to ensure the government enacts the least-bad option.

Within one of those RSA Conference sessions, Schneier urged the audience to think more broadly about the responsibilities of developers in a “connect it all” world.

“We need to start talking about our future. We rarely if ever have conversations about our technological future and what we’d like to have. Instead of designing our future, we let it come as it comes without forethought or architecting or planning. When we try to design, we get surprised by emergent properties,” Schneier told the audience. “I think this also has to change. I think we should start making moral and ethical and political decisions about how technology should work.”

Schneier then made another point that went far beyond simple IoT regulations and had chilling implications for the technology industry.

“Until now, we have largely given programmers a special right to design [and] to code the world as they saw fit. And giving them that right was fine, as long as it didn’t matter. Fundamentally, it doesn’t matter what Facebook’s design is,” he said. “But when it comes to ‘things,’ it does matter, so that special right probably has to end.”


First, Schneier is obviously right. Facebook’s software design affects a very large but finite number of people, and they can choose to stay on or leave the platform; whatever programming sins it may commit won’t extend to users at Microsoft or Google or other companies. A connected physical device, however, can extend outside of Facebook’s user base and affect others. To this point, we’ve gotten off easy by only having to contend with potent DDoS attacks and data breaches. But the possibility of physical harm from hacked IoT devices is certainly in play.

That doesn’t make the possibility of general software regulations any easier to swallow, however. The idea that programmers could lose the right to code what they want and how they want seems as incomprehensible as the government suddenly regulating what I write as a journalist. While the latter scenario is a clear violation of the Constitution, the former probably isn’t (more on that in a moment).

I don’t know if Schneier truly believes that software developers should have their rights curbed by the government, or if his aim was to spark concern – and potential action – from the audience. Maybe it was both.

But is Schneier’s idea – that unfettered freedom for programming should be replaced with software regulations – really that far-fetched? Consider the aggressive measures proposed in Congress regarding the “going dark” issue and encryption technology. And keep in mind it was the judiciary, not lawmakers, that ordered Apple to design a tool to hack its own security protections for iOS. (At one point before achieving a legal victory in this case, Apple was reportedly preparing an argument that its code was protected as speech under the First Amendment, which would have been fascinating to see.)

I mostly agree with Schneier’s argument that government regulation is coming whether we like it or not. There seems to be little incentive — and even less desire — for the industry to solve these IoT security problems. Perhaps that will change as IoT-related attacks become more common and more powerful this year. But perhaps not; the fact that manufacturers have allowed outdated connected medical devices to linger with known vulnerabilities gives me little confidence.

What would these regulations look like? During the question-and-answer session, Schneier was asked about whether certifications, either for individuals or for technologies, could address some of the concerns about connected devices leading to physical harm. Schneier said government-regulated certifications or licenses for software developers were a possibility.

“You had to be a licensed architect to design this building,” Schneier said, referring to the hotel at which the session was hosted. “You couldn’t be just anybody. So we could have that sort of certification – a licensed software engineer.”

I’m neither a structural engineer nor a programmer, but this seems like a bad idea. There aren’t that many ways to design a structurally sound building relative to the vast number of ways a programmer could design a perfectly sound application. The complexity of software doesn’t lend itself to the kind of regulation we see with building codes, for example. Even if the codes for coding, so to speak, were straightforward and tackled only the no-brainers – Thou shalt not use SHA-1 ever again! – there is a haystack’s worth of questions (backward compatibility, support, enforcement, etc.) that need to be answered, with no guarantee of actually getting to the desired needle.

If we accept a world in which a hypothetical government agency dictates what devices can and cannot be connected to the internet and how they are connected, it’s worth asking now what it will mean in the coming years for potentially broader, sweeping software regulations. It’s possible the Trump Administration’s stated commitment to roll back federal regulations will buy the IT industry some time before such a future is realized.

On the other hand, we’re one bad headline away from Congress enacting knee-jerk legislation to police not just how IoT devices are built and connected but how developers write and deploy code across the entire digital realm. And unlike journalists, programmers may have nothing in the Constitution to prevent it.


February 15, 2017  4:53 PM

Christopher Young: Don’t sleep on the Mirai botnet

Rob Wright Profile: Rob Wright
Security

SAN FRANCISCO — While much of the talk at this year’s RSA Conference has been about future IoT threats and new attacks, Intel Security’s Christopher Young urged attendees not to forget the past — specifically, the Mirai botnet.

“We can’t think of the Mirai botnet in the past tense,” Young, senior vice president and general manager at Intel Security, said during his keynote Tuesday at RSA Conference 2017. “It’s alive and well today and recruiting new players.”

To illustrate his point, Young described how Intel Security CTO Steve Grobman and his team decided to test the theory that Mirai was still highly active and looking for more vulnerable IoT devices to infect.

“Our hypothesis was simple,” Young said. “Given the amount of connected or infected devices that are out there today, what’s the risk that a new, unprotected device can be coopted into the Mirai botnet? We wanted to know, how pervasive was this threat?”

To that end, Grobman’s team set up a honeypot, disguised as a DVR, on an open network. And in just over a minute, Young said, they found the DVR had been compromised by the Mirai botnet.
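Intel Security hasn’t published its honeypot code, but the experiment is easy to approximate: Mirai spreads by scanning for exposed Telnet services and trying factory-default credentials, so even a minimal listener on TCP port 23 that logs whatever is thrown at it will register infection attempts quickly. The sketch below is a simplified, hypothetical version of that idea in Python, not Intel Security’s implementation.

```python
# Minimal, illustrative Telnet "honeypot": listen on port 23, present a fake
# login prompt, and log whatever credentials are attempted. Mirai scans for
# exactly this kind of exposed service and tries factory-default username/
# password pairs. A sketch of the concept, not Intel Security's actual code.

import socket
from datetime import datetime

def run_honeypot(bind_addr: str = "0.0.0.0", port: int = 23) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))   # binding to port 23 usually requires root
    srv.listen(5)
    print(f"listening on {bind_addr}:{port}")
    while True:
        conn, (peer_ip, _) = srv.accept()
        conn.settimeout(10)
        try:
            conn.sendall(b"login: ")
            user = conn.recv(128).strip()
            conn.sendall(b"Password: ")
            password = conn.recv(128).strip()
            print(f"{datetime.now():%H:%M:%S} {peer_ip} tried "
                  f"{user!r} / {password!r}")
        except OSError:
            pass   # timeouts and resets are routine for scanner traffic
        finally:
            conn.close()

if __name__ == "__main__":
    run_honeypot()
```

Exposed to the public internet, a listener like this typically starts logging default-credential attempts within minutes, which was exactly Young’s point.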

“It just puts a fine point on the problem,” Young said. “The Mirai botnet is alive and well, recruiting drones…that can be used for the next attack.”

As a result, Young said the industry needs to address the insecurities of connected devices within the home, which have become lucrative targets for Mirai and other types of IoT malware. He stressed that a combination of approaches is required to address these IoT threats, including consumer education, changes in security policies for device manufacturers and better protection measures from the infosec industry.

“The question I’d ask all of us in cybersecurity here at RSA is: ‘How many of us take the home into account when designing our cybersecurity architectures and when we provision our cybersecurity tools?’” Young asked the audience.

Young said the threat landscape has changed, and as a result so too must the mentality of security professionals. “Today, the target has now become the weapon,” he said, referring to connected devices. “The game has changed on us yet again.”

