Security Bytes

November 1, 2017  12:55 AM

The Equation Group malware mystery: Kaspersky offers an explanation

Rob Wright

The ongoing drama between Kaspersky Lab and the U.S. government received some much-needed sunlight last week as the antivirus vendor finally uttered two very important words: Equation Group.

Kaspersky issued a statement describing how it came to possess Equation Group malware, which was a response to recent news reports claiming the vendor had National Security Agency (NSA) cyberweapons on its network in 2015. Both the government and the antivirus vendor have quietly tip-toed around Equation Group since the Kaspersky controversy began rolling earlier this year. And it’s easy to see why – the government doesn’t want to officially acknowledge that the NSA is in the business of creating and using malware, and Kaspersky likely didn’t want to highlight a sore spot for the U.S. government that could further inflame the situation (after all, Kaspersky was the first to blow the lid off Equation Group with its 2015 report).

But Kaspersky was backed into a corner with mounting political pressure and government-wide bans on its products. The company played one of its last remaining cards: it came clean and offered a somewhat plausible explanation why it had possession of Equation Group malware.

In short, Kaspersky’s statement claims that in 2014 its antivirus software scanned a system and detected a simple backdoor in a product-key generator for a pirated version of Microsoft Office (this system is presumed to belong to the NSA contractor/employee who reportedly took cyberweapons home and installed them on a personal computer). The antivirus program also detected a 7-Zip archive of “previously unknown” malware, which it relayed to the company via Kaspersky Security Network (KSN) for further analysis.
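
Conceptually, that cloud-assisted detection flow looks something like the sketch below. It is a simplified illustration of how consumer antivirus telemetry works in general, not Kaspersky’s actual implementation; the hash, heuristic and function names are invented for the example.

import hashlib
from pathlib import Path

# Illustrative local signature set; real products ship millions of entries.
KNOWN_MALWARE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def looks_suspicious(data: bytes) -> bool:
    """Toy stand-in for the static and behavioral heuristics a real engine uses."""
    return data[:2] == b"MZ"  # e.g., flag Windows executables for a closer look

def submit_sample(digest: str, data: bytes) -> None:
    """Stand-in for the cloud upload: the step where an unknown binary
    leaves the user's machine for the vendor's analysts."""
    print(f"uploading {digest[:12]}... ({len(data)} bytes) for analysis")

def scan_file(path: Path, cloud_telemetry: bool) -> str:
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_MALWARE_HASHES:
        return "known-malware"
    if cloud_telemetry and looks_suspicious(data):
        submit_sample(digest, data)  # with telemetry off, this never happens
        return "sent-for-analysis"
    return "no-detection"

With the telemetry flag off (the opt-out KSN supports), an unknown archive never leaves the machine; with it on, uploading unrecognized suspicious files is the system working as designed.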

The statement offers some answers to lingering questions on the matter, but it also produces new questions and concerns for Kaspersky and the U.S. government. Here are some important ones:

  • “As a routine procedure, Kaspersky Lab has been informing the relevant U.S. Government institutions about active APT infections in the USA,” the statement reads. This implies that after detecting and analyzing the 7-Zip archive of new Equation Group malware, the company alerted the U.S. government. But that statement is just left hanging there, and Kaspersky never explicitly states it contacted the relevant authorities about the malware. Did it? If Kaspersky did, then why not spell it out in no uncertain terms? If it didn’t, could that be a source of contention between the vendor and the U.S. government?
  • After analyzing the Equation Group malware, Kaspersky researchers notified CEO Eugene Kaspersky. “Following a request from the CEO, the archive was deleted from all our systems,” the statement read. This suggests that Kaspersky did not, in fact, contact the U.S. government about its findings. So why did the company delete the files? It could be, as some have speculated, that the archive had files with classified markings on them. But Kaspersky throws cold water on the media reports of “NSA classified data” being on its servers and states no such incident took place. If the classified-markings explanation is true, then why did it take extensive analysis from Kaspersky researchers to find those markings?
  • Kaspersky said it detected other instances of the Equation Group malware on systems in the “same IP range” as the original system. These detections were made after Kaspersky published its Equation Group report in February of 2015; according to the statement, the company believes these systems, which had KSN enabled, were set up as honeypots. However, Kaspersky doesn’t explain why it believes they were honeypots or why they were set up. But this point suggests the U.S. government, or at least individuals within the NSA, knew the Equation Group malware had been exposed and uploaded to Kaspersky. That would contradict earlier news reports claiming the U.S. didn’t know about exposure of NSA cyberweapons until 2016.
  • Kaspersky wrote “No other third-party intrusions, besides Duqu 2.0, were detected” on its networks. This is presumably a response to the aforementioned media reports, which claimed that Israeli intelligence officers (who reportedly hacked into Kaspersky’s network) observed Russian hackers on the company’s network abusing antivirus scans to search for U.S. government data. But it doesn’t confront the allegation in The Wall Street Journal report that Kaspersky willingly let state-sponsored threat actors into its environment and was actively working with the Russian government. It also dances around the question of who was behind the Duqu 2.0 attack.

Kaspersky’s statement on the Equation Group malware is quite detailed, offering names for malicious code samples and files and specifics about the system on which the malware was first detected. But the statement also skips over important details and key questions in the ongoing Kaspersky controversy. If the company and the government continue to withhold vital information that could clear up this mess, both will look increasingly bad as this drags on.

October 31, 2017  9:18 PM

Is “responsible encryption” the new answer to “going dark”?

Peter Loshin

“Three may keep a Secret, if two of them are dead.”

So wrote Benjamin Franklin, in Poor Richard’s Almanack, in 1735. Franklin knew a thing or two about secrets, as well as about cryptography, given his experience as a diplomat for the fledgling United States, and he’s right: a secret shared is a secret exposed.

But it’s 2017 now, and the Department of Justice and the FBI are still hacking away at encryption. The conversation about the government’s need to access any and all encrypted data continues to hit the same talking points that then-FBI Director Louis Freeh and Attorney General Janet Reno were pushing in the 1990s — and, we might imagine, the same arguments could have been offered by King George’s government in the run-up to the Revolutionary War.

FBI Director Christopher Wray and Deputy Attorney General Rod Rosenstein have been taking the latest version of the “strong encryption is bad” show on the road, again, with a new buzzword: “responsible encryption.” While phrasing continues to morph, the outline is the same: the forces of evil are abusing strong encryption and running wild, destroying our civilization.

Some things have changed since the first battles in the crypto wars were waged more than 25 years ago. For example, the FBI and DOJ have listed money launderers and software pirates alongside the terrorists, human traffickers and drug dealers as part of the existential threat posed by unbreakable encryption.

It all boils down to a single question: Should law-abiding citizens be forbidden to defend themselves with encryption so strong that not even a government can break it, just so criminals can be denied it?

Rosenstein makes it clear that any piece of encrypted data subject to a valid court order must be made accessible to law enforcement agencies. “I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so,” he said at the 2017 North American International Cyber Summit, in Detroit on October 30.

If the person who encrypted the data chooses not to unlock it, Rosenstein and Wray believe the company that provided the encryption technology must be able to make that data available upon presentation of a warrant.

In the 1990s, the government demanded a key escrow platform through which all encryption could be reversed on demand. The resulting Clipper Chip was a spectacular failure, both technically and politically. And during his 2015 campaign against encryption, former FBI Director James Comey injected the term “going dark” into the conversation.

This time around, we’re offered the concept of “responsible encryption.” This is presumably some form of encryption that includes some (as yet undetermined) mechanism by means of which lawful access is provided to the encrypted data. The phrase itself is not new — it seems to have originated in 1996 Senate testimony by Freeh:

The only acceptable answer that serves all of our societal interests is to foster the use of “socially-responsible” encryption products, products that provide robust encryption, but which also permit timely law enforcement and national security access and decryption pursuant to court order or as otherwise authorized by law.

As for how that might be achieved, well, that’s not the business of the government, Rosenstein now tells us. Speaking in Detroit, he said, “I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.”

However, he does seem to think that the answer is not as difficult as the experts would have us believe — and it would not be necessary to resort to back doors, either. Rosenstein said:

“Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a “backdoor.” In fact, those very capabilities are marketed and sought out.”

It seems Rosenstein is suggesting these functions — key management, data scanning, “simulcast” of data and key recovery — can each be a part of a “responsible encryption” solution. And since these features have already been deployed individually in commercial products, tech firms need to “nerd harder” and come up with a “responsible encryption” solution by:

  • maintaining a giant key repository database, so all encryption keys are accessible to government agents with court orders — but also secure enough to protect against all unauthorized access
  • scanning all content before it is encrypted, presumably to look for evidence of criminal activity — but hopefully without producing too many false positives
  • “simulcasting” all data, either before it is encrypted or maybe after it is encrypted and the keys are stored for government access — so it can be retrieved or scanned at the government’s leisure
  • deploying “key recovery” for encrypted laptops, but for all laptops, everywhere, and accessible to authorized government agents only
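
The first bullet is the heart of the idea, and a toy version shows why cryptographers balk. In the sketch below (Python, using the cryptography package; all keys are generated on the fly purely for illustration), each per-file key is wrapped twice: once for the owner and once for an escrow authority.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def escrowed_encrypt(plaintext: bytes):
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)
    wrapped_for_user = user_key.public_key().encrypt(data_key, OAEP)
    # The extra wrap below is the "responsible" part -- and the problem:
    # whoever holds (or steals) the escrow private key can unwrap every
    # data key ever produced this way.
    wrapped_for_escrow = escrow_key.public_key().encrypt(data_key, OAEP)
    return nonce, ciphertext, wrapped_for_user, wrapped_for_escrow

nonce, ct, w_user, w_escrow = escrowed_encrypt(b"attorney-client privileged")

# "Lawful access" and a breach of the escrow database look identical:
recovered_key = escrow_key.decrypt(w_escrow, OAEP)
print(AESGCM(recovered_key).decrypt(nonce, ct, None))

Scale that single escrow_key up to every phone and laptop in the country and you have the giant key repository described above: one private key, or one breached database, that unlocks everything.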

Unfortunately, the answers the government provides can’t make key escrow scalable or secure. There are many, many reasons the law enforcement community’s demand for breakable encryption is not a reasonable (or even practical) solution, but two spring to mind immediately:

  • Key escrow schemes are massively complicated and produce huge new attack surfaces that could, if successfully breached, destroy the world’s economy. And, they would be breached (see Office of Personnel Management, Yahoo, Equifax and others).
  • “Responsible encryption” means law-abiding organizations and people can no longer trust their data. With cryptography backdoored, forget about privacy; there is no longer any way to verify that data has not been altered.

A ban on end-to-end encryption in commercial tech products will only prevent consumers from enjoying the benefits — it won’t prevent criminals and threat actors from using it.

We shouldn’t be surprised that this or any government is interested in having unfettered, universal access to all encrypted data (subject, of course, to lawful court orders).

However, once we allow the government to legislate weaker encryption, we’re lost. As Franklin wrote in the 1741 edition of Poor Richard’s Almanack:

“If you would keep your secret from an enemy, tell it not to a friend.”

October 20, 2017  6:46 PM

Latest Kaspersky controversy brings new questions, few answers

Rob Wright

Kaspersky Lab’s latest salvo in its ongoing feud with the U.S. government and media offered some answers but raised even more questions.

The company on Tuesday broke its silence a week after a series of explosive news reports turned up the heat on the Kaspersky controversy. We discussed the reports and the questions surrounding them in this week’s episode of the Risk & Repeat podcast, but I’ll summarize:

  • The New York Times claimed that Israeli intelligence officers hacked into Kaspersky Lab in 2015 and, more importantly, observed Russian hackers using the company’s antivirus software to search for classified U.S. government documents.
  • The Washington Post published a similar report later that day and also claimed Israeli intelligence discovered NSA hacking tools on Kaspersky’s network.
  • The Wall Street Journal also had a similar story the next day on the Kaspersky controversy, with a very important detail: the Kaspersky antivirus scans were allegedly searching for U.S. government secrets.

These reports resulted in the most serious and detailed allegations yet against Kaspersky; anonymous government officials had accused the company of, among other things, helping state-sponsored Russian hackers by tailoring Kaspersky antivirus scans to hunt for U.S. secrets.

As a result, Kaspersky responded this week with an oddly worded statement (and a curious URL) that offered some rebuttals to the articles but also raised more questions. Much of the statement focuses on the “English-speaking media,” their use of anonymous sources and the lack of specific details about the secret files that were stolen, among other elements of the news reports.

But there are some important details in the statement that both shed light on the situation and raise further questions on the Kaspersky controversy. Here are a few points that stood out:

  • Kaspersky doesn’t directly confront the allegation that it had, or has, NSA cyberweapons on its servers. But it did provide reasoning for why it would have possession of them: “It sounds like this contractor decided to work on a cyberweapon from home, and our antivirus detected it. What a surprise!”
  • Kaspersky explained how its antivirus scanning works, specifically how Kaspersky Security Network (KSN) identifies malicious and suspicious files and transfers them to Kaspersky’s cloud repository. This is also where Kaspersky throws some shade: “If you like to develop cyberweapons on your home computer, it would be quite logical to turn KSN off — otherwise your malicious software will end up in our antivirus database and all your work will have been in vain.”
  • Ironically, the above point raised more questions about the reported NSA breach. Wouldn’t an NSA contractor know that having hacking tools (a.k.a. malware) on their computer would alert Kaspersky’s antivirus software? Wouldn’t the individual know to turn off KSN or perhaps use Kaspersky’s Private Security Network? It’s entirely possible that a person foolish enough to bring highly classified data home to their personal computer could commit an equally foolish error by fully enabling Kaspersky antivirus software, but it’s difficult to believe.
  • Kaspersky provided an explanation for why it would have NSA hacking tools on its network, but it didn’t offer any insight into how hackers could gain access to KSN data and use it to search for government documents. When Kaspersky was breached in 2015 (by Russian hackers, not Israeli hackers), did they gain access to KSN? Could threat actors somehow intercept transmissions from Kaspersky antivirus software to KSN? The company isn’t saying.
  • Let’s assume Kaspersky did have NSA cyberweapons on its network when the company was breached in 2015 (which, again, the company has not confirmed or denied). This makes sense since Kaspersky was the first to report on the Equation Group in February of 2015. But this raises the possibility that Kaspersky had possession of exploits like EternalBlue, DoublePulsar and others that were exposed by the Shadow Brokers in 2016 – but for whatever reason didn’t disclose them. The Equation Group report, which cited a number of exploit codenames, shows there is some overlap between what Kaspersky discovered and what was later released by the Shadow Brokers (FoggyBottom and Grok malware modules, for example, were included in last month’s UNITEDRAKE dump). But other hacking tools and malware samples discovered by Kaspersky were not identified by their codenames and instead were given nicknames by the company. Did Kaspersky have more of the cyberweapons that were later exposed by the Shadow Brokers? The idea that two largely distinct caches of NSA exploits were exposed – with one obtained by Kaspersky, and one stolen by the Shadow Brokers – is tough to wrap my head around. But considering the repeated blunders by the intelligence community and its contractors in recent years, maybe it’s not so far-fetched.
  • The Equation Group is the elephant in the room. Kaspersky’s landmark report on the covert NSA hacking group seems relevant in light of last week’s news, but the company hasn’t referenced it in any capacity. Does Kaspersky think the Equation Group reveal played a part in the U.S. government’s decision to ban its products? Again, the company isn’t saying. Instead, Kaspersky’s statement took some shots at the news media and made vague references to “geopolitics.”
  • Finally, a big question: Why did Israeli intelligence officers hack into Kaspersky’s network in 2015? The articles in question never make that clear, and Kaspersky never directly addresses that information in its statement. Instead, the company cites its report on the breach and the Duqu 2.0 malware used in the attack. But this is an important question, and it’s one that Kaspersky has shown no interest in raising. And that is strange, because it’s an important part of this mess that has seemingly been overlooked. What was the motive for the attack? Was the attack in some way a response to Kaspersky’s exposure of the Equation Group? Was Israeli intelligence hoping to gain deeper insight into Kaspersky’s technologies to avoid detection? It’s unclear.

More bombshell stories on the Kaspersky controversy are likely to drop in the coming weeks and months. But until the U.S. government officially discloses what it has on the antivirus maker, and until Kaspersky itself comes clean on unanswered questions, we won’t have anything close to a clear picture.

September 29, 2017  8:16 PM

FBI’s Freese: It’s time to stop blaming hacking victims

Rob Wright

The infosec industry needs to express more empathy for hacking victims and engage in less public shaming.

That was the message from Don Freese, deputy assistant director of the FBI and former head of the bureau’s National Cyber Investigative Joint Task Force (NCIJTF), at the (ISC)2 Security Congress this week. In his opening keynote discussion with Brandon Dunlap, senior manager of security, risk and compliance at Amazon, Freese focused on the importance of proper risk management in building a strong enterprise security posture.

But he reserved a portion of his talk to confront an oft-criticized and occasionally ugly practice of the infosec industry: blaming and shaming hacking victims.

In discussing the lack of communication and trust between security professionals and the rest of the enterprise, including C-suite executives, Freese talked about what he called an “unhealthy sense of superiority” in the cybersecurity field, which can lead to victim blaming.

“Certainly the FBI struggled with this in our culture,” Freese said. “The FBI was rightfully criticized in the last decade for victimizing people twice in cyber [attacks]. We certainly don’t do this when there’s a violent crime, when somebody is involved in a terrorist incident, or something like that. We don’t rush in and blame the victim. [But] we do it, and we have done it, in cybersecurity.”

That practice, Freese said, not only harms relationships with people inside an organization and with third parties, but also makes the difficult process of responding to a cyberattack even harder.

“You’ve got to be willing to humble yourself a bit to really understand what’s going on with the victim,” he said.

Freese went on to say the situation at the FBI “is absolutely getting better.” But his point remained that the bureau, as well as the infosec industry in general, needs to do less victim-shaming in order to build better relationships and lines of communication.

Freese is absolutely right. The pile-on that ensues in both the media and social media following the latest breach can be alarming. This isn’t to say companies like Equifax shouldn’t be criticized for some of their actions – they absolutely should. And we shouldn’t let “breach fatigue” take hold and allow these events to be completely shrugged off. But there’s a line where the criticism becomes so wanton that it’s both self-defeating and self-destructive, and industry professionals as well as the media should at least make good faith efforts to find that line and stay on the right side of it.

And blaming hacking victims may have detrimental effects that are more tangible than we would like to believe. Freese’s words this week echoed Facebook CSO Alex Stamos’ keynote at Black Hat 2017 this summer.

“As a community we tend to punish people who implement imperfect solutions in an imperfect world,” Stamos said. “As an industry, we have a real problem with empathy. We have a real inability to put ourselves in the shoes of the people we are trying to protect. It’s really dangerous for us to do this because it makes it very easy for us to shift the responsibility for building trustworthy, dependable systems off of ourselves [and] on to other people.”

In short, security professionals may be making a hard job even harder. But the issue may go beyond shifting responsibilities and breaking down relationships. As someone who’s done his fair share of criticizing enterprises and government agencies that have suffered catastrophic breaches or committed seemingly incomprehensible errors, I’ve often wondered about the larger effects of negative media attention on a victim organization as well as the industry as a whole.

More specifically, I’ve wondered if the constant flow of embarrassing headlines and negative news regarding the latest data breaches and hacks acts as a contributing factor in one of the industry’s biggest problems: the workforce shortage. Perhaps filling jobs and promoting the infosec profession to a younger and more diverse population is harder because no security professional wants the next breach headline on their resume and no one wants to take the fall as a disgraced CISO. A college student considering a future infosec career may see the swirl of negativity and shaming around the dozens of companies that fall prey to threat actors each month and think that both outcomes are not just probable but inevitable.

Infosec careers offer steady employment and good pay. But in the event of a breach, these careers also offer massive stress, negative publicity and, in some cases, damaged reputations and job losses. I’m reminded of what Adobe CSO Brad Arkin said during a presentation on his experiences with the Adobe data breach in 2013; Arkin said he was so stressed dealing with the fallout of the breach that he ground his teeth to the point where he cracked two molars during a meeting.

Yes, the pay for infosec jobs may be very good. But for a lot of people, that may not be enough to justify the costs.

September 22, 2017  10:04 PM

DerbyCon cybersecurity conference is unique and troubling

Michael Heller
Conferences, cybersecurity

Walking up to the DerbyCon 7.0 cybersecurity conference, you immediately get a very different feel from the “major” infosec conferences. Attendees would never be caught loitering outside of the Black Hat or DEFCON venues, because no one willingly spends more time than necessary outdoors in Las Vegas. RSAC attendees might be outside, but only because it’s simply impossible to fit 40,000-plus humans into the Moscone Center without triggering claustrophobia.

DerbyCon is different though. The cybersecurity conference literally begins on the sidewalk outside the Hyatt Regency in Louisville and the best parts (as many will tell you) take place in the lobby at the so-called “LobbyCon,” where anyone with the means to get to Louisville can participate in the conference without a ticket.

Groups of hackers, pen testers, researchers, enthusiasts and other various infosec wonks can be found talking about any and all topics from cybersecurity to “Rick and Morty” and everything in between. This feel of community extends into DerbyCon proper with attendees lining the hallways, not looking desperately in need of rest like other major infosec conferences, but talking, sharing and connecting.

The feel of the DerbyCon cybersecurity conference is not unlike that of a miniature DEFCON – but with a higher emphasis on the lounge areas – and that appearance seems intentional. Just like DEFCON, the parties in the evening get as much promotion as the daytime talks (Paul Oakenfold and Busta Rhymes this year), and just like DEFCON, DerbyCon hosts “villages” for hands-on experiences with lock-picking, social engineering, IoT hacking, hacking your derby hat and more.

On top of all that, DerbyCon is a place for learning all aspects of infosec. The tracks are broken into four – Break Me, Fix Me, Teach Me and The 3-Way (for talks that don’t fit neatly into one of the other buckets).

The two keynotes had bigger messages but were rooted in storytelling; Matt Graeber, security researcher for SpecterOps, told the story of how he became a security researcher and how that path led him to find a flaw in Windows. John Strand, a SANS Institute instructor and owner of Black Hills Information Security, told a heart-wrenching tale of the lessons he learned from his mother as she was dying of cancer and how those lessons convinced him the infosec industry needs to be more of a community and not fight so much.

The good with the bad

For all of the unique aspects of DerbyCon, it was hard to ignore one way that it is very similar to other cybersecurity conferences – the vast majority of attendees are white males.

Without demographic data, it is unclear if the proportion of minorities and women is lower at DerbyCon, but the imbalance feels more prominent given that the tone of the conference is one of community. DerbyCon feels like a space where everyone is welcome, so noticing that there isn’t a more diverse base of attendees serves to highlight an issue that is often talked about but may not get the real engagement needed to create meaningful change.

While it could be argued that the size and location of DerbyCon might contribute to the low proportion of women and minorities in attendance, that can’t be used as an excuse, and the issue is not unique to DerbyCon. The infosec world overall needs to make more of an effort to promote diversity, and the DerbyCon cybersecurity conference serves as one more example of that.

September 15, 2017  7:15 PM

Fearmongering around Apple Face ID security announcement

Michael Heller
Apple, biometric, Facial recognition, iPhone

As fears grow over government surveillance, the phrase “facial recognition” often triggers a bit of panic in the public, and some commentators are exploiting that fear to overstate any risks associated with Apple’s new Face ID security system.

One of the more common misunderstandings is around who has access to the Face ID data. There are those who claim Apple is building a giant database of facial scans and that the government could compel Apple to share that data through court orders.

However, Face ID is protected by the same security measures that frustrated the FBI when it attempted to bypass the Touch ID fingerprint scanner on iPhones. The facial scan data is stored in the Secure Enclave on the user’s iPhone, according to Apple, and is never transmitted to the cloud — the same system that has protected users’ fingerprint data since 2013.

Over the past four years, Apple’s Touch ID has been deployed on hundreds of millions of iPhones and iPads, but not one report has ever surfaced of that fingerprint data being compromised, gathered by Apple, or shared with law enforcement. Yet there are those who would claim this system will somehow suddenly fail because it is holding Face ID security data.

The fact is that the data stored in the iOS Secure Enclave is only accessible on the specific device. Apple has built in layers of security so that even it cannot access that data — let alone share it with law enforcement — without rearchitecting iOS at a fundamental level, which often means the burden of doing so is too high to be compelled by court order.
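
The design principle at work is easy to model: the enrolled template stays sealed inside the enclave, and only a match-or-not verdict ever crosses the boundary. Here is a toy sketch of that principle (not Apple’s actual Secure Enclave; the vectors and threshold are made up):

from math import dist  # Euclidean distance, Python 3.8+

MATCH_THRESHOLD = 0.05  # illustrative tolerance

class EnclaveModel:
    """Toy model of on-device biometric matching: the enrolled template
    is sealed inside, and only a yes/no verdict ever comes out."""

    def __init__(self, enrolled_template: tuple):
        self.__template = enrolled_template  # name-mangled, no accessor on purpose

    def verify(self, probe: tuple) -> bool:
        # The comparison happens "inside"; callers never see the template.
        return dist(self.__template, probe) < MATCH_THRESHOLD

enclave = EnclaveModel((0.12, 0.48, 0.91))
print(enclave.verify((0.11, 0.50, 0.90)))  # True: close enough to the template
print(enclave.verify((0.80, 0.10, 0.25)))  # False: someone else's face

A court order can compel a company to hand over data it holds; it cannot conjure an accessor for data the architecture never exposes.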

Face ID security vs facial recognition

This is not to say that there is no reason to be wary of facial recognition systems. Having been a cybersecurity reporter for close to three years now, I’ve learned that there will always be an organization that abuses a system like this or fails to protect important data (**cough**Equifax**cough**).

Facial recognition systems should be questioned and the measures to protect user privacy should be scrutinized. But, just because a technology is “creepy” or has the potential to be abused doesn’t mean all logic should go out the window.

Facebook and Google have been doing facial recognition for years to help users better tag faces in photos. Those are legitimate databases of faces that should be far more worrying than Face ID security, because the companies holding those databases could be compelled to share them via court order.

Of course, one could also argue that many of the photos used by Facebook and Google to train those recognition systems are public, and it is known that the FBI has its own facial recognition database of more than 411 million photos.

To create panic over Apple’s Face ID security when the same data is already in the hands of law enforcement is little more than clickbait and not worthy of the outlets spreading the fear.

August 23, 2017  6:47 PM

Project Treble is another attempt at faster Android updates

Michael Heller
Android, Google

Google has historically had a problem with getting mobile device manufacturers to push out Android updates, which has left hundreds of millions in the Android ecosystem at risk. Google hopes that will change with the introduction of Project Treble in Android 8.0 Oreo, but it’s unclear if this latest attempt to speed up Android updates will be successful.

Project Treble is an ambitious and impressive piece of engineering that will essentially make the Android system separate from any manufacturer (OEM) modifications. Theoretically, this will allow Android OS updates to be pushed out faster because they won’t be delayed by any custom software being added. In a perfect world, this could even make OEMs happier because they will also be able to push updates to custom software without going down the path of putting those custom apps in the Play Store.
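
Whether a particular device actually ships with this separation is exposed as a system property, ro.treble.enabled. Assuming adb is installed and a device is connected with USB debugging enabled, a few lines of Python can check:

import subprocess

def treble_enabled() -> bool:
    """Ask an attached Android device whether it uses the Treble split."""
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.treble.enabled"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == "true"

if __name__ == "__main__":
    print("Project Treble device:", treble_enabled())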


Wrangling the Android ecosystem is by no means an easy feat. According to Google’s latest statistics from May 2017, there were more than 2 billion active Android devices worldwide. Attempting to get consistent, quick updates to all of those devices with dozens of OEMs needing modifications for hundreds of hardware variations, as well as verification from carriers around the world, is arguably one of the most difficult system update problems in history.

Project Treble isn’t Google’s first attempt at fixing Android updates: the company has tried implementing policies around updates, nudging OEMs towards lighter customization and pushing updates through apps in the Play Store, but history has proven these changes don’t make much difference. Project Treble is a major change that has the potential to make a significant impact on system updates, but there will still be issues.

First, in 2011, Google put in place an informal policy asking OEMs to fully support devices for 18 months after being put on the market, including updating to the latest Android OS version released within that timeframe. Unfortunately, there was no way to enforce this policy and no timetable mandating when updates needed to be pushed. As a result, Android 7.x Nougat is only on 13.5% of the more than 2 billion active devices worldwide (per the August numbers from Google) and, although Android 6.0 Marshmallow is found on the plurality of devices, it only reaches 32.3% of the ecosystem.

After that, Google added a rule that a device had to be released with the latest version of Android in order to be certified to receive the Play Store, Google Apps and Play services. This helped make sure new devices didn’t start out behind, but didn’t address the update problem.

Google even tried to sidestep the issue altogether by putting a number of security features into Google Play services, which is updated directly by Google separately from the Android OS and therefore can be pushed to nearly all Android devices without OEM or carrier interaction.

Remaining speedbumps for Android updates

Project Treble has the potential to make an impact on how quickly devices receive Android updates, but it doesn’t necessarily address delays from carrier certification of updates. Unlike iOS updates, which come straight from Apple, Android updates need OEM tinkering due to the variety of hardware and therefore need to also be tested by carriers. Perhaps carrier testing will be faster since the Android system updates should be relatively similar from device to device, but there’s no way to know that yet.


Additionally, Project Treble will add another layer of complexity to Android updates by creating three separate update packages for OEMs to worry about — the Android OS, the OEM custom software layer and the monthly security patches.

Google has been rumored to be considering a way to publicly shame OEMs who fall behind on the security patch releases, indicating that even those aren’t being pushed out in a timely manner. In the meantime, an unofficial tracker for security updates gives a rough idea of the situation.

OEM software teams are likely used to working on Android OS updates in conjunction with the OEM customizations, so a workflow change will be needed. Once that’s done, there’s no guarantee the Android OS update will get any priority over custom software or even that it will get the resources to get device-specific adaptations quickly.

Ultimately, Project Treble provides OEMs a much easier path to pushing faster updates for Android, but there are still enough speedbumps to cause delays and OEMs still haven’t proven able to even push small security patch releases. Android updates will never be as fast and seamless as iOS, but given the challenges Google faces, Project Treble may be the best solution yet. If OEMs get on board.

August 8, 2017  6:38 PM

The Symantec-Google feud can’t be swept under the rug

Rob Wright

The feud between Symantec and the web browser community, most notably Google, appears to be over now that DigiCert has agreed to acquire Symantec Website Security for close to $1 billion.

But according to Symantec CEO Greg Clark, there never was a feud to begin with.

Clark presented a charitable view of the Symantec-Google dispute, which stemmed from Google’s findings of improper practices within Symantec’s certificate authority business, in a recent interview. “Some of the media reports were of a hostile situation,” Clark told CRN. “I think we had a good collaboration with Google during that process.”

It’s unclear what Clark considers “hostile” and which media outlets he’s referring to (Editor’s note: SearchSecurity has covered the proceedings extensively), but most of the back and forth between Symantec and the browser community was made public through forum messages, blog posts and corporate statements.

And those public statements show quite clearly that there was indeed a Symantec-Google feud that began on March 23rd when Google developer Ryan Sleevi announced the company’s plan on the Chromium forum to systematically remove trust from Symantec web certificates. The post, titled “Intent to Deprecate and Remove: Trust in existing Symantec-issued Certificates,” pulled few punches in describing a “series of failures” within Symantec Website Security, which included certificate misissuance, failures to remediate issues raised in audits, and other practices that ran counter to the Certificate Authority/Browser Forum’s Baseline Requirements.

What followed was a lengthy and at times tense tug-of-war between Google (and later other web browser companies, including Mozilla) and Symantec over issues with the antivirus vendor’s certificate authority (CA) practices and how to resolve them.

But one would need to go no further than Symantec’s first statement on the matter to see just how hostile the situation was right from the start. On March 24th, the day after Google’s plan to deprecate trust was made public, the antivirus vendor responded with a defiant blog post titled “Symantec Backs Its CA.” In it, the company clearly suggests Google’s actions were not in the interests of security but instead designed to undermine Symantec’s certificate business.

“We strongly object to the action Google has taken to target Symantec SSL/TLS certificates in the Chrome browser. This action was unexpected, and we believe the blog post was irresponsible,” the statement read. “We hope it was not calculated to create uncertainty and doubt within the Internet community about our SSL/TLS certificates.”

Symantec didn’t stop there, either. Later in the blog post, the company accused Google of exaggerating the scope of Symantec’s past certificate issues and said its statements on the matter were “misleading.”

Symantec also wrote that Google “singled out” its certificate authority business and pledged to minimize the “disruption” caused by Google’s announcement. And throughout the post, Symantec repeatedly claimed that everything was fine, outside of previously disclosed issues, and that there was nothing to see here.

Clark believes the Symantec-Google dispute wasn’t hostile, but the antivirus vendor’s own words contradict that. Right from the start, Symantec accused Google of unfairly targeting it; acting irresponsibly and causing needless disruption for Symantec; and acting upon ulterior and malicious motives rather than genuine infosec concerns.

It should be noted that none of those claims were supported by what followed. Mozilla joined Google and found new issues with Symantec Website Security certificates. And instead of denying Google and Mozilla’s findings and refusing to adopt their remediation plan – which required Symantec to hand over its CA operations to a third party – Symantec agreed to make sweeping changes to its certificate business in order to regain the browser community’s trust.

Clark said in the interview that the Symantec-Google dispute “came to a good outcome.”

That’s true; DigiCert will pay $950 million for Symantec’s certificate business, and Symantec will retain a 30% stake in the business while bearing none of the responsibility for the operation. But if Google hadn’t announced its plan to deprecate trust and put this process in motion, Symantec wouldn’t have lifted a finger to address the obvious and lengthy list of issues with its certificate authority operations. Symantec Website Security would have continued along its current path of lax reviews, questionable audits and other certificate woes.

Clark also said the situation is largely resolved, and there are no hard feelings between the two companies. “I think Symantec and Google have a better relationship because of it,” he said.

It may be true that Symantec and Google have effectively buried the hatchet, but to suggest there never was a hatchet to begin with is absurd.

June 6, 2017  4:58 PM

Symantec certificate authority aims for more delays on browser trust

Peter Loshin

Is the Symantec certificate authority operation too big to fail?

That seems to be the message the security giant is sending in its latest response to a proposal from the browser community to turn over Symantec certificate authority operations to one or more third parties starting August 8. Doing so has become a requirement for Symantec to be retained in Google Chrome, Mozilla and Opera browser trusted root stores and to regain trust in its PKI operations.

Google, Mozilla and Opera seem to be united in agreement with the proposal from the Chrome developer team, under which Symantec would cooperate with the third-party CAs while at the same time re-certifying its extended validation certificates and revoking trust in extended validation certificates issued after Jan. 1, 2015.

“[W]e understand that any failure of our SSL/TLS certificates to be recognized by popular browsers could disrupt the business of our customers,” Symantec wrote in its blog post responding to Google’s proposal. “In that light, we appreciate that Google’s current proposal does not immediately pose compatibility or interoperability challenges for the vast majority of users.”

At first glance, Symantec appeared to praise the latest proposal from Chrome, noting it allows their customers, “for the most part, to have an uninterrupted and unencumbered experience.” However, the CA giant raised issues on almost all of the actions called for in the proposal, stating “there are some aspects of the current proposal that we believe need to be changed before we can reasonably and responsibly implement a plan that involves entrusting parts of our CA operations to a third party.”

Google’s proposal requires that new Symantec-chaining certificates be issued by “independently operated third-parties” starting August 8, 2017; Google’s timetable requires the transition be complete by Feb. 1, 2018, with all Symantec certificates issued and validated by those third-parties — although Symantec is making its case that the timetable is too short.
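
For site operators wondering where their own certificates fall, the relevant facts are the issuer and the notBefore (issuance) date. Below is a rough sketch using the Python cryptography package and the Aug. 8, 2017 cutoff from Google’s proposal; matching on issuer names is a simplification, since browsers pin the specific affected roots, and Symantec’s hierarchy also spans the GeoTrust, Thawte, RapidSSL and VeriSign brands:

from datetime import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID

THIRD_PARTY_CUTOFF = datetime(2017, 8, 8)  # new issuance moves to sub-CAs
SYMANTEC_BRANDS = ("symantec", "geotrust", "thawte", "rapidssl", "verisign")

def flagged_by_proposal(pem_bytes: bytes) -> bool:
    """Crude check: does this look like a Symantec-brand certificate
    issued on or after the proposed third-party cutoff?"""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    orgs = cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
    issuer_org = orgs[0].value.lower() if orgs else ""
    is_symantec = any(brand in issuer_org for brand in SYMANTEC_BRANDS)
    return is_symantec and cert.not_valid_before >= THIRD_PARTY_CUTOFF

with open("example.pem", "rb") as f:  # path is illustrative
    print(flagged_by_proposal(f.read()))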

Symantec’s strategy seems to be to continue to seek further reductions in the limits placed on existing certificates, while dragging out the process — a tactic that reduces the impact of removing untrusted certificates as the questionable certificates continue aging and expiring on their own, without any further action on the part of the Symantec certificate authority operation.

The gist of the argument is that as “the largest issuer of EV and OV certificates in the industry,” the Symantec certificate authority is so much larger than its competitors that “no other single CA operates at the scale nor offers the broad set of capabilities that Symantec offers today.” In fact, over the course of several months, Symantec has frequently cited the size of its CA business and customer base in pushing back against Google’s and Mozilla’s proposals.

In other words, the Symantec certificate authority is so big that you can forget about having a CA partner ready to issue Symantec certificates by August 8. “Suitable CA partners” will need to be identified, vetted and selected; requests for proposals must be solicited and reviewed; and even then, Symantec will still need “time to put in place the governance, business and legal structures necessary to ensure the appropriate accountability and oversight for the sub-CA proposal to be successful.”

And even then, Symantec said, after it partners with one or more sub-CAs, all of the involved parties will need to do even more work to engineer the new operating model — and once that is done, there’s the need for extensive testing.

“Based on our initial research, we believe the timing laid out above is not achievable given the magnitude of the transition that would need to occur,” Symantec wrote.

What kind of timetable will work for Symantec?

Symantec can’t give any firm estimates for how long it will take to comply with Google’s proposal until Symantec’s candidate third-party partners respond to its requests for proposals. Those are due at the end of June, Symantec said.

After that, there’s the question of “ramp-up time,” the time Symantec’s third-party providers need for building infrastructure and authentication capabilities, which “may be greater than four months.”

“Symantec serves certain international markets that require language expertise in order to perform validation tasks,” the company wrote. “Any acceptable partner would also need to service these markets.” Signing up multiple CAs capable of serving these different markets will “require multiple contract negotiations and multiple technical integrations.”

Alternatively, Symantec could “[p]artner with a single sub-CA, which would require such CA to build up the compliant and reliable capacity necessary to take over our CA operations in terms of staff and infrastructure.”

Symantec did not indicate which alternative it preferred.

Symantec stated that “designing, developing, and testing new auth/verif/issuance logic, in addition to creating an orchestration layer to interface with multiple sub-CAs will take an estimated 14 calendar weeks. This does not include the engineering efforts required by the sub-CAs, systems integration and testing with each sub-CA, or testing end-to-end with API integrated customers and partners, although some of this effort can occur in parallel.”

It’s not clear whether this task is part of the ramp-up time Symantec referred to, but there’s also the question of revalidating “over 200,000 organizations in full, in order to maintain full certificate validity for OV and EV certificates.” Symantec needed more than four months to fully revalidate the active certificates issued through CrossCert, one of its former SSL/TLS registration authority (RA) partners — about 30,000 certificates covering far fewer organizations.
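
Back-of-envelope arithmetic with the post’s own figures suggests why Symantec calls the timeline unachievable. Certificates and organizations are not the same unit, so this proportional estimate is crude, but it frames the scale:

# Figures from the CrossCert revalidation cited above.
crosscert_certs = 30_000
crosscert_months = 4           # "more than four months"

# The task under Google's proposal.
orgs_to_revalidate = 200_000

rate = crosscert_certs / crosscert_months      # ~7,500 validations per month
months_needed = orgs_to_revalidate / rate
print(f"~{months_needed:.0f} months at the CrossCert pace")  # ~27 months

Even if an organization can be revalidated faster than a batch of certificates, the gap between that pace and a February 2018 deadline is, in essence, Symantec’s entire argument.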

Could Symantec be purposely dragging its heels to mitigate the impact on itself and its customers through delaying the deadline for distrusting Symantec certificates until the questionable ones have expired? Or could Symantec be attempting to whittle down the pain points in Google’s plan by continually pushing back on them while at the same time asking for deadline extensions?

It’s unclear what Symantec’s strategy is, and the company is only addressing the ongoing controversy through official company statements (Symantec has not responded to requests for further comments or interview). But the clock is ticking, and the longer action is delayed, the harder it will likely be to fix the situation.

May 3, 2017  8:14 PM

Verizon DBIR 2017 loses international contributors

Michael Heller

Looking at the overall numbers for the contributors to the Verizon Data Breach Investigations Report (DBIR) from the past five years, it would seem like the number of partners is hitting a plateau, but looking at the specifics raises questions about international data sharing.

The number of partners contributing data to the Verizon DBIR exploded from 19 in 2013 to 50 in 2014, peaking at 70 in 2015, before slight dips in 2016 (67) and 2017 (65). But the total numbers gloss over the churn of contributors added and lost year-to-year.

For example, the slight dip in Verizon DBIR 2017 partners was due to the loss of 19 contributors and the addition of 17 new ones, but these are the biggest names lost:

  • Australian Federal Police
  • CERT Polska/NASK
  • European Crime Center
  • Imperva
  • International Computer Security Association Labs
  • Policia Metropolitana Ciudad de Buenos Aires, Argentina
  • Tenable Network Security
  • Splunk
  • Verizon Cyber Intelligence Center

And compare the biggest names added:

  • Rapid7
  • Veracode
  • VERIS Community Database
  • Verizon Digital Media Services
  • Verizon Fraud Team
  • Verizon Network Operations and Engineering
  • Verizon Enterprise Services
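
Set arithmetic over the contributor rosters makes this kind of churn explicit. A minimal sketch, using truncated rosters built from a few of the names above just to show the mechanics:

partners_2016 = {"Imperva", "Splunk", "Tenable Network Security",
                 "Australian Federal Police", "CERT Polska/NASK"}
partners_2017 = {"Rapid7", "Veracode", "VERIS Community Database",
                 "Verizon Fraud Team"}

lost = sorted(partners_2016 - partners_2017)   # contributors who dropped out
added = sorted(partners_2017 - partners_2016)  # first-time contributors
net = len(partners_2017) - len(partners_2016)

print(f"lost {len(lost)}, added {len(added)}, net {net:+d}")
# On the full rosters this yields: lost 19, added 17, net -2 -- turnover
# the headline totals (67 down to 65) almost completely hide.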

A Verizon spokesperson said the difference between 2016 and 2017 was due to “a combination of factors, including sample sizes may have been too small, an organization wasn’t able to commit to this year’s report due to other priorities or the deadline was missed for content submission.”

However, just looking at the Verizon DBIR partners involved, there was a notable drop in international contributors while Verizon listed more of its own projects as well as the VERIS Community Database, which has been integral to the DBIR since the database was launched in 2013.

It is unclear why these organizations have dropped out, and none responded to questions on the topic. Maybe they left due to changes in international data sharing laws, including the upcoming GDPR. It is also possible there were other mitigating factors such as the climate surrounding data privacy or political uncertainty in the U.S. and abroad. Or, Verizon could be correct and this is nothing more than an odd coincidence.

Effects on analyzing DBIR data

Over the years, Verizon has warned that the results of the DBIR can be affected by the partners involved, and one expert noted the Verizon DBIR 2017 had a dearth of information related to industrial control systems. But, it appears there may also be a loss of international data to take into account when analyzing the results of the report.

Each year, Verizon does add new data to the DBIR statistics for previous years based on newly contributed information. This means the data regarding 2014 or 2015 incidents and data breaches would be more accurate in the 2017 Verizon DBIR than in the reports for those respective years. So, the data of past reports may be less reliable than the latest info in the newer reports.

That’s not a great thing for trying to tease out trends or pinpoint the biggest new threats, but Verizon has also admitted to shying away from offering suggestions on actions enterprises should take based on the DBIR data.

Maybe IT pros should take more care to consider the quality and volume of the sources when analyzing the Verizon DBIR. There is good data, like confirmation of trends we already saw or felt, like the rise of ransomware and cyberespionage or failures of basic security, and new trends, like pretexting. But, without more transparency regarding what organizations are contributing and why partners leave, other analysis could be challenging.
