Security Bytes


August 8, 2017  6:38 PM

The Symantec-Google feud can’t be swept under the rug

Rob Wright

The feud between Symantec and the web browser community, most notably Google, appears to be over now that DigiCert has agreed to acquire Symantec Website Security for close to $1 billion.

But according to Symantec CEO Greg Clark, there never was a feud to begin with.

In a recent interview, Clark presented a charitable view of the Symantec-Google dispute, which stemmed from Google’s findings of improper practices within Symantec’s certificate authority business. “Some of the media reports were of a hostile situation,” Clark told CRN. “I think we had a good collaboration with Google during that process.”

It’s unclear what Clark considers “hostile” and which media outlets he’s referring to (Editor’s note: SearchSecurity has covered the proceedings extensively), but most of the back and forth between Symantec and the browser community was made public through forum messages, blog posts and corporate statements.

And those public statements show quite clearly that there was indeed a Symantec-Google feud that began on March 23rd when Google developer Ryan Sleevi announced the company’s plan on the Chromium forum to systematically remove trust from Symantec web certificates. The post, titled “Intent to Deprecate and Remove: Trust in existing Symantec-issued Certificates,” pulled few punches in describing a “series of failures” within Symantec Website Security, which included certificate misissuance, failures to remediate issues raised in audits, and other practices that ran counter to the Certificate Authority/Browser Forum’s Baseline Requirements.

What followed was a lengthy and at times tense tug-of-war between Google (and later other web browser companies, including Mozilla) and Symantec over issues with the antivirus vendor’s certificate authority (CA) practices and how to resolve them.

But one would need to go no further than Symantec’s first statement on the matter to see just how hostile the situation was right from the start. On March 24th, the day after Google’s plan to deprecate trust was made public, the antivirus vendor responded with a defiant blog post titled “Symantec Backs Its CA.” In it, the company clearly suggests Google’s actions were not in the interests of security but instead designed to undermine Symantec’s certificate business.

“We strongly object to the action Google has taken to target Symantec SSL/TLS certificates in the Chrome browser. This action was unexpected, and we believe the blog post was irresponsible,” the statement read. “We hope it was not calculated to create uncertainty and doubt within the Internet community about our SSL/TLS certificates.”

Symantec didn’t stop there, either. Later in the blog post, the company accused Google of exaggerating the scope of Symantec’s past certificate issues and said its statements on the matter were “misleading.”

Symantec also wrote that Google “singled out” its certificate authority business and pledged to minimize the “disruption” caused by Google’s announcement. And throughout the post, Symantec repeatedly claimed that everything was fine, outside of previously disclosed issues, and that there was nothing to see here.

Clark believes the Symantec-Google dispute wasn’t hostile, but the antivirus vendor’s own words contradict that. Right from the start, Symantec accused Google of unfairly targeting it; acting irresponsibly and causing needless disruption for Symantec; and acting upon ulterior and malicious motives rather than genuine infosec concerns.

It should be noted that none of those claims were supported by what followed. Mozilla joined Google and found new issues with Symantec Website Security certificates. And instead of denying Google and Mozilla’s findings and refusing to adopt their remediation plan – which required Symantec to hand over its CA operations to a third party – Symantec agreed to make sweeping changes to its certificate business in order to regain trust.

Clark said in the interview that the Symantec-Google dispute “came to a good outcome.”

That’s true; DigiCert will pay $950 million for Symantec’s certificate business, and Symantec will retain a 30% stake in the business while bearing none of the responsibility for its operation. But if Google hadn’t announced its plan to deprecate trust and put this process in motion, Symantec wouldn’t have lifted a finger to address the obvious and lengthy list of issues with its certificate authority operations. Symantec Website Security would have continued along its current path of lax reviews, questionable audits and other certificate woes.

Clark also said the situation is largely resolved, and there are no hard feelings between the two companies. “I think Symantec and Google have a better relationship because of it,” he said.

It may be true that Symantec and Google have effectively buried the hatchet, but to suggest there never was a hatchet to begin with is absurd.

June 6, 2017  4:58 PM

Symantec certificate authority aims for more delays on browser trust

Peter Loshin
Symantec

Is the Symantec certificate authority operation too big to fail?

That seems to be the message the security giant is sending in its latest response to a proposal from the browser community to turn over Symantec certificate authority operations to one or more third parties starting August 8. Doing so has become a requirement for Symantec to remain in the Google Chrome, Mozilla and Opera trusted root stores and to regain trust in its PKI operations.

Google, Mozilla and Opera seem to be united in agreement with the proposal from the Chrome developer team, under which Symantec would cooperate with the third-party CAs while at the same time re-certifying its extended validation certificates and revoking trust in extended validation certificates issued after Jan. 1, 2015.

“[W]e understand that any failure of our SSL/TLS certificates to be recognized by popular browsers could disrupt the business of our customers,” Symantec wrote in its blog post responding to Google’s proposal. “In that light, we appreciate that Google’s current proposal does not immediately pose compatibility or interoperability challenges for the vast majority of users.”

At first glance, Symantec appeared to praise the latest proposal from Chrome, noting it allows the company’s customers, “for the most part, to have an uninterrupted and unencumbered experience.” However, the CA giant raised issues with almost all of the actions called for in the proposal, stating “there are some aspects of the current proposal that we believe need to be changed before we can reasonably and responsibly implement a plan that involves entrusting parts of our CA operations to a third party.”

Google’s proposal requires that new Symantec-chaining certificates be issued by “independently operated third-parties” starting August 8, 2017; Google’s timetable requires the transition to be complete by Feb. 1, 2018, with all Symantec certificates issued and validated by those third parties — although Symantec is making its case that the timetable is too short.

Symantec’s strategy seems to be to continue to seek further reductions in the limits placed on existing certificates, while dragging out the process — a tactic that reduces the impact of removing untrusted certificates as the questionable certificates continue aging and expiring on their own, without any further action on the part of the Symantec certificate authority operation.
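
Every certificate carries a hard notAfter date, and that expiration date is the clock a wait-it-out strategy runs on. For the curious, a few lines of standard-library Python are enough to read it for any live site (a minimal sketch; www.example.com is just a placeholder host):

    # Minimal sketch: read a live site's certificate expiry date, the clock
    # a wait-for-expiry strategy runs on. www.example.com is a placeholder.
    import socket
    import ssl
    from datetime import datetime, timezone

    def cert_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        # ssl.cert_time_to_seconds parses the 'Jun  1 12:00:00 2018 GMT' format
        return datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                      timezone.utc)

    print(cert_expiry("www.example.com"))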

The gist of the argument is that as “the largest issuer of EV and OV certificates in the industry,” the Symantec certificate authority is so much larger than its competitors that “no other single CA operates at the scale nor offers the broad set of capabilities that Symantec offers today.” In fact, over the course of several months, Symantec has frequently cited the size of its CA business and customer base in pushing back against Google’s and Mozilla’s proposals.

In other words, the Symantec certificate authority is so big that you can forget about having a CA partner ready to issue Symantec certificates by August 8. “Suitable CA partners” will need to be identified, vetted and selected; requests for proposals must be solicited and reviewed; and even then, Symantec will still need “time to put in place the governance, business and legal structures necessary to ensure the appropriate accountability and oversight for the sub-CA proposal to be successful.”

And even then, Symantec said, after it partners with one or more sub-CAs, all of the involved parties will need to do even more work to engineer the new operating model — and once that is done, there’s the need for extensive testing.

“Based on our initial research, we believe the timing laid out above is not achievable given the magnitude of the transition that would need to occur,” Symantec wrote.

What kind of timetable will work for Symantec?

Symantec can’t give any firm estimates for how long it will take to comply with Google’s proposal until Symantec’s candidate third-party partners respond to its requests for proposals. Those are due at the end of June, Symantec said.

After that, there’s the question of “ramp-up time,” the time Symantec’s third-party providers need for building infrastructure and authentication capabilities, which “may be greater than four months.”

“Symantec serves certain international markets that require language expertise in order to perform validation tasks,” the company wrote. “Any acceptable partner would also need to service these markets.” Signing up multiple CAs capable of serving these different markets will “require multiple contract negotiations and multiple technical integrations.”

Alternatively, Symantec could “[p]artner with a single sub-CA, which would require such CA to build up the compliant and reliable capacity necessary to take over our CA operations in terms of staff and infrastructure.”

Symantec did not indicate which alternative it preferred.

Symantec stated that “designing, developing, and testing new auth/verif/issuance logic, in addition to creating an orchestration layer to interface with multiple sub-CAs will take an estimated 14 calendar weeks. This does not include the engineering efforts required by the sub-CAs, systems integration and testing with each sub-CA, or testing end-to-end with API integrated customers and partners, although some of this effort can occur in parallel.”

It’s not clear whether this task is part of the ramp-up time Symantec referred to, but there’s also the question of revalidating “over 200,000 organizations in full, in order to maintain full certificate validity for OV and EV certificates.” Symantec needed more than four months to fully revalidate the roughly 30,000 active certificates issued by CrossCert, one of its former SSL/TLS registration authority (RA) partners, and that covered far fewer organizations.

Could Symantec be purposely dragging its heels to mitigate the impact on itself and its customers by delaying the deadline for distrusting Symantec certificates until the questionable ones have expired? Or could Symantec be attempting to whittle down the pain points in Google’s plan by continually pushing back on them while at the same time asking for deadline extensions?

It’s unclear what Symantec’s strategy is, and the company is only addressing the ongoing controversy through official company statements (Symantec has not responded to requests for further comment or an interview). But the clock is ticking, and the longer action is delayed, the harder it will likely be to fix the situation.


May 3, 2017  8:14 PM

Verizon DBIR 2017 loses international contributors

Michael Heller
Security

Looking at the overall numbers for the contributors to the Verizon Data Breach Investigations Report (DBIR) from the past five years, it would seem the number of partners is hitting a plateau, but looking at the specifics raises questions about international data sharing.

The number of partners contributing data to the Verizon DBIR exploded from 19 in 2013 to 50 in 2014, peaking at 70 in 2015, before slight dips in 2016 (67) and 2017 (65). But the total numbers gloss over the churn of contributors added and lost year-to-year.

For example, the slight dip in Verizon DBIR 2017 partners was due to the loss of 19 contributors and the addition of 17 new ones, but these are the biggest names lost:

  • Australian Federal Police
  • CERT-EU
  • CERT Polska/NASK
  • European Cybercrime Centre (EC3)
  • Imperva
  • International Computer Security Association Labs
  • Policia Metropolitana Ciudad de Buenos Aires, Argentina
  • Tenable Network Security
  • Splunk
  • UK-CERT
  • Verizon Cyber Intelligence Center

Compare those with the biggest names added:

  • Rapid7
  • Veracode
  • VERIS Community Database
  • Verizon Digital Media Services
  • Verizon Fraud Team
  • Verizon Network Operations and Engineering
  • Verizon Enterprise Services

A Verizon spokesperson said the difference between 2016 and 2017 was due to “a combination of factors, including sample sizes may have been too small, an organization wasn’t able to commit to this year’s report due to other priorities or the deadline was missed for content submission.”

However, just looking at the Verizon DBIR partners involved, there was a notable drop in international contributors, while Verizon listed more of its own projects as well as the VERIS Community Database, which has been integral to the DBIR since the database was launched in 2013.

It is unclear why these organizations have dropped out, and none responded to questions on the topic. Maybe they left due to changes in international data sharing laws, including the upcoming GDPR. It is also possible there were other mitigating factors, such as the climate surrounding data privacy or political uncertainty in the U.S. and abroad. Or, Verizon could be correct and this is nothing more than an odd coincidence.

Effects on analyzing DBIR data

Over the years, Verizon has warned that the results of the DBIR can be affected by the partners involved, and one expert noted the Verizon DBIR 2017 had a dearth of information related to industrial control systems. But it appears there may also be a loss of international data to take into account when analyzing the results of the report.

Each year, Verizon does add new data to the DBIR statistics for previous years based on newly contributed information. This means the data regarding 2014 or 2015 incidents and data breaches would be more accurate in the 2017 Verizon DBIR than in the reports for those respective years. So, the data of past reports may be less reliable than the latest info in the newer reports.

That’s not a great thing for trying to tease out trends or pinpoint the biggest new threats, but Verizon has also admitted to shying away from offering suggestions on actions enterprises should take based on the DBIR data.

Maybe IT pros should take more care to consider the quality and volume of the sources when analyzing the Verizon DBIR. There is good data: confirmation of trends we already saw or felt, such as the rise of ransomware and cyberespionage and failures of basic security, as well as new trends, like pretexting. But without more transparency regarding which organizations are contributing and why partners leave, other analysis could be challenging.


February 24, 2017  7:39 PM

RSA Conference 2017: Are software regulations coming for developers?

Rob Wright
Security

Security expert Bruce Schneier dragged an uncomfortable but very real possibility into public view during RSA Conference 2017, and it should have developers of all types pondering a very grim future full of software regulations.

Schneier discussed his case for internet of things (IoT) regulation in not one but two sessions at RSA Conference 2017 last week. The growing potential for IoT regulations is hardly a surprise given the run of recent high-profile DDoS attacks using insecure IoT devices. And while Schneier’s support for IoT regulations may have surprised some, he made a well-reasoned case that government action is coming whether the technology industry approves or not, and that IT professionals would be well-served by taking an active role in the process to ensure the government enacts the least-bad option.

Within one of those RSA Conference sessions, Schneier urged the audience to think more broadly about the responsibilities of developers in a “connect it all” world.

“We need to start talking about our future. We rarely if ever have conversations about our technological future and what we’d like to have. Instead of designing our future, we let it come as it comes without forethought or architecting or planning. When we try to design, we get surprised by emergent properties,” Schneier told the audience. “I think this also has to change. I think we should start making moral and ethical and political decisions about how technology should work.”

Schneier then made another point that went far beyond simple IoT regulations and had chilling implications for the technology industry.

“Until now, we have largely given programmers a special right to design [and] to code the world as they saw fit. And giving them that right was fine, as long as it didn’t matter. Fundamentally, it doesn’t matter what Facebook’s design is,” he said. “But when it comes to ‘things,’ it does matter, so that special right probably has to end.”

First, Schneier is obviously right. Facebook’s software design affects a very large but finite number of people, and they can choose to stay on or leave the platform; whatever programming sins it may commit won’t extend to users at Microsoft or Google or other companies. A connected physical device, however, can affect people well beyond its own user base. To this point, we’ve gotten off easy by only having to contend with potent DDoS attacks and data breaches. But the possibility of physical harm from hacked IoT devices is certainly in play.

That doesn’t make the possibility of general software regulations any easier to swallow, however. The idea that programmers could lose the right to code what they want and how they want seems as incomprehensible as the government suddenly regulating what I write as a journalist. While the latter scenario is a clear violation of the Constitution, the former probably isn’t (more on that in a moment).

I don’t know if Schneier truly believes that software developers should have their rights curbed by the government, or if his aim was to spark concern – and potential action – from the audience. Maybe it was both.

But is Schneier’s idea – that unfettered freedom for programming should be replaced with software regulations – really that far-fetched? Consider the aggressive measures proposed in Congress regarding the “going dark” issue and encryption technology. And keep in mind it was the judiciary, not lawmakers, that ordered Apple to design a tool to hack its own security protections for iOS. (At one point before achieving a legal victory in this case, Apple was reportedly preparing an argument that its code was protected as speech under the First Amendment, which would have been fascinating to see.)

I mostly agree with Schneier’s argument that government regulation is coming whether we like it or not. There seems to be little incentive — and even less desire — for the industry to solve these IoT security problems. Perhaps that will change as IoT-related attacks become more common and more powerful this year. But perhaps not; the fact that manufacturers have allowed outdated connected medical devices to linger with known vulnerabilities gives me little confidence.

What would these regulations look like? During the question-and-answer session, Schneier was asked about whether certifications, either for individuals or for technologies, could address some of the concerns about connected devices leading to physical harm. Schneier said government-regulated certifications or licenses for software developers were a possibility.

“You had to be a licensed architect to design this building,” Schneier said, referring to the hotel at which the session was hosted. “You couldn’t be just anybody. So we could have that sort of certification – a licensed software engineer.”

I’m neither a structural engineer nor a programmer, but this seems like a bad idea. There aren’t that many ways to design a structurally sound building relative to the vast number of ways a programmer could design a perfectly sound application. The complexity of software doesn’t lend itself to the kind of regulation we see with building codes, for example. Even if the codes for coding, so to speak, were straightforward and tackled only the no-brainers – Thou shalt not use SHA-1 ever again! – there is a haystack’s worth of questions (backward compatibility, support, enforcement, etc.) that need to be answered, with no guarantee of actually getting to the desired needle.

If we accept a world in which a hypothetical government agency dictates what devices can and cannot be connected to the internet and how they are connected, it’s worth asking now what it will mean in the coming years for potentially broader, sweeping software regulations. It’s possible the Trump Administration’s stated commitment to roll back federal regulations will buy the IT industry some time before such a future is realized.

On the other hand, we’re one bad headline away from Congress enacting knee-jerk legislation to police not just how IoT devices are built and connected but how developers write and deploy code across the entire digital realm. And unlike journalists, programmers may have nothing in the Constitution to prevent it.


February 15, 2017  4:53 PM

Christopher Young: Don’t sleep on the Mirai botnet

Rob Wright
Security

SAN FRANCISCO — While much of the talk at this year’s RSA Conference has been about future IoT threats and new attacks, Intel Security’s Christopher Young urged attendees not to forget the past — specifically, the Mirai botnet.

“We can’t think of the Mirai botnet in the past tense,” Young, senior vice president and general manager at Intel Security, said during his keynote Tuesday at RSA Conference 2017. “It’s alive and well today and recruiting new players.”

To illustrate his point, Young described how Intel Security CTO Steve Grobman and his team decided to test the theory that Mirai was still highly active and looking for more vulnerable IoT devices to infect.

“Our hypothesis was simple,” Young said. “Given the amount of connected or infected devices that are out there today, what’s the risk that a new, unprotected device can be coopted into the Mirai botnet? We wanted to know, how pervasive was this threat?”

To that end, Grobman’s team set up a honeypot, disguised as a DVR, on an open network. And in just over a minute, Young said, they found the DVR had been compromised by the Mirai botnet.

“It just puts a fine point on the problem,” Young said. “The Mirai botnet is alive and well, recruiting drones…that can be used for the next attack.”
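
It doesn’t take much to run that kind of experiment. Mirai spreads by scanning for open telnet services (TCP ports 23 and 2323) and trying factory-default credentials, so even a trivial listener will see login attempts. Here is a rough sketch of the general idea (not Intel Security’s actual honeypot) that simply logs whatever credentials a scanner submits:

    # Rough sketch of a telnet-style credential logger (not Intel Security's
    # honeypot). Mirai scans TCP 23 and 2323 and tries default logins, so a
    # listener on either port should see attempts quickly.
    import socket
    from datetime import datetime

    HOST, PORT = "0.0.0.0", 2323  # 2323 avoids needing root for port 23

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(5)

    while True:
        conn, addr = srv.accept()
        try:
            conn.settimeout(10)
            conn.sendall(b"login: ")
            user = conn.recv(256)
            conn.sendall(b"Password: ")
            password = conn.recv(256)
            # Log the attempted credentials, e.g. root/xc3511 from Mirai's list
            print(f"{datetime.utcnow():%H:%M:%S} {addr[0]} tried "
                  f"{user.strip()!r} / {password.strip()!r}")
        except socket.timeout:
            pass
        finally:
            conn.close()

Exposed directly to the internet, a listener like this tends to attract scans within minutes, which squares with the roughly one-minute compromise Young described.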

As a result, Young said the industry needs to address the insecurities of connected devices within the home, which have become lucrative targets for Mirai and other types of IoT malware. He stressed that a combination of approaches is required to address these IoT threats, including consumer education, changes in security policies for device manufacturers and better protection measures from the infosec industry.

“The question I’d ask all of us in cybersecurity here at RSA is: ‘How many of us take the home into account when designing our cybersecurity architectures and when we provision our cybersecurity tools?’” Young asked the audience.

Young said the threat landscape has changed, and as a result so too must the mentality of security professionals. “Today, the target has now become the weapon,” he said, referring to connected devices. “The game has changed on us yet again.”


February 8, 2017  9:09 PM

Five things to watch at RSA Conference 2017

Rob Wright
iot security, Security

RSA Conference 2017 officially kicks off Monday, and once again it will bring several topics, trends and potential controversies to the center of the information security industry’s attention. Unlike in previous years, it’s been harder to identify one or two major themes heading into this year’s show. But there are several storylines I’ll be following closely at RSA Conference 2017, starting with these five:

1. Rise of the machine learning model

Machine learning has been a persistent buzzword in the infosec industry recently, and that should continue at this year’s RSA Conference. Vendors big and small have moved away from signature-based antivirus and antimalware products to embrace machine learning models that no longer rely on existing malware definitions to detect threats. The antivirus software market has come under heavy scrutiny lately, thanks to head-scratching security flaws and persistent questions about the software’s effectiveness. But it remains to be seen if advanced threat detection products leveraging machine learning will be the true heir apparent. There’s also considerable confusion around the differences between machine learning and artificial intelligence, thanks to vendors using the terms interchangeably. Hopefully this year’s show will provide some clarity.

2. DDoS denial

Distributed denial-of-service (DDoS) attacks have long been a thorn in the sides of many enterprises, but until recently DDoS attacks have typically been viewed as minor nuisances rather than as major threats. That may be changing in the wake of the powerful Mirai botnet attacks last year. Several vendors and service providers are poised to deliver new mitigation products to combat these potent DDoS attacks. But the attacks have also led to growing concerns about the insecurity of internet of things (IoT) devices, as well as potential legal or regulatory actions that may follow those concerns; both subjects will be addressed at the conference. And it’s unclear whether the infosec industry as a whole will be ready when the next wave of record-setting DDoS attacks takes aim at critical internet infrastructure.

3. The Trump effect

RSA Conference has been home to controversies in the past, and this year’s show is likely to host yet another one. A number of technology companies, organizations and conferences have spoken out in recent days against President Trump’s controversial executive order barring entry into the U.S. for people from seven Muslim-majority Middle Eastern countries. In the week before RSAC 2017 several technology companies, including Microsoft, Google and Intel, signed an amicus brief opposing Trump’s executive order; RSA, however, was not on the list. RSA Conference said last week it has not been impacted by the executive order. But the conference will feature Rep. Michael McCaul (R-Tex.) as a keynote speaker Tuesday; McCaul, chairman of the Homeland Security Committee, was initially a strong supporter of Trump’s action, though he later qualified that support, and reportedly contributed to the executive order. McCaul has appeared at previous RSA Conferences and has spoken out against encryption backdoors, but his support of the controversial executive order could earn him criticism from some attendees.

4. RSA’s direction

RSA itself has undergone major changes recently. Not only has the security vendor done an about-face with its product portfolio, leaving behind its legacy encryption business in favor of identity and access management, but it also experienced another executive leadership change. Former RSA President Amit Yoran departed the company in December, just weeks before this year’s show, to take the CEO position at Tenable Network Security. Yoran was named RSA president in 2014 and took over from former RSA head Art Coviello during a turbulent period for the vendor (RSA was criticized over reported ties to the U.S. National Security Agency). While RSA quickly tapped Rohit Ghai, formerly president of Dell EMC’s Enterprise Content Division, to fill the void, Yoran’s departure comes at a time when RSA is still trying to establish its new identity, and questions about the company’s future with Dell EMC (or as an independent entity or acquisition target) persist.

5. Cloud security shakeup

Setting aside recent activity from cloud giants like Amazon Web Services and Microsoft Azure, the cloud security market has been in a rut in terms of innovation and excitement. Cloud access security brokers have dominated much of the attention in the market, but after a flurry of investments and acquisitions in 2015 and early 2016, even the CASB market has been quiet lately. But that may be changing as several CASBs have broadened their scope beyond protecting major SaaS applications and are now moving to address IaaS security. And that’s good news, considering the growing number of attacks either targeting cloud services or using them as command-and-control infrastructure for other targets. With the rush of cloud-focused products from traditional security vendors waning over the last year, a shot in the arm from CASBs could be just what the cloud security market needs.


November 30, 2016  9:31 PM

How cloud file sharing is creating new headaches for security teams

Rob Wright

In the past, the simple sharing of a Microsoft Word document with a colleague over email wasn’t cause for alarm. It wasn’t the kind of event that was regularly reviewed or even recorded by a security operations center.

Maybe it should’ve been. Regardless, in the age of rapid cloud adoption, such document sharing – even over email – is exactly the kind of event that, at least in theory, is monitored by today’s security operations centers (SOCs). And that’s become yet another problem on the growing list of security concerns for enterprises in the age of decentralized, mobile IT environments.

Skyhigh Networks, a cloud access security broker, released its Q4 2016 Cloud Adoption and Risk Report earlier this month, and it included many data points one might expect: the number of cloud services used by the average enterprise, according to Skyhigh’s customer survey, continues to climb (1,427 services, currently); many of these cloud services still lack proper security controls (just 8.1% of those services meet Skyhigh’s Enterprise-Ready rating); and a growing percentage of files uploaded to cloud file sharing or collaboration services contain sensitive information (18.1%).

But the report also included surprising data on just how much cloud sharing and collaboration services are taxing security teams – even when there are no actual security incidents occurring. For example, according to Skyhigh’s survey, the average enterprise generates 2.7 billion cloud events per month (everything from file uploads and downloads to document shares and user logins).

But just 2,542 of those events on average are anomalous events, and an even smaller number – 23.2 – end up being actual security incidents. The Skyhigh report also states “Security teams widely report inaccurate breach notifications, resulting in alert fatigue and missed incidents.” In other words, the rapid growth of cloud file sharing and collaboration has sent the number of cloud events snowballing down a hill, and security teams are getting buried at the bottom.
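
The back-of-the-envelope math shows just how lopsided those ratios are (a quick sketch using the report’s averages):

    # Signal-to-noise arithmetic from the Skyhigh figures cited above.
    events_per_month = 2_700_000_000  # total cloud events, average enterprise
    anomalous = 2_542                 # events flagged as anomalous
    incidents = 23.2                  # actual security incidents (average)

    print(f"share of events flagged: {anomalous / events_per_month:.7%}")
    print(f"incidents per anomaly:   {incidents / anomalous:.2%}")
    # About one event in a million is even flagged, and fewer than 1 in 100
    # flagged anomalies turns out to be a real incident.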

“We hear a lot about alert fatigue. There are just so many cloud events and activities per month to monitor,” said Kamal Shah, senior vice president of products and marketing at Skyhigh. “All of those events go to the SOC, which just gets bombarded. They can’t keep up.”

The good news, of course, is just 23.2 of these monthly events constitute actual risks or threats to an enterprise (and even then, the incidents in question may be something as relatively simple as an accidental file exposure). The bad news is that even when things are working smoothly, security teams still struggle to stay above the water line because of the sheer volume of data they must monitor.

There are ways to address the problem, Shah said. User behavior analytics, for example, can help security teams better identify a potential threat and separate pertinent data from the rest of the noise. But, Shah said, companies need to start addressing the problem before they become paralyzed by the mountain of cloud events piling up on them.

“The number of cloud events is growing every quarter, and that makes it harder on enterprises because it’s just too much data,” he said. “And the number is only going to go up.”


October 26, 2016  8:39 PM

Android malware delivery is harder than you might think

Michael Heller
Android, malware

There is certainly no shortage of infosec headlines mentioning some new Android malware threat. There are banking Trojans, and malware that can root your phone, monitor your communications or steal your data. Given the frequency and bombast of the headlines, you might even think that Android is completely insecure and overrun by malicious apps. But this isn’t true for most users in the Google ecosystem.

If you live in North America, Europe, Japan or Australia, your Android device almost certainly uses the Google Play Store for app discovery and installation. Malicious apps are sometimes found in the Play Store, but those are usually apps with little usage. Plus, Google often will have removed a malicious app from the Play Store by the time you see the news story about the risk.

Google’s statistics claim that 0.16% of the apps that users attempted to install from the Play Store in 2015 were found to be malicious. And various studies show that the average Android user only installs about one app per month. Basically, you really need to be unlucky to install a malicious app out of the 2.4 million available in the Play Store.
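
Put those two statistics together and the odds for a typical user come out very low. A rough sketch, assuming (simplistically) one install per month and independence between installs:

    # Back-of-the-envelope odds using the figures above: 0.16% of install
    # attempts were malicious, and the average user installs ~1 app/month.
    p_malicious = 0.0016     # per install attempt (Google's 2015 figure)
    installs = 12            # roughly one app per month for a year

    p_at_least_one = 1 - (1 - p_malicious) ** installs
    print(f"chance of at least one bad install per year: {p_at_least_one:.1%}")
    # ~1.9% per year, before Verify Apps and Play Store takedowns intervene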

Going outside of the Play Store does bump up your risk factor, but there is still a process to installing a malicious app that news about Android malware tends to gloss over. The vast majority of Android malware is delivered to devices via “side loading,” which is to say the app has to be actively installed by the user outside of the Google Play Store environment. This is not a simple process.

In order to be able to side load an app, a user must first go into the device settings and turn on the option to install apps from “Unknown Sources” and tap OK on the dialog that pops up warning the user that side loading apps makes “your phone and personal data more vulnerable to attack.”

Then it is a matter of getting the malicious app onto a user’s device. Often, malware is hidden inside hacked versions of popular apps that are hosted on pirate download sites. So if a user wants to get a paid app for free, they might accidentally get malware as well. However, when a user attempts to install a malicious app, Google’s Verify Apps feature kicks in. Verify Apps will present the user with a warning at the time of install (which requires two taps to bypass) and even if a user confirms the installation, Verify Apps can remove apps that register as device admins or disable apps that “compromise the device security model.”

Of course, all of this assumes the user has an Android device with Google apps and services on it. Many devices sold in Russia, China and India do not have Google apps and services, which is why Android malware is far more prevalent in those regions.

But imagine the process of delivering malware to an Android device when you need to trick the user into downloading it: First, the user is deceived into downloading the malicious app. Then the user would have to find the app on the device (which could even require a file explorer app that is not always present on the device by default), attempt to install it and be warned that the installation cannot happen without changing the “Unknown Sources” setting. The user would have to change the setting, bypass that warning, attempt to install the app again, likely be warned again, bypass that warning and complete the install. At this point, depending on what version of Android the user has, this might mean approving the app permissions at the time of install; or, it might mean opening the app and approving permissions for the app to access the device storage. Even after all of that, Google’s Verify Apps might still step in and kill the app, or Google’s SafetyNet might catch the app attempting to contact a command-and-control server to download malicious code and shut that down.

That’s a complicated process that often gets ignored in the rush to put up a headline about some “dangerous” new Android malware. Just remember: a malicious app can’t appear on your device by magic. So, if you stick with established apps in the Google Play Store and avoid side loading apps, your chances of being infected by a malicious app are close to zero.


October 6, 2016  6:48 PM

Patent race picks up speed in the cloud access security broker market

Rob Wright
CASB

The cloud access security broker market has been keeping the U.S. Patent and Trademark Office busy.

Skyhigh Networks recently announced it obtained another patent for its CASB platform, the third such patent the company has been awarded since it was founded in 2011. Skyhigh’s announcement came shortly after rival Netskope announced it had been awarded a second patent for its own CASB platform.

Skyhigh’s most recent patent covers an “encryption key management system and method” for “an enterprise using encryption for cloud-based services.” The company’s technology features a hosted encryption service, or gateway, that can handle clients’ encrypted data as it moves to and from cloud services. Rather than enterprises having to send user traffic to cloud applications through an on-premises encryption gateway, Skyhigh’s platform handles the process in the cloud.

From the patent abstract: “The network intermediary receives the encryption key material from the enterprise and stores the encryption key material in temporary storage and uses the received encryption key material to derive a data encryption key to perform the encryption of the enterprise’s data. In this manner, the enterprise can be provided with the added security assurance of maintaining and managing its own encryption key while using cloud-based data storage services.”

Skyhigh’s approach for securely managing the encryption keys involves several steps, according to the patent, including the generation of original key material using a key agent deployed within the enterprise network; storing the original key material on the key agent in temporary memory; and using the key agent to request a hardware security module (HSM) to encrypt the original key material, which the HSM does using an enterprise-owned and managed encryption key that is available only within the enterprise data network, and so on (for the full process, read the patent description).
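
In rough outline, that is the classic envelope-encryption pattern: a customer-held master key wraps a short-lived data-encryption key, and only the wrapped form leaves the enterprise. Here is a minimal sketch of that general pattern (an illustration using the Python cryptography package, not Skyhigh’s actual implementation):

    # Envelope-encryption sketch of the general pattern the patent describes
    # (an illustration, not Skyhigh's implementation). Requires the
    # 'cryptography' package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # 1. Key material generated and held inside the enterprise (the key agent).
    enterprise_key = AESGCM.generate_key(bit_length=256)

    # 2. A fresh data-encryption key (DEK) is created for the cloud gateway...
    dek = AESGCM.generate_key(bit_length=256)

    # 3. ...and wrapped under the enterprise key (standing in for the HSM
    #    step), so only the wrapped form needs to leave the enterprise network.
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(enterprise_key).encrypt(wrap_nonce, dek, None)

    # 4. The gateway uses the DEK only transiently to encrypt data bound for
    #    the cloud service; the enterprise key itself is never persisted there.
    data_nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(data_nonce, b"customer record", None)

    # Decryption reverses the steps: unwrap the DEK, then decrypt the data.
    dek2 = AESGCM(enterprise_key).decrypt(wrap_nonce, wrapped_dek, None)
    assert AESGCM(dek2).decrypt(data_nonce, ciphertext, None) == b"customer record"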

If that sounds like a complicated process, that’s because it is. So why is it beneficial to have the encryption gateway hosted in the cloud rather than on the enterprise’s own network? Kaushik Narayan, co-founder and CTO of Skyhigh Networks, said there are operational hurdles to that method. “Previously, enterprises would have to use on premise encryption and key management for data in the cloud,” he said. “But if you’re on the road or outside the corporate network, you would need a VPN to get access to that data. That can make using cloud services a lot more difficult.”

Narayan stressed that Skyhigh’s patented approach doesn’t involve taking ownership of encryption keys from enterprises. “The keys are still owned and kept by the customers,” he said. “The idea behind the patent is to cover the acquisition and distribution of those keys.”

Other CASBs, such as CipherCloud, use similar hosted encryption approaches, so it remains to be seen if there will be any overlap or potential friction with Skyhigh’s patent. But the flurry of patents awarded to competing startups in the CASB space shows just how quickly the market is evolving, and how valuable intellectual property in the CASB market can be.

The patent also further demonstrates how crucial encryption is to the CASB model in particular and cloud security in general. Narayan said Skyhigh, which last month announced $40 million in Series D financing, has put greater focus on encryption because of customer demand.

“Encryption is a core component of the CASB model. Fundamentally where this all started is with customers in certain verticals like financial services and healthcare, as well as businesses in certain regions with strict regulatory concerns,” Narayan said. “When these customers put third party customer or citizen data in the cloud, they have to — for a variety of compliance reasons — protect that data.”


August 31, 2016  9:53 PM

Windows 10 Anniversary update adds headaches for antivirus vendors

Rob Wright
Windows 10

The struggling antivirus software industry has been in the news lately for all the wrong reasons.

To recap, Google’s Project Zero has discovered and disclosed a flurry of head-scratching vulnerabilities in many leading antivirus products. The most serious of these included critical flaws in the core engine of Symantec’s flagship antivirus software. To add insult to injury, the discovery of these Symantec vulnerabilities revealed a shocking design flaw in its products: the antivirus engine was loading itself in the Windows kernel to scan malware.

Bringing potentially malicious code into the Windows kernel is like putting Ozzy Osbourne in a roomful of doves and hoping he’ll be on his best behavior. The Symantec episodes were a sad reminder of the woeful state of the antivirus industry.

And now the beleaguered antivirus industry is back in the spotlight once again. This time, however, it’s not the fault of the security vendors.

Many top antivirus programs have been rendered useless on Microsoft systems that installed the Windows 10 Anniversary Update. Users have reported problems with Kaspersky Lab, Intel Security, Avast and others, ranging from features being disabled in the AV programs to system crashes and blue screens of death. Microsoft confirmed that the problem is with the Windows 10 Anniversary Update’s compatibility checks, and expects a patch to be issued in September (Kaspersky has released its own temporary fix for the problem).

While the antivirus industry has been on the decline lately, Microsoft depends heavily on these products to protect Windows systems from the waves of malware flooding cyberspace. No enterprise worth its salt would allow its employees to rely solely on Windows’ own embedded security features to protect their systems. So one would expect that Microsoft would take extra care in making sure its update would be compatible with these programs and not leave systems without proper security protection.

But apparently that didn’t happen. And the reason, according to Intel Security’s advisory, is pretty upsetting:

“The intent was to have upgrade and installation checks implemented in the Windows 10 Anniversary Update to ensure that no incompatible McAfee product versions could be installed or present. Because of time constraints, these checks could not be implemented prior to the release of the Windows 10 Anniversary Update on August 2, 2016.”

Time constraints? I find it hard to believe that Microsoft didn’t have the resources available to implement these checks and avert this headache. Antivirus vendors have had a hard enough time fighting threat actors and their own lackluster product quality. They really don’t need Microsoft in the ring as a third adversary.

