Security Bytes


November 12, 2018  6:19 PM

Android Ecosystem Security Transparency Report is a wary first step

Michael Heller
Android, Android security, Google

Reading through Google’s first quarterly Android Ecosystem Security Transparency Report feels like a mix of missed opportunities and déjà vu all over again.

Much of what is in the new Android ecosystem security report is data that has been part of Google’s annual Android Security Year in Review report, including the rates of potentially harmful applications (PHAs) on devices with and without sideloaded apps — spoiler alert: sideloading is much riskier — and rates of PHAs by geographical region. Surprisingly, the rates in Russia are lower than in the U.S.

The only other data in the Android ecosystem security report shows the percentage of devices with at least one PHA installed, broken down by Android version. This new data shows that the newer the version of Android, the less likely it is that a device will have a PHA installed.

However, this also hints at the data Google didn’t include in the report, like how well specific hardware partners have done in updating devices to those newer versions of Android. Considering that Android 7.x Nougat is the most common version of the OS in the wild at 28.2% and the latest version 9.0 Pie hasn’t even cracked the 0.1% marker to be included in Google’s platform numbers, the smart money says OEM updating stats wouldn’t be too impressive.

There’s also the matter of Android security updates and the data around which hardware partners are best at pushing them out. Dave Kleidermacher, head of Android security and privacy, said at the Google I/O developer conference in May 2018 that the company was tracking which partners were best at pushing security updates and that it was considering adding hardware support details to future Android Ecosystem Security Transparency Reports. More recently, Google added stipulations to its OEM contracts mandating at least four security updates per year on Android devices.

It’s unclear why Google ultimately didn’t include this data in the report on Android ecosystem security, but Google has been hesitant to call out hardware partners for slow updates in the past. In addition to new requirements in Android partner contracts regarding security updates, there have been rules stating that hardware partners must update devices to the latest version of Android released within the first 18 months after a device’s launch. However, it has always been unclear what the punishment would be for breaking those rules. Presumably, it would be a ban on access to Google Play services, the Play Store and Google Apps, but there have never been reports of those penalties being enforced.

Google has taken steps to make Android updates easier, including Project Treble in Android 8.0 Oreo, which effectively decoupled the Android system from any software differentiation added by a hardware partner. But, since Android 7.x is still the most common version in the wild, it doesn’t appear as though that work has yielded much fruit yet.

Adding OS and security update stats to the Android Ecosystem Security Transparency Report could go a long way towards shaming OEMs into being better and giving consumers more information with which to make purchasing decisions, but time will tell if Google ever goes so far as to name OEMs specifically.

October 26, 2018  7:21 PM

Google sets Android security update rules, but enforcement is unclear

Michael Heller
Android, Google, Google Apps, Google Play Store, Security updates

The vendor requirements for Android are a strange and mysterious thing, but a new leak claims Google has added language forcing manufacturers to push more regular Android security updates.

According to The Verge, Google’s latest contract will require OEMs to supply Android security updates for two years and provide at least four updates within the first year of a device’s release. Vendors will also have to release patches within 90 days of Google identifying a vulnerability.
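As a practical aside, anyone curious how current their own device is can read the security patch level that Android exposes as a system property. The sketch below is my own illustration, not anything from Google's contracts: it assumes the Android SDK's adb tool is installed and a device is connected with USB debugging enabled, and it compares the reported patch date against the 90-day figure above as a rough yardstick.

```python
import subprocess
from datetime import date

MAX_AGE_DAYS = 90  # patch window reportedly written into Google's OEM contracts

def security_patch_level(serial=None):
    """Read the connected device's Android security patch level (YYYY-MM-DD) via adb."""
    cmd = ["adb"] + (["-s", serial] if serial else [])
    cmd += ["shell", "getprop", "ro.build.version.security_patch"]
    return subprocess.check_output(cmd, text=True).strip()

if __name__ == "__main__":
    patch = security_patch_level()
    age_days = (date.today() - date.fromisoformat(patch)).days
    verdict = "within" if age_days <= MAX_AGE_DAYS else "beyond"
    print(f"Security patch level {patch} is {age_days} days old ({verdict} the 90-day yardstick)")
```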

Mandating more consistent Android security updates is certainly a good thing, but it remains unclear what penalties Google would levy against manufacturers that fail to provide the updates or if Google would follow through on any punitive actions.

It has been known for years that Google sets certain rules for manufacturers who want to include the Play Store, Play services and Google apps on Android devices, but because enforcement has been unclear, the rules have sometimes been seen as mere suggestions.

For example, Google has had a requirement in place since the spring of 2011 mandating manufacturers to upgrade devices to the latest version of the Android OS released within 18 months of a device’s launch. However, because of the logistics issues of providing those OS updates, Google has rarely been known to enforce that requirement.

This can be seen in the Android OS distribution numbers, which are a complete mess. Currently, according to Google, the most popular version of Android on devices in the wild is Android 6.0 Marshmallow (21.6%), followed by Android 7.0 (19%), Android 5.1 (14.7%), Android 8.0 (13.4%) and Android 7.1 (10.3%). Android 9.0, released in August, doesn’t even show up in Google’s numbers because it hasn’t hit the 0.1% threshold for inclusion.

Theoretically, the ultimate enforcement of the Android requirements would be Google barring a manufacturer from releasing a device that includes Google apps and services, but there have been no reports of that ever happening. Plus, the European Union’s recent crackdown on Android gives an indication that Google does wield control over the Android ecosystem — and was found to be abusing that power.

The ruling in the EU will allow major OEMs to release forked versions of Android without Google apps and services (something they were previously barred from doing by Google’s contract). It will also force Google to bundle the Play Store, services and most Google apps into a paid licensing bundle, while offering — but not requiring — the Chrome browser and Search as a free bundle. However, early rumors suggest Google might offset the cost of the apps bundle by paying OEMs to use Chrome and Google Search, effectively making it all free and sidestepping any actual change.

These changes only apply to Android devices released in the EU, but it should lead to more devices on the market running Android but featuring third-party apps and services. This could mean some real competition for Google from less popular Android forks such as Amazon’s Fire OS or Xiaomi’s MIUI.

It’s still unknown if the new rules regarding Android security updates are for the U.S. only or if they will be part of contracts in other regions. But, an unintended consequence of the EU rules might be to strengthen Google’s claim that the most secure Android devices are those with the Play Store and Play services.

Google has long leaned on its strong record of keeping malware out of the Play Store and, when Play services are installed, off of user devices. Google consistently shows that the highest rates of malware come from sideloading apps in regions where the Play Store and Play services are less common — Russia and China — and where third-party sources are more popular.

Assuming the requirements for Android security updates do apply in other regions around the globe, it might be fair to also assume they’d be tied to the Google apps and services bundle (at least in the EU) because otherwise Google would have no way to put teeth behind the rules. So, not only would Google have its stats regarding how much malware is taken care of in the Play Store and on user devices by Play services, it might also have more stats showing those devices are more consistently updated and patched.

The Play Store, services and Google apps are an enticing carrot to dangle in front of vendors when requiring things like Android security updates, and there is reason to believe manufacturers would be willing to comply in order to get those apps and services, even if the penalties are unclear.

More competition will be coming to the Android ecosystem in the EU, and it’s not unreasonable to think that competition could spread to the U.S., especially if Google fears similar action from the U.S. government (as unlikely as that may seem). And the less power Google apps and services have in the market, the less force there will be behind any Google requirements for security updates.

 


October 15, 2018  4:45 PM

Mystery around Trend Micro apps still lingers one month later

Rob Wright
Security

It’s been a little over a month since several Trend Micro apps were kicked out of the Mac App Store by Apple over allegations of stealing user data, but several crucial questions remain unanswered.

To recap, security researchers discovered that seven Trend Micro apps were collecting users’ browser data without notifying users (the vendor claims the data collection was included in its EULAs, but it later conceded the apps had no secondary, informed consent process). Following the removal of those apps, Trend Micro’s story of what took place changed several times – the first statement indicated everything was fine and that the apps were working as designed, while subsequent updates blamed the fiasco on common code libraries that were mistakenly used in certain apps and conceded that the user notification and permission processes needed an overhaul.

Trend Micro last week issued its latest statement on the situation, which included an answer to a vital question about what had happened with these Mac apps: “The data was never shared with any third party, monetized for ad revenue, or otherwise used for any purpose other than the security of customers.”

While that was an important disclosure, there were still questions Trend Micro had yet to answer. I sent some of those questions to Trend Micro; a company spokesperson replied with a statement addressing some of the points but sidestepping others.

  • What happened with “Open Any Files: RAR Support”? Initially, researchers identified several apps that were collecting browser histories, and Trend Micro disclosed that six of those apps — Dr. Antivirus, Dr. Battery, Dr. Cleaner, Dr. Cleaner Pro, Dr. Unarchiver and Duplicate Finder — were the company’s property. But two days later, Trend Micro named a seventh app, Open Any Files. Why did it take two days for the company to disclose this? How did Trend Micro not know the Open Any Files app belonged to them? Trend Micro didn’t directly address these questions.
  • Why wasn’t Open Any Files listed as a Trend Micro app? This is one of the stranger parts of the Trend Micro apps controversy. According to a cached Mac App Store page for Open Any Files, there’s no mention of Trend Micro at all. Instead, the app is attributed to a developer named “Hao Wu,” and the description lists Wu as the copyright holder as well. Here is Trend Micro’s answer: “Open Any Files was created by a former Trend Micro developer as a short term pilot project to provide consumers with a number of helpful utilities,” the spokesperson wrote. “As there were no long term plans in place for the support of this application at the time of registration and copyright, full corporate branding was not applied. As you will know, we have decided to stop development and distribution of this particular app.” The spokesperson also said Open Any Files was released in late 2017 with the browser data collection module enabled, but “starting with the version released in April 2018 (which was publicly available when this issue was reported in September) that functionality had already been removed.”
  • What was Open Any Files’ purpose? The only indications that Open Any Files belonged to Trend Micro are, according to Malwarebytes’ Thomas Reed, that the app was uploading users’ browser data to a Trend Micro domain and that it promoted another Trend Micro app, Dr. Antivirus. “Promoted” might be too soft a word; according to Reed’s assessment, Open Any Files was similar to other “scam applications” that warn users who attempt to open a file with the app that the file in question can’t be opened because it is infected and that users should scan the file with the promoted antivirus app. I asked Trend Micro if the company disputed Reed’s characterization of the app; the spokesperson did not address this question.
  • Who is Hao Wu? It appears from Trend Micro’s statement that Wu is a former developer at the company, but the company isn’t saying anything beyond that. Information from Apple’s Mac and iOS app stores is limited as well. It appears the developer behind Open Any Files is the same Hao Wu listed as the developer of other apps such as Weird Calc, iWiFiTest, Mr. Cleaner and Thinnest Calculator, but the developer’s app store profile appears to have been removed.
  • Is Trend Micro sure how much data its apps collected? On multiple occasions, the vendor explicitly stated data collection included only a small snapshot of users’ browser data – 24 hours prior to the installation of the apps. But Reed’s analysis of several of Trend Micro’s apps, including Open Any File and Dr. Antivirus, found they were collecting complete browsing and search histories from users. “It could be argued that it is useful for antivirus software to collect certain limited browsing history leading up to a malware/webpage detection and blocking,” Reed wrote in his analysis. “But it is very hard to argue to exfiltrate the entire browsing history of all installed browsers regardless of whether the user has encountered malware or not.” In addition, Reed discovered Dr. Antivirus was also uploading a list with “detailed information about every application found on the system,” which the company had yet to explain in its official statements and FAQ on the matter. Trend Micro responded to these questions. “We must reiterate our earlier statement that the apps in question performed a one-time upload of a snapshot of browser history covering the 24 hours prior to installation for security purposes,” the spokesperson wrote. “In addition, Dr. Antivirus included an app reputation feature that checked for malicious apps and fed anonymized app information into our large app reputation data base to protect users from potentially dangerous apps.”

It’s still unclear why Trend Micro would allow one of its developers to push out an app like Open Any Files if the company – by its own admission – never had any long term support plans for it. It’s also unclear why Trend Micro would remove the data collection feature for this specific app (and not others) but never properly brand Open Any Files.

To its credit, Trend Micro hasn’t ignored the situation or tried to erase its earlier denials of wrongdoing. But given the situation, the company owes more transparency about this episode and what oversight and controls it has around its app development process. The application ecosystem is full of threats, with countless apps performing a bevy of unscrupulous activity or downright malicious attacks against users. We’ve come to expect that kind of activity from get-rich-quick scam artists, cybercriminals and APTs. We don’t, however, expect it to come from one of the world’s largest and most successful security vendors.


October 1, 2018  1:46 PM

FBI, DHS blaming the victims on Remote Desktop Protocol

Peter Loshin

As most of the nation watched the Senate battle over a contentious Supreme Court appointment, the FBI and DHS jointly released a “Public Service Announcement,” in which they warn us all, per the announcement title, that “Cyber actors increasingly exploit the Remote Desktop Protocol to conduct malicious activity.”

An interesting aspect of this warning is that we, all of us – “businesses and private citizens” – must “review and understand what remote accesses their networks allow and take steps to reduce the likelihood of compromise, which may include disabling RDP if it is not needed.”

Got it? The government now expects all of us – every single one of us – to understand this threat and take steps to mitigate it. The person who runs the diner down the street, your parents and grandparents, college kids and retirees and disabled people and single parents; we’re all now on the hook for fixing this particular cyber problem.

It is no secret that the Remote Desktop Protocol has long been a source of exploitable vulnerabilities, and it is well known in the cyber community that RDP should be disabled in almost all cases. Jon Hart, senior security researcher at Rapid7, wrote last year in a blog post that there have been 20 Microsoft security updates on threats related to RDP, with at least two dozen individual CVEs dating back to 1999; he also noted that exploits targeting RDP were part of a 2017 ShadowBrokers leak.
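For anyone who wants to act on that advice, the first step is simply knowing whether RDP is reachable at all. The short sketch below is my own illustration, not anything from the FBI/DHS announcement: it probes a list of hosts (placeholder addresses here; substitute systems you are authorized to scan) for an open TCP port 3389, the default RDP port. An open port doesn't prove a vulnerability, but it is exactly the kind of exposure the PSA asks people to review and, where possible, close off.

```python
import socket

RDP_PORT = 3389  # default Microsoft RDP port

def rdp_reachable(host, port=RDP_PORT, timeout=3.0):
    """Return True if the host accepts TCP connections on the RDP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical addresses for illustration; scan only hosts you own or manage.
    for host in ["192.0.2.10", "192.0.2.11"]:
        status = "open -- review or disable RDP" if rdp_reachable(host) else "closed/filtered"
        print(f"{host}:{RDP_PORT} {status}")
```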

However, what the FBI and DHS warning omits is that the Remote Desktop Protocol is really the Microsoft Remote Desktop Protocol, a proprietary protocol owned by Microsoft. In fact, the Public Service Announcement does not mention the words “Microsoft” or “Windows” once.

If the U.S. government truly wanted to protect its citizens from the depredations of ransomware operators who, like the SamSam threat actor, are abusing RDP to gain access to victim systems, couldn’t the government work directly with Microsoft to mitigate the vulnerability rather than putting the onus for cyberdefense on the victims?

This warning makes me wonder: What does the U.S. government really care about when it comes to “fixing the cyber”?

Cooperating with vendors on encryption, but not on RDP

When it comes to end-to-end encryption, the FBI sings a different tune.

The FBI has been targeting unbreakable end-to-end encryption because, we’ve been told, it interferes with the government’s ability to get lawful access to relevant evidence in some criminal cases. From the moment the government demanded that Apple decrypt an iPhone used by a shooter who was involved in the 2015 San Bernardino mass shooting, it was clear the FBI would continue to take the steps it deemed most effective to battle what it calls “going dark.”

That has included pressuring tech giants like Apple, as well as Microsoft, Google, Facebook and others; it also includes leaders speaking out in favor of encryption backdoors and lobbying in favor of legislation that would require tech firms to “solve the problem,” or else.

It seems to me that an all-hands, all-fronts effort like the one mustered for “going dark” would be more effective in limiting cyberthreats like RDP than commanding citizens to “be on the lookout.”


September 14, 2018  8:38 PM

What the GAO Report missed about the Equifax data breach

Rob Wright
Equifax

The Government Accountability Office did its part to deliver some closure regarding the Equifax data breach by way of a newly published report on the now-infamous security incident.

The GAO report offers a comprehensive look at the numerous missteps made by Equifax, which allowed attackers to maintain a presence in the company’s network for 76 days and extract massive amounts of personal data without being detected. Those errors included having an outdated recipient list of system administrators for vulnerability alerts and an expired digital certificate, which led to a misconfiguration in Equifax’s network inspection system.

But for all its merits, the GAO’s report on the Equifax data breach omits or minimizes important parts of the story. Here are five things that were left out of the report.

  1. Website “issues”: The GAO noted the breach checker website that Equifax set up for consumers suffered from “several technical issues, including excessive downtime and inaccurate data.” But that’s hardly an adequate description of what ailed the website. For starters, the domain resembled a classic phishing URL — equifaxsecurity2017.com. It was also built on a stock version of WordPress (was the company trying to get hacked again?). And it was vulnerable to cross-site scripting attacks. And the site’s TLS certificate didn’t support revocation checks. These are significantly bigger problems than website downtime.
  2. PIN problems: If the assortment of flaws with the breach checker website wasn’t enough, astute observers also discovered that the PINs generated for consumers who froze their credit files weren’t random, non-consecutive numbers – they were the date and time a consumer made the freeze request. As a result, the chances of threat actors guessing your code are significantly higher than they would be if the PIN digits were randomly selected (see the short sketch after this list).
  3. Encryption: This is the biggest omission in the Equifax breach report. While the report does mention encryption several times, it’s never in regard to the personally identifiable information that was exposed by the breach, and how encryption could have protected that data even after the attackers gained access to Equifax’s environment. Instead, the majority of the encryption talk is around how the threat actors abused existing encrypted communication channels to avoid detection when they issued commands and exfiltrated data. Encryption is obviously a sensitive topic within the federal government, but it’s as if the GAO is more concerned with how encryption helped the attackers rather than with how it could have stopped them.
  4. Insider trading: The GAO report doesn’t include any references to the former Equifax executive who was charged with insider trading by the Department of Justice. Jun Ying, the former CIO of Equifax’s U.S. Information Systems business unit, allegedly used non-public information about the breach to exercise his vested options and sell all of his shares. While the case has no direct bearing on Equifax’s infosec posture, past or present, it’s a painful reminder that insider trading can be a by-product of enterprise breaches. Omitting any mention of Ying and the insider trading case from an accountability report seems like a missed opportunity for the federal government to address what could potentially be a recurring problem as breaches increase in frequency.
  5. Lack of incident response plan: Incident response is sparsely mentioned in the report, and when the GAO does mention it, it’s in the footnotes. For all the faults and errors laid out in the Equifax breach report, the GAO fails to identify a fundamental problem: the company apparently didn’t have a functional incident response plan in place. This led not only to Equifax making several errors with its breach checker website but also to later missteps, such as not knowing whether the company had encrypted consumer data post-breach. A solid, proper incident response plan would have saved Equifax a lot of trouble in the aftermath of the attack.
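Returning to the PIN problem in the second item: the fix is not exotic. A freeze PIN should come from a cryptographically secure random source, not from the clock. Below is a minimal Python sketch of the difference (my own illustration, not Equifax's code), contrasting a timestamp-derived PIN with one drawn from the secrets module.

```python
import secrets
from datetime import datetime

def timestamp_pin():
    """Roughly the scheme Equifax reportedly used: a PIN derived from the date and
    time of the freeze request, which an attacker can narrow down and brute-force."""
    return datetime.now().strftime("%m%d%y%H%M")

def random_pin(digits=10):
    """A PIN of the same length drawn from a cryptographically secure source."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

if __name__ == "__main__":
    print("timestamp-derived PIN:", timestamp_pin())
    print("randomly generated PIN:", random_pin())
```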


August 17, 2018  8:25 PM

DHS cybersecurity rhetoric offers contradictions at DEF CON

Michael Heller
DHS, Election, vote

The Vote Hacking Village at DEF CON 26 in Las Vegas was an overwhelming jumble of activity — a mock vote manipulated, children hacking election results websites, machines being disassembled — and among it all were representatives from local and federal government, learning and sharing information and, in the case of Jeanette Manfra, assistant secretary for the office of cybersecurity and communications at the Department of Homeland Security (DHS), deflecting the reality of the situation.

In her DEF CON keynote address, Manfra discussed government cybersecurity in general as well as the ways election security could be improved, but she contradicted herself by refusing to acknowledge the value of the work done by DEF CON and by deflecting suggestions that would bring about real change.

The old standby arguments

“The way the government runs, naturally it’s somewhat more complicated. We don’t do IT [in] government particularly well,” Manfra said as an explanation of the DHS’ role. She said DHS was responsible for the cybersecurity of 99 different federal agencies, which have traditionally been isolated in terms of managing security. “We’re trying to get to think about enterprise risk and think about the government as a whole.”

This is a good example of the tone Manfra tried to establish: self-deprecating, but honest about the situation, even if she omitted key pieces of information — such as the challenge of having a holistic view of federal cybersecurity when so many infosec leadership roles in government remain empty — which would contradict the point she made.

Manfra continued to bring up the fact that we live in confusing times in terms of cybersecurity, especially because “the internet has challenged everything when it comes to how we think about the role of government in defending and securing its citizens and its infrastructure.”

“For the first time in a national security space, the government is not on the front lines. Our companies are on the front lines; our citizens are on the front lines; all of you are on the front lines,” Manfra said and concluded this means everyone — government, the intelligence community and the private sector — needs to think differently about their roles in cybersecurity and how to work together. “Our adversaries have been taking advantage of us for a long time. They’ve been taking advantage of our traditional principles for a really long time. And, we’ve got to come up with a way to turn it back on them.”

The idea that the roles of government and the private sector are in flux because of changes in technology is arguably accurate, but the situation is more complex than Manfra portrays. One could just as easily point to the privatization of critical infrastructure and lack of regulations surrounding necessary security and system upgrades in that infrastructure as contributing risk factors.

Manfra’s call for more cooperation between the public and private sectors in terms of security has been a common theme from the government for the past few years. However, the government’s appeal to the private sector to cooperate out of the pride of helping the country has largely fallen on deaf ears, because as was true with Manfra’s speech, the government often fails to make a compelling case.

The government wants to share, but the private sector has little incentive to do so, and experts have said the private sector doesn’t necessarily trust it would benefit from such cooperation, nor that security would improve. Despite the continued reluctance from the private sector and the lack of specifics from the government about what such cooperation would look like, the government seems ready to continue pushing the idea.

Election deflection and contradictions

Once Manfra got to the topic of election security, she began to combine omissions of information with statements that contradicted one another and attempts to deflect suggestions for meaningful security improvements.

“Elections are more than just the voting machines … The complexity is actually a benefit,” Manfra said. “Going back to 2016 when we first started to understand that the Russians were attempting to undermine and sow chaos and discord and undermine our democracy in general — which by the way, they’ve been trying to do for decades, it’s just the technology has allowed them to do it at a better scale.”

Despite acknowledging that attempts to undermine our democracy have been happening “for decades,” Manfra failed to explain why efforts to investigate risk factors and offer aid to improve security did not begin until 2016.

Manfra went on to claim the research the government has done led to the conclusion that it is “really really difficult to try to manipulate the actual vote count itself.” She said there were a lot of reasons for this, including that election machines are “physically secured.” This claim garnered chuckles from the DEF CON crowd, who have little respect for things like padlocks.

Manfra said that while misinformation on social media was an issue, DHS was focused on manipulation of voter rolls and the systems that tally the votes. She gave an example of voters becoming disenfranchised with the system because their names weren’t on the rolls at their polling places. She admitted the local officials running these systems are often under-resourced and need help because they could be using old systems.

“They’re trying to undermine our democratic process and confidence that we have in the democratic process,” Manfra said. “There’s a lot of ways to do that without actually manipulating the vote. We’re very much focused on the state and local process that you and I all participate in — I hope — all the time.”

Manfra explicitly mentioned how trust in an election could be undermined if an adversary were to manipulate the unofficial tally being reported by states. However, Manfra contradicted herself by discounting the efforts by DEF CON — where an 11-year-old girl hacked into a mock reporting website in 10 minutes and changed the results — telling reporters after the keynote, “If all you’re saying is ‘Look, even a kid can hack into this.’ You’re not getting the full story which could have the impact of the average voter not understanding.”

Manfra admitted the DHS has begun researching more experimental security technologies, like blockchain, to see what their effects could be on election security. But, it’s unclear how serious the DHS is about making changes that would improve security because she also shied away from mandating proven election security measures such as risk-limiting audits.

“I’m not there yet in terms of mandatory requirements,” Manfra told reporters. “I think mandatory requirements could chill, so then people are only about the compliance with the requirement and not the intent.”

Ultimately, it’s unclear if the DHS has real, actionable plans to improve election security beyond the nebulous idea of helping local officials — assuming those officials ask for help in the first place. DEF CON showed vulnerable areas (reporting websites) and ways to improve security (paper trails and risk-limiting audits), but DHS seemed more interested in waiting and watching than learning from the event.


August 3, 2018  4:56 PM

Five things to watch for at Black Hat USA this year

Robert Richardson

When the latest edition of Black Hat USA kicks off in Las Vegas next week, it will find itself deep in the swirl of nation-state election tampering, with top security administrators gathering jointly in the White House press room to underscore the dangers of Russian cybermeddling while President Trump dismisses it all as wing-flapping.

Black Hat’s sessions have a way of morphing as they unfold onsite so that they speak to whatever news is breaking at the moment, and no doubt this will happen as the week progresses. But at the moment, you’d be forgiven if you took a look at the program and came away thinking that this larger cultural moment was being ignored. There’s a talk on norms in cyberdiplomacy (apparently there are some) and one on attribution (that perennial hobgoblin), but this isn’t one of those years where the head of the National Security Agency has come to take a drubbing on the big stage.

Also not much on the main program: cloud security. There’s that session you’ve come to expect on some new aspect of AWS credential compromise, but it’s pretty sparse otherwise. It remains to be seen whether this is because there’s a general stalemate in cloud attacks for the time being or whether the larger truth is that now it’s all cloud, so why even use the word.

What’s actually heavy on the program, then? Here are five areas that should prove interesting:

  1. Artificial intelligence for bad people. You knew hackers were going to use machine learning, right? Sessions like “Deep learning for side-channel analysis” and “Deep neural networks for hackers” should drum up some discussions in the hallways. And however bad it winds up looking, there’s little doubt that this is a horse that has already left the barn.
  2. The workaday breaking of things. Black Hat has always been a gathering where researchers tell IT folks how things can be skewered and compromised. This year carries on that proud tradition, with roughly half of sessions describing the milking either of control or of sensitive data from various applications or devices. It doesn’t appear there’s a particular pattern here with a thick new seam of vulnerability opening up. If there’s a generalization to be had, it’s that breaks making it into sessions these days lean toward complexity. A Japanese wireless SD card will be reverse engineered. CPU caches will be ransacked. Parsers will be snipped. Nothing seems, so far, to have the big media flare of jackpotting an ATM or driving a Jeep off the road while the driver squeals for mercy. But you never can tell.
  3. Serious focus on the infosec community’s issues. There’s a track dedicated to tackling the issues that, frankly, most conferences don’t touch except with gallows humor asides. Topics include suicide, PTSD, addiction, dealing with sexual assault and closing the gender gap in the profession. It’s a strong move on Black Hat’s part.
  4. Spectre/Meltdown. There’s a talk that an insider friend tells me really will sort out why things got a little weird when word of the Meltdown vulnerability came out back in January. As the conference program has it, the speakers will “focus on the developments after the disclosure of Meltdown.” They’ll talk about “yet undisclosed attacks, including combinations of Meltdown and Spectre.” If you go for geeky content, this is your session (see you there).
  5. Industrial control systems. There are well-nigh 20 presentations just in the main program that deal with cars, planes and factories. There’s an “ICS firewall deep dive” that might be viewed as a core look at what the industrial world has in the way of conventional protections at the moment. Then there will be the customary breaking of things.

There will once again be a split between what’s important in the exhibit hall and most of what’s going on in the main conference session rooms. The sessions are about tools and attacks, but out on the sales floor, what vendors are beginning to grapple with are the several large changes in IT as a whole. These changes include IoT edge architectures, software-defined everything, and microservice application architectures and converged data centers that are changing traffic patterns within the enterprise so fundamentally that firewalls and intrusion detection are failing on the fundamentals, and vendors are trying to make sense of the new paradigms. Things on the show floor could be pretty interesting this year, even if that’s not the traditional hot spot at Black Hat.


July 27, 2018  8:00 PM

How Dropbox dropped the ball with anonymized data

Rob Wright
Security

Dropbox found itself in hot water this week over an academic study that used anonymized data to analyze the behavior and activity of thousands of customers.

The situation seemed innocent enough at first — in an article in Harvard Business Review, researchers at Northwestern University’s Institute on Complex Systems (NICO) detailed an extensive two-year study of best practices for collaboration and communication on the cloud file hosting platform. Specifically, the study examined how thousands of academic scientists used Dropbox, which gave the NICO researchers project-folder data from more than 1,000 university departments.

But it wasn’t long before serious issues were revealed. The article, titled “A Study of Thousands of Dropbox Projects Reveals How Successful Teams Collaborate,” initially claimed that Dropbox gave the research team raw user data, which the researchers then anonymized. After Dropbox was hit with a wave of criticism, the article was revised to say the original version was incorrect – Dropbox anonymized the user data first and then gave it to the researchers.

That’s an extremely big error for the authors to make (if indeed it was an error) about who anonymized the data and when the data was anonymized — especially considering the article was co-authored by a Dropbox manager (Rebecca Hinds, head of Enterprise Insights at Dropbox). I have to believe the article went through some kind of review process from Dropbox before it was published.

But let’s assume one of the leading cloud collaboration companies in the world simply screwed up the article rather than the process of handling and sharing customer data. There are still issues and questions for Dropbox, starting with the anonymized data itself. A Dropbox spokesperson told WIRED the company “randomized or hashed the dataset” before sharing the user data with NICO.

Why did Dropbox randomize *or* hash the datasets? Why did the company use two different approaches to anonymizing the user data? And how did it decide which types of data to hash and which types to randomize?

Furthermore, how was the data hashed? Dropbox didn’t say, but that’s an important question. I’d like to believe that a company like Dropbox wouldn’t use an insecure, deprecated hashing algorithm like MD5 or SHA-1, but there’s plenty of evidence those algorithms are still used by many organizations today.
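To see why those details matter, consider pseudonymizing a user identifier such as an email address. An unsalted hash, whatever the algorithm, can often be reversed simply by hashing a list of candidate addresses and matching the results; a keyed hash (HMAC) whose key never leaves the data owner is far harder to reverse. The sketch below is my own illustration of that distinction, not a description of what Dropbox actually did.

```python
import hashlib
import hmac
import secrets

# Secret key held only by the data owner; without it, anyone holding the dataset
# cannot re-compute pseudonyms from guessed identifiers.
PSEUDONYM_KEY = secrets.token_bytes(32)

def weak_pseudonym(email):
    """Unsalted MD5: anyone can hash candidate emails and match them to records."""
    return hashlib.md5(email.lower().encode()).hexdigest()

def keyed_pseudonym(email):
    """HMAC-SHA256 with a secret key: matching requires knowledge of the key."""
    return hmac.new(PSEUDONYM_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    user = "researcher@example.edu"  # hypothetical identifier
    print("weak pseudonym: ", weak_pseudonym(user))
    print("keyed pseudonym:", keyed_pseudonym(user))
```

Even a keyed hash preserves linkability across records, which is usually the point of a research dataset, but that same linkability is what de-anonymization techniques exploit.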

The Dropbox spokesperson also told WIRED it grouped the dataset into “wide ranges” so no identifying information could be derived. But Dropbox’s explanation of the process is short on details. As a number of people in the infosec community have pointed out this week, anonymized data may not always be truly anonymous. And while some techniques work better than others, the task of de-anonymization appears to be getting easier.

And these are just the issues relating to the anonymized data; there are also serious questions about Dropbox’s privacy policy. The company claims its privacy policy covers the academic research, which has since sparked a debate about the requirements of informed consent. The policy states Dropbox may share customer data with “certain trusted third parties (for example, providers of customer support and IT services) to help us provide, improve, protect, and promote our services,” and includes a list of those trusted third parties like Amazon, Google and Salesforce. NICO, however, is not on the list. It’s also not entirely clear whether the anonymized data was given to NICO to improve the Dropbox service or to advance scientific research.

And while this isn’t close to the gross abuse of personal data we’ve seen with the Cambridge Analytica scandal, it’s nevertheless concerning. These types of questionable decisions regarding data usage and sharing can lead to accidental breaches, which can be just as devastating as any malicious attack that breaches and exposes user data. If companies in the business of storing and protecting data — like Dropbox — don’t have clear policies and procedures for sharing and anonymizing data, then we’re in for plenty more unforced errors.


July 17, 2018  2:26 PM

Is the new California privacy law a domestic GDPR?

Peter Loshin

The difference between data privacy protections afforded to European Union residents and people in the U.S. is more sharply highlighted now that the EU’s General Data Protection Regulation has taken effect. Will passage of a new California privacy law make a difference?

At first glance, it may seem California state legislators took a bold first step when they quickly passed a comprehensive data privacy protection law last month known as the California Consumer Privacy Act of 2018.

Like the GDPR, the new legislation spells out rights protecting the privacy of California consumers. From the text of the new law, those rights include:

(1) The right of Californians to know what personal information is being collected about them.

(2) The right of Californians to know whether their personal information is sold or disclosed and to whom.

(3) The right of Californians to say no to the sale of personal information.

(4) The right of Californians to access their personal information.

(5) The right of Californians to equal service and price, even if they exercise their privacy rights.

While the intent of the new California privacy law and the GDPR are the same — protecting consumer privacy — the most important differences between the two laws lie in the process. Whereas the GDPR was the product of years of careful preparation and collaboration between bureaucrats, privacy experts, politicians and technology practitioners, the California privacy law was mashed together in less than a week, according to the Washington Post, in order to forestall more stringent privacy protections from being passed via a ballot initiative that had broad support in California.

The bipartisan rush to enact the new California privacy law (passed unanimously) has everything to do with control, and little to do with the will of the people. Legislation passed by the electorate through a ballot initiative is much more difficult for legislators to tinker with: any changes require a two-thirds majority, while laws passed the usual way by the legislature can be more easily modified with a simple majority.

Another superficial similarity between the GDPR and the California Consumer Privacy Act is that enforcement of the new law is set to begin (almost) two years from the date of passage. For the GDPR, enforcement began May 25, 2018; the California privacy law goes into effect on Jan. 1, 2020. Companies facing the requirement to comply with the GDPR were given a two-year window by the EU lawmakers to get ready, but the conventional wisdom around the California privacy law is that the next year and a half will be used by big tech companies and legislators to negotiate the precise terms of the law.

There are many other differences, but companies aiming to comply with the California privacy law should note that the terms of the law as currently written could be softened considerably before enforcement begins.

And some of the differences are worth noting. First, most businesses are unlikely to be affected at all, as businesses subject to the law must meet one or more of the following conditions:

  • Have annual gross revenues in excess of $25 million;
  • Process the personal information of 50,000 or more consumers, households or devices; or
  • Derive at least 50% of their annual revenues from the sale of personal information.

As for penalties, companies subject to the regulation face fines as high as $7,500 for each violation, to be levied through a civil action “brought in the name of the people of the State of California by the Attorney General,” the law reads — but that requires the finding that the offending entity violated the law “intentionally.”

Is the California privacy law comparable to the GDPR? We don’t know, and we probably won’t know for at least a year — and perhaps not until after Jan. 1, 2020, when the new law goes into effect. If the law, as written, is applied to a company like Equifax, which exposed roughly half the adult U.S. population in the breach uncovered last year, then the results could be devastating. The share of Californians exposed in that breach can be estimated at about 12 million; if Equifax were found to have violated the law intentionally, the maximum fine would be roughly $90 billion.

That’s far higher than the GDPR maximum penalty of 4% of annual global turnover, which for Equifax in 2017 was only $3.36 billion, meaning the maximum fine under the GDPR would be about $134 million.
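For readers who want the back-of-the-envelope math behind those two figures, here is the arithmetic using only the numbers cited above (roughly 12 million exposed Californians, the $7,500 per-violation cap, and Equifax's roughly $3.36 billion in 2017 turnover):

```python
# Rough comparison of the theoretical maximum penalties using the figures in this post.
californians_exposed = 12_000_000     # estimated Californians in the Equifax breach
ccpa_fine_per_violation = 7_500       # maximum fine per intentional violation

equifax_2017_turnover = 3.36e9        # approximate annual global turnover
gdpr_max_rate = 0.04                  # GDPR cap: 4% of annual global turnover

ccpa_max = californians_exposed * ccpa_fine_per_violation
gdpr_max = equifax_2017_turnover * gdpr_max_rate

print(f"CCPA theoretical maximum: ${ccpa_max / 1e9:.0f} billion")   # ~$90 billion
print(f"GDPR theoretical maximum: ${gdpr_max / 1e6:.0f} million")   # ~$134 million
```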

However, GDPR penalties don’t require a finding of intent to break the law on the part of the offending entity, and many smaller companies subject to GDPR — those with annual gross revenues lower than $25 million, processing personal data of fewer than 50,000 consumers, households or devices, and which make less than 50% of their revenue from the sale of that data — would be immune from any penalties under the new California privacy law.

The bottom line: unlike in 2016, when the final form of the GDPR was approved and companies were granted a two-year period to prepare to comply with the new privacy regulation, the new California privacy law is coming — but it’s still an open question just how effective or useful it will be for protecting consumer privacy.


June 29, 2018  8:09 PM

Cyber attribution: Why it won’t be easy to stop the blame game

Rob Wright

The “who” in a whodunit has always been the most crucial element, but when it comes to cyberattacks, that conventional wisdom has been turned on its head.

A growing chorus of infosec experts in recent years has argued that cyber attribution of an attack is the least important aspect of the incident, far below identification, response and remediation. Focusing on attribution, they say, can distract organizations from those more important elements. Some experts such as Dragos CEO Robert Lee have even asserted that public attribution of cyberattacks can do more harm than good.

I tend to agree with many of the critiques about attribution, especially the dangers of misattribution. But a shift away from cyber attribution could be challenging for several reasons.

First, nation-state cyberattacks have become an omnipresent issue for both the public and enterprises. Incidents like the Sony Pictures Entertainment hack or, more recently, the breach of the Democratic National Committee’s network have dominated headlines and the national consciousness. It’s tough to hear about the latest devastating hack or data breach and not immediately wonder if Iran or Russia or North Korea is behind it. There’s a collective desire to know who is responsible for these events, even if that information matters little to the actual victims of the attacks.

Second, attribution is a selling point for the vendors and security researchers that publish detailed threat reports on a near-daily basis. The infosec industry is hypercompetitive, and that applies not just to products and technology but to threat and vulnerability research, which has emerged in recent years as a valuable tool for branding and marketing. A report that describes cyberattacks on cryptocurrency exchanges might get lost in the mix with other threat reports; a report that attributes that activity to state-sponsored hackers in North Korea, however, is likely to catch more attention. Asking vendors and researchers to withhold attribution, therefore, is asking them to give up a potential competitive differentiator.

And finally, on the attention note, the media plays an enormous role here. Journalists are tasked with finding out the “who, what, when, where and why” of a given news event, and that includes a cyberattack. Leaving out the “who” is a tough pill to swallow. The larger and more devastating the attack, the more intense the media pressure is for answers about which advanced persistent threat (APT) group is responsible. But even with smaller, less important incidents, there is considerable appetite for attribution (and yes, that includes SearchSecurity). Will that media appetite influence more vendors and research teams to engage in public attribution? And where should the infosec community draw a line, if one should be drawn at all?

This is not to say that cyber attribution doesn’t matter. Nation-state APT groups are generally considered to be more skilled and dangerous than your average cybercrime gang, and the differences between the two can heavily influence how an organization reacts and responds to a threat. But there is also a point at which engaging in public attribution can become frivolous and potentially detrimental.

A larger industry conversation about the merits and drawbacks of cyber attribution is one worth having, but the overwhelming desire to identify the actors behind today’s threats and attackers isn’t something that will be easily quelled.

