Security Bytes


September 14, 2018  8:38 PM

What the GAO Report missed about the Equifax data breach

Rob Wright Profile: Rob Wright
Equifax

The Government Accountability Office did its part to deliver some closure regarding the Equifax data breach by way of a newly published report on the now-infamous security incident.

The GAO report offers a comprehensive look at the numerous missteps made by Equifax, which allowed attackers to maintain a presence in the company’s network for 76 days and extract massive amounts of personal data without being detected. Those errors included having an outdated recipient list of system administrators for vulnerability alerts and an expired digital certificate, which led to a misconfiguration in Equifax’s network inspection system.

But for all its merits, the GAO’s report on the Equifax data breach omits or minimizes important parts of the story. Here are five things that were left out of the report.

  1. Website “issues”: The GAO noted the breach checker website that Equifax set up for consumers suffered from “several technical issues, including excessive downtime and inaccurate data.” But that’s hardly an adequate description of what ailed the website. For starters, the domain resembled a classic phishing URL — equifaxsecurity2017.com. It was also built on a stock version of WordPress (was the company trying to get hacked again?), it was vulnerable to cross-site scripting attacks, and the site’s TLS certificate didn’t support revocation checks. These are significantly bigger problems than website downtime.
  2. PIN problems: If the assortment of flaws with the breach checker website wasn’t enough, astute observers also discovered that the PINs generated for consumers who froze their credit files weren’t random, non-consecutive numbers – they were the date and time the consumer made the freeze request. As a result, the chances of a threat actor guessing your code are significantly higher than they would be if the PIN digits were randomly selected (see the sketch after this list).
  3. Encryption: This is the biggest omission in the Equifax breach report. While the report does mention encryption several times, it’s never in regard to the personally identifiable information that was exposed by the breach, and how encryption could have protected that data even after the attackers gained access to Equifax’s environment. Instead, the majority of the encryption talk is around how the threat actors abused existing encrypted communication channels to avoid detection when they issued commands and exfiltrated data. Encryption is obviously a sensitive topic within the federal government, but it’s as if the GAO is more concerned with how encryption helped the attackers than with how it could have stopped them.
  4. Insider trading: The GAO report doesn’t include any references to the former Equifax executive who was charged with insider trading by the Department of Justice. Jun Ying, the former CIO of Equifax’s U.S. Information Systems business unit, allegedly used non-public information about the breach to exercise his vested options and sell all of his shares. While the case has no direct bearing on Equifax’s infosec posture, past or present, it’s a painful reminder that insider trading can be a by-product of enterprise breaches. Omitting any mention of Ying and the insider trading case from an accountability report seems like a missed opportunity for the federal government to address what could potentially be a recurring problem as breaches increase in frequency.
  5. Lack of incident response plan: Incident response is sparsely mentioned in the report, and when the GAO does mention it, it’s in the footnotes. For all the faults and errors laid out in the Equifax breach report, the GAO fails to identify a fundamental problem: the company apparently didn’t have a functional incident response plan in place. This led to Equifax not only making several errors with its breach checker website but also to later missteps, such as not knowing whether the company had encrypted consumer data post-breach. A solid incident response plan would have saved Equifax a lot of trouble in the aftermath of the attack.
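On the PIN problem in item 2 above, here's a minimal sketch of the gap between a timestamp-derived PIN and a cryptographically random one (Python, illustrative only; this is not Equifax's actual code):

```python
# A minimal sketch of why a timestamp-derived PIN is far easier to guess
# than a randomly generated one. Illustrative only; not Equifax's code.
import secrets
from datetime import datetime

def timestamp_pin(freeze_time: datetime) -> str:
    # Roughly what observers reported: the PIN is just the date and time
    # of the freeze request, e.g. 0914181530 for Sept. 14, 2018, 3:30 p.m.
    return freeze_time.strftime("%m%d%y%H%M")

def random_pin(length: int = 10) -> str:
    # Cryptographically secure, uniformly random digits.
    return "".join(secrets.choice("0123456789") for _ in range(length))

# A 10-digit random PIN has 10**10 possible values. A timestamp PIN from a
# known year has only about 365 * 24 * 60 = 525,600 plausible values,
# and far fewer if an attacker can narrow down when the freeze was requested.
print(timestamp_pin(datetime(2018, 9, 14, 15, 30)))  # "0914181530"
print(random_pin())                                  # e.g. "8304719265"
```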

August 17, 2018  8:25 PM

DHS cybersecurity rhetoric offers contradictions at DEF CON

Michael Heller Profile: Michael Heller
DHS, Election, vote

The Vote Hacking Village at DEF CON 26 in Las Vegas was an overwhelming jumble of activity — a mock vote manipulated, children hacking election results websites, machines being disassembled — and among it all were representatives from local and federal government, learning and sharing information and, in the case of Jeanette Manfra, assistant secretary for the office of cybersecurity and communications in the Department of Homeland Security (DHS), deflecting the reality of the situation.

In her DEF CON keynote address, Manfra discussed government cybersecurity in general as well as the ways election security could be improved, but she contradicted herself by refusing to acknowledge the value of the work done by DEF CON and deflecting actions to bring about real change.

The old standby arguments

“The way the government runs, naturally it’s somewhat more complicated. We don’t do IT [in] government particularly well,” Manfra said as an explanation of the DHS’ role. She said DHS was responsible for the cybersecurity of 99 different federal agencies, which have traditionally been isolated in terms of managing security. “We’re trying to get to think about enterprise risk and think about the government as a whole.”

This is a good example of the tone Manfra tried to establish: self-deprecating, but honest about the situation, even if she omitted key pieces of information — such as the challenge of having a holistic view of federal cybersecurity when so many infosec leadership roles in government remain empty — which would contradict the point she made.

Manfra continued to bring up the fact that we live in confusing times in terms of cybersecurity, especially because “the internet has challenged everything when it comes to how we think about the role of government in defending and securing its citizens and its infrastructure.”

“For the first time in a national security space, the government is not on the front lines. Our companies are on the front lines; our citizens are on the front lines; all of you are on the front lines,” Manfra said and concluded this means everyone — government, the intelligence community and the private sector — needs to think differently about their roles in cybersecurity and how to work together. “Our adversaries have been taking advantage of us for a long time. They’ve been taking advantage of our traditional principles for a really long time. And, we’ve got to come up with a way to turn it back on them.”

The idea that the roles of government and the private sector are in flux because of changes in technology is arguably accurate, but the situation is more complex than Manfra portrays. One could just as easily point to the privatization of critical infrastructure and lack of regulations surrounding necessary security and system upgrades in that infrastructure as contributing risk factors.

Manfra’s call for more cooperation between the public and private sectors in terms of security has been a common theme from the government for the past few years. However, the government’s appeal to the private sector to cooperate out of the pride of helping the country has largely fallen on deaf ears, because as was true with Manfra’s speech, the government often fails to make a compelling case.

The government wants to share, but the private sector has little incentive to do so, and experts have said the private sector doesn’t necessarily trust it would benefit from such cooperation, nor that security would improve. Despite the continued reluctance from the private sector and the lack of specifics from the government about what such cooperation would look like, the government seems ready to continue pushing the idea.

Election deflection and contradictions

Once Manfra got to the topic of election security, she began to combine omissions of information with contradictory statements and attempts to deflect suggestions for meaningful security improvements.

“Elections are more than just the voting machines … The complexity is actually a benefit,” Manfra said. “Going back to 2016 when we first started to understand that the Russians were attempting to undermine and sow chaos and discord and undermine our democracy in general — which by the way, they’ve been trying to do for decades, it’s just the technology has allowed them to do it at a better scale.”

Despite acknowledging that attempts to undermine our democracy have been happening “for decades,” Manfra failed to explain why efforts to investigate risk factors and offer aid to improve security did not begin until 2016.

Manfra went on to claim the research the government has done led to the conclusion that it is “really really difficult to try to manipulate the actual vote count itself.” She said there were a lot of reasons for this, including that election machines are “physically secured.” This claim garnered chuckles from the DEF CON crowd, who have little respect for things like padlocks.

Manfra said that while misinformation on social media was an issue, DHS was focused on manipulation of voter rolls and the systems that tally the votes. She gave an example of voters becoming disenfranchised with the system because their names weren’t on the rolls at their polling places. She admitted the local officials running these systems are often under-resourced and need help because they could be using old systems.

“They’re trying to undermine our democratic process and confidence that we have in the democratic process,” Manfra said. “There’s a lot of ways to do that without actually manipulating the vote. We’re very much focused on the state and local process that you and I all participate in — I hope — all the time.”

Manfra explicitly mentioned how trust in an election could be undermined if an adversary were to manipulate the unofficial tallies reported by states. However, Manfra contradicted herself by discounting the efforts by DEF CON — where an 11-year-old girl hacked into a mock reporting website in 10 minutes and changed the results — telling reporters after the keynote, “If all you’re saying is ‘Look, even a kid can hack into this.’ You’re not getting the full story which could have the impact of the average voter not understanding.”

Manfra admitted the DHS has begun researching more experimental security technologies, like blockchain, to see what their effects could be on election security. But, it’s unclear how serious the DHS is about making changes that would improve security because she also shied away from mandating proven election security measures such as risk-limiting audits.

“I’m not there yet in terms of mandatory requirements,” Manfra told reporters. “I think mandatory requirements could chill, so then people are only about the compliance with the requirement and not the intent.”

Ultimately, it’s unclear if the DHS has real, actionable plans to improve election security beyond the nebulous idea of helping local officials — assuming those officials ask for help in the first place. DEF CON showed vulnerable areas (reporting websites) and ways to improve security (paper trails and risk-limiting audits), but DHS seemed more interested in waiting and watching than learning from the event.


August 3, 2018  4:56 PM

Five things to watch for at Black Hat USA this year

Robert Richardson Profile: Robert Richardson

When the latest edition of Black Hat USA kicks off in Las Vegas next week, it will find itself deep in the swirl of nation-state election tampering, with top security administrators gathering jointly in the White House press room to underscore the dangers of Russian cybermeddling while President Trump dismisses it all as wing-flapping.

Black Hat’s sessions have a way of morphing as they unfold onsite so that they speak to whatever news is breaking at the moment, and no doubt this will happen as the week progresses. But at the moment, you’d be forgiven if you took a look at the program and came away thinking that this larger cultural moment was being ignored. There’s a talk on norms in cyberdiplomacy (apparently there are some) and one on attribution (that perennial hobgoblin), but this isn’t one of those years where the head of the National Security Agency has come to take a drubbing on the big stage.

Also not much on the main program: cloud security. There’s that session you’ve come to expect on some new aspect of AWS credential compromise, but it’s pretty sparse otherwise. It remains to be seen whether this is because there’s a general stalemate in cloud attacks for the time being or whether the larger truth is that now it’s all cloud, so why even use the word.

What’s actually heavy on the program, then? Here are five areas that should prove interesting:

  1. Artificial intelligence for bad people. You knew hackers were going to use machine learning, right? Sessions like “Deep learning for side-channel analysis” and “Deep neural networks for hackers” should drum up some discussions in the hallways. And however bad it winds up looking, there’s little doubt that this is a horse that has already left the barn.
  2. The workaday breaking of things. Black Hat has always been a gathering where researchers tell IT folks how things can be skewered and compromised. This year carries on that proud tradition, with roughly half of the sessions describing the milking either of control or of sensitive data from various applications or devices. It doesn’t appear there’s a particular pattern here with a thick new seam of vulnerability opening up. If there’s a generalization to be had, it’s that breaks making it into sessions these days lean toward complexity. A Japanese wireless SD card will be reverse engineered. CPU caches will be ransacked. Parsers will be snipped. Nothing seems, so far, to have the big media flare of jackpotting an ATM or driving a Jeep off the road while the driver squeals for mercy. But you never can tell.
  3. Serious focus on the infosec community’s issues. There’s a track dedicated to tackling the issues that, frankly, most conferences don’t touch except with gallows humor asides. Topics include suicide, PTSD, addiction, dealing with sexual assault and closing the gender gap in the profession. It’s a strong move on Black Hat’s part.
  4. Spectre/Meltdown. There’s a talk that an insider friend tells me really will sort out why things got a little weird when word of the Meltdown vulnerability came out back in January. As the conference program has it, the speakers will “focus on the developments after the disclosure of Meltdown.” They’ll talk about “yet undisclosed attacks, including combinations of Meltdown and Spectre.” If you go for geeky content, this is your session (see you there).
  5. Industrial control systems. There are well-nigh 20 presentations just in the main program that deal with cars, planes and factories. There’s an “ICS firewall deep dive” that might be viewed as a core look at what the industrial world has in the way of conventional protections at the moment. Then there will be the customary breaking of things.

There will once again be a split between what’s important in the exhibit hall and most of what’s going on in the main conference session rooms. The sessions are about tools and attacks, but out on the sales floor, what vendors are beginning to grapple with are the several large changes in IT as a whole: IoT edge architectures, software-defined everything, microservice application architectures and converged data centers are changing traffic patterns within the enterprise so fundamentally that firewalls and intrusion detection are failing on the fundamentals, and vendors are still trying to make sense of the new paradigms. Things on the show floor could be pretty interesting this year, even if that’s not the traditional hot spot at Black Hat.


July 27, 2018  8:00 PM

How Dropbox dropped the ball with anonymized data

Rob Wright Profile: Rob Wright
Security

Dropbox found itself in hot water this week over an academic study that used anonymized data to analyze the behavior and activity of thousands of customers.

The situation seemed innocent enough at first — in an article in Harvard Business Review, researchers at the Northwestern University Institute on Complex Systems (NICO) detailed an extensive two-year study of best practices for collaboration and communication on the cloud file hosting platform. Specifically, the study examined how thousands of academic scientists used Dropbox, which gave the NICO researchers project-folder data from more than 1,000 university departments.

But it wasn’t long before serious issues were revealed. The article, titled “A Study of Thousands of Dropbox Projects Reveals How Successful Teams Collaborate,” initially claimed that Dropbox gave the research team raw user data, which the researchers then anonymized. After Dropbox was hit with a wave of criticism, the article was revised to say the original version was incorrect – Dropbox anonymized the user data first and then gave it to the researchers.

That’s an extremely big error for the authors to make (if indeed it was an error) about who anonymized the data and when it was anonymized — especially considering the article was co-authored by a Dropbox manager (Rebecca Hinds, head of Enterprise Insights at Dropbox). I have to believe the article went through some kind of review process from Dropbox before it was published.

But let’s assume one of the leading cloud collaboration companies in the world simply screwed up the article rather than the process of handling and sharing customer data. There are still issues and questions for Dropbox, starting with the anonymized data itself. A Dropbox spokesperson told WIRED the company “randomized or hashed the dataset” before sharing the user data with NICO.

Why did Dropbox randomize *or* hash the datasets? Why did the company use two different approaches to anonymizing the user data? And how did it decide which types of data to hash and which types to randomize?

Furthermore, how was the data hashed? Dropbox didn’t say, but that’s an important question. I’d like to believe that a company like Dropbox wouldn’t use an insecure, deprecated hashing algorithm like MD5 or SHA-1, but there’s plenty of evidence those algorithms are still used by many organizations today.
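To make the concern concrete, here's a minimal sketch that assumes the dataset contained low-entropy identifiers such as email addresses (an assumption; Dropbox hasn't said what it hashed or how). It shows why a plain, unsalted hash, even with a modern algorithm, can be reversed by recomputing digests for likely candidates, and why a keyed hash held only by the data owner is a stronger pseudonymization choice:

```python
# Minimal sketch: why unsalted hashing is weak pseudonymization for
# low-entropy identifiers, and one stronger alternative (a keyed hash).
# Purely illustrative; we don't know how Dropbox actually hashed its data.
import hashlib
import hmac
import secrets

def naive_pseudonym(email: str) -> str:
    # Unsalted hash: anyone with a list of candidate emails can recompute
    # these digests and link them back to identities (a dictionary attack).
    return hashlib.sha256(email.lower().encode()).hexdigest()

def keyed_pseudonym(email: str, key: bytes) -> str:
    # HMAC with a secret key held only by the data owner: without the key,
    # candidate digests can't be recomputed, so the mapping can't be reversed.
    return hmac.new(key, email.lower().encode(), hashlib.sha256).hexdigest()

key = secrets.token_bytes(32)  # kept by the data owner, never shared

candidates = ["alice@example.edu", "bob@example.edu"]
leaked_digest = naive_pseudonym("alice@example.edu")

# Re-identifying the unsalted hash is a simple lookup over candidates:
lookup = {naive_pseudonym(c): c for c in candidates}
print(lookup.get(leaked_digest))                      # "alice@example.edu"
print(keyed_pseudonym("alice@example.edu", key)[:16]) # useless without the key
```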

The Dropbox spokesperson also told WIRED it grouped the dataset into “wide ranges” so no identifying information could be derived. But Dropbox’s explanation of the process is short on details. As a number of people in the infosec community have pointed out this week, anonymized data may not always be truly anonymous. And while some techniques work better than others, the task of de-anonymization appears to be getting easier.
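For what it's worth, the "wide ranges" technique Dropbox describes is generalization, or bucketing. A minimal sketch with hypothetical field names (not Dropbox's actual schema) shows both the idea and its limits:

```python
# A minimal sketch of "wide range" generalization (bucketing).
# Field names are hypothetical, not Dropbox's actual schema.
def bucket_team_size(n: int) -> str:
    for lo, hi in [(1, 1), (2, 5), (6, 20), (21, 100)]:
        if lo <= n <= hi:
            return f"{lo}-{hi}"
    return "100+"

record = {"department": "Chemistry", "team_size": 3, "files": 4210}
generalized = {
    "department": record["department"],
    "team_size": bucket_team_size(record["team_size"]),        # "2-5"
    "files": "1000+" if record["files"] >= 1000 else "<1000",
}
print(generalized)
# Caveat: combinations of generalized quasi-identifiers can still single
# someone out (e.g. the only 2-5 person chemistry team with 1000+ files),
# which is why bucketing alone doesn't guarantee anonymity.
```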

And these are just the issues relating to the anonymized data; there are also serious questions about Dropbox’s privacy policy. The company claims its privacy policy covers the academic research, which has since sparked a debate about the requirements of informed consent. The policy states Dropbox may share customer data with “certain trusted third parties (for example, providers of customer support and IT services) to help us provide, improve, protect, and promote our services,” and includes a list of those trusted third parties like Amazon, Google and Salesforce. NICO, however, is not on the list. It’s also not entirely clear whether the anonymized data was given to NICO to improve the Dropbox service or to advance scientific research.

And while this isn’t close to the gross abuse of personal data we’ve seen with the Cambridge Analytica scandal, it’s nevertheless concerning. These types of questionable decisions regarding data usage and sharing can lead to accidental breaches, which can be just as devastating as any malicious attack that breaches and exposes user data. If companies in the business of storing and protecting data — like Dropbox — don’t have clear policies and procedures for sharing and anonymizing data, then we’re in for plenty more unforced errors.


July 17, 2018  2:26 PM

Is the new California privacy law a domestic GDPR?

Peter Loshin Profile: Peter Loshin

The difference between data privacy protections afforded to European Union residents and people in the U.S. is more sharply highlighted now that the EU’s General Data Protection Regulation has taken effect. Will passage of a new California privacy law make a difference?

At first glance, it may seem California state legislators took a bold first step when they quickly passed a comprehensive data privacy protection law last month known as the California Consumer Privacy Act of 2018.

Like the GDPR, the new legislation spells out rights meant to protect the privacy of California consumers. From the text of the new law, those rights include:

(1) The right of Californians to know what personal information is being collected about them.

(2) The right of Californians to know whether their personal information is sold or disclosed and to whom.

(3) The right of Californians to say no to the sale of personal information.

(4) The right of Californians to access their personal information.

(5) The right of Californians to equal service and price, even if they exercise their privacy rights.

While the intent of the new California privacy law and the GDPR are the same — protecting consumer privacy — the most important differences between the two laws lie in the process. Whereas the GDPR was the product of years of careful preparation and collaboration between bureaucrats, privacy experts, politicians and technology practitioners, the California privacy law was mashed together in less than a week, according to the Washington Post, in order to forestall more stringent privacy protections from being passed via a ballot initiative that had broad support in California.

The bipartisan rush to enact the new California privacy law (passed unanimously) has everything to do with control, and little to do with the will of the people. Legislation passed by the electorate through a ballot initiative is much more difficult for legislators to tinker with: any changes require a two-thirds majority, while laws passed the usual way by the legislature can be more easily modified with a simple majority.

Another superficial similarity between the GDPR and the California Consumer Privacy Act is that enforcement of the new law is set to begin (almost) two years from the date of passage. For the GDPR, enforcement began May 25, 2018; the California privacy law goes into effect on Jan. 1, 2020. Companies facing the requirement to comply with the GDPR were given a two-year window by the EU lawmakers to get ready, but the conventional wisdom around the California privacy law is that the next year and a half will be used by big tech companies and legislators to negotiate the precise terms of the law.

There are many other differences, but companies aiming to comply with the California privacy law should note that the terms of the law as currently written could be softened considerably before enforcement begins.

And some of the differences are worth noting. First, most businesses likely won’t be affected at all, as businesses subject to the law must meet one or more of the following conditions (a minimal applicability check follows the list):

  • Have annual gross revenues in excess of $25 million;
  • process the personal information of 50,000 or more consumers, households or devices; or
  • derive at least 50% of their annual revenues from the sale of personal information.
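As a minimal sketch, that "one or more of the following" test boils down to a simple any-of check (thresholds as summarized in this post; the function name is hypothetical, and this is obviously not legal advice):

```python
# A minimal sketch of the CCPA applicability thresholds summarized above.
# Hypothetical function name; not legal advice.
def ccpa_applies(annual_revenue_usd: float,
                 consumers_households_devices: int,
                 revenue_share_from_selling_data: float) -> bool:
    return (
        annual_revenue_usd > 25_000_000
        or consumers_households_devices >= 50_000
        or revenue_share_from_selling_data >= 0.50
    )

# A small ad-supported app: modest revenue, but a large user base.
print(ccpa_applies(2_000_000, 120_000, 0.10))  # True, via the 50,000 threshold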

As for penalties, companies subject to the regulation face fines as high as $7,500 for each violation, to be levied through a civil action “brought in the name of the people of the State of California by the Attorney General,” the law reads — but that requires the finding that the offending entity violated the law “intentionally.”

Is the California privacy law comparable to the GDPR? We don’t know, and we probably won’t know for at least a year — and perhaps not until after Jan. 1, 2020, when the new law goes into effect. If the law, as written, is applied to a company like Equifax, which exposed roughly half the adult U.S. population in the breach uncovered last year, then the results could be devastating. The share of Californians exposed in that breach can be estimated at about 12 million; if the Equifax breach was found to have been caused intentionally, the maximum fine would be close to $100 billion.

That’s far higher than the GDPR maximum penalty of 4% of annual global turnover, which for Equifax in 2017 was only $3.36 billion, meaning the maximum GDPR fine would be about $135 million.
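A quick back-of-the-envelope check of those figures, using the numbers cited in this post:

```python
# Back-of-the-envelope comparison using the figures cited in this post.
californians_exposed = 12_000_000   # rough estimate of Californians in the Equifax breach
ccpa_max_per_violation = 7_500      # maximum civil penalty per intentional violation
ccpa_worst_case = californians_exposed * ccpa_max_per_violation
print(f"CCPA worst case: ${ccpa_worst_case / 1e9:.0f} billion")   # ~$90 billion

equifax_2017_revenue = 3.36e9       # annual global turnover, 2017
gdpr_worst_case = 0.04 * equifax_2017_revenue
print(f"GDPR worst case: ${gdpr_worst_case / 1e6:.0f} million")   # ~$134 million
```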

However, GDPR penalties don’t require a finding of intent to break the law on the part of the offending entity, and many smaller companies subject to GDPR — those with annual gross revenues lower than $25 million, processing personal data of fewer than 50,000 consumers, households or devices, and which make less than 50% of their revenue from the sale of that data — would be immune from any penalties under the new California privacy law.

The bottom line: unlike in 2016, when the final form of the GDPR was approved and companies were granted a two-year period to prepare to comply with the new privacy regulation, the new California privacy law is coming — but it’s still an open question just how effective or useful it will be for protecting consumer privacy.


June 29, 2018  8:09 PM

Cyber attribution: Why it won’t be easy to stop the blame game

Rob Wright Profile: Rob Wright

The “who” in a whodunit has always been the most crucial element, but when it comes to cyberattacks, that conventional wisdom has been turned on its head.

A growing chorus of infosec experts in recent years has argued that cyber attribution of an attack is the least important aspect of the incident, far below identification, response and remediation. Focusing on attribution, they say, can distract organizations from those more important elements. Some experts such as Dragos CEO Robert Lee have even asserted that public attribution of cyberattacks can do more harm than good.

I tend to agree with many of the critiques about attribution, especially the dangers of misattribution. But a shift away from cyber attribution could be challenging for several reasons.

First, nation-state cyberattacks have become an omnipresent issue for both the public and enterprises. Incidents like the Sony Pictures Entertainment hack or, more recently, the breach of the Democratic National Committee’s network have dominated headlines and the national consciousness. It’s tough to hear about the latest devastating hack or data breach and not immediately wonder if Iran or Russia or North Korea is behind it. There’s a collective desire to know who is responsible for these events, even if that information matters little to the actual victims of the attacks.

Second, attribution is a selling point for the vendors and security researchers that publish detailed threat reports on a near-daily basis. The infosec industry is hypercompetitive, and that applies not just to products and technology but also to threat and vulnerability research, which has emerged in recent years as a valuable tool for branding and marketing. A report that describes a cyberattack on cryptocurrency exchanges might get lost in the mix with other threat reports; a report that attributes that activity to state-sponsored hackers in North Korea, however, is likely to catch more attention. Asking vendors and researchers to withhold attribution, therefore, is asking them to give up a potential competitive differentiator.

And finally, on the attention note, the media plays an enormous role here. Journalists are tasked with finding out the “who, what, when, where and why” of a given news event, and that includes a cyberattack. Leaving out the “who” is a tough pill to swallow. The larger and more devastating the attack, the more intense the media pressure is for answers about which advanced persistent threat (APT) group is responsible. But even with smaller, less important incidents, there is considerable appetite for attribution (and yes, that includes SearchSecurity). Will that media appetite influence more vendors and research teams to engage in public attribution? And where should the infosec community draw a line, if one should be drawn at all?

This is not to say that cyber attribution doesn’t matter. Nation-state APT groups are generally considered to be more skilled and dangerous than your average cybercrime gang, and the differences between the two can heavily influence how an organization reacts and responds to a threat. But there is also a point at which engaging in public attribution can become frivolous and potentially detrimental.

A larger industry conversation about the merits and drawbacks of cyber attribution is one worth having, but the overwhelming desire to identify the actors behind today’s threats and attacks isn’t something that will be easily quelled.


May 30, 2018  5:14 PM

It’s GDPR Day. Let the privacy regulation games begin!

Peter Loshin Profile: Peter Loshin

May 25, 2018, was “GDPR Day”: the day enforcement of the European Union’s new General Data Protection Regulation began; the day so many information security professionals have been preparing for over the past two years; the day so many have been anticipating and fearing.

GDPR Day is a day many have been treating as a deadline to comply with an entirely new privacy regulation, and woe to all who are not ready by the deadline.

However, GDPR Day is not a deadline — it’s a starting date.

If you’re new to the GDPR game, last Friday was the first day the new regulation could be enforced in the EU against any organization collecting personal data and failing to comply with the new rules.

Max Schrems, the Austrian attorney and privacy activist who helped bring down the long-established Safe Harbor framework governing trans-Atlantic data flows over privacy concerns in 2015, is on the job now as well. His group, NOYB (“None of Your Business”) filed the first complaints under GDPR, alleging that Facebook and its Instagram and WhatsApp services, as well as Google, were attempting to do an end-run around GDPR consent policies by “forcing” consent: telling users there is a new privacy policy, but giving them no way to opt out of sharing other than to stop using the service entirely.

And, anyone who imagined Facebook and Google would be the only companies facing this type of charge was simply wrong.

Monday morning after GDPR Day saw more complaints: seven claims against Facebook and Google (in three separate complaints against Gmail, YouTube and Search), as well as claims against Apple, Amazon and LinkedIn by the French digital rights group La Quadrature du Net. The group had originally intended to target a dozen services but held back on complaints against WhatsApp, Instagram, Android, Outlook and Skype in order to avoid overwhelming the system.

Forced consent is not OK under GDPR

The intent of the GDPR is to return control of their data to EU data subjects. Up until now, companies like Facebook and the rest have been gathering data about their users and then finding ways to turn that data into revenue, for example, through targeted advertisements. Previously, there have been no significant obstacles keeping those big data companies from sharing or reselling some or all of the personal data they collect with other companies. And users have had little to no recourse to prevent all of this from happening. At best, services would bury the controls to opt out of targeted advertising deep in their settings; at worst, even leaving (or never joining) the service altogether might not stop the data collection and sale, as was the case with Facebook’s “shadow profiles.”

What was seen in the run-up to GDPR Day from the big data companies has been a form of “opting in” consent policies that effectively force consent from users. This forced consent is not just a bad look on the part of these big corporations but, as NOYB put it in its statement, it is in fact illegal under the new rules.

Schrems said in a statement that when Facebook blocked accounts of users who withheld consent, “that’s not a free choice, it more reminds of a North Korean election process.”

NOYB pointed out that, under Article 7(4) of the GDPR, “such forced consent and any form of bundling a service with the requirement to consent” is prohibited — and Schrems said that “this annoying way of pushing people to consent is actually forbidden under GDPR in most cases.”

Schrems and NOYB also note that the GDPR doesn’t mean companies can’t collect any data from their users, because there are some pieces of information that they need in order to provide their services. “The GDPR explicitly allows any data processing that is strictly necessary for the service – but using the data additionally for advertisement or to sell it on needs the users’ free opt-in consent.”

In other words, if the data is required for the service provider to be able to provide the service, consent is no longer required — but for any other use, the users must be given a real choice.

So, who should be worried about GDPR enforcement?

In the days since GDPR Day and the start of enforcement, it is clear that companies that have failed in some way to comply with the new rules — especially those that have attempted to comply in a way that circumvents the consumer protections provided by GDPR — should be worried.

If your organization has taken the steps necessary to comply — in good faith — with the GDPR, it is probably safe. If your organization cares for the personally identifying data of its customers, employees and anyone else whose data it collects, you are also probably safe.

However, if your company is making an effort to appear to be in compliance with GDPR, but in a way that attempts to subvert the privacy regulation, you should worry.


May 9, 2018  3:43 PM

Google I/O’s security and privacy focus missing on day one

Michael Heller Profile: Michael Heller

It’s fairly easy to find stories sparking security and privacy concerns regarding a Google product or service — Search, Chrome, Android, AdSense and more — but if you watched or attended Google I/O, you might be convinced everything is fine.

On the first day of Google I/O, there were effectively three keynotes — the main consumer keynote headlined by CEO Sundar Pichai; the developer keynote headlined by Jason Titus, vice president of the developer product group; and what is colloquially known as the Android keynote headed by developers Dan Sandler, Romain Guy and Chet Haase.

Google I/O’s security content, however, was scant. During the course of those talks, which lasted nearly five hours, there were about three mentions of security and privacy — one in the developer keynote in regard to Firebase, Google’s cross-platform app developer tool, including help for GDPR concerns; and two in the Android keynote, regarding the biometrics API being opened up to include authentication types besides fingerprints and how Android P will shut down access for apps that monitor network traffic.

Sandler did mention a session on Android security scheduled for Thursday, but there were more than enough moments during day one for Google to point out how it was handling security concerns in Android. Research into the Android P developer previews had uncovered security improvements, including locking down access to device cameras and microphones for background apps, better encryption for backup data, random MAC addresses on network connections, more HTTPS and more alerts when an app is using an outdated API.

Even when Google’s IoT platform, Android Things, was announced as being out of beta and ready for the limelight, there was no mention of security despite that being one of the major selling points of the platform. Google claims Android Things will make IoT safer because the company is promising three years of security and reliability updates. (The question of whether three years of support meaningfully moves the needle for IoT security is a question for another time.)

Privacy is constantly a concern for Google Assistant and its growing army of always-listening devices, but the closest Google got to touching on privacy here was in saying that the new “continued conversations” — which will allow more back and forth conversations with the Assistant without requiring the “OK Google” trigger — would be a feature that users would have to turn on, implying it would be opt-in to allow the Assistant to keep listening when you don’t directly address it.

Beyond all of these areas, Google has a history of privacy concerns around how much data it collects on users. Google has a better-defined policy about how that data is shared and what data is shared with advertisers and other partners, but it’s not hard to imagine Google getting swept up in a backlash similar to what Facebook has faced in the aftermath of the Cambridge Analytica episode. Given that controversy, it’s surprising that Google didn’t want to proactively address the issue and reassert how it protects user data. It’s not as though staying quiet will make the public forget its concerns about Google.

Google I/O’s security focus was largely non-existent on the first day of the show. Time will tell whether this is an anomaly or something more concerning.


May 3, 2018  5:58 PM

Cybersecurity pervasiveness subsumes all security concerns

Michael Heller Profile: Michael Heller

Given the increased digitization of society and explosion of devices generating data (including retail, social media, search, mobile, and the internet of things), it seems like it might have been inevitable that cybersecurity pervasiveness would eventually touch every aspect of life. But, it feels more like everything has been subsumed by infosec.

All information in our lives is now digital — health records, location data, search habits, not to mention all of the info we willingly share on social media — and all of that data has value to us. However, it also has value to companies that can use it to build more popular products and serve ads and it has value to malicious actors too.

The conflict between the interests of these three groups means cybersecurity pervasiveness is present in every facet of life. Users want control of their data in order to have a semblance of privacy. Corporations want to gather and keep as much data as possible, just in case trends can be found in it to increase the bottom line. And, malicious actors want to use that data for financial gain — selling PII, credit info or intellectual property on the dark web, holding systems for ransom, etc. — or political gain.

None of these cybersecurity pervasiveness trends are necessarily new for those in the infosec community, but issues like identity theft or stolen credit card numbers haven’t always registered with the general public or mass media as cybersecurity problems because they tended to be considered in individual terms — a few people here and there had those sorts of issues but it couldn’t be too widespread, right?

Now, there are commercials on major TV networks pitching “free dark web scans” to let you know whether your data is being sold on the black market. (Spoiler alert: your data has almost certainly been compromised; it’s more a matter of whether you’re unlucky enough to have your ID chosen from the pile by malicious actors. And a dark web scan won’t make the awful process of getting a new Social Security number any better.)

Data breaches are so common and so far-reaching that everyone has either been directly affected or is no more than about two degrees of separation from someone who has been. Remember: the Yahoo breach alone affected 3 billion accounts and the latest stats say there are currently only about 4.1 billion people who have internet access. The Equifax breach affected 148 million U.S. records and the U.S. has an estimated population of 325 million.

Everyone has been affected in one way or another. Everything we do can be tracked including our location, our search and purchase history, our communications and more.

But, cybersecurity pervasiveness no longer affects only financial issues and the general public has seen in stark reality how digital platforms and the idea of truth itself can be manipulated by threat actors for political gain.

Cyberattacks have become shows of nation-state power in a type of new Cold War, at least until cyberattacks impact industrial systems and cause real world harm.

Just as threat actors can find the flaws in software, there are flaws in human psychology that can be exploited as part of traditional phishing schemes or fake news campaigns designed to sway public opinion or even manipulate elections.

For all of the issues that arise from financially motivated threat actors, the security fixes range from the relatively simple to implement — encryption, data protection, data management, stronger privacy controls and so on — to the far more complex, like replacing the woefully outmatched Social Security number as a primary form of ID.

However, the politically-minded attacks are far more difficult to mitigate, because you can’t patch human psychology. Better critical reading skills are hard to build across people who might not believe there’s even an issue that needs fixing. Pulling people out of echo chambers will be difficult.

Social networks need to completely change their platforms to be better at enforcing abuse policies and to devalue constant sharing of links. And the media also needs to stop prioritizing conflict and inflammatory headlines over real news. All of this means prioritizing the public good over profits, a notoriously difficult proposition under the almighty hand of capitalism.

None of these are easy to do and some may be downright impossible. But, like it or not, the infosec community has been brought to the table and can have a major voice in how these issues get fixed. Are we ready for the challenge?


April 30, 2018  5:34 PM

Algorithmic discrimination: A coming storm for security?

Rob Wright Profile: Rob Wright
Security

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists that understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year about the value of machine learning and artificial intelligence, just as in previous years, I wondered if the security industry truly does understand the dangers of bias and discrimination, and what kind of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost complete optimism and positivity. Algorithms, we’re told, will help save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products discriminate against certain risks over more pressing ones? Will algorithms be designed to prioritize certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products with algorithms that demonstrate a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by your average internet trolls and miscreants. They simply spent enough time studying how a platform like YouTube functions on a day-to-day basis, then flooded the system with content until they figured out how to weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be – unintentionally or purposefully – biased, and if those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.

