Security Bytes


June 29, 2018  8:09 PM

Cyber attribution: Why it won’t be easy to stop the blame game

Rob Wright

The “who” in a whodunit has always been the most crucial element, but when it comes to cyberattacks, that conventional wisdom has been turned on its head.

A growing chorus of infosec experts in recent years has argued that cyber attribution of an attack is the least important aspect of the incident, far below identification, response and remediation. Focusing on attribution, they say, can distract organizations from those more important elements. Some experts such as Dragos CEO Robert Lee have even asserted that public attribution of cyberattacks can do more harm than good.

I tend to agree with many of the critiques about attribution, especially the dangers of misattribution. But a shift away from cyber attribution could be challenging for several reasons.

First, nation-state cyberattacks have become an omnipresent issue for both the public and enterprises. Incidents like the Sony Pictures Entertainment hack or, more recently, the breach of the Democratic National Committee’s network have dominated headlines and the national consciousness. It’s tough to hear about the latest devastating hack or data breach and not immediately wonder if Iran or Russia or North Korea is behind it. There’s a collective desire to know who is responsible for these events, even if that information matters little to the actual victims of the attacks.

Second, attribution is a selling point for the vendors and security researchers that publish detailed threat reports on a near-daily basis. The infosec industry is hypercompetitive, and that applies not just to products and technology but also to threat and vulnerability research, which has emerged in recent years as a valuable tool for branding and marketing. A report that describes cyberattacks on cryptocurrency exchanges might get lost in the mix with other threat reports; a report that attributes that activity to state-sponsored hackers in North Korea, however, is likely to catch more attention. Asking vendors and researchers to withhold attribution, therefore, is asking them to give up a potential competitive differentiator.

And finally, on the attention note, the media plays an enormous role here. Journalists are tasked with finding out the “who, what, when, where and why” of a given news event, and that includes a cyberattack. Leaving out the “who” is a tough pill to swallow. The larger and more devastating the attack, the more intense the media pressure is for answers about which advanced persistent threat (APT) group is responsible. But even with smaller, less important incidents, there is considerable appetite for attribution (and yes, that includes SearchSecurity). Will that media appetite influence more vendors and research teams to engage in public attribution? And where should the infosec community draw a line, if one should be drawn at all?

This is not to say that cyber attribution doesn’t matter. Nation-state APT groups are generally considered to be more skilled and dangerous than your average cybercrime gang, and the differences between the two can heavily influence how an organization reacts and responds to a threat. But there is also a point at which engaging in public attribution can become frivolous and potentially detrimental.

A larger industry conversation about the merits and drawbacks of cyber attribution is one worth having, but the overwhelming desire to identify the actors behind today’s threats and attacks isn’t something that will be easily quelled.

May 30, 2018  5:14 PM

It’s GDPR Day. Let the privacy regulation games begin!

Peter Loshin

May 25, 2018, was “GDPR Day”: the day enforcement of the European Union’s new General Data Protection Regulation began; the day so many information security professionals have spent the past two years preparing for; the day so many have been anticipating and fearing.

GDPR Day is a day many have been treating as a deadline to comply with an entirely new privacy regulation, and woe to all who are not ready by the deadline.

However, GDPR Day is not a deadline — it’s a starting date.

If you’re new to the GDPR game, last Friday was the first day the new regulation could be enforced in the EU against any organization collecting personal data and failing to comply with the new rules.

Max Schrems, the Austrian attorney and privacy activist who helped bring down the long-established Safe Harbor framework governing trans-Atlantic data flows over privacy concerns in 2015, is on the job now as well. His group, NOYB (“None of Your Business”) filed the first complaints under GDPR, alleging that Facebook and its Instagram and WhatsApp services, as well as Google, were attempting to do an end-run around GDPR consent policies by “forcing” consent: telling users there is a new privacy policy, but giving them no way to opt out of sharing other than to stop using the service entirely.

And, anyone who imagined Facebook and Google would be the only companies facing this type of charge was simply wrong.

Monday morning after GDPR Day saw more complaints: seven claims against Facebook and Google (in three separate complaints against Gmail, YouTube and Search), as well as claims against Apple, Amazon and LinkedIn by the French digital rights group La Quadrature du Net. The group had originally intended to target a dozen services but held back on complaints against WhatsApp, Instagram, Android, Outlook and Skype in order to avoid overwhelming the system.

Forced consent is not OK under GDPR

The intent of the GDPR is to return control of their data to EU data subjects. Up until now, companies like Facebook and the rest have been gathering data about their users and then finding ways to turn that data into revenue, for example, through targeted advertisements. Previously, there have been no significant obstacles keeping those big data companies from sharing or reselling some or all of the personal data they collect with other companies. And users have had little to no recourse to prevent all of this from happening. At best, services would bury controls to opt out of targeted advertising deep in settings; at worst, even leaving (or not joining) the service altogether might not stop the data collection and sale, as was the case with Facebook’s “shadow profiles.”

What was seen in the run-up to GDPR Day from the big data companies was a form of “opt-in” consent policy that effectively forces consent from users. This forced consent is not just a bad look on the part of these big corporations but, as NOYB put it in its statement, it is in fact illegal under the new rules.

Schrems said in a statement that when Facebook blocked accounts of users who withheld consent, “that’s not a free choice, it more reminds of a North Korean election process.”

NOYB pointed out that Article 7(4) of the GDPR prohibits “such forced consent and any form of bundling a service with the requirement to consent” — and Schrems said that “this annoying way of pushing people to consent is actually forbidden under GDPR in most cases.”

Schrems and NOYB also note that the GDPR doesn’t mean companies can’t collect any data from their users, because there are some pieces of information that they need in order to provide their services. “The GDPR explicitly allows any data processing that is strictly necessary for the service – but using the data additionally for advertisement or to sell it on needs the users’ free opt-in consent.”

In other words, if the data is required for the service provider to be able to provide the service, consent is no longer required — but for any other use, the users must be given a real choice.
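To make that distinction concrete, here is a rough sketch (entirely my own illustration, not anything from NOYB’s complaints) of how a service might model GDPR-style consent in code: processing that is strictly necessary to deliver the service needs no consent flag, while every additional purpose defaults to not consented until the user makes a genuine choice. The class, enum and field names are hypothetical.

```kotlin
// Hypothetical sketch of GDPR-style granular consent modeling.
// Data needed to deliver the service requires no opt-in flag;
// every additional purpose is opt-in and defaults to "no."

enum class OptionalPurpose { TARGETED_ADS, THIRD_PARTY_SHARING, ANALYTICS }

data class ConsentRecord(
    val userId: String,
    // Explicit opt-in choices; an absent or false entry means no consent.
    val optIns: Map<OptionalPurpose, Boolean> = emptyMap()
) {
    fun mayProcessFor(purpose: OptionalPurpose): Boolean = optIns[purpose] == true
}

fun main() {
    val user = ConsentRecord(userId = "eu-user-42")
    // Core service data (say, the email address used to deliver the service)
    // can be processed without an opt-in; anything beyond that must check consent.
    println(user.mayProcessFor(OptionalPurpose.TARGETED_ADS)) // false until the user opts in
}
```

The point of the sketch is the default: nothing beyond what the service strictly needs is processed unless the user has actively said yes.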

So, who should be worried about GDPR enforcement?

In the days since GDPR Day and the start of enforcement, it is clear that companies that have failed in some way to comply with the new rules — especially those that have attempted to comply in a way that circumvents the consumer protections provided by GDPR — should be worried.

If your organization has taken the steps necessary to comply — in good faith — with the GDPR, it is probably safe. If your organization cares for the personally identifying data of its customers, employees and anyone else whose data it collects, you are also probably safe.

However, if your company is making an effort to appear to be in compliance with GDPR, but in a way that attempts to subvert the privacy regulation, you should worry.


May 9, 2018  3:43 PM

Google I/O’s security and privacy focus missing on day one

Michael Heller

It’s fairly easy to find stories sparking security and privacy concerns regarding a Google product or service — Search, Chrome, Android, AdSense and more — but if you watched or attended Google I/O, you might be convinced everything is fine.

On the first day of Google I/O, there were effectively three keynotes — the main consumer keynote headlined by CEO Sundar Pichai; the developer keynote headlined by Jason Titus, vice president of the developer product group; and what is colloquially known as the Android keynote headed by developers Dan Sandler, Romain Guy and Chet Haase.

Google I/O’s security content, however, was scant. During the course of those talks, which lasted nearly five hours, there were about three mentions of security and privacy — one in the developer keynote regarding Firebase, Google’s cross-platform app development tool, including help with GDPR concerns; and two in the Android keynote, covering the biometrics API being opened up to include authentication types besides fingerprints and how Android P will shut down access for apps that monitor network traffic.
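For those curious what that opened-up biometrics API looks like in practice, here is a minimal Kotlin sketch (my own, not a Google sample) using Android P’s BiometricPrompt, which lets the operating system present whichever biometric modality the device supports rather than tying the app to fingerprints. The activity, strings and success handling are placeholder assumptions.

```kotlin
// Minimal sketch: Android P's BiometricPrompt (API 28) replaces the
// fingerprint-only FingerprintManager and lets the OS decide which enrolled
// biometric to present. Requires the USE_BIOMETRIC permission in the manifest.
import android.app.Activity
import android.hardware.biometrics.BiometricPrompt
import android.os.Bundle
import android.os.CancellationSignal
import android.widget.Toast

class LoginActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val prompt = BiometricPrompt.Builder(this)
            .setTitle("Sign in")
            .setNegativeButton("Cancel", mainExecutor) { _, _ -> finish() }
            .build()

        prompt.authenticate(
            CancellationSignal(),
            mainExecutor,
            object : BiometricPrompt.AuthenticationCallback() {
                override fun onAuthenticationSucceeded(
                    result: BiometricPrompt.AuthenticationResult?
                ) {
                    // Placeholder: continue the signed-in flow here.
                    Toast.makeText(this@LoginActivity, "Authenticated", Toast.LENGTH_SHORT).show()
                }
            }
        )
    }
}
```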

Sandler did mention a session on Android security scheduled for Thursday, but there were more than enough moments during day one for Google to point out how it was handling security concerns in Android. Research into the Android P developer previews had uncovered security improvements, including locking down access to device cameras and microphones for background apps, better encryption for backup data, random MAC addresses on network connections, more HTTPS and more alerts when an app is using an outdated API.

Even when Google’s IoT platform, Android Things, was announced as being out of beta and ready for the limelight, there was no mention of security despite that being one of the major selling points of the platform. Google claims Android Things will make IoT safer because the company is promising three years of security and reliability updates. (The question of whether three years of support meaningfully moves the needle for IoT security is a question for another time.)

Privacy is constantly a concern for Google Assistant and its growing army of always-listening devices, but the closest Google got to touching on privacy here was in saying that the new “continued conversations” feature — which will allow more back-and-forth conversation with the Assistant without requiring the “OK Google” trigger — would be something users have to turn on, implying it would be opt-in to allow the Assistant to keep listening when you don’t directly address it.

Beyond all of these areas, Google has a history of privacy concerns around how much data it collects on users. Google has a better-defined policy about how that data is shared and what data goes to advertisers and other partners, but it’s not hard to imagine Google getting swept up in a backlash similar to what Facebook has faced in the aftermath of the Cambridge Analytica episode. Given that controversy, it’s surprising that Google didn’t want to proactively address the issue and reassert how it protects user data. It’s not as though staying quiet will make the public forget about its concerns regarding Google.

Google I/O’s security focus was largely non-existent on the first day of the show. Time will tell whether this is an anomaly or something more concerning.


May 3, 2018  5:58 PM

Cybersecurity pervasiveness subsumes all security concerns

Michael Heller

Given the increased digitization of society and explosion of devices generating data (including retail, social media, search, mobile, and the internet of things), it seems like it might have been inevitable that cybersecurity pervasiveness would eventually touch every aspect of life. But, it feels more like everything has been subsumed by infosec.

All information in our lives is now digital — health records, location data, search habits, not to mention all of the info we willingly share on social media — and all of that data has value to us. However, it also has value to companies that can use it to build more popular products and serve ads and it has value to malicious actors too.

The conflict between the interests of these three groups means cybersecurity pervasiveness is present in every facet of life. Users want control of their data in order to have a semblance of privacy. Corporations want to gather and keep as much data as possible, just in case trends can be found in it to increase the bottom line. And, malicious actors want to use that data for financial gain — selling PII, credit info or intellectual property on the dark web, holding systems for ransom, etc. — or political gain.

None of these cybersecurity pervasiveness trends are necessarily new for those in the infosec community, but issues like identity theft or stolen credit card numbers haven’t always registered with the general public or mass media as cybersecurity problems because they tended to be considered in individual terms — a few people here and there had those sorts of issues but it couldn’t be too widespread, right?

Now, there are commercials on major TV networks pitching “free dark web scans” to let you know whether your data is being sold on the black market. (Spoiler alert: your data has almost certainly been compromised; it’s more a matter of whether you’re unlucky enough to have your ID chosen from the pile by malicious actors or not. And, a dark web scan won’t make the awful process of getting a new Social Security number any better.)

Data breaches are so common and so far-reaching that everyone has either been directly affected or is no more than about two degrees of separation from someone who has been. Remember: the Yahoo breach alone affected 3 billion accounts and the latest stats say there are currently only about 4.1 billion people who have internet access. The Equifax breach affected 148 million U.S. records and the U.S. has an estimated population of 325 million.

Everyone has been affected in one way or another. Everything we do can be tracked including our location, our search and purchase history, our communications and more.

But, cybersecurity pervasiveness no longer affects only financial issues and the general public has seen in stark reality how digital platforms and the idea of truth itself can be manipulated by threat actors for political gain.

Cyberattacks have become shows of nation-state power in a type of new Cold War, at least until cyberattacks impact industrial systems and cause real-world harm.

Just as threat actors can find the flaws in software, there are flaws in human psychology that can be exploited as part of traditional phishing schemes or fake news campaigns designed to sway public opinion or even manipulate elections.

For all of the issues that arise from financially motivated threat actors, the fixes range from relatively simple to implement — encryption, data protection, data management, stronger privacy controls and so on — to far more complex undertakings, like replacing the woefully outmatched Social Security number as a primary form of ID.

However, the politically-minded attacks are far more difficult to mitigate, because you can’t patch human psychology. Better critical reading skills are hard to build across people who might not believe there’s even an issue that needs fixing. Pulling people out of echo chambers will be difficult.

Social networks need to completely change their platforms to be better at enforcing abuse policies and to devalue constant sharing of links. And the media also needs to stop prioritizing conflict and inflammatory headlines over real news. All of this means prioritizing the public good over profits, a notoriously difficult proposition under the almighty hand of capitalism.

None of these are easy to do and some may be downright impossible. But, like it or not, the infosec community has been brought to the table and can have a major voice in how these issues get fixed. Are we ready for the challenge?


April 30, 2018  5:34 PM

Algorithmic discrimination: A coming storm for security?

Rob Wright
Security

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists who understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year about the value of machine learning and artificial intelligence, just as in previous years, I wondered if the security industry truly does understand the dangers of bias and discrimination, and what kind of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost complete optimism and positivity. Algorithms, we’re told, will help save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products discriminate against certain risks over more pressing ones? Will algorithms be designed to prioritize certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products with algorithms that demonstrate a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by your average internet trolls and miscreants. They simply spent enough time studying how, for example, YouTube functions on a day-to-day basis and flooded the systems with content to figure out how they could weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be – unintentionally or purposefully – biased, and if those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.


April 27, 2018  7:35 PM

GDPR deadline: Keep calm and GDPR on

Peter Loshin
Security

You may know that the GDPR deadline — May 25, 2018 — is almost upon us.

In less than a month, the European Union will begin enforcing its new General Data Protection Regulation, or GDPR. Some companies will face disabling fines, as much as 20 million euros or 4% of annual global revenue, whichever is higher. Some companies will have spent millions to be compliant with the new rules on protecting the privacy of EU data subjects — anyone resident in the EU — while some companies will have spent nothing when the GDPR deadline arrives.

For example, according to a survey by technology non-profit CompTIA, U.S. companies are not doing well with GDPR preparations. It found that 52% of the 400 U.S. companies surveyed are still either exploring how GDPR is applicable to their business, trying to determine whether GDPR is a requirement for their business, or are simply unsure. The research also revealed that just 13% of firms say they are fully compliant with GDPR, 23% are “mostly compliant” and 12% claim they are “somewhat compliant.”

That is not an isolated finding. A poll released this month by Baker Tilly Virchow Krause, LLP, revealed that 90% of organizations do not have controls in place to be compliant with GDPR before the deadline.

GDPR deadline versus Y2K

In four weeks, once the GDPR deadline has passed, will the privacy Armageddon be upon us?

Probably not.

For IT and infosec pros of a certain age, the GDPR deadline echoes the panic of an earlier and more innocent time: January 1, 2000.

I certainly remember that time.

Also known as the year 2000 bug, the Y2K challenge, like GDPR, represented a problem that would require massive amounts of human, computing and financial resources to solve — and with a hard deadline that could not be argued with. The practice of coding years with just the last two digits in dates was clearly going to cause problems, and created its own industry for remediation of those problems in legacy systems of all types across the globe.

Much of the news coverage leading up to the millennium’s end focused on its impact on the world in the form of computers that could react unpredictably to the calendar change, especially all the embedded computers that controlled (and still control) so much of the modern landscape.

There were worries about whether air traffic control systems could cope with Y2K, worries that embedded computer-heavy airplanes would fall out of the sky, electric grids would fail, gas pumps would stop pumping and much worse was in store unless all systems were remediated.

The late software engineer Edward Yourdon, author of “Time Bomb 2000” and one of the leading voices of Y2K preparation, told me he had moved to a remote rural location where he was prepared to function without computers until the fallout cleared.

The GDPR deadline, on the other hand, represents an artificial milestone. After this date, if a company’s practices are not in line with the regulation and something happens as a result of those practices, the company may be fined — but the wheels won’t fall off unexpectedly, nor will any systems fail catastrophically and without notice.

Some of the big U.S. companies that will be affected by the GDPR, like Facebook, Twitter, Microsoft and many others, have already taken action. And many companies that believe they won’t be affected, or that aren’t sure, are taking the “wait and see” approach, rather than proactively addressing privacy concerns, at great cost, before worrying about the potentially huge fines.

Both approaches will make sense, depending on the company.

It may be heresy, but there are probably many U.S. companies that don’t need to worry too much about the upcoming GDPR deadline:

  • Failing companies need not worry about GDPR. If they are having trouble keeping the lights on, a huge GDPR penalty might spell the end of the company — but that doesn’t mean the company would be prospering in a world without privacy regulations.
  • Business-to-business companies that do not have EU data subjects as their customers likely have little to fear from GDPR enforcement.
  • Companies that do not solicit, collect or process personally identifiable information about their EU customers should also have little to fear from GDPR enforcement.

Most notably, there are — I hope — companies that don’t need to make special preparation for the GDPR deadline because while they may not be explicitly compliant with the GDPR, they already take the principles of privacy and security seriously.

Enforcement of the GDPR begins in a month, but that doesn’t mean the headlines on May 26 will herald the levying of massive fines against GDPR violators. In time the fines will surely rain down on violators, but companies with the right attitude toward privacy can stay calm, for now.

While the perceived importance of the Y2K challenge faded almost immediately after January 1, 2000, the importance of enforcing data privacy protections through the GDPR will only continue to grow after the deadline.


April 19, 2018  10:03 PM

CrowdStrike unveils Meltdown exploit in unusual fashion

Rob Wright
Meltdown

SAN FRANCISCO – CrowdStrike displayed a flair for the dramatic at RSA Conference Wednesday during the company’s “Oscars” event, where it unveiled a new Meltdown exploit.

The cybersecurity vendor unveiled the exploit, dubbed “MeltiKatz,” during the company’s 2018 Hacking Exposed Oscars. The contest recognizes the most formidable flaws and impressive exploits CrowdStrike saw over the last year. This year’s nominees included techniques such as a credential theft campaign that uses Microsoft’s Server Message Block protocol and a whitelisting bypass that abuses the InstallUtil command-line tool.

But this year, CrowdStrike threw a curve ball.

“And the winner is…actually none of them,” said CrowdStrike CEO George Kurtz, naming Meltdown as the winner. Kurtz then unveiled MeltiKatz, saying “Certainly, it wouldn’t be an RSA [Conference] without developing our own tools around this.”

CrowdStrike CTO Dmitri Alperovitch outlined the Meltdown exploit, which uses the Mimikatz tool, and reassured the audience it was developed by the vendor, not threat actors in the wild. “This is something we’re not yet seeing in real-world attacks, but we had one of our ninjas, Alex Ionescu, create something really cool,” he said.

The Meltdown vulnerability essentially enables unauthorized users to read privileged kernel memory on an Intel system. Alperovitch explained that operating system vendors introduced mitigations for Meltdown that tried to reduce or prevent data from leaking out of systems; those mitigations involved unmapping kernel memory from user-mode processes so intruders can’t access it.

But Alperovitch explained that you can’t fully unmap kernel memory, so while the mitigations address a lot of the problems, they don’t fully fix Meltdown. And, he said, the situation gets more complicated on Windows systems.

“Windows actually does not fully believe that the user-mode to kernel memory border is a security border that needs to be enforced,” he said, adding that someone using a privileged user mode can still do a lot of things on a Windows system even if it’s patched.

Alperovitch said Microsoft rightly decided the cost of fully mitigating against Meltdown, which would have unmapped all kernel memory from user mode, would have been too high in terms of performance impact. However, that means that Microsoft’s mitigation for Meltdown is disabled for processes run by administrators with high integrity tokens. “Even on a fully patched machine, you can still use Meltdown as an administrative app to do really cool things,” he said.

Because of the way Windows is designed, a threat actor who gains administrative rights could access parts of the registry that contain Windows NT LAN Manager password hashes, encrypted cached passwords for Active Directory and other sensitive data.

Ionescu initially tweeted about MeltiKatz in January, shortly after Meltdown and Spectre were disclosed. Despite the Meltdown exploit approach being publicly available, Alperovitch said CrowdStrike has seen no indication threat actors have used this type of attack.

CrowdStrike did not release the technical specifications for MeltiKatz, but Alperovitch said the demo showed “the power of Meltdown” against even systems that have been patched. While CrowdStrike’s Oscars event may have been a bit over the top, the MeltiKatz demo was a chilling reminder of how far-reaching the vulnerability is.


April 17, 2018  10:36 PM

FedRAMP security requirements put a premium on automation

Rob Wright
FedRAMP

When it comes to the federal government’s cloud rules, security automation is king.

That was the message from Matt Goodrich, director for the Federal Risk and Authorization Management Program (FedRAMP) at the General Services Administration (GSA). Goodrich spoke at the Cloud Security Alliance Summit Monday during RSA Conference 2018 and talked about the history of and lessons learned from FedRAMP, which was first introduced in 2011.

“We wanted to standardize how the federal government does authorizations for cloud products,” Goodrich said, describing the chaos of each individual department and agency having its own set of guidelines and approaches for approving cloud service providers.

Goodrich described in detail the vision behind the regulatory program, the security issues that drove its creation and how FedRAMP security requirements were developed. One of the more interesting details he discussed was the importance of security automation for those requirements.

Three impact levels

FedRAMP has a three-tiered system for cloud service offerings based on impact level: Low, Moderate and High. Low impact systems include public websites with non-sensitive data, which have 35 FedRAMP security requirements. Goodrich said his organization has reduced the number of requirements for Low impact systems, which had been more than 100. “With these systems, we’re looking to ask [cloud providers]: Do you have a basic security program? Do you do scanning, do you patch, and do you have vulnerability management processes like that,” he said.

Moderate impact systems, meanwhile, include approximately 80% of all data across the federal government, and as such they have 325 FedRAMP security requirements for cloud providers. That includes having a well-operated risk management program, Goodrich said, as well as encryption and access controls around the data.

High impact systems are another story. “These are some of the most sensitive systems we have across the government,” Goodrich said, such as Department of Defense data. Compromises of these systems’ data, he said, could lead to financial catastrophes for government agencies and private sector organizations or even loss of life. High impact systems have 420 FedRAMP security requirements, and the focus of those requirements is on security automation.

“Basically we’re looking for a high degree of automation behind a lot of what these high impact systems do,” Goodrich said. “If you can cut what a human can do and have a machine do it, then that’s what’s going to have to be implemented. It’s the difference between moderate and high systems.”

A lot of the FedRAMP security requirements for moderate and high systems are the same, Goodrich said, but how cloud providers implement the controls for those requirements differs. Having configuration management tools in place, for example, will get you a contract to maintain moderate impact systems in the cloud, but having automated configuration management tools will get you in the door for high impact systems.
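To make the manual-versus-automated distinction concrete, here is a rough, hypothetical sketch of the kind of automated configuration management Goodrich is describing: a baseline declared in code, continuous drift detection and remediation without a human in the loop. The settings and function names are illustrative only, not actual FedRAMP controls.

```kotlin
// Hypothetical sketch of automated configuration management:
// a declared baseline, a drift check and automatic remediation.

data class Setting(val key: String, val required: String)

val baseline = listOf(
    Setting("tls.min_version", "1.2"),
    Setting("disk.encryption", "enabled"),
    Setting("audit.logging", "enabled")
)

// In a real system this would query the host or a cloud provider API; stubbed here.
fun currentValue(key: String): String =
    if (key == "tls.min_version") "1.0" else "enabled" // "1.0" simulates drift

fun remediate(setting: Setting) {
    // A manual process would open a ticket; an automated one applies the fix itself.
    println("Auto-remediating ${setting.key} -> ${setting.required}")
}

fun main() {
    baseline
        .filter { currentValue(it.key) != it.required } // detect drift
        .forEach(::remediate)                           // no human in the loop
}
```

The difference Goodrich points to is not the tooling itself but who, or what, closes the loop.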

Security automation is something that’s been talked about for years, but new developments and investments around AI and automation seem to have reignited interest lately. Goodrich’s insights echo similar statements at the RSA Conference this week from the private sector on the value of automated systems that not only alleviate the burden on infosec professionals but also enhance security operations within an organization.

CrowdStrike, for instance, introduced Falcon X, the newest part of its cloud-based Falcon platform, which automates malware analysis processes to help enterprises respond to security incidents faster. In addition, ISACA’s State of Cybersecurity 2018 report emphasized the value of security automation in offsetting the shortage of skilled infosec personnel within an organization.

FedRAMP’s security requirements make it clear the U.S. government doesn’t trust humans to handle its most sensitive data – which raises the question: Should enterprises adopt the same approach?


March 31, 2018  6:01 PM

Privacy protections are needed for government overreach, too

Rob Wright
Security

After the unfortunate yet predictable Facebook episode involving Cambridge Analytica, several leaders in the technology industry were quick to pledge they would never allow that kind of corporate misuse of user data.

The fine print in those pledges, of course, is the word ‘corporate,’ and it’s exposed a glaring weakness in the privacy protections that technology companies have brought to bear.

Last week at IBM Think 2018, Big Blue’s CEO Ginni Rometty stressed the importance of “data trust and responsibility” and called on not only technology companies but all enterprises to be better stewards of data. She was joined by IBM customers who echoed those remarks; for example, Lowell McAdam, chairman and CEO of Verizon Communications, said he didn’t ever want to be in the position that some Silicon Valley companies had found themselves following data misuse or exposures, lamenting that once users’ trust has been broken it can never be repaired.

Other companies piled on the Facebook controversy and played up their privacy protections for users. Speaking at a televised town hall event for MSNBC this week, Apple CEO Tim Cook called privacy “a human right” and criticized Facebook, saying he “wouldn’t be in this situation.” Apple followed Cook’s remarks by unveiling new privacy features related to European Union’s General Data Protection Regulation.

Those pledges and actions are important, but they ignore a critical threat to privacy: government overreach. The omission of that threat might be purposeful. Verizon, for example, found itself in the crosshairs of privacy advocates in 2013 following the publication of National Security Agency (NSA) documents leaked by Edward Snowden. Those documents revealed the telecom giant was delivering American citizens’ phone records to the NSA under a secret court order for bulk surveillance.

In addition, Apple has taken heat for its decision to remove VPN and encrypted messaging apps from its App Store in China following pressure from the Chinese government. And while Tim Cook’s company deserved recognition for defending encryption from the FBI’s “going dark” effort, it should be noted that Apple (along with Google, Microsoft and of course Facebook) supported the CLOUD Act, which was recently approved by Congress and has roiled privacy activists.

The misuse of private data at the hands of greedy or unethical corporations is a serious threat to users’ security, but it’s not the only predator in the forest. Users should demand strong privacy protections from all threats, including bulk surveillance and warrantless spying, and we shouldn’t allow companies to pay lip service to privacy rights only when the aggressor is a corporate entity.

Rometty made an important statement at IBM Think when she said she believes all companies will be judged by how well they protect their users’ data. That’s true, but there should be no exemptions for what they will protect that data from, and no denials about the dangers of government overreach.


March 30, 2018  6:23 PM

Apple GDPR privacy protection will float everyone’s privacy boat

Peter Loshin

With less than two months before the European Union’s General Data Protection Regulation goes into effect, Apple is making notable changes in the name of user privacy. For everyone.

While all companies that collect data from EU data subjects will be subject to the GDPR, Apple has stepped up to announce that privacy, being a fundamental human right, should be available to everyone, including those outside the protection of the EU.

In a move that is raising hope for anyone concerned about data privacy, Apple GDPR protections will be offered to all Apple customers, not just the EU data subjects covered by the GDPR.

The new privacy features are part of Apple’s latest updates to its operating systems — macOS 10.13.4, iOS 11.3 and tvOS 11.3 — released on Thursday. The most obvious change, for now, will be a new splash screen detailing Apple’s privacy policy as well as a new icon that will be displayed when an Apple feature wants to collect personal information.

More Apple GDPR support will come later this year, when the web page for managing Apple ID accounts is updated to give users easier access to key privacy features mandated under the EU privacy regulation, including downloading a copy of all the personal data Apple stores about them, correcting account information and temporarily deactivating or permanently deleting their account. The Apple GDPR features will roll out to the EU first after GDPR enforcement begins, but eventually they will be available to every Apple customer no matter where they are.

Apple GDPR protections for all

Speaking at a town-hall event sponsored by MSNBC the day before the big update release, Apple CEO Tim Cook stressed that the company profits from the sale of hardware — not the sale of personal data collected on its customers. Cook also took a shot at Facebook for its latest troubles related to allowing improper use of personal data by Cambridge Analytica, saying that privacy is a fundamental human right — a sentiment also spelled out in the splash screen displayed by Apple’s new OS versions.

Anyone concerned about data privacy should welcome Apple’s move, but it may not be as easy for other companies to follow Apple’s lead on data privacy, even with the need to comply with GDPR.

The great thing about Apple’s GDPR-compliance-for-everyone move is that it shows other companies the choice in front of them: they can raise the ethical bar and protect personal data to the highest standard, set by the GDPR rules, for all of their users, or they can go to the effort and expense of maintaining two different privacy systems and complying with GDPR only to the extent the law requires.

On the one hand there is the requirement for GDPR-compliance regarding EU data subjects, where consumers are granted the right to be forgotten and the right to be notified when their data has been compromised, among other rights. On the other hand, companies can choose to continue to collect and trade personal data of non-EU data subjects and evade consequences for privacy violations on those people by complying with the minimal protections required by the patchwork of less stringent legislation in effect in the rest of the world.

While a technology company like Apple can focus its efforts on selling hardware while protecting its customers’ data, it remains to be seen what the big internet companies — like Facebook, Google, Amazon and Twitter — will do.

Companies whose business models depend on the unfettered collection, use and sale of consumer data may opt to build a two-tier privacy model: more protection for EU residents under GDPR, and less protection for everyone else.

As a member of the “everyone else” group, I’d rather not be treated like a second-class citizen when it comes to privacy rights.

