Security Bytes


May 9, 2018  3:43 PM

Google I/O’s security and privacy focus missing on day one

Profile: Michael Heller

It’s fairly easy to find stories sparking security and privacy concerns regarding a Google product or service — Search, Chrome, Android, AdSense and more — but if you watched or attended Google I/O, you might be convinced everything is fine.

On the first day of Google I/O, there were effectively three keynotes — the main consumer keynote headlined by CEO Sundar Pichai; the developer keynote headlined by Jason Titus, vice president of the developer product group; and what is colloquially known as the Android keynote headed by developers Dan Sandler, Romain Guy and Chet Haase.

Google I/O’s security content, however, was scant. During the course of those talks, which lasted nearly five hours, there were about three mentions of security and privacy — one in the developer keynote regarding Firebase, Google’s cross-platform app developer tool, including help for GDPR concerns; and two in the Android keynote regarding the biometrics API being opened up to include authentication types besides fingerprints and how Android P will shut down access for apps that monitor network traffic.

Sandler did mention a session on Android security scheduled for Thursday, but there were more than enough moments during day one for Google to point out how it was handling security concerns in Android. Research into the Android P developer previews had uncovered security improvements, including locking down access to device cameras and microphones for background apps, better encryption for backup data, random MAC addresses on network connections, more HTTPS and more alerts when an app is using an outdated API.

Even when Google’s IoT platform, Android Things, was announced as being out of beta and ready for the limelight, there was no mention of security despite that being one of the major selling points of the platform. Google claims Android Things will make IoT safer because the company is promising three years of security and reliability updates. (Whether three years of support meaningfully moves the needle for IoT security is a question for another time.)

Privacy is constantly a concern for Google Assistant and its growing army of always-listening devices, but the closest Google got to touching on privacy here was in describing the new “continued conversations” feature — which will allow more back-and-forth conversations with the Assistant without requiring the “OK Google” trigger — as something users would have to turn on, implying it would be opt-in to allow the Assistant to keep listening when you don’t directly address it.

Beyond all of these areas, Google has a history of privacy concerns around how much data it collects on users. Google has a better-defined policy about how that data is shared and what data is shared with advertisers and other partners, but it’s not hard to imagine Google getting swept up in a backlash similar to what Facebook has faced in the aftermath of the Cambridge Analytica episode. Given that controversy, it’s surprising that Google didn’t want to proactively address the issue and reassert how it protects user data. It’s not as though staying quiet will make the public forget its concerns about Google.

Google I/O’s security focus was largely non-existent on the first day of the show. Time will tell whether this is an anomaly or something more concerning.

May 3, 2018  5:58 PM

Cybersecurity pervasiveness subsumes all security concerns

Profile: Michael Heller

Given the increased digitization of society and the explosion of devices generating data (including retail, social media, search, mobile, and the internet of things), it seems almost inevitable that cybersecurity pervasiveness would eventually touch every aspect of life. But, it feels more like everything has been subsumed by infosec.

All information in our lives is now digital — health records, location data, search habits, not to mention all of the info we willingly share on social media — and all of that data has value to us. However, it also has value to companies that can use it to build more popular products and serve ads, and it has value to malicious actors, too.

The conflict between the interests of these three groups means cybersecurity pervasiveness is present in every facet of life. Users want control of their data in order to have a semblance of privacy. Corporations want to gather and keep as much data as possible, just in case trends can be found in it to increase the bottom line. And, malicious actors want to use that data for financial gain — selling PII, credit info or intellectual property on the dark web, holding systems for ransom, etc. — or political gain.

None of these cybersecurity pervasiveness trends are necessarily new for those in the infosec community, but issues like identity theft or stolen credit card numbers haven’t always registered with the general public or mass media as cybersecurity problems because they tended to be considered in individual terms — a few people here and there had those sorts of issues but it couldn’t be too widespread, right?

Now, there are commercials on major TV networks pitching “free dark web scans” to let you know whether your data is being sold on the black market. (Spoiler alert: your data has almost certainly been compromised; it’s more a matter of whether you’re unlucky enough to have your ID chosen from the pile by malicious actors. And, a dark web scan won’t make the awful process of getting a new Social Security number any better.)

Data breaches are so common and so far-reaching that everyone has either been directly affected or is no more than about two degrees of separation from someone who has been. Remember: the Yahoo breach alone affected 3 billion accounts and the latest stats say there are currently only about 4.1 billion people who have internet access. The Equifax breach affected 148 million U.S. records and the U.S. has an estimated population of 325 million.
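A quick back-of-the-envelope check of those proportions makes the point plainly. A minimal Python sketch, using the rounded, publicly reported figures cited above rather than fresh data:

# Rough proportions based on the publicly reported figures cited above.
yahoo_accounts = 3_000_000_000   # accounts affected by the Yahoo breach
internet_users = 4_100_000_000   # rough estimate of people online worldwide

equifax_records = 148_000_000    # U.S. records affected by the Equifax breach
us_population = 325_000_000      # estimated U.S. population

print(f"Yahoo: {yahoo_accounts / internet_users:.0%} of internet users' worth of accounts")
print(f"Equifax: {equifax_records / us_population:.0%} of the U.S. population")
# Prints roughly 73% and 46% -- hence the "two degrees of separation" claim.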

Everyone has been affected in one way or another. Everything we do can be tracked including our location, our search and purchase history, our communications and more.

But, cybersecurity pervasiveness no longer affects only financial issues, and the general public has seen in stark reality how digital platforms and the idea of truth itself can be manipulated by threat actors for political gain.

Cyberattacks have become shows of nation-state power in a type of new Cold War, at least until those attacks hit industrial systems and cause real-world harm.

Just as threat actors can find the flaws in software, there are flaws in human psychology that can be exploited as part of traditional phishing schemes or fake news campaigns designed to sway public opinion or even manipulate elections.

For all of the issues that arise from financially motivated threat actors, the security fixes range from the relatively simple to implement — encryption, data protection, data management, stronger privacy controls, and so on — to far more complex undertakings, like replacing the woefully outmatched Social Security number as a primary form of ID.

However, the politically-minded attacks are far more difficult to mitigate, because you can’t patch human psychology. Better critical reading skills are hard to build across people who might not believe there’s even an issue that needs fixing. Pulling people out of echo chambers will be difficult.

Social networks need to completely change their platforms to be better at enforcing abuse policies and to devalue constant sharing of links. And the media also needs to stop prioritizing conflict and inflammatory headlines over real news. All of this means prioritizing the public good over profits, a notoriously difficult proposition under the almighty hand of capitalism.

None of these are easy to do and some may be downright impossible. But, like it or not, the infosec community has been brought to the table and can have a major voice in how these issues get fixed. Are we ready for the challenge?


April 30, 2018  5:34 PM

Algorithmic discrimination: A coming storm for security?

Profile: Rob Wright
Security

“If you don’t understand algorithmic discrimination, then you don’t understand discrimination in the 21st century.”

Bruce Schneier’s words, which came at the end of his wide-ranging session at RSA Conference last week, continued to echo in my ears long after I returned from San Francisco. Schneier, the well-known security expert, author and CTO of IBM Resilient, was discussing how technologists can become more involved in government policy, and he advocated for joint computer science-law programs in higher education.

“I think that’s very important. Right now, if you have a computer science-law degree, then you become a patent attorney,” he said. “Yes, it makes you a lot of money, but it would be great if you could work for the ACLU, the Southern Poverty Law Center and the NAACP.”

Those organizations, he argued, need technologists who understand algorithmic discrimination. And given some recent events, it’s hard to argue with Schneier’s point. But with all of the talk at RSA Conference this year about the value of machine learning and artificial intelligence, just as in previous years, I wondered if the security industry truly does understand the dangers of bias and discrimination, and what kind of problems will come to the surface if it doesn’t.

Inside the confines of the Moscone Center, algorithms were viewed with almost complete optimism and positivity. Algorithms, we’re told, will help save time and money for enterprises that can’t find enough skilled infosec professionals to fill their ranks.

But when you step outside the infosec sphere, it’s a different story. We’re told how algorithms, in fact, won’t save us from vicious conspiracy theories and misinformation, or hate speech and online harassment, or any number of other negative factors afflicting our digital lives.

If there are any reservations about machine learning and AI, they are generally limited to a few areas such as improper training of AI models or how those models are being used by threat actors to aid cyberattacks. But there’s another issue to consider: how algorithmic discrimination and bias could negatively impact these models.

This isn’t to say that algorithmic discrimination will necessarily afflict cybersecurity technology in a way that reveals racial or gender bias. But for an industry that so often misses the mark on the most dangerous vulnerabilities and persistent yet preventable threats, it’s hard to believe infosec’s own inherent biases won’t somehow be reflected in the machine learning and AI-based products that are now dominating the space.

Will these products discriminate against certain risks over more pressing ones? Will algorithms be designed to prioritize certain types of data and threat intelligence at the expense of others, leading to data discrimination? It’s also not hard to imagine racial and ethnic bias creeping into security products with algorithms that demonstrate a predisposition toward certain languages and regions (Russian and Eastern Europe, for example). How long will it take for threat actors to pick up on those biases and exploit them?
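To make that risk concrete, here is a deliberately toy Python sketch (not any vendor’s model) of how a detector whose training data skews toward one language or region can quietly ignore everything else. The indicator list is invented for illustration:

# Toy "detector" whose signatures were derived almost entirely from
# Russian-language phishing samples. The tokens are invented for illustration,
# not drawn from any real product or threat feed.
TRAINED_INDICATORS = {"парол", "счет", "банк", "срочно"}  # Russian-derived tokens only

def looks_malicious(message: str) -> bool:
    """Flag a message only if it contains an indicator the model was trained on."""
    text = message.lower()
    return any(token in text for token in TRAINED_INDICATORS)

samples = [
    "Срочно подтвердите пароль для вашего банка",     # Russian-language lure
    "Urgent: verify your bank password immediately",  # the same lure in English
]

for s in samples:
    print(looks_malicious(s), "-", s)
# Prints True for the Russian sample and False for the English one:
# the identical attack slips through once it falls outside the training bias.

It is a caricature, but the failure mode, a model that inherits the blind spots of whoever assembled its training data, is exactly the kind of bias threat actors could learn to exploit.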

It’s important to note that in many cases outside the infosec industry, the algorithmic havoc is wreaked not by masterful black hats and evil geniuses but by average internet trolls and miscreants. They simply spend enough time studying how, for example, YouTube functions on a day-to-day basis, then flood the system with content once they figure out how to weaponize search engine optimization.

If Google can’t construct algorithms to root out YouTube trolls and prevent harassers from abusing the site’s search and referral features, then why do we in the infosec industry believe that algorithms will be able to detect and resolve even the low-hanging fruit that afflicts so many organizations?

The question isn’t whether the algorithms will be flawed. These machine learning and AI systems are built by humans, and flaws come with the territory. The question is whether they will be – unintentionally or purposefully – biased, and if those biases will be fixed or reinforced as the systems learn and grow.

The world is full of examples of algorithms gone wrong or nefarious actors gaming systems to their advantage. It would be foolish to think infosec will somehow be exempt.


April 27, 2018  7:35 PM

GDPR deadline: Keep calm and GDPR on

Profile: Peter Loshin
Security

You may know that the GDPR deadline — May 25, 2018 — is almost upon us.

In less than a month, the European Union will begin enforcing its new General Data Protection Regulation, or GDPR. Some companies will face disabling fines of as much as 20 million euros or 4% of annual global revenue, whichever is higher. Some companies will have spent millions to be compliant with the new rules on protecting the privacy of EU data subjects — anyone resident in the EU — while some companies will have spent nothing when the GDPR deadline arrives.

For example, according to a survey by technology non-profit CompTIA, U.S. companies are not doing well with GDPR preparations. The survey found that 52% of the 400 U.S. companies polled are still either exploring how GDPR applies to their business, trying to determine whether GDPR is a requirement for their business, or are simply unsure. The research also revealed that just 13% of firms say they are fully compliant with GDPR, 23% are “mostly compliant” and 12% claim they are “somewhat compliant.”

That is not an isolated finding. A poll released this month by Baker Tilly Virchow Krause, LLP, revealed that 90% of organizations do not have controls in place to be compliant with GDPR before the deadline.

GDPR deadline versus Y2K

In four weeks, once the GDPR deadline has passed, will the privacy Armageddon be upon us?

Probably not.

For IT and infosec pros of a certain age, the GDPR deadline echoes the panic of an earlier and more innocent time: January 1, 2000.

I certainly remember that time.

Also known as the year 2000 bug, the Y2K challenge, like GDPR, represented a problem that would require massive amounts of human, computing and financial resources to solve — and with a hard deadline that could not be argued with. The practice of coding years with just the last two digits in dates was clearly going to cause problems, and created its own industry for remediation of those problems in legacy systems of all types across the globe.
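For readers who weren’t doing remediation work at the time, the core defect was simple. A minimal Python sketch of the kind of arithmetic that broke:

# The classic Y2K defect: storing and comparing years as two digits.
def years_since(issued_yy: int, current_yy: int) -> int:
    """Pre-Y2K-style age calculation using two-digit years."""
    return current_yy - issued_yy

# A record issued in 1998 ("98"), evaluated in 2000 ("00"):
print(years_since(98, 0))   # -98, instead of the correct 2

Logic built on that arithmetic could suddenly treat accounts, licenses or maintenance schedules as expired or nonsensical the moment the calendar rolled over, which is why the remediation effort was so large and so deadline-bound.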

Much of the news coverage leading up to the millennium’s end focused on its impact on the world in the form of computers that could react unpredictably to the calendar change, especially all the embedded computers that controlled (and still control) so much of the modern landscape.

There were worries about whether air traffic control systems could cope with Y2K, worries that airplanes heavy with embedded computers would fall out of the sky, that electric grids would fail, that gas pumps would stop pumping and that much worse was in store unless all systems were remediated.

The late software engineer Edward Yourdon, author of “Time Bomb 2000” and one of the leading voices of Y2K preparation, told me he had moved to a remote rural location where he was prepared to function without computers until the fallout cleared.

The GDPR deadline, on the other hand, represents an artificial milestone. After this date, if a company’s practices are not in line with the regulation and something happens as a result of those practices, the company may be fined — but the wheels won’t fall off unexpectedly, nor will any systems fail catastrophically and without notice.

Some of the big U.S. companies that will be affected by the GDPR, like Facebook, Twitter, Microsoft and many others, have already taken action. And many companies that believe they won’t be affected, or that aren’t sure, are taking the “wait and see” approach, rather than attempting to be proactive and address, at great cost, privacy concerns before worrying about the potentially huge fines.

Both approaches will make sense, depending on the company.

It may be heresy, but there are probably many U.S. companies that don’t need to worry too much about the upcoming GDPR deadline:

  • Failing companies need not worry about GDPR. If they are having trouble keeping the lights on, a huge GDPR penalty might spell the end of the company — but that doesn’t mean the company would be prospering in a world without privacy regulations.
  • Business-to-business companies that do not have EU data subjects as their customers likely have little to fear from GDPR enforcement.
  • Companies that do not solicit, collect or process personally identifiable information about their EU customers should also have little to fear from GDPR enforcement.

Most notably, there are — I hope — companies that don’t need to make special preparation for the GDPR deadline because while they may not be explicitly compliant with the GDPR, they already take the principles of privacy and security seriously.

Enforcement of the GDPR begins in a month, but that doesn’t mean the headlines on May 26 will herald the levying of massive fines against GDPR violators. In time the fines will surely rain down on violators, but companies with the right attitude toward privacy can stay calm, for now.

While the perceived importance of the Y2K challenge faded almost immediately after January 1, 2000, the importance of enforcing data privacy protections through the GDPR will only continue to grow after the deadline.


April 19, 2018  10:03 PM

CrowdStrike unveils Meltdown exploit in unusual fashion

Profile: Rob Wright
Meltdown

SAN FRANCISCO – CrowdStrike displayed a flair for the dramatic at RSA Conference Wednesday during the company’s “Oscars” event, where it unveiled a new Meltdown exploit.

The cybersecurity vendor unveiled the exploit, dubbed “MeltiKatz,” during the company’s 2018 Hacking Exposed Oscars. The contest recognizes the most formidable flaws and impressive exploits CrowdStrike saw over the last year. This year’s nominees included techniques such as a credential theft campaign that uses Microsoft’s Server Message Block protocol and a whitelisting bypass that abuses the InstallUtil command line tool.

But this year, CrowdStrike threw a curve ball.

“And the winner is…actually none of them,” said CrowdStrike CEO George Kurtz, naming Meltdown as the winner. Kurtz then unveiled MeltiKatz, saying “Certainly, it wouldn’t be an RSA [Conference] without developing our own tools around this.”

CrowdStrike CTO Dmitri Alperovitch outlined the Meltdown exploit, which uses the Mimikatz tool, and reassured the audience it was developed by the vendor, not threat actors in the wild. “This is something we’re not yet seeing in real-world attacks, but we had one of our ninjas, Alex Ionescu, create something really cool,” he said.

The Meltdown vulnerability essentially enables unauthorized users to read privileged kernel memory on an Intel system. Alperovitch explained that operating system vendors introduced mitigations for Meltdown that tried to reduce or prevent data from leaking out of systems; those mitigations involved unmapping kernel memory from user-mode processes so intruders can’t access it.
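On the Linux side, one can check whether that unmapping (known as kernel page-table isolation, or PTI) is in effect by reading the kernel’s self-reported status. A minimal Python sketch, assuming a 4.15-or-later kernel that exposes the sysfs entry:

# Read the kernel's self-reported Meltdown status (available since Linux 4.15).
# Typical values: "Mitigation: PTI" when kernel page-table isolation is on,
# or "Vulnerable" when it is not.
from pathlib import Path

status_file = Path("/sys/devices/system/cpu/vulnerabilities/meltdown")
if status_file.exists():
    print("Meltdown status:", status_file.read_text().strip())
else:
    print("Kernel does not expose Meltdown status (pre-4.15 kernel or non-Linux system)")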

But Alperovitch explained you can’t fully unmap kernel memory, so while the mitigations address a lot of the problems, they don’t fully fix Meltdown. And, he said, the situation gets more complicated on Windows systems.

“Windows actually does not fully believe that the user-mode to kernel memory border is a security border that needs to be enforced,” he said, adding that someone using a privileged user mode can still do a lot of things on a Windows system even if it’s patched.

Alperovitch said Microsoft rightly decided the cost of fully mitigating against Meltdown, which would have unmapped all kernel memory from user mode, would have been too high in terms of performance impact. However, that means that Microsoft’s mitigation for Meltdown is disabled for processes run by administrators with high integrity tokens. “Even on a fully patched machine, you can still use Meltdown as an administrative app to do really cool things,” he said.

Because of the way Windows is designed, a threat actor who gains administrative rights could access parts of the registry that contain Windows NT LAN Manager password hashes, encrypted cached passwords for Active Directory and other sensitive data.

Ionescu initially tweeted about MeltiKatz in January, shortly after Meltdown and Spectre were disclosed. Despite the Meltdown exploit approach being publicly available, Alperovitch said CrowdStrike has seen no indication threat actors have used this type of attack.

CrowdStrike did not release the technical specifications for MeltiKatz, but Alperovitch said the demo showed “the power of Meltdown” against even systems that have been patched. While CrowdStrike’s Oscars event may have been a bit over the top, the MeltiKatz demo was a chilling reminder of how far-reaching the vulnerability is.


April 17, 2018  10:36 PM

FedRAMP security requirements put a premium on automation

Profile: Rob Wright
FedRAMP

When it comes to the federal government’s cloud rules, security automation is king.

That was the message from Matt Goodrich, director of the Federal Risk and Authorization Management Program (FedRAMP) at the General Services Administration. Goodrich spoke at the Cloud Security Alliance Summit Monday during RSA Conference 2018 and talked about the history of and lessons learned from FedRAMP, which was first introduced in 2011.

“We wanted to standardize how the federal government does authorizations for cloud products,” Goodrich said, describing the chaos of each individual department and agency having its own set of guidelines and approaches for approving cloud service providers.

Goodrich described in detail the vision behind the regulatory program, the security issues that drove its creation and how FedRAMP security requirements were developed. One of the more interesting details he discussed was the importance of security automation for those requirements.

Three impact levels

FedRAMP has a three-tiered system for cloud service offerings based on impact level: Low, Moderate and High. Low impact systems include public websites with non-sensitive data, which have 35 FedRAMP security requirements. Goodrich said his organization has reduced the number of requirements for Low impact systems, which had been more than 100. “With these systems, we’re looking to ask [cloud providers]: Do you have a basic security program? Do you do scanning, do you patch, and do you have vulnerability management processes like that,” he said.

Moderate impact systems, meanwhile, include approximately 80% of all data across the federal government, and as such they have 325 FedRAMP security requirements for cloud providers. That includes having a well-operated risk management program, Goodrich said, as well as encryption and access controls around the data.

High impact systems are another story. “These are some of the most sensitive systems we have across the government,” Goodrich said, such as Department of Defense data. Compromises of these systems’ data, he said, could lead to financial catastrophes for government agencies and private sector organizations or even loss of life. High impact systems have 420 FedRAMP security requirements, and the focus of those requirements is on security automation.

“Basically we’re looking for a high degree of automation behind a lot of what these high impact systems do,” Goodrich said. “If you can cut what a human can do and have a machine do it, then that’s what’s going to have to be implemented. It’s the difference between moderate and high systems.”

A lot of the FedRAMP security requirements for moderate and high systems are the same, Goodrich said, but how cloud providers implement the controls for those requirements differs. Having configuration management tools in place, for example, will get you a contract to maintain moderate impact systems in the cloud, but having automated configuration management tools will get you in the door for high impact systems.
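As a purely illustrative sketch of the distinction Goodrich is drawing (not a FedRAMP artifact, and with hypothetical baseline keys and values), here is a minimal Python example in which a machine, rather than an analyst, compares live settings against a baseline and flags drift:

# Hypothetical baseline of required settings for a hardened host.
BASELINE = {
    "ssh_password_auth": "disabled",
    "disk_encryption": "enabled",
    "tls_min_version": "1.2",
}

def check_drift(live_config: dict) -> list:
    """Return findings where the live system deviates from the baseline."""
    findings = []
    for key, expected in BASELINE.items():
        actual = live_config.get(key, "missing")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# In practice the live values would be pulled by an agent or API on a schedule;
# they are hard-coded here to show a drift report generated without human review.
live = {"ssh_password_auth": "enabled", "disk_encryption": "enabled"}
for finding in check_drift(live):
    print("DRIFT:", finding)

Scale that loop out across every control and every host, run it continuously, and you have the flavor of automation that separates a moderate authorization from a high one.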

Security automation is something that’s been talked about for years, but new developments and investments around AI and automation seem to have reignited interest lately. Goodrich’s insights echo similar statements at the RSA Conference this week from the private sector on the value of automated systems that not only alleviate the burden on infosec professionals but also enhance security operations within an organization.

CrowdStrike, for instance, introduced Falcon X, the newest part of its cloud-based Falcon platform, which automates malware analysis processes to help enterprises respond to security incidents faster. In addition, ISACA’s State of Cybersecurity 2018 report emphasized the value of security automation in offsetting the shortage of skilled infosec personnel within an organization.

FedRAMP’s security requirements make it clear the U.S. government doesn’t trust humans to handle its most sensitive data – which raises the question: Should enterprises adopt the same approach?


March 31, 2018  6:01 PM

Privacy protections are needed against government overreach, too

Profile: Rob Wright
Security

After the unfortunate yet predictable Facebook episode involving Cambridge Analytica, several leaders in the technology industry were quick to pledge they would never allow that kind of corporate misuse of user data.

The fine print in those pledges, of course, is the word ‘corporate,’ and it’s exposed a glaring weakness in the privacy protections that technology companies have brought to bear.

Last week at IBM Think 2018, Big Blue’s CEO Ginni Rometty stressed the importance of “data trust and responsibility” and called on not only technology companies but all enterprises to be better stewards of data. She was joined by IBM customers who echoed those remarks; for example, Lowell McAdam, chairman and CEO of Verizon Communications, said he didn’t ever want to be in the position that some Silicon Valley companies had found themselves in following data misuse or exposures, lamenting that once users’ trust has been broken it can never be repaired.

Other companies piled on the Facebook controversy and played up their privacy protections for users. Speaking at a televised town hall event for MSNBC this week, Apple CEO Tim Cook called privacy “a human right” and criticized Facebook, saying he “wouldn’t be in this situation.” Apple followed Cook’s remarks by unveiling new privacy features related to the European Union’s General Data Protection Regulation.

Those pledges and actions are important, but they ignore a critical threat to privacy: government overreach. The omission of that threat might be purposeful. Verizon, for example, found itself in the crosshairs of privacy advocates in 2013 following the publication of National Security Agency (NSA) documents leaked by Edward Snowden. Those documents revealed the telecom giant was delivering American citizens’ phone records to the NSA under a secret court order for bulk surveillance.

In addition, Apple has taken heat for its decision to remove VPN and encrypted messaging apps from its App Store in China following pressure from the Chinese government. And while Tim Cook’s company deserved recognition for defending encryption from the FBI’s “going dark” effort, it should be noted that Apple (along with Google, Microsoft and of course Facebook) supported the CLOUD Act, which was recently approved by Congress and has roiled privacy activists.

The misuse of private data at the hands of greedy or unethical corporations is a serious threat to users’ security, but it’s not the only predator in the forest. Users should demand strong privacy protections from all threats, including bulk surveillance and warrantless spying, and we shouldn’t allow companies to pay lip service to privacy rights only when the aggressor is a corporate entity.

Rometty made an important statement at IBM Think when she said she believes all companies will be judged by how well they protect their users’ data. That’s true, but there should be no exemptions for what they will protect that data from, and no denials about the dangers of government overreach.


March 30, 2018  6:23 PM

Apple GDPR privacy protection will float everyone’s privacy boat

Profile: Peter Loshin

With less than two months before the European Union’s General Data Protection Regulation goes into effect, Apple is making notable changes in the name of user privacy. For everyone.

While all companies that collect data from EU data subjects will be subject to the GDPR, Apple has stepped up to announce that privacy, being a fundamental human right, should be available to everyone, including those outside the protection of the EU.

In a move that is raising hope for anyone concerned about data privacy, Apple GDPR protections will be offered to all Apple customers, not just the EU data subjects covered by the GDPR.

The new privacy features are part of Apple’s latest updates to its operating systems — macOS 10.13.4, iOS 11.3 and tvOS 11.3 — released on Thursday. The most obvious change, for now, will be a new splash screen detailing Apple’s privacy policy, as well as a new icon that will be displayed when an Apple feature wants to collect personal information.

More Apple GDPR support will come later this year when the web page for managing Apple ID accounts is updated to allow easier access to key privacy features mandated under the EU privacy regulation, including the ability to download a copy of all the personal data Apple stores, correct account information and temporarily deactivate or permanently delete the account. The Apple GDPR features will roll out to the EU first after GDPR enforcement begins, but eventually they will be available to every Apple customer no matter where they are.

Apple GDPR protections for all

Speaking at a town-hall event sponsored by MSNBC the day before the big update release, Apple CEO Tim Cook stressed that the company profits from the sale of hardware — not the sale of personal data collected on its customers. Cook also took a shot at Facebook for its latest troubles related to allowing improper use of personal data by Cambridge Analytica, saying that privacy is a fundamental human right — a sentiment also spelled out in the splash screen displayed by Apple’s new OS versions.

Anyone concerned about data privacy should welcome Apple’s move, but it may not be as easy for other companies to follow Apple’s lead on data privacy, even with the need to comply with GDPR.

The great thing about Apple’s GDPR-compliance-for-everyone move is that it shows the way for other companies: rather than attempting to maintain two different systems for privacy protections, companies can choose to raise the ethical bar for maintaining and supporting personal data privacy to the highest standard, set by the GDPR rules. The alternative is to go to the effort and expense of complying with GDPR only to the extent required by law.

On the one hand, there is the requirement for GDPR compliance regarding EU data subjects, where consumers are granted the right to be forgotten and the right to be notified when their data has been compromised, among other rights. On the other hand, companies can choose to continue to collect and trade the personal data of non-EU data subjects and evade consequences for privacy violations against those people by complying with the minimal protections required by the patchwork of less stringent legislation in effect in the rest of the world.

While a technology company like Apple can focus its efforts on selling hardware while protecting its customers’ data, it remains to be seen what the big internet companies — like Facebook, Google, Amazon and Twitter — will do.

Companies whose business models depend on the unfettered collection, use and sale of consumer data may opt to build a two-tier privacy model: more protection for EU residents under GDPR, and less protection for everyone else.

As a member of the “everyone else” group, I’d rather not be treated like a second-class citizen when it comes to privacy rights.


March 27, 2018  8:55 PM

RSA Conference keynotes miss the point of diversity

Profile: Madelyn Bacon

RSA Conference finalized its keynote speaker lineup this week, and while the new cast has been adjusted to include more female speakers, precious few actually work in cybersecurity.

RSA Conference was criticized last month for initially booking only one female keynote speaker. Activist and writer Monica Lewinsky had been the only female keynote speaker at the conference, and despite her important work on cyberbullying and online harassment, she is not a security professional.

RSA Conference Vice President and Curator Sandra Toms penned a blog post Monday that introduced the finalized list of RSA Conference keynotes, which featured an additional six female speakers. However, neither the blog post nor the revamped lineup adequately addressed the equal representation issues that have surrounded RSAC.

“We’ve been working from the beginning to bring unique backgrounds and perspectives to the main stage, and are thrilled to deliver on that mission,” Toms wrote. “Whether business leaders, technologists, scientists, best-selling authors, activists, futurists or policy makers, our keynote speakers are at the top of their fields and have experience commanding a stage in front of thousands of people.”

RSA Conference’s work to add more female keynote speakers stands in stark contrast to OURSA, an alternative conference that was quickly organized in response to the lack of diversity and representation in the RSA Conference keynotes. The vast majority of speakers scheduled for OURSA — an acronym for Our Security Advocates — are female and all are accomplished in various cybersecurity fields. The lineup includes big names from Google, the Electronic Frontier Foundation, the ACLU, Twitter and many others.

As the OURSA Conference was pulled together, the RSA Conference addressed the issue of diversity with another lackluster blog post from Toms.

“Invitations were extended to many potential female guest keynote speakers over the past seven months,” she wrote. “While the vast majority declined due to scheduling issues, the RSA Conference keynote line-up is not yet final. Overall this year, RSA Conference will feature more than 130 female speakers, on both the main stage, Industry Experts stage and in a variety of other sessions and labs, tackling topics from data integrity to hybrid clouds and application security, among others. And while 20% of our speakers at this year’s conference are women, we fully recognize there is still work to be done.”

The finalized lineup of speakers

The new lineup of RSA Conference keynotes is final and includes more women, but it sends mixed messages.

Keynote speakers for RSAC now include Department of Homeland Security Secretary Kirstjen Nielsen; game designer and SuperBetter inventor Jane McGonigal; Kate Darling of MIT Media Lab; Dawn Song of UC Berkeley; founder and CEO of Girls Who Code Reshma Saujani; and New York Times bestselling author of “Hidden Figures” Margot Lee Shetterly.

In any other context, this is a remarkable lineup of speakers. But with few exceptions, these women are not cybersecurity professionals brought in to discuss cybersecurity topics.

OURSA Conference, which will take place the same week as RSA Conference in San Francisco, managed to bring together a diverse group of accomplished women to discuss actual technical topics — including applied security engineering, practical privacy protection, and security policy and ethics for emerging technology — in a short amount of time. However, RSA Conference has — seemingly in reaction to the negative press — selected successful women to speak about women’s issues at a security conference.

Bringing women into a security conference to discuss the issue of not enough women in security does not solve the problem. If women are to be properly represented at technology conferences, they need to be booked to speak about technology and they need to be considered initially — not as a reactionary stop-gap.

While the 2018 line-up of RSA Conference keynotes has many powerful names, perhaps next year it will take a page out of OURSA’s playbook and ask women in security to actually speak about security.


February 23, 2018  9:31 PM

Facebook’s 2FA bug lands social media giant in hot water

Profile: Rob Wright
Security

At Black Hat USA 2017, Facebook CSO Alex Stamos said, “As a community we tend to punish people who implement imperfect solutions in an imperfect world.”

Now, Facebook has found itself on the receiving end of such punishment after users who had enabled two-factor authentication reported receiving non-security-related SMS notifications on their phones.

News reports of the issue led several security experts and privacy advocates to slam Facebook for leveraging two-factor authentication numbers for what many viewed as Facebook spam. Critics assumed the Facebook 2FA notifications, which alerted users about friends’ activity, were intentional and part of Facebook’s larger effort to improve engagement on the site, which has been steadily losing users lately. However, in a statement last week acknowledging the issue, Stamos said this was not the case.

“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused,” Stamos wrote. “To reiterate, this was not an intentional decision; this was a bug.”

It’s unclear how a bug led the Facebook 2FA system to be used for engagement notifications, but the unwanted texts weren’t the only issue; when users responded to Facebook’s notifications to request the company stop texting them, those messages were automatically posted to users’ Facebook pages. Stamos said this was an unintended side effect caused by an older feature.

“For years, before the ubiquity of smartphones, we supported posting to Facebook via text message, but this feature is less useful these days,” Stamos wrote in his statement. “As a result, we are working to deprecate this functionality soon.”

The Facebook 2FA bug did more than rattle users of the social media site – it led to a notable public feud on Twitter. Matthew Green, a cryptography expert and professor at Johns Hopkins University, was one of several people in the infosec community to sharply criticize Facebook for its misuse of 2FA phone numbers and argued that sending unwanted texts to users would turn people away from an important security feature.

However, Alec Muffett, infosec consultant and developer of the Unix password cracker tool “Crack,” took issue with Green’s argument and the critical media coverage of Facebook’s 2FA bug, which he claimed was having a more negative effect on 2FA adoption than the bug itself.

At one point during the Twitter feud between Muffett and Green, Stamos himself weighed in with the following reply to Green:

“Look, it was a real problem. You guys, honestly, overreacted a bit,” Stamos tweeted. “The media covered the overreaction without question, because it fed into emotionally satisfying pre-conceived notions of FB. Can we just admit that this could have been better handled by all involved?”

I, for one, cannot admit that. If this episode had occurred in a vacuum, then Stamos might have a point. But it didn’t. First, Facebook isn’t some middling tech company; it’s a giant, flush with money and filled with skilled people like Stamos who have a stated mission to provide first-class security and privacy protection. I don’t think it’s unfair to hold such a company to a higher standard in this case.

Also, Stamos complains about “emotionally satisfying pre-conceived notions” of Facebook as if the company doesn’t have a well-earned reputation of privacy violations and questionable practices over the years. To ignore that extensive history in coverage of the Facebook 2FA bug would be absurd.

This is not to argue that this was, as many first believed, a deliberate effort to send user engagement notifications. If Stamos says it was a bug, then that’s good enough for me. But any bug that inadvertently sends text spam to 2FA users is troubling, and there are other factors that make this episode unsettling. There are reports that this bug has existed for quite some time, and it’s difficult to believe that among Facebook’s massive user base, not a single person contacted Facebook to alert them to the situation, and no one among Facebook’s 25,000 employees noticed text notifications were being sent to 2FA numbers.

I don’t expect Facebook to be perfect, but I expect it to be better. And Stamos is right: Facebook could have handled this better. And if there’s been damage done to the adoption of 2FA because of this incident, then it starts with his company and not the media coverage of it.

