Security Bytes


April 17, 2018  10:36 PM

FedRAMP security requirements put a premium on automation

Rob Wright

When it comes to the federal government’s cloud rules, security automation is king.

That was the message from Matt Goodrich, director of the Federal Risk and Authorization Management Program (FedRAMP) at the General Services Administration (GSA). Goodrich spoke at the Cloud Security Alliance Summit on Monday during RSA Conference 2018 and talked about the history of, and lessons learned from, FedRAMP, which was first introduced in 2011.

“We wanted to standardize how the federal government does authorizations for cloud products,” Goodrich said, describing the chaos of each individual department and agency having its own set of guidelines and approaches for approving cloud service providers.

Goodrich described in detail the vision behind the regulatory program, the security issues that drove its creation and how FedRAMP security requirements were developed. One of the more interesting details he discussed was the importance of security automation for those requirements.

Three impact levels

FedRAMP has a three-tiered system for cloud service offerings based on impact level: Low, Moderate and High. Low impact systems include public websites with non-sensitive data; they now carry 35 FedRAMP security requirements, down from more than 100, Goodrich said. “With these systems, we’re looking to ask [cloud providers]: Do you have a basic security program? Do you do scanning, do you patch, and do you have vulnerability management processes like that,” he said.

Moderate impact systems, meanwhile, include approximately 80% of all data across the federal government, and as such they have 325 FedRAMP security requirements for cloud providers. That includes having a well-operated risk management program, Goodrich said, as well as encryption and access controls around the data.

High impact systems are another story. “These are some of the most sensitive systems we have across the government,” Goodrich said, such as Department of Defense data. Compromises of these systems’ data, he said, could lead to financial catastrophes for government agencies and private sector organizations or even loss of life. High impact systems have 420 FedRAMP security requirements, and the focus of those requirements is on security automation.

“Basically we’re looking for a high degree of automation behind a lot of what these high impact systems do,” Goodrich said. “If you can cut what a human can do and have a machine do it, then that’s what’s going to have to be implemented. It’s the difference between moderate and high systems.”

Many of the FedRAMP security requirements for moderate and high systems are the same, Goodrich said; what differs is how cloud providers implement the controls behind them. Having configuration management tools in place, for example, will get a provider a contract to maintain moderate impact systems in the cloud, but having automated configuration management tools is what gets it in the door for high impact systems.
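To make that distinction concrete, here is a minimal sketch of automated configuration management in Python. The settings, the baseline and the remediate() hook are all hypothetical stand-ins for what a real tool such as Chef, Puppet or Ansible would do; nothing here is prescribed by FedRAMP itself.

```python
# Minimal sketch of the manual vs. automated distinction in
# configuration management. Settings, baseline and remediate() are
# hypothetical; real deployments use tools like Chef, Puppet or
# Ansible. Nothing here is prescribed by FedRAMP itself.

BASELINE = {                      # the approved configuration
    "ssh_root_login": "no",
    "password_min_length": 12,
}

def current_config() -> dict:
    """Stand-in for querying the live system's settings."""
    return {"ssh_root_login": "yes", "password_min_length": 8}

def remediate(key: str, expected) -> None:
    """Stand-in for pushing the approved value back to the system."""
    print(f"remediating {key}: resetting to {expected!r}")

def enforce_baseline() -> None:
    live = current_config()
    for key, expected in BASELINE.items():
        if live.get(key) != expected:
            # A moderate-impact shop might only log the drift for a
            # human to review; the high-impact bar is to fix it
            # automatically, with no human in the loop.
            remediate(key, expected)

if __name__ == "__main__":
    enforce_baseline()
```

The point is the loop: drift is detected and corrected by the machine, with no human review required.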

Security automation is something that’s been talked about for years, but new developments and investments around AI and automation seem to have reignited interest lately. Goodrich’s insights echo similar statements at the RSA Conference this week from the private sector on the value of automated systems that not only alleviate the burden on infosec professionals but also enhance security operations within an organization.

CrowdStrike, for instance, introduced Falcon X, the newest part of its cloud-based Falcon platform, which automates malware analysis processes to help enterprises respond to security incidents faster. In addition, ISACA’s State of Cybersecurity 2018 report emphasized the value of security automation in offsetting the shortage of skilled infosec personnel within an organization.

FedRAMP’s security requirements make it clear the U.S. government doesn’t trust humans to handle its most sensitive data – which raises the question: Should enterprises adopt the same approach?

March 31, 2018  6:01 PM

Privacy protections are needed for government overreach, too

Rob Wright

After the unfortunate yet predictable Facebook episode involving Cambridge Analytica, several leaders in the technology industry were quick to pledge they would never allow that kind of corporate misuse of user data.

The fine print in those pledges, of course, is the word ‘corporate,’ and it’s exposed a glaring weakness in the privacy protections that technology companies have brought to bear.

Last week at IBM Think 2018, Big Blue’s CEO Ginni Rometty stressed the importance of “data trust and responsibility” and called on not only technology companies but all enterprises to be better stewards of data. She was joined by IBM customers who echoed those remarks; Lowell McAdam, chairman and CEO of Verizon Communications, for example, said he never wanted to be in the position some Silicon Valley companies had found themselves in following data misuse or exposures, lamenting that once users’ trust has been broken, it can never be repaired.

Other companies piled on the Facebook controversy and played up their privacy protections for users. Speaking at a televised town hall event for MSNBC this week, Apple CEO Tim Cook called privacy “a human right” and criticized Facebook, saying he “wouldn’t be in this situation.” Apple followed Cook’s remarks by unveiling new privacy features related to the European Union’s General Data Protection Regulation.

Those pledges and actions are important, but they ignore a critical threat to privacy: government overreach. The omission of that threat might be purposeful. Verizon, for example, found itself in the crosshairs of privacy advocates in 2013 following the publication of National Security Agency (NSA) documents leaked by Edward Snowden. Those documents revealed the telecom giant was delivering American citizens’ phone records to the NSA under a secret court order for bulk surveillance.

In addition, Apple has taken heat for its decision to remove VPN and encrypted messaging apps from its App Store in China following pressure from the Chinese government. And while Tim Cook’s company deserved recognition for defending encryption from the FBI’s “going dark” effort, it should be noted that Apple (along with Google, Microsoft and of course Facebook) supported the CLOUD Act, which was recently approved by Congress and has roiled privacy activists.

The misuse of private data at the hands of greedy or unethical corporations is a serious threat to users’ security, but it’s not the only predator in the forest. Users should demand strong privacy protections from all threats, including bulk surveillance and warrantless spying, and we shouldn’t allow companies to pay lip service to privacy rights only when the aggressor is a corporate entity.

Rometty made an important statement at IBM Think when she said she believes all companies will be judged by how well they protect their users’ data. That’s true, but there should be no exemptions for what they will protect that data from, and no denials about the dangers of government overreach.


March 30, 2018  6:23 PM

Apple GDPR privacy protection will float everyone’s privacy boat

Peter Loshin

With less than two months before the European Union’s General Data Protection Regulation goes into effect, Apple is making notable changes in the name of user privacy. For everyone.

While all companies that collect data from EU data subjects will be subject to the GDPR, Apple has stepped up to announce that privacy, being a fundamental human right, should be available to everyone, including those outside the protection of the EU.

In a move that is raising hope for anyone concerned about data privacy, Apple GDPR protections will be offered to all Apple customers, not just the EU data subjects covered by the GDPR.

The new privacy features are part of Apple’s latest updates to its operating systems — macOS 10.13.4, iOS 11.3 and tvOS 11.3 — released on Thursday. The most obvious change, for now, will be a new splash screen detailing Apple’s privacy policy as well as a new icon that will be displayed when an Apple feature wants to collect personal information.

More Apple GDPR support will come later this year, when the web page for managing Apple ID accounts is updated to allow easier access to key privacy features mandated under the EU privacy regulation, including downloading a copy of all the personal data Apple stores about a user, correcting account information and temporarily deactivating or permanently deleting the account. The Apple GDPR features will roll out to the EU first after GDPR enforcement begins, but eventually they will be available to every Apple customer, no matter where they are.

Apple GDPR protections for all

Speaking at a town-hall event sponsored by MSNBC the day before the big update release, Apple CEO Tim Cook stressed that the company profits from the sale of hardware — not the sale of personal data collected on its customers. Cook also took a shot at Facebook over its latest troubles related to the improper use of personal data by Cambridge Analytica, saying that privacy is a fundamental human right — a sentiment also spelled out in the splash screen displayed by Apple’s new OS versions.

Anyone concerned about data privacy should welcome Apple’s move, but it may not be as easy for other companies to follow Apple’s lead on data privacy, even with the need to comply with GDPR.

The great thing about Apple’s GDPR-compliance-for-everyone move is that it shows the way for other companies. Rather than maintain two different systems for privacy protections, a company can raise the ethical bar and support personal data privacy at the highest standard, the one set by the GDPR rules, instead of going to the effort and expense of complying with GDPR only to the extent the law requires.

On the one hand there is the requirement for GDPR-compliance regarding EU data subjects, where consumers are granted the right to be forgotten and the right to be notified when their data has been compromised, among other rights. On the other hand, companies can choose to continue to collect and trade personal data of non-EU data subjects and evade consequences for privacy violations on those people by complying with the minimal protections required by the patchwork of less stringent legislation in effect in the rest of the world.

While a technology company like Apple can focus its efforts on selling hardware while protecting its customers’ data, it remains to be seen what the big internet companies — like Facebook, Google, Amazon and Twitter — will do.

Companies whose business models depend on the unfettered collection, use and sale of consumer data may opt to build a two-tier privacy model: more protection for EU residents under GDPR, and less protection for everyone else.

As a member of the “everyone else” group, I’d rather not be treated like a second-class citizen when it comes to privacy rights.


March 27, 2018  8:55 PM

RSA Conference keynotes miss the point of diversity

Madelyn Bacon

RSA Conference finalized its keynote speaker lineup this week, and while the new cast has been adjusted to include more female speakers, precious few actually work in cybersecurity.

RSA Conference was criticized last month for initially booking only one female keynote speaker. Activist and writer Monica Lewinsky was at that point the only female keynote speaker at the conference, and despite her important work on cyberbullying and online harassment, she is not a security professional.

RSA Conference Vice President and Curator Sandra Toms penned a blog post Monday introducing the finalized list of RSA Conference keynotes, which added six more female speakers. However, neither the blog post nor the revamped lineup adequately addressed the equal-representation issues that have surrounded RSAC.

“We’ve been working from the beginning to bring unique backgrounds and perspectives to the main stage, and are thrilled to deliver on that mission,” Toms wrote. “Whether business leaders, technologists, scientists, best-selling authors, activists, futurists or policy makers, our keynote speakers are at the top of their fields and have experience commanding a stage in front of thousands of people.”

RSA Conference’s work to add more female keynote speakers stands in stark contrast to OURSA, an alternative conference that was quickly organized in response to the lack of diversity and representation in the RSA Conference keynotes. The vast majority of speakers scheduled for OURSA — an acronym for Our Security Advocates — are female and all are accomplished in various cybersecurity fields. The lineup includes big names from Google, the Electronic Frontier Foundation, the ACLU, Twitter and many others.

As the OURSA Conference was pulled together, the RSA Conference addressed the issue of diversity with another lackluster blog post from Toms.

“Invitations were extended to many potential female guest keynote speakers over the past seven months,” she wrote. “While the vast majority declined due to scheduling issues, the RSA Conference keynote line-up is not yet final. Overall this year, RSA Conference will feature more than 130 female speakers, on both the main stage, Industry Experts stage and in a variety of other sessions and labs, tackling topics from data integrity to hybrid clouds and application security, among others. And while 20% of our speakers at this year’s conference are women, we fully recognize there is still work to be done.”

The finalized lineup of speakers

The new lineup of RSA Conference keynotes is final and includes more women, but it sends mixed messages.

Keynote speakers for RSAC now include the Department of Homeland Security Secretary Kirstjen Nielsen; game designer and SuperBetter inventor Jane McGonigal; Kate Darling of MIT Media Lab; Dawn Song of UC Berkeley; founder and CEO of Girls Who Code Reshma Saujani; and New York Times bestselling author of “Hidden Figures” Margot Lee Shetterly.

In any other context, this is a remarkable lineup of speakers. But with few exceptions, these women are not cybersecurity professionals brought in to discuss cybersecurity topics.

OURSA Conference, which will take place the same week as RSA Conference in San Francisco, managed to bring together a diverse group of accomplished women to discuss actual technical topics — including applied security engineering, practical privacy protection, and security policy and ethics for emerging technology — in a short amount of time. However, RSA Conference has — seemingly in reaction to the negative press — selected successful women to speak about women’s issues at a security conference.

Bringing women into a security conference to discuss the issue of not enough women in security does not solve the problem. If women are to be properly represented at technology conferences, they need to be booked to speak about technology and they need to be considered initially — not as a reactionary stop-gap.

While the 2018 line-up of RSA Conference keynotes has many powerful names, perhaps next year it will take a page out of OURSA’s playbook and ask women in security to actually speak about security.


February 23, 2018  9:31 PM

Facebook’s 2FA bug lands social media giant in hot water

Rob Wright

At Black Hat USA 2017, Facebook CSO Alex Stamos said, “As a community we tend to punish people who implement imperfect solutions in an imperfect world.”

Now, Facebook has found itself on the receiving end of such punishment after users who had enabled two-factor authentication reported receiving non-security-related SMS notifications on their phones.

News reports of the issue led several security experts and privacy advocates to slam Facebook for leveraging two-factor authentication numbers for what many viewed as Facebook spam. Critics assumed the Facebook 2FA notifications, which alerted users about friends’ activity, were intentional and part of Facebook’s larger effort to improve engagement on the site, which has been steadily losing users lately. However, in a statement last week acknowledging the issue, Stamos said this was not the case.

“It was not our intention to send non-security-related SMS notifications to these phone numbers, and I am sorry for any inconvenience these messages might have caused,” Stamos wrote. “To reiterate, this was not an intentional decision; this was a bug.”

It’s unclear how a bug led the Facebook 2FA system to be used for engagement notifications, but the unwanted texts weren’t the only issue: when users responded to Facebook’s notifications to ask the company to stop texting them, those replies were automatically posted to their Facebook pages. Stamos said this was an unintended side effect of an older feature.

“For years, before the ubiquity of smartphones, we supported posting to Facebook via text message, but this feature is less useful these days,” Stamos wrote in his statement. “As a result, we are working to deprecate this functionality soon.”

The Facebook 2FA bug did more than rattle users of the social media site – it led to a notable public feud on Twitter. Matthew Green, a cryptography expert and professor at Johns Hopkins University, was one of several people in the infosec community to sharply criticize Facebook for its misuse of 2FA phone numbers and argued that sending unwanted texts to users would turn people away from an important security feature.

However, Alec Muffett, infosec consultant and developer of the Unix password cracker tool “Crack,” took issue with Green’s argument and the critical media coverage of Facebook’s 2FA bug, which he claimed was having a more negative effect on 2FA adoption than the bug itself.

At one point during the Twitter feud between Muffett and Green, Stamos himself weighed in with the following reply to Green:

“Look, it was a real problem. You guys, honestly, overreacted a bit,” Stamos tweeted. “The media covered the overreaction without question, because it fed into emotionally satisfying pre-conceived notions of FB. Can we just admit that this could have been better handled by all involved?”

I, for one, cannot admit that. If this episode had occurred in a vacuum, then Stamos might have a point. But it didn’t. First, Facebook isn’t some middling tech company; it’s a giant, flush with money and filled with skilled people like Stamos whose stated mission is to provide first-class security and privacy protection. I don’t think it’s unfair to hold such a company to a higher standard in this case.

Also, Stamos complains about “emotionally satisfying pre-conceived notions” of Facebook as if the company doesn’t have a well-earned reputation of privacy violations and questionable practices over the years. To ignore that extensive history in coverage of the Facebook 2FA bug would be absurd.

This is not to argue that this was, as many first believed, a deliberate effort to send user engagement notifications. If Stamos says it was a bug, then that’s good enough for me. But any bug that inadvertently sends text spam to 2FA users is troubling, and there are other factors that make this episode unsettling. There are reports that this bug has existed for quite some time, and it’s difficult to believe that among Facebook’s massive user base, not a single person contacted Facebook to alert them to the situation, and no one among Facebook’s 25,000 employees noticed text notifications were being sent to 2FA numbers.

I don’t expect Facebook to be perfect, but I expect it to be better. And Stamos is right: Facebook could have handled this better. And if there’s been damage done to the adoption of 2FA because of this incident, then it starts with his company and not the media coverage of it.


February 8, 2018  8:55 PM

Symantec’s untrusted certificates: How many are still in use?

Rob Wright

The fallout from Google’s decision last year to stop trusting Symantec certificates has been difficult to quantify, but one security researcher has provided clarity on how many untrusted certificates are still being used.

Arkadiy Tetelman, senior application security engineer at Airbnb, posted research over the weekend about the number of untrusted certificates still in use by Symantec customers (Symantec’s certificate authority (CA) business was acquired late last year by rival CA DigiCert). According to Tetelman, who scanned the Alexa Top 1 Million sites, approximately 103,000 Symantec certificates that are set to have trust removed this year are still in use; more than 11,000 of those will become untrusted certificates in April with the release of Chrome 66, and more than 91,000 will become untrusted in October with Chrome 70.

“Overall the issue is not hugely widespread,” Tetelman wrote, “but there are some notable hosts still using Symantec certificates that will become distrusted in April.”

According to Tetelman’s research, those notable sites include iCloud.com, Tesla.com and BlackBerry.com. He noted that some users running beta versions of Chrome 66 are already seeing connections to websites using these untrusted certificates rejected, along with a browser security warning that states “Your connection is not private.”
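For the curious, a check along these lines can be approximated in a few lines of Python using the standard library. The hostnames and the list of legacy Symantec CA brand names below are illustrative assumptions, and a full survey like Tetelman’s would also examine each certificate’s issuance date to determine whether the Chrome 66 or Chrome 70 deadline applies.

```python
# Rough sketch: check whether a host still serves a certificate from
# a legacy Symantec CA brand. Hostnames and brand names are examples.
import socket
import ssl

def issuer_organization(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDNs; flatten into a name/value dict.
    issuer = dict(x[0] for x in cert["issuer"])
    return issuer.get("organizationName", "")

for host in ("icloud.com", "example.com"):
    org = issuer_organization(host)
    flagged = any(brand in org for brand in ("Symantec", "GeoTrust", "thawte"))
    print(f"{host}: issued by {org!r}{' <- legacy Symantec CA' if flagged else ''}")
```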

Google’s decision to remove trust for Symantec-issued certificates stems from a series of incidents in recent years with the antivirus maker’s CA business. Among those incidents were numerous misissued certificates (including certificates for Google) and repeated auditing problems. Last March, Google announced its intent to remove trust from Symantec certificates based on its investigation into the company’s CA operations. After months of negotiations – and hostile public sparring – between Symantec and the web browser community, Symantec finally agreed to a remediation plan offered by Google, Mozilla and other browser companies.

That remediation plan gave Symantec a choice: either build a completely new PKI for its certificates or turn over certificate issuance operations to one or more third-party CAs. Symantec ultimately opted to sell its PKI business to DigiCert in August.

DigiCert, meanwhile, has to make good on the remediation plan Symantec agreed to. So far, it has: DigiCert met a Dec. 1 deadline to integrate Symantec’s PKI with its own backend operations and ensure all certificates are now issued and validated through DigiCert’s PKI.

But DigiCert will still have to contend with untrusted certificates currently used by Symantec customers. Along with the Chrome 66 and 70 release dates, new versions of Mozilla’s Firefox will also remove trust for Symantec certificates; Firefox 60, scheduled for May, will distrust Symantec certificates issued before June 1, 2016, while Firefox 63, scheduled for December, will distrust the rest of Symantec’s certificates.
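As a toy illustration of that schedule, the helper below maps a Symantec certificate’s notBefore date to the browser releases slated to distrust it; the June 1, 2016 cutoff comes straight from the plans described above.

```python
from datetime import date

# Toy helper: map a Symantec-issued certificate's notBefore date to
# the browser releases that will stop trusting it, per the announced
# Chrome and Firefox schedules. Purely illustrative.
def distrusting_releases(not_before: date) -> str:
    if not_before < date(2016, 6, 1):
        return "Chrome 66 / Firefox 60 (spring 2018)"
    return "Chrome 70 / Firefox 63 (fall 2018)"

print(distrusting_releases(date(2015, 3, 14)))  # -> Chrome 66 / Firefox 60
```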

In other words, more work needs to be done before this mess is completely cleaned up.


January 26, 2018  3:05 PM

Blizzard security flaw should put game developers on notice

Rob Wright

Imagine you could reach into an application that had none of the enterprise security protections we’ve come to appreciate but was still used by millions of people — themselves blissfully unaware of the risks the application posed — and use that vulnerable application to hack into millions of PCs.

That may sound like a dream scenario for cybercriminals, but it’s all too real thanks to modern video games.

Tavis Ormandy of Google’s Project Zero this week published details of a DNS rebinding flaw contained in the PC games of Blizzard Entertainment, including World of Warcraft, Overwatch, Hearthstone and StarCraft. The Blizzard security flaw, which is contained in a shared utility tool called “Blizzard Update Agent,” allows a malicious actor to impersonate the company’s network and issue privileged commands and files to the tool — which, again, is contained within all of Blizzard’s games and would theoretically put millions of players’ PCs at risk.

“Any website can simply create a DNS name that they are authorized to communicate with, and then make it resolve to localhost,” Ormandy wrote in the Chromium bug report. “To be clear, this means that *any* website can send privileged commands to the agent.”

The actual number of gamers at risk is unknown. Ormandy referenced a report claiming “500 million monthly active users [MAUs]”; however, that number refers to the total number of MAUs for Blizzard’s parent company, Activision Blizzard. According to Activision Blizzard’s third-quarter 2017 financial results, Blizzard alone reached a record 42 million MAUs for the period, but it’s unclear how many of those users would be affected by the Blizzard security bug (the Blizzard Update Agent is only included in the PC versions of the company’s games, not the console versions).

If the DNS rebinding vulnerability itself wasn’t bad enough, there was a lack of communication from Blizzard as well as later miscommunication about how the issue was being addressed. In the Chromium bug report, Ormandy wrote that he notified Blizzard of the issue on Dec. 8, but weeks later the company had cut off contact with him.

Blizzard (partially) addressed the critical DNS rebinding vulnerability with an update to the tool that checks requests against a blacklist of applications and executables. But the company didn’t alert Project Zero that it had updated the tool; Ormandy learned about it on his own.

As a result, Ormandy, believing the Blizzard security flaw had been silently patched, publicly disclosed the vulnerability. But Blizzard quickly restored contact with Ormandy to say the previous update wasn’t the final fix for the issue and that it was working on a different patch for the DNS rebinding vulnerability.

“We have a more robust Host header whitelist fix in QA now and will deploy soon. The executable blacklisting code is actually old and wasn’t intended to be a resolution to this issue,” a Blizzard representative said on the Chromium post. “We’re in touch with Tavis to avoid miscommunication in the future.”

Blizzard finally issued a new Blizzard Update Agent, version 2.13.8, on Wednesday with the host header whitelist to completely fix the issue.
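To see why a Host header whitelist closes the hole: in a DNS rebinding attack, the victim’s browser really is connecting to 127.0.0.1, but the Host header still carries the attacker’s domain name, so a local agent can simply refuse any request that isn’t addressed to a name it recognizes. The sketch below shows the pattern in Python; the port number and the allowed names are illustrative assumptions, not Blizzard’s actual implementation.

```python
# Hedged sketch of the Host-header-whitelist defense against DNS
# rebinding. The rebound request arrives at 127.0.0.1 but its Host
# header names the attacker's domain, so a local agent can reject it.
# Port and allowed names are illustrative, not Blizzard's.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_HOSTS = {"localhost:1120", "127.0.0.1:1120"}

class AgentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        if host not in ALLOWED_HOSTS:
            # Request was rebound from some other DNS name: refuse it.
            self.send_error(403, "unrecognized Host header")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 1120), AgentHandler).serve_forever()
```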

Uncovering critical bugs in games

I’ve long worried about the relative insecurity of PC games, specifically massively multiplayer online games, or MMOs. I had the fortune of covering the video game industry for several years, and it left me with several prevailing beliefs about how inherently broken the modern game development process is. Big-budget games are routinely subjected to what’s known in the industry as “crunch”: periods of substantially longer work hours and increased pressure in an effort to meet deadlines and development milestones.

And yet even with these crunch periods (or, as some claim, because of them), PC games routinely launch with bugs. In fact, even blockbuster games with the biggest budgets often ship with bugs, whether it’s a minor but obvious graphical glitch or a major flaw that renders the game unplayable. And these are the bugs that “matter” to the industry’s bottom line. If these sorts of flaws are slipping by, I shudder to think what type of security vulnerabilities are lurking inside these games.

The case of Blizzard’s security flaw is concerning, and not just because of the nature of the DNS rebinding vulnerability. Blizzard is one of the most successful and respected game companies in the industry, not just for the quality of its game development but also for its technical support and service. And yet the company seemingly fumbled its way through the bug disclosure and patching processes in this case. Bugs and vulnerabilities are inevitable, which is why proper handling of the discovery, disclosure and mitigation is, for lack of a better word, critical.

Blizzard’s security flaw should serve as a wake-up call for other MMO makers and the games industry as a whole. Ormandy said he plans to look at other popular PC games in the near future. That’s a good thing; video game companies should welcome the news with open arms.

You never want one of the best known and most prolific bug hunters in infosec knocking on your door. But game companies have to answer the knock before they find themselves, and their customers, getting pwned.


January 18, 2018  4:38 PM

The strange case of the ‘HP backdoor’ in Lenovo switches

Rob Wright

Concern about government-mandated backdoors in technology products may be at an all-time high, but the recent discovery of an “HP backdoor” in Lenovo networking gear should prove equally alarming for the IT industry.

The computer maker last week issued a security advisory, LEN-16095, for an authentication bypass in Enterprise Networking Operating System (ENOS), the firmware that runs on several of Lenovo’s networking switches. The vulnerability in question, according to Lenovo, could allow an attacker to gain access to the switch management interface and change settings that could expose traffic or cause denial of service.

That seems straightforward enough, but the advisory gets complicated in a hurry. Here’s how Lenovo laid it all out:

  • The authentication bypass mechanism in Lenovo’s ENOS software is known as “HP backdoor,” though the advisory doesn’t explain what that means.
  • The “HP backdoor” was discovered, according to Lenovo, during a security audit of the Telnet and Serial Console management interfaces, as well as the SSH and web management interfaces “under certain limited and unlikely conditions.”
  • Lenovo said a “source code revision history audit” revealed the authentication bypass formerly known as “HP backdoor” has been hidden inside ENOS for quite a long time – since 2004, to be exact, when ENOS was part of Nortel Networks’ Blade Server Switch Business Unit (BSSBU).
  • This is where it gets truly strange: The Lenovo security advisory, which at this point had moved firmly into CYA mode, drops this bomb: “The [authentication bypass] mechanism was authorized by Nortel and added at the request of a BSSBU OEM customer.”
  • Lenovo – as if to shout at the top of its lungs “We are not responsible for this backdoor!” – painstakingly explains that Nortel owned ENOS at that time, then spun off BSSBU two years later as Blade Network Technologies (BNT), which was then acquired by IBM in 2010. Then in 2014, BNT, ENOS and the HP backdoor ended up in Lenovo’s lap after it acquired IBM’s x86 server business.
  • Lenovo said it has given “relevant source code” to a third-party security partner for an independent investigation of the authentication bypass, but the company doesn’t say who the partner is.
  • And finally – in case it wasn’t clear already that the ENOS backdoor is absolutely not Lenovo’s doing – the computer maker states for the record that such backdoors are “unacceptable to Lenovo and do not follow Lenovo product security or industry practices.” Lenovo’s reaction here is understandable considering some of its past security issues, such as using hardcoded passwords and preinstalling Superfish adware on its systems.

By the time the security advisory is over, the fix for the ENOS backdoor – a firmware update that removes the authentication bypass mechanism – seems like an afterthought. And to be sure, the conditions required for this vulnerability to be exploited are indeed limited and unlikely, in Lenovo’s words (for example, SSH is only vulnerable for certain firmware released between May and June of 2004).

However, there are a number of questions, starting with:

  • Is the “HP backdoor” a reference to Hewlett-Packard? And is HP the unnamed BSSBU OEM customer that requested the backdoor access for ENOS? Again, Lenovo’s security advisory doesn’t say, but according to reports, HP was indeed a Nortel customer at that time. When asked for comment, Lenovo said “This is the name of the feature in the user interface, on the command line interface, hence the name.”
  • Why would Nortel build a ubiquitous authentication bypass into its networking operating system and undermine its security based on a customer request? Getting an answer to this one will be tough since Nortel was dissolved after declaring bankruptcy in 2009.
  • How did the backdoor go unnoticed by both IBM and Lenovo for several years while ENOS was part of their respective product families?
  • Were there any factors that led Lenovo to examine ENOS interfaces “under certain limited and unlikely conditions” nearly four years after Lenovo agreed to acquire IBM’s x86 server business? Lenovo replied, “This was part of a routine security audit.”
  • Was the source code audit that Lenovo performed to find the HP backdoor the first such audit the company had performed on ENOS? Lenovo said “HP Backdoor was found through new techniques added to our recurring internal product security assessments, the first time that these techniques were applied to ENOS.” However, it’s not entirely clear from the response if Lenovo did perform earlier source code audits for ENOS and simply missed an authentication bypass that literally has the word “backdoor” in it.

An authentication bypass in a legacy networking OS with narrow parameters isn’t exactly an urgent matter. But Lenovo’s security advisory does raise some serious issues. The tech community has collectively wrung its hands – with good reason – over government-mandated backdoors. Yet it’s abundantly clear that prominent vendors within that same community have been poking their own holes in products for decades. And those self-inflicted wounds become even more dangerous as the tech changes hands again and again over the years, with little if any security oversight.

It’s troubling to think that a single customer could lead a major technology company to install a potentially dangerous backdoor in a widely used product. And it’s even more troubling to wonder how many other vendors have done exactly that – and forgotten about the doors they opened up so many years ago.


January 9, 2018  6:50 PM

Intel keynote misses the mark on Meltdown and Spectre vulnerabilities

Rob Wright

When Intel CEO Brian Krzanich took the stage last night at CES 2018 in Las Vegas, he began his keynote by addressing the elephants in the room – the recently disclosed Meltdown and Spectre vulnerabilities affecting modern CPU architectures, including Intel’s.

Those remarks lasted approximately two minutes.

Then Krzanich turned to his prepared keynote address, which featured flying drones and a guest appearance from former Dallas Cowboys quarterback and NFL analyst Tony Romo. Celebrity sightings and gimmicky gadgets are much more of the CES culture than information security talks, so Krzanich’s keynote wasn’t exactly a surprise in that respect.

However, it was disappointing to see the world’s largest chip maker waste an opportunity to provide clarity and reassurances about its plans – both short term and long term – to address the Meltdown and Spectre vulnerabilities. Krzanich did discuss the issues briefly; he thanked the technology industry for coming together to work on the problems.

“The collaboration among so many companies to address this industry-wide issue, across several different processor architectures, has been truly remarkable,” Krzanich said during his keynote.

On that note, Intel’s CEO is absolutely correct. But Krzanich did little to explain how that collaboration happened and what benefits it will provide in the future as companies continue to grapple with these problems. And as far as Intel’s individual efforts go, Krzanich mostly repeated what the company had previously announced – that updates for more than 90% of the products released in the last five years will arrive within a week. He added that Intel expects the remaining products to receive updates “by the end of January.”

But that was about it. Krzanich didn’t say what the updates would be (again, he repeated previous company statements that the performance impacts of the updates would be “highly workload-dependent”) or how Intel would “continue working with the industry to minimize the [performance] impact” of the updates. He didn’t say what Intel’s long-term plan was for the Meltdown and Spectre vulnerabilities, or even say if there was such a plan.

It’s important to note that Intel actually had new information to provide; according to a report from The Oregonian, Krzanich authored an internal memo announcing the formation of a new internal group, dubbed Intel Product Assurance and Security. But for whatever reason, Krzanich didn’t mention it.

Meltdown and Spectre are critical findings with industry-altering implications. A consumer-focused show may not seem like the best setting to discuss the intricacies of microprocessor designs and technical roadmaps, but CES is still the biggest technology event in the world. Last night was an opportunity to reach an enormous number of consumers, enterprises and media outlets and communicate a clear strategy for the future. And Intel largely wasted it.

“Security is job number one for Intel and our industry,” Krzanich said last night.

If that were true, then the Meltdown and Spectre vulnerabilities should have warranted more than two minutes on the biggest technology stage of the year.


December 29, 2017  6:58 PM

Official TLS 1.3 release date: Still waiting, and that’s OK

Peter Loshin

“Measure twice, cut once,” is a good way to approach new protocols, and TLS 1.3 is no exception.

When it comes to approving updates to key security protocols, the Internet Engineering Task Force may seem to move slowly as it measures the impact of changes to important protocols. In the case of the long-awaited update to version 1.3 of the Transport Layer Security protocol, the IETF’s TLS work group has been moving especially slowly — but for good reasons.

The TLS protocol provides privacy (encryption) and authentication for internet applications operating over the Transmission Control Protocol (TCP); TLS is a critically important protocol for enforcing security over the web. The current proposed standard, TLS 1.2, was published in 2008 and is the latest in a line of protocols dating back to the first transport layer security protocol used to secure the web, the Secure Sockets Layer (SSL), which was first published by Netscape in 1995.

The first draft version of the latest update was published for discussion by the TLS work group in April 2014, and SearchSecurity has been covering the imminent release of TLS 1.3 since 2015. Despite the long wait for a TLS 1.3 release date, I’m happy to continue waiting given all that we’ve learned from the process so far.

There is no question that TLS is in need of updating. The new version of the protocol adds several important features, most prominently a preference for perfect forward secrecy. Perhaps more important is the thorough pruning of obsolete or otherwise “legacy” algorithms. As the authors of the latest TLS 1.3 draft (version 22!) put it, “[s]tatic RSA and Diffie-Hellman cipher suites have been removed; all public-key based key exchange mechanisms now provide forward secrecy.” Other updates include some performance boosts as well as improvements in the security of key generation and the handshake protocol.

As Akamai Technologies’ Rich Salz put it in an October 2017 blog post, the new version of TLS is faster, more reliable and more secure; all things to be desired in a security protocol.
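Anyone curious whether a given server already speaks the new protocol can probe it with a few lines of Python, assuming an interpreter built against an OpenSSL version with TLS 1.3 support; the hostname below is just an example.

```python
# Probe which TLS version a server negotiates. Requires a Python/
# OpenSSL build with TLS 1.3 support; the hostname is an example.
import socket
import ssl

def negotiated_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    # To test for TLS 1.3 support specifically, one could also set:
    # ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'

print(negotiated_version("www.cloudflare.com"))
```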

All in all, these are positive moves, but why is it taking so long to officially update the TLS 1.3 specification and publish it as an RFC?

It’s taking so long because the system is working.

Protocol problems

The process is intended to flush out protocol issues, especially as they relate to protocol implementations. Even though TLS 1.3 is not yet officially released, vendors — like Cloudflare, Akamai, Google and many others — have been rolling out support for it and reporting on issues as they uncover them. And issues have been uncovered:

David Benjamin, a Google developer who works on the Chromium project, wrote in a post to a TLS working group list that “TLS 1.3 appears to have tripped over” a dodgy version of RSA’s BSAFE library that some have theorized was put in place by RSA at the request of the National Security Agency (NSA).

Matthew Green, cryptography expert and professor at Johns Hopkins University, spelled out why the discovery of that BSAFE flaw may shed light on how that “theorized NSA backdoor” worked.

That is a positive outcome, especially as it gives security professionals an excellent reason to root out the potentially exploitable code.

Boxes in the middle

Another important issue that the process uncovered was that of misbehaving middleboxes, which earlier in 2017 were cited as the primary reason that TLS 1.3 appeared to be breaking the internet.

Middleboxes are the systems, usually security appliances, that sit “in the middle” between servers and clients and provide security through packet inspection. To actually inspect packets that have been encrypted using TLS, the middleboxes need to act as proxies for the clients, setting up one secure TLS connection between the client and the middlebox and another between the middlebox and the server.
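In code terms, a middlebox is essentially two TLS connections glued together. The sketch below is a simplified illustration of that shape, not any vendor’s implementation: the middlebox terminates the client’s TLS session with its own certificate (which clients must be configured to trust), opens a second TLS session to the real server, and shuttles the decrypted bytes between the two, which is where inspection would happen. The certificate paths, port and upstream host are all assumptions.

```python
# Hedged sketch of a TLS-inspecting middlebox: terminate the client's
# TLS session with the middlebox's own certificate, open a second TLS
# session to the real server, and copy plaintext between the two.
# Certificate paths, port and upstream host are illustrative.
import socket
import ssl
import threading

UPSTREAM = ("example.com", 443)  # the real server (illustrative)

def pump(src, dst):
    """Copy bytes one way until either side closes the connection."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            # Plaintext flows through here: this is where a middlebox
            # would actually inspect the traffic.
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client_tls):
    # Connection 2: middlebox -> real server, a normal TLS client.
    up_ctx = ssl.create_default_context()
    raw = socket.create_connection(UPSTREAM)
    server_tls = up_ctx.wrap_socket(raw, server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(client_tls, server_tls)).start()
    pump(server_tls, client_tls)

# Connection 1: client -> middlebox, using the middlebox's own cert,
# which clients must be configured to trust.
down_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
down_ctx.load_cert_chain("middlebox-cert.pem", "middlebox-key.pem")

listener = socket.socket()
listener.bind(("127.0.0.1", 8443))
listener.listen(5)
while True:
    conn, _ = listener.accept()
    handle(down_ctx.wrap_socket(conn, server_side=True))
```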

It turns out that TLS 1.3 causes some issues with middleboxes, not because of anything wrong with the new protocol, but because of problems with the way middlebox vendors implemented TLS support in their products, causing them to fail when attempting to negotiate TLS connections with endpoints using TLS 1.3. The solution will likely involve some tweaking of the updated protocol to compensate for unruly implementations while pressuring vendors to fix theirs.

These issues with the TLS 1.3 draft likely contributed to the lengthy delay in the specification being finalized. I expect TLS 1.3 to be published sometime next year, although that’s what I’ve expected for the last three years.

But given what we’ve already learned from the process, that’s just fine.

