Security Bytes


December 17, 2018  3:03 PM

Marriott Starwood data breach notification de-values customers

Profile: Peter Loshin

It’s never good news when a large organization makes headlines for a cybersecurity incident, but when they keep happening, even the most egregious data exposures become run-of-the-mill.

For example, take the latest record-setting event: the Marriott Starwood data breach, which exposed at least some data of approximately 500 million customers — and enough data to be dangerous to about 327 million of those customers. Not as big as the Yahoo breach reported in 2017, in which all of Yahoo’s users — three billion of them — were exposed. But the impact of the Marriott Starwood data breach is likely far greater.

The Marriott Starwood data breach, which began in 2014 and continued until this year, exposed some combination of “name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (‘SPG’) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences” for about 327 million of its customers.

Just five years ago, an enterprise that exposed personal data in a cyberattack would notify its customers — usually by postal mail — and provide access to assistance, typically some form of identity theft monitoring or protection for the affected customers.

We’ve come a long way since the 2013 Target breach, after which the retail giant cleaned up its cybersecurity act and made serious efforts to regain the trust of its customers. After it was breached, Target notified its affected customers, told them they would not be liable for fraudulent charges made to their cards, and offered them a year of free credit monitoring and identity theft protection. This came to be viewed as the baseline for breach response — but Target went beyond that. Target went on the offensive to protect itself and its customers from attack: it was one of the first major U.S. retailers to roll out EMV-compliant point-of-sale payment terminals and EMV chip-and-PIN payment cards (the Target REDCard).

Now, the baseline seems to be last year’s Equifax breach, after which it became clear that the consumer credit rating agency not only failed at defending its data but also failed at properly notifying affected consumers, all while initially treating the event as a revenue-enhancement opportunity: it offered an inadequate protection service free for the first year, which turned into a paid subscription thereafter.

What happened?

Breach fatigue happened. By now, most consumers have had their personal details exposed multiple times by giant corporations, have been notified multiple times of their exposures, and may even have tried one of the many “first year free” credit monitoring and identity theft protection services.

Even the way the Marriott Starwood data breach notifications were sent to the hundreds of millions of customers whose data was compromised raised questions. While the email Marriott sent out claimed that notifications were going out to affected customers “on a rolling basis” starting on Nov. 30, it wasn’t until Dec. 10 that widespread reports of the notification began to surface — including reports that many of those notification messages went directly to the spam folder.

For example, Martijn Grooten, the security researcher and editor of Virus Bulletin, tweeted that “If the Marriott breach notification email was marked as spam (as it was for me), here’s a possible reason why,” linking to a Spamhaus article that explained why Marriott’s notifications wound up in spam folders: Marriott used a sender domain for its email notifications — @email-marriott.com — that looked malicious. And while the notification mentioned that affected customers could enroll with the WebWatcher monitoring service, no link to that service was provided in the notification.
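For the curious, the kind of sender-domain sanity check Spamhaus describes is easy to approximate. Below is a minimal sketch, assuming the third-party dnspython package is installed, that looks up the SPF and DMARC records a receiving mail server would weigh when deciding whether a message is spam. A missing or weak policy doesn’t prove a domain is malicious, just as a present one doesn’t prove it’s legitimate, but it’s a quick first check.

```python
# Sketch: check whether a sender domain publishes SPF and DMARC records.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def check_email_auth(domain: str) -> None:
    # An SPF policy lives in a TXT record at the domain apex;
    # a DMARC policy lives in a TXT record at _dmarc.<domain>.
    for label, qname in (("SPF", domain), ("DMARC", f"_dmarc.{domain}")):
        try:
            answers = dns.resolver.resolve(qname, "TXT")
            print(label, [b"".join(r.strings).decode() for r in answers])
        except Exception as exc:  # NXDOMAIN, no answer, timeout, etc.
            print(label, "lookup failed:", exc)

# The domain Marriott actually used for its notifications.
check_email_auth("email-marriott.com")
```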

If the pattern hadn’t already been set by data breach responses like those from Yahoo and Equifax and many others like the marketing company Exactis, which also exposed data on hundreds of millions of people this year, it would certainly seem as if Marriott were breaking a new trail of arrogance and ignorance, repeating many of the same failures that some enterprises seem to think are acceptable. But the hospitality giant is merely adopting what has become a sorry standard for breach responses.

November 30, 2018  9:30 PM

Are US hacker indictments more than Justice Theater?

Profile: Michael Heller

Hacker indictments by the U.S. Justice Department haven’t proven effective, and now the Treasury Department is also getting in on the act with questionable sanctions.

In the past five months alone, the U.S. Department of Justice has indicted at least 30 foreign individuals in connection with various cyberattacks, but only three of those individuals have been arrested and extradited to the U.S., which raises the question of whether this legal action is little more than “justice theater,” akin to Bruce Schneier’s “security theater” put on by the likes of the TSA.

In July, special counsel Robert Mueller indicted 12 Russian intelligence officers in connection with the DNC and DCCC hacks; in September, one member of the North Korean Lazarus Group was indicted; in October, seven more Russian officers were indicted; and November saw eight Russian nationals indicted for running a massive botnet — three of whom are in custody — and two Iranians indicted in connection with the SamSam ransomware.

Before this month, to find a foreign national who was both indicted and detained — setting aside the fact that his trial proceedings have since begun — you had to go back to Aug. 2017 and Marcus Hutchins, aka MalwareTech, a British security researcher detained after attending Defcon 2017 in Las Vegas.

Along with the latest hacker indictment of the two Iranian nationals, the Treasury Dept. designated two additional Iranian men for their role in exchanging the bitcoin earned in ransomware attacks for Iranian rial.

According to the Treasury Dept., a designation action means, “all property and interests in property of the designated persons that are in the possession or control of U.S. persons or within or transiting the United States are blocked, and U.S. persons generally are prohibited from dealing with them.”

However, considering the “property” in this context is a decentralized cryptocurrency, it’s unclear what — if anything — this action means in real-world terms. Unlike assets held by a U.S. bank or a bank in a friendly nation, a bitcoin wallet doesn’t fall under any authority’s jurisdiction.

Making matters worse for the Treasury Dept., neither bitcoin wallet had a meaningful balance at the time of the designation announcement. One of the wallets had seen no activity since Dec. 2017 — until it received two payments the day after the Treasury announcement — and the other held a balance equivalent to just over $3 as of Nov. 11, before receiving two payments each on the day of and the day after the announcement.
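That emptiness is easy for anyone to confirm, because bitcoin balances are public. Here’s a minimal sketch of such a check using the public blockchain.info balance endpoint and the third-party requests package; the address shown is the well-known genesis-block address, used purely as a stand-in for the sanctioned wallets.

```python
# Sketch: look up the confirmed balance of a bitcoin address.
# Assumes the public blockchain.info API and the requests package.
import requests

def btc_balance(address: str) -> float:
    """Return the current confirmed balance of an address, in BTC."""
    resp = requests.get("https://blockchain.info/balance",
                        params={"active": address}, timeout=10)
    resp.raise_for_status()
    # The API reports balances in satoshis; 1 BTC = 100,000,000 satoshis.
    return resp.json()[address]["final_balance"] / 1e8

# Genesis-block address as a placeholder, not one of the designated wallets.
print(btc_balance("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))
```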

The two bitcoin wallets combined received a total of 5,901.4 BTC while in use, but that sum is difficult to value because bitcoin prices have been highly volatile over the past year and the wallets’ owners were always quick to move funds to other accounts. It’s possible the bitcoin was worth tens of millions of U.S. dollars.

That’s tens of millions of dollars sent to dozens or hundreds of different accounts going back to 2013, and all but about $3 of which was gone before the Treasury Dept. announced any actions.

At least with the indictments, the DoJ can theoretically limit the travel of the individuals charged or seize assets in America. The Treasury Dept. has put sanctions on two men who most likely won’t be extradited, and is attempting to “block” property that was gone before any action was taken. That feels like peak justice theater.


November 29, 2018  8:07 PM

Breaking down Dell’s “potential cybersecurity incident” announcement

Profile: Rob Wright

With numerous regulations and laws like the European Union’s General Data Protection Regulation putting pressure on enterprises to go public with cybersecurity incidents, we’ve seen a trend of businesses disclosing breaches first and filling in the details later.

Dell provided the latest example of this trend Wednesday, announcing a “potential cybersecurity incident” that it detected earlier in the month. But despite the disclosure, it’s unclear if Dell should be celebrating or preparing for class action lawsuits. Let’s take a closer look at Dell’s notification.

First, there’s the headline — “Dell Announces Potential Cybersecurity Incident” — which is somewhat confusing because, according to Dell itself, there most definitely was an incident. The company says “it detected and disrupted unauthorized activity on its network attempting to extract Dell.com customer information, which was limited to names, email addresses and hashed passwords.” In other words, what Dell describes is a definite cybersecurity incident and, at most, a potential breach.

Regardless, Dell apparently stopped the intrusion before attackers could steal any data, which is good news. But Dell qualified that statement with this portion of the announcement: “Though it is possible some of this information was removed from Dell’s network, our investigations found no conclusive evidence that any was extracted.”

The absence of evidence, however, doesn’t mean the attackers were unsuccessful. We don’t have any idea how long Dell thinks the intrusion lasted – only that it detected the unauthorized activity on Nov. 9. But we do know that the threat actor or actors attempted to extract customer data, and that it was limited to just names, email addresses and hashed passwords – though we don’t know how they were hashed (hopefully not MD5 or a similarly weak algorithm, and hopefully securely salted).

On the positive side, Dell seemed fairly confident about the scope of the intrusion. “Credit card and other sensitive customer information was not targeted,” the company said in its notification. “The incident did not impact any Dell products or services.”

The company added that it had “implemented countermeasures,” including “the hashing of our customers’ passwords and a mandatory Dell.com password reset.” Password resets are standard operating procedure for any incident, so it’s hard to judge just how severe this potential cybersecurity incident is for Dell based on those reactions. It’s also unclear what Dell means by “hashing the customer passwords.” (Did they rehash them after they were reset? Did they hash them with something different this time around? Did they add salt?)
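For what it’s worth, “securely salted” hashing isn’t exotic. Here is a minimal sketch using only the Python standard library: a slow, salted key-derivation function with constant-time verification. The work factor is illustrative, and nothing here reflects Dell’s actual scheme, which the company hasn’t described.

```python
# Sketch: salted password hashing with PBKDF2-HMAC-SHA256 (stdlib only).
import hashlib
import hmac
import secrets

ITERATIONS = 600_000  # illustrative work factor, not Dell's actual setting

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh, per-user random salt."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The point of the salt is that two users with the same password get different hashes, and precomputed rainbow tables become useless; the point of the high iteration count is that offline guessing gets expensive, which is exactly what a fast, unsalted algorithm like plain MD5 fails to provide.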

Nevertheless, it sounds like Dell has contained the issue. The company said it’s investigating the intrusion, hired a third-party firm to conduct a separate, independent investigation, and also engaged law enforcement.

Dell’s announcement raises an important question: is this a cybersecurity win for the company? Based on the information available, Dell was able to detect threat actors on its network and stop them before they successfully extracted any data. That sounds like a win.

However, there are a lot of unknowns that could dampen the positives. We don’t know for sure that no customer data was exfiltrated, we don’t know how long the intrusion lasted, and we don’t know how the threat actors gained the unauthorized access in the first place (if it was, for example, a website flaw that was disclosed a year earlier but never fixed, then that would be bad). The answers to those questions could significantly alter the narrative.

It’s likely we’ll hear more from Dell about this incident down the road. For now, we’ll be left to wonder whether Dell gets to chalk this up as a win or whether it’s yet another negative cybersecurity headline.



November 29, 2018  5:54 PM

Will cybersecurity safety ever equal air travel safety?

Profile: Peter Loshin

Aviation safety provides an aspirational model of a safety success story when you consider that over the past 50 years, even as total passenger miles have exploded, commercial airline fatalities have plummeted.

The commercial aviation industry has an admirable safety record, but can the lessons learned over the past decades in that industry be extended to improve the state of cybersecurity safety? In the ongoing discussion of cybersecurity safety, some of the most respected names in the business have been making an important case that we need to do much better.

The improvements in aviation safety are inarguable: as the number of passengers carried annually has increased by an order of magnitude, the average number of fatal airline accidents has plummeted. Flying is now anywhere from 100 to 1,000 times safer than driving, depending on the evaluation criteria used.

Former Facebook CISO Alex Stamos tweeted in October: “It would be great to move InfoSec norms closer to aviation safety, where close-calls are disclosed in a standard, centralized manner and discussed rationally by experts who extract lessons from the mistakes of others,” adding “we currently don’t live in that world.”

In the world we inhabit, cybersecurity safety seems to be modeled more on automotive safety than aviation safety, and that’s the problem.

The U.S. National Highway Traffic Safety Administration reported that 37,461 people died in traffic accidents in 2016; the same year, the Aviation Safety Network reported that 258 people died in commercial airline accidents. Unlike drivers, pilots must undergo extensive and ongoing training, must perform extensive system checks on their aircraft before leaving the hangar, and are held accountable for any incident that occurs while the aircraft is under their control.

Clearly, cybersecurity safety has a long way to go, still. As Kevin Beaumont, the U.K.-based security researcher, pointed out in November: “Usually it’s us, the IT bods, being idiots for building a system so fragile one employee can bring down by clicking the ‘wrong’ link. Imagine if planes were built so the passenger could bring down a plane by pressing a button at their seat.”

Under the current model, cybersecurity safety rests on the expectation that billions of end users will be knowledgeable about cyber threats and how to defend against them; will be aware of the need for antimalware software and patching; will stay up to date on security practices; will take the initiative to maintain cybersecurity hygiene; and will report any cyber incidents.

Put another way, every connected device is covered with buttons, any one of which, if pressed at the wrong moment, could “bring down” not just that device, but any number of other connected devices.

Cybersecurity safety is still up in the air, so to speak, for many reasons, starting with a lack of sensible regulations and of agencies to investigate, share and learn from failures. Another reason: disastrous cybersecurity safety failures aren’t seen as harming the whole industry. Consider what Mikko Hypponen, chief research officer at F-Secure, tweeted this week about one magnificent, ongoing failure of security: “I can’t believe that we are *still* fighting Office macro malware now, 20 years later.”

Airlines have a vested interest in keeping air travel safe because if passengers fear for their lives, many of them will stop paying to fly. Even if they don’t particularly want to be regulated, those airlines will still accept government safety regulation, because safer skies mean fewer losses from crashed planes as well as more passengers willing to pay to ride on safer planes.

The tech sector needs to step up and accept responsibility for cybersecurity safety in the same way the aviation industry did for air travel safety, and that will only begin when vendors are held to higher standards; when vendors, enterprises and government agencies can agree to investigate cyber incidents and focus on cooperation in using that information to improve cybersecurity safety; and when consumers and all end-users can be confident that they can use their devices and the internet safely — and without being victim-blamed when things do, inevitably, go wrong.


November 12, 2018  6:19 PM

Android Ecosystem Security Transparency Report is a wary first step

Profile: Michael Heller
Android, Android security, Google

Reading through Google’s first quarterly Android Ecosystem Security Transparency Report feels like a mix of missed opportunities and déjà vu all over again.

Much of what is in the new Android ecosystem security report is data that has been part of Google’s annual Android Security Year in Review report, including the rates of potentially harmful applications (PHAs) on devices with and without sideloaded apps — spoiler alert: sideloading is much riskier — and rates of PHAs by geographical region. Surprisingly, the rates in Russia are lower than in the U.S.

The only other data in the Android ecosystem security report shows the percentage of devices with at least one PHA installed, broken down by Android version. This new data shows that the newer the version of Android, the less likely a device is to have a PHA installed.

However, this also hints at the data Google didn’t include in the report, like how well specific hardware partners have done in updating devices to those newer versions of Android. Considering that Android 7.x Nougat is the most common version of the OS in the wild at 28.2% and the latest version 9.0 Pie hasn’t even cracked the 0.1% marker to be included in Google’s platform numbers, the smart money says OEM updating stats wouldn’t be too impressive.

There’s also the matter of Android security updates and the data around which hardware partners are best at pushing them out. Dave Kleidermacher, head of Android security and privacy, said at the Google I/O developer conference in May 2018 that the company was tracking which partners were best at pushing security updates and that it was considering adding hardware support details to future Android Ecosystem Security Transparency Reports. More recently, Google added stipulations to its OEM contracts mandating at least four security updates per year on Android devices.

It’s unclear why Google ultimately didn’t include this data in the report on Android ecosystem security, but Google has been hesitant to call out hardware partners for slow updates in the past. In addition to new requirements in Android partner contracts regarding security updates, there have been rules stating hardware partners need to update any device to the latest version of Android released in the first 18 months after a device launch. However, it has always been unclear what the punishment would be for breaking those rules. Presumably, it would be a ban on access to Google Play services, the Play Store and Google Apps, but there have never been reports of those penalties being enforced.

Google has taken steps to make Android updates easier, including Project Treble in Android 8.0 Oreo, which effectively decoupled the Android system from any software differentiation added by a hardware partner. But, since Android 7.x is still the most common version in the wild, it doesn’t appear as though that work has yielded much fruit yet.

Adding OS and security update stats to the Android Ecosystem Security Transparency Report could go a long way towards shaming OEMs into being better and giving consumers more information with which to make purchasing decisions, but time will tell if Google ever goes so far as to name OEMs specifically.


October 26, 2018  7:21 PM

Google sets Android security updates rules but enforcement is unclear

Profile: Michael Heller
Android, Google, Google Apps, Google Play Store, Security updates

The vendor requirements for Android are a strange and mysterious thing but a new leak claims Google has added language to force manufacturers to push more regular Android security updates.

According to The Verge, Google’s latest contract will require OEMs to supply Android security updates for two years and provide at least four updates within the first year of a device’s release. Vendors will also have to release patches within 90 days of Google identifying a vulnerability.

Mandating more consistent Android security updates is certainly a good thing, but it remains unclear what penalties Google would levy against manufacturers that fail to provide the updates or if Google would follow through on any punitive actions.

It has been known for years that Google sets certain rules for manufacturers who want to include the Play Store, Play services and Google apps on Android devices, but because enforcement has been unclear the rules have sometimes been seen as mere suggestions.

For example, Google has had a requirement in place since the spring of 2011 mandating manufacturers to upgrade devices to the latest version of the Android OS released within 18 months of a device’s launch. However, because of the logistics issues of providing those OS updates, Google has rarely been known to enforce that requirement.

This can be seen in the Android OS distribution numbers, which are a complete mess. Currently, according to Google, the most popular version of Android on devices in the wild is Android 6.0 Marshmallow (21.6%), followed by Android 7.0 (19%), Android 5.1 (14.7%), Android 8.0 (13.4%) and Android 7.1 (10.3%). And not even showing up on Google’s numbers because it hasn’t hit the 0.1% threshold for inclusion is Android 9.0 released in August.

Theoretically, the ultimate enforcement of the Android requirements would be Google barring a manufacturer from releasing a device that includes Google apps and services, but there have been no reports of that ever happening. Plus, the European Union’s recent crackdown on Android gives an indication that Google does wield control over the Android ecosystem — and was found to be abusing that power.

The ruling in the EU will allow major OEMs to release forked versions of Android without Google apps and services (something they were previously barred from doing by Google’s contract). It will also force Google to bundle the Play Store, services and most Google apps into a paid licensing bundle, while offering — but not requiring — the Chrome browser and Search as a free bundle. Early rumors suggest Google might offset the cost of the apps bundle by paying OEMs to use Chrome and Google Search, effectively making it all free and sidestepping any actual change.

These changes only apply to Android devices released in the EU, but it should lead to more devices on the market running Android but featuring third-party apps and services. This could mean some real competition for Google from less popular Android forks such as Amazon’s Fire OS or Xiaomi’s MIUI.

It’s still unknown if the new rules regarding Android security updates are for the U.S. only or if they will be part of contracts in other regions. But, an unintended consequence of the EU rules might be to strengthen Google’s claim that the most secure Android devices are those with the Play Store and Play services.

Google has long leaned on its strong record of keeping malware out of the Play Store and, when Play services are installed, off of user devices. Google consistently shows that the highest rates of malware come from sideloading apps in regions where the Play Store and Play services are less common — Russia and China — and where third-party sources are more popular.

Assuming the requirements for Android security updates do apply in other regions around the globe, it might be fair to also assume they’d be tied to the Google apps and services bundle (at least in the EU) because otherwise Google would have no way to put teeth behind the rules. So, not only would Google have its stats regarding how much malware is taken care of in the Play Store and on user devices by Play services, it might also have more stats showing those devices are more consistently updated and patched.

The Play Store, services and Google apps are an enticing carrot to dangle in front of vendors when requiring things like Android security updates, and there is reason to believe manufacturers would be willing to comply in order to get those apps and services, even if the penalties are unclear.

More competition will be coming to the Android ecosystem in the EU, and it’s not unreasonable to think that competition could spread to the U.S., especially if Google fears facing similar actions by the U.S. government (as unlikely as that may seem). And the less power Google apps and services have in the market, the less force there will be behind any Google requirements for security updates.

 


October 15, 2018  4:45 PM

Mystery around Trend Micro apps still lingers one month later

Profile: Rob Wright
Security

It’s been a little over a month since several Trend Micro apps were kicked out of the Mac App Store by Apple over allegations of stealing user data, but several crucial questions remain unanswered.

To recap, security researchers discovered that seven Trend Micro apps were collecting users’ browser data without notifying users (the vendor claims the data collection was included in its EULAs, but it later conceded the apps had no secondary, informed consent process). Following the removal of those apps, Trend Micro’s story of what took place changed several times – the first statement indicated everything was fine and that the apps were working as designed, while subsequent updates blamed the fiasco on common code libraries that were mistakenly used in certain apps and conceded that the user notification and permission processes needed an overhaul.

Trend Micro last week issued its latest statement on the situation, which included an answer to a vital question about what had happened with these Mac apps: “The data was never shared with any third party, monetized for ad revenue, or otherwise used for any purpose other than the security of customers.”

While that was an important disclosure, there were still questions Trend Micro had yet to answer. I sent some of those questions to Trend Micro; a company spokesperson replied with a statement addressing some of the points but sidestepping others.

  • What happened with “Open Any File: RAR Support”? Initially, researchers identified several apps that were collecting browser histories, and Trend Micro disclosed that six of those apps — Dr. Antivirus, Dr. Battery, Dr. Cleaner, Dr. Cleaner Pro, Dr. Unarchiver and Duplicate Finder — were the company’s property. But two days later, Trend Micro named a seventh app, Open Any Files. Why did it take two days for the company to disclose this? How did Trend Micro not know the Open Any Files app belonged to them? Trend Micro didn’t directly address these questions.
  • Why wasn’t Open Any Files listed as a Trend Micro app? This is one of the stranger parts of the Trend Micro apps controversy. According to a cached Mac App Store page for Open Any Files, there’s no mention of Trend Micro at all. Instead, the app is attributed to a developer named “Hao Wu,” and the description lists Wu as the copyright holder as well. Here is Trend Micro’s answer: “Open Any Files was created by a former Trend Micro developer as a short term pilot project to provide consumers with a number of helpful utilities,” the spokesperson wrote. “As there were no long term plans in place for the support of this application at the time of registration and copyright, full corporate branding was not applied. As you will know, we have decided to stop development and distribution of this particular app.” The spokesperson also said Open Any Files, was released in late 2017 with the browser data collection module enabled, but “starting with the version released in April 2018 (which was publicly available when this issue was reported in September) that functionality had already been removed.”
  • What was Open Any Files’ purpose? The only indications that Open Any Files belonged to Trend Micro are, according to Malwarebytes’ Thomas Reed, that the app was uploading users’ browser data to a Trend Micro domain, and that it promoted another Trend Micro app, Dr. Antivirus. “Promoted” might be too soft a word; according to Reed’s assessment, Open Any Files was similar to other “scam applications” that warn users who attempt to open a file with the app that the file in question can’t be opened because it is infected and that they should scan the file with the promoted antivirus app. I asked Trend Micro if the company disputed Reed’s characterization of the app; the spokesperson did not address this question.
  • Who is Hao Wu? It appears from Trend Micro’s statement that Wu is a former developer at the company, but the company isn’t saying anything beyond that. Information from Apple’s Mac and iOS app stores is limited as well. It appears the developer behind Open Any Files is the same Hao Wu listed as the developer of other apps such as Weird Calc, iWiFiTest, Mr. Cleaner and Thinnest Calculator, but the developer’s app store profile appears to have been removed.
  • Is Trend Micro sure how much data its apps collected? On multiple occasions, the vendor explicitly stated data collection included only a small snapshot of users’ browser data – 24 hours prior to the installation of the apps. But Reed’s analysis of several of Trend Micro’s apps, including Open Any File and Dr. Antivirus, found they were collecting complete browsing and search histories from users. “It could be argued that it is useful for antivirus software to collect certain limited browsing history leading up to a malware/webpage detection and blocking,” Reed wrote in his analysis. “But it is very hard to argue to exfiltrate the entire browsing history of all installed browsers regardless of whether the user has encountered malware or not.” In addition, Reed discovered Dr. Antivirus was also uploading a list with “detailed information about every application found on the system,” which the company had yet to explain in its official statements and FAQ on the matter. Trend Micro responded to these questions. “We must reiterate our earlier statement that the apps in question performed a one-time upload of a snapshot of browser history covering the 24 hours prior to installation for security purposes,” the spokesperson wrote. “In addition, Dr. Antivirus included an app reputation feature that checked for malicious apps and fed anonymized app information into our large app reputation data base to protect users from potentially dangerous apps.”

It’s still unclear why Trend Micro would allow one of its developers to push out an app like Open Any Files if the company – by its own admission – never had any long term support plans for it. It’s also unclear why Trend Micro would remove the data collection feature for this specific app (and not others) but never properly brand Open Any Files.

To its credit, Trend Micro hasn’t ignored the situation or tried to erase its earlier denials of wrongdoing. But given the situation, the company owes more transparency about this episode and what oversight and controls it has around its app development process. The application ecosystem is full of threats, with countless apps performing a bevy of unscrupulous activity or downright malicious attacks against users. We’ve come to expect that kind of activity from get-rich-quick scam artists, cybercriminals and APTs. We don’t, however, expect it to come from one of the world’s largest and most successful security vendors.


October 1, 2018  1:46 PM

FBI, DHS blaming the victims on Remote Desktop Protocol

Profile: Peter Loshin

As most of the nation watched the Senate battle over a contentious Supreme Court appointment, the FBI and DHS jointly released a “Public Service Announcement,” in which they warn us all, per the announcement title, that “Cyber actors increasingly exploit the Remote Desktop Protocol to conduct malicious activity.”

An interesting aspect of this warning is that we, all of us – “businesses and private citizens” – must “review and understand what remote accesses their networks allow and take steps to reduce the likelihood of compromise, which may include disabling RDP if it is not needed.”

Got it? The government now expects all of us – every single one of us – to understand this threat and take steps to mitigate it. The person who runs the diner down the street, your parents and grandparents, college kids and retirees and disabled people and single parents; we’re all now on the hook for fixing this particular cyber problem.

It is no secret that the Remote Desktop Protocol has long been a source of exploitable vulnerabilities, and it is well known in the cyber community that RDP should be disabled in almost all cases. Jon Hart, senior security researcher at Rapid7, wrote last year in a blog post that there have been 20 Microsoft security updates on threats related to RDP, with at least two dozen individual CVEs dating back to 1999; he also noted that exploits targeting RDP were part of a 2017 ShadowBrokers leak.
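For admins who want to act on that advice, the first step is knowing whether RDP answers at all on their hosts. Here is a minimal sketch of that check, using only the Python standard library; the addresses are RFC 5737 documentation placeholders, to be replaced with hosts you actually own or administer.

```python
# Sketch: check whether anything accepts TCP connections on the RDP port.
import socket

def rdp_exposed(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
    """Return True if the host accepts a TCP connection on the RDP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True   # something answered; RDP may be exposed here
    except OSError:
        return False      # closed, filtered, or unreachable

# Placeholder documentation addresses; substitute your own hosts.
for host in ("192.0.2.10", "198.51.100.7"):
    print(host, "answers on 3389" if rdp_exposed(host) else "no RDP response")
```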

However, what the FBI and DHS warning omits is that the Remote Desktop Protocol is really the Microsoft Remote Desktop Protocol, a proprietary protocol owned by Microsoft. In fact, the Public Service Announcement does not mention the words “Microsoft” or “Windows” once.

If the U.S. government truly wanted to protect its citizens from the depredations of ransomware operators who, like the SamSam threat actor, are abusing RDP to gain access to victim systems, couldn’t the government work directly with Microsoft to mitigate the vulnerability rather than putting the onus for cyberdefense on the victims?

This warning makes me wonder: What does the U.S. government really care about when it comes to “fixing the cyber”?

Cooperating with vendors on encryption, but not on RDP

When it comes to end-to-end encryption, the FBI sings a different tune.

The FBI has been targeting unbreakable end-to-end encryption because, we’ve been told, it interferes with the government’s ability to get lawful access to relevant evidence in some criminal cases. From the moment the government demanded that Apple decrypt an iPhone used by a shooter who was involved in the 2015 San Bernardino mass shooting, it was clear the FBI would continue to take the steps it deemed most effective to battle what it calls “going dark.”

That included pressuring tech giants like Apple, as well as Microsoft, Google, Facebook and others; it also includes government leaders speaking out in favor of encryption backdoors and lobbying for legislation that would require tech firms to “solve the problem,” or else.

It seems to me that an all-hands, all-fronts effort like the one mustered for “going dark” would be more effective in limiting cyberthreats like RDP than commanding citizens to “be on the lookout.”


September 14, 2018  8:38 PM

What the GAO Report missed about the Equifax data breach

Profile: Rob Wright
Equifax

The Government Accountability Office did its part to deliver some closure regarding the Equifax data breach by way of a newly published report on the now-infamous security incident.

The GAO report offers a comprehensive look at the numerous missteps made by Equifax, which allowed attackers to maintain a presence in the company’s network for 76 days and extract massive amounts of personal data without being detected. Those errors included having an outdated recipient list of system administrators for vulnerability alerts and an expired digital certificate, which led to a misconfiguration in Equifax’s network inspection system.

But for all its merits, the GAO’s report on the Equifax data breach omits or minimizes important parts of the story. Here are five things that were left out of the report.

  1. Website “issues”: The GAO noted the breach checker website that Equifax set up for consumers suffered from “several technical issues, including excessive downtime and inaccurate data.” But that’s hardly an adequate description of what ailed the website. For starters, the domain resembled a classic phishing URL — equifaxsecurity2017.com. It was also built on a stock version of WordPress (was the company trying to get hacked again?). And it was vulnerable to cross-site scripting attacks. And the site’s TLS setup didn’t support certificate revocation checks. These are significantly bigger problems than website downtime.
  2. PIN problems: If the assortment of flaws with the breach checker website wasn’t enough, astute observers also discovered that the PINs generated for consumers who froze their credit files weren’t random, non-consecutive numbers; they were the date and time a consumer made the freeze request. As a result, the chances of threat actors guessing your code are significantly higher than they would be if the PIN digits were randomly selected (the sketch after this list quantifies the gap).
  3. Encryption: This is the biggest omission in the Equifax breach report. While the report does mention encryption several times, it’s never in regard to the personally identifiable information that was exposed by the breach, and how encryption could have protected that data even after the attackers gained access to Equifax’s environment. Instead, the majority of the encryption talk is around how the threat actors abused existing encrypted communication channels to avoid detection when they issued commands and exfiltrated data. Encryption is obviously a sensitive topic within the federal government, but it’s as if the GAO is more concerned with how encryption helped the attackers rather than with how it could have stopped them.
  4. Insider trading: The GAO report doesn’t include any references to the former Equifax executive who was charged with insider trading by the Department of Justice. Jun Ying, the former CIO of Equifax’s U.S. Information Systems business unit, allegedly used non-public information about the breach to exercise his vested options and sell all of his shares. While the case has no direct bearing on Equifax’s infosec posture, past or present, it’s a painful reminder that insider trading can be a by-product of enterprise breaches. Omitting any mention of Ying and the insider trading case from an accountability report seems like a missed opportunity for the federal government to address what could become a recurring problem as breaches increase in frequency.
  5. Lack of incident response plan: Incident response is sparsely mentioned in the report, and when the GAO does mention it, it’s in the footnotes. For all the faults and errors laid out in the Equifax breach report, the GAO fails to identify a fundamental problem: the company apparently didn’t have a functional incident response plan in place. This led to Equifax not only making several errors with its breach checker website but also to later missteps, such as not knowing whether the company had encrypted consumer data post-breach. A solid, proper incident response plan would have saved Equifax a lot of trouble in the aftermath of the attack.
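To put the PIN problem above in perspective, here is a back-of-the-envelope sketch comparing the search space of a timestamp-derived PIN with that of a properly random 10-digit PIN. The timestamp format (MMDDyyHHmm, as widely reported) is an assumption for illustration.

```python
# Sketch: why a timestamp-derived PIN is guessable and a random one isn't.
import secrets

# A PIN derived from the freeze request's date and time (e.g. MMDDyyHHmm)
# has at most 60 * 24 * 365 = 525,600 plausible values within a year, and
# far fewer if the attacker can guess the approximate day of the request.
TIMESTAMP_SPACE = 60 * 24 * 365

# A uniformly random 10-digit PIN has 10**10 possibilities.
RANDOM_SPACE = 10 ** 10

print(f"A random PIN space is ~{RANDOM_SPACE // TIMESTAMP_SPACE:,}x larger")

def random_pin(digits: int = 10) -> str:
    """What the site should have issued: a CSPRNG-generated numeric PIN."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

print(random_pin())
```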


August 17, 2018  8:25 PM

DHS cybersecurity rhetoric offers contradictions at DEF CON

Profile: Michael Heller
DHS, Election, vote

The Vote Hacking Village at Defcon 26 in Las Vegas was an overwhelming jumble of activity — a mock vote manipulated, children hacking election results websites, machines being disassembled — and among it all were representatives from local and federal government, learning and sharing information and, in the case of Jeanette Manfra, assistant secretary for the office of cybersecurity and communications in the Department of Homeland Security (DHS), deflecting the reality of the situation.

In her DEF CON keynote address, Manfra discussed government cybersecurity in general as well as ways election security could be improved, but she contradicted herself by refusing to acknowledge the value of the work done at DEF CON and by deflecting suggestions that would bring about real change.

The old standby arguments

“The way the government runs, naturally it’s somewhat more complicated. We don’t do IT [in] government particularly well,” Manfra said as an explanation of the DHS’ role. She said DHS was responsible for the cybersecurity of 99 different federal agencies, which have traditionally been isolated in terms of managing security. “We’re trying to get to think about enterprise risk and think about the government as a whole.”

This is a good example of the tone Manfra tried to establish: self-deprecating, but honest about the situation, even as she omitted key pieces of information that would contradict her points — such as the challenge of having a holistic view of federal cybersecurity when so many infosec leadership roles in government remain empty.

Manfra repeatedly noted that we live in confusing times in terms of cybersecurity, especially because “the internet has challenged everything when it comes to how we think about the role of government in defending and securing its citizens and its infrastructure.”

“For the first time in a national security space, the government is not on the front lines. Our companies are on the front lines; our citizens are on the front lines; all of you are on the front lines,” Manfra said and concluded this means everyone — government, the intelligence community and the private sector — needs to think differently about their roles in cybersecurity and how to work together. “Our adversaries have been taking advantage of us for a long time. They’ve been taking advantage of our traditional principles for a really long time. And, we’ve got to come up with a way to turn it back on them.”

The idea that the roles of government and the private sector are in flux because of changes in technology is arguably accurate, but the situation is more complex than Manfra portrays. One could just as easily point to the privatization of critical infrastructure and lack of regulations surrounding necessary security and system upgrades in that infrastructure as contributing risk factors.

Manfra’s call for more cooperation between the public and private sectors in terms of security has been a common theme from the government for the past few years. However, the government’s appeal to the private sector to cooperate out of the pride of helping the country has largely fallen on deaf ears, because as was true with Manfra’s speech, the government often fails to make a compelling case.

The government wants to share, but the private sector has little incentive to do so, and experts have said the private sector doesn’t necessarily trust it would benefit from such cooperation, nor that security would improve. Despite the continued reluctance from the private sector and the lack of specifics from the government about what such cooperation would look like, the government seems ready to continue pushing the idea.

Election deflection and contradictions

Once Manfra got to the topic of election security, she began to combine omissions of information with statements that contradicted one another and attempts to deflect suggestions for making meaningful improvements to security.

“Elections are more than just the voting machines … The complexity is actually a benefit,” Manfra said. “Going back to 2016 when we first started to understand that the Russians were attempting to undermine and sow chaos and discord and undermine our democracy in general — which by the way, they’ve been trying to do for decades, it’s just the technology has allowed them to do it at a better scale.”

Despite acknowledging that attempts to undermine our democracy have been happening “for decades,” Manfra failed to explain why efforts to investigate risk factors and offer aid to improve security did not begin until 2016.

Manfra went on to claim the research the government has done led to the conclusion that it is “really really difficult to try to manipulate the actual vote count itself.” She said there were a lot of reasons for this, including that election machines are “physically secured.” This claim garnered chuckles from the DEF CON crowd, who have little respect for things like padlocks.

Manfra said that while misinformation on social media was an issue, DHS was focused on manipulation of voter rolls and the systems that tally the votes. She gave an example of voters becoming disenfranchised with the system because their names weren’t on the rolls at their polling places. She admitted the local officials running these systems are often under-resourced and need help because they could be using old systems.

“They’re trying to undermine our democratic process and confidence that we have in the democratic process,” Manfra said. “There’s a lot of ways to do that without actually manipulating the vote. We’re very much focused on the state and local process that you and I all participate in — I hope — all the time.”

Manfra explicitly mentioned how trust in an election could be undermined if an adversary were to manipulate the unofficial tallies reported by states. However, she contradicted herself by discounting the efforts at DEF CON — where an 11-year-old girl hacked into a mock reporting website in 10 minutes and changed the results — telling reporters after the keynote, “If all you’re saying is ‘Look, even a kid can hack into this.’ You’re not getting the full story which could have the impact of the average voter not understanding.”

Manfra admitted the DHS has begun researching more experimental security technologies, like blockchain, to see what their effects could be on election security. But, it’s unclear how serious the DHS is about making changes that would improve security because she also shied away from mandating proven election security measures such as risk-limiting audits.

“I’m not there yet in terms of mandatory requirements,” Manfra told reporters. “I think mandatory requirements could chill, so then people are only about the compliance with the requirement and not the intent.”

Ultimately, it’s unclear if the DHS has real, actionable plans to improve election security beyond the nebulous idea of helping local officials — assuming those officials ask for help in the first place. DEF CON showed vulnerable areas (reporting websites) and ways to improve security (paper trails and risk-limiting audits), but DHS seemed more interested in waiting and watching than learning from the event.

