Security Bytes

March 4, 2015  5:19 PM

Why Hillary can’t mail

Robert Richardson

Reporting by The New York Times notwithstanding, it appears to this non-lawyer that Hillary Clinton probably didn’t break any laws by using a personal email account to conduct state business. But legal or not, what should bother us all is that we can’t help but assume there’s no way that account was secure–that state secrets simply couldn’t be safe there.

The Times broke the story by saying that Clinton had not only made extensive use of a personal email account while Secretary of State, but that “Federal regulations, since 2009, have required that all emails be preserved as part of an agency’s record keeping system.”

It is neither clear that the requirements regarding email accounts were in effect in 2009 (or indeed, during the entirety of Clinton’s tenure in the office), nor that she failed to preserve the email messages (when the State Department requested the email last year, Clinton provided 50,000 messages).

The relevant law here, it would appear, begins with a memorandum to update an existing (1950) law guiding records management. The memorandum was issued by President Obama in 2011.

This directive didn’t change the rules, it simply called for the rules to be re-examined and updated. The National Archives and Records Administration (NARA) subsequently published their findings and proposed changes in August 2013. At that point, Clinton had been out of office for more than half a year. The NARA findings weren’t actually signed into law until 2014.

Nothing in the new rules (or the old ones) prevents a cabinet member from using a personal email account to conduct business, provided the records are kept and the emails are turned over to that cabinet member’s agency for preservation. And it’s been done before: Colin Powell used personal email for state business.

Legal or not, commentary on the Internet has been quick to say that using personal email is no way to treat sensitive national business. As a practical matter, that seems irrefutably true, though whether it’s equally true that the State Department’s email system is secure is, I’d say, an open question. Certainly their system for diplomatic cables had its rough edges.

But why is it so obvious or inevitable that a personal mail account is insecure? Email is, frankly, a pretty straightforward system. And yes, certificate management is a tricky subject that complicates encryption, but I see no reason why major providers of personal accounts couldn’t issue basic certificates as part of creating an account. This wouldn’t prevent fraudulent messages coming from that account (because the real identity of the account holder would presumably remain unverified), but it would mean that if I sent you a message encrypted with those keys, nobody else would be able to read it, give or take the NSA.
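To make the point concrete, here’s a minimal sketch of the kind of public-key encryption such provider-issued certificates would enable, using Python’s third-party `cryptography` package. The keys and message are illustrative only–a real deployment would tie the public key to the account via a certificate:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# In practice the provider would generate this pair at account creation
# and publish the public half in a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt to the account holder...
ciphertext = public_key.encrypt(b"Meet at the embassy at noon.", oaep)

# ...but only the holder of the private key can read the result.
plaintext = private_key.decrypt(ciphertext, oaep)
```
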

Hillary’s team didn’t lack for means and I suspect she has some pretty sharp people handling her IT. But somehow we’ve been concerned about computer security for a quarter century and we have no faith at all that she could set up her own secure email. Now there’s something that ought to be against the law.

February 18, 2015  5:01 PM

When is an ISAC not an ISAC?

Robert Richardson

A lot of what went on at the White House Summit on Cybersecurity and Consumer Protection, held at Stanford University last week, was for show–a reaction in particular to the attacks allegedly carried out by North Korea against Sony Pictures. Like any live event, there was also clearly some desire to get lots of the right people in the same room, though news reports pointed out that several of the right people, including the CEOs of Google and Facebook, opted out.

But this was also an event where the President took the time to show up and deliver a speech. Furthermore, President Obama made a point of publicly signing an executive directive, creating an air of something happening. The sound bite for what was going on, the way that the broad market media covered it, was that this directive encouraged sharing of cyber security threat information between the government and the private sector.

It’s worth noting that most of what the order calls for already exists in one form or another.

The organization traditionally tasked with combating cybercrime is the FBI, though the DHS increasingly seems to think it’s their problem, or at least that it’s their problem to detect incipient attacks and help build up private-sector defenses (since most of the infrastructure that makes up the Internet is in private hands). Presumably it’s still the FBI (and local law enforcement) that crashes through a hacker’s door and impounds their electronics before they can be wiped.

The FBI funded a nonprofit organization, InfraGard, to link US businesses to the FBI all the way back in 1996. Aside from the FBI efforts to foster cooperation, there are a number of information sharing and analysis centers (ISACs) for different industry verticals that were funded as a result of a presidential directive issued by Bill Clinton in 1998.

Since ISACs are still motoring right along, you could be forgiven if you found yourself wondering about the difference between an ISAC and an ISAO (information sharing and analysis organization), which is what Obama’s directive calls for.

As it turns out, there may well be no difference between an ISAC and ISAO, according to a fact sheet that the White House published alongside the directive. ISACs can be ISAOs, though they may have to follow somewhat different rules if they are, insofar as the directive also calls for a nonprofit agency to create a “common set of voluntary standards for ISAOs.”

Perhaps the key element lurking in the directive is the idea that this network of ISAOs, connecting to the National Cyber Security and Communications Integration Center (NCCIC) to foster public/private sharing, creates a framework that could serve as a reporting channel companies could use to gain protection from liability when reporting security incidents. This idea of liability protection for companies that share comes from legislation proposed by the President in January. It’s unclear whether Congress or the American public has much stomach for letting corporate America off the hook for leaving their barn doors open.

For the time being, though, just remember that an ISAC and an ISAO are probably the same thing. It’s just that now there’s going to be a whole lot more sharing going on for reasons that, well, aren’t entirely clear.

February 13, 2015  4:56 PM

Prevoty offers context-aware, automatic RASP

Robert Richardson

Though I’ll admit to a bit of skepticism about Runtime Application Self Protection (RASP), I was nevertheless impressed with a recent look at Prevoty. The two-year-old company’s product, which currently has support for Java and .NET Web applications and services, can be dropped into production systems without recoding being required, uses a separate server (either on premises or in the cloud) to do the heavy compute parts, and seems to have a smarter-than-average approach to determining the application context in which requests (such as SQL queries) are made within the running application.

Getting the context right is important in RASP, because otherwise you’re more or less just melding the brains of a web application firewall (WAF) to each of your applications—to no particular advantage over just using a real WAF. Nor is context all that easy to get. It’s not simply a matter of scanning the code or scanning user input on the fly, and Prevoty does no static scanning or signature hunting, according to Prevoty CTO Kunal Anand, who started his career working on Mars rovers at NASA, moved on to build and run the security team at MySpace, then graduated to be the technology head for BBC Worldwide.

“We don’t believe in pattern matching at all,” Anand says. Instead, Prevoty has recreated the parsing procedures that browsers and other components use. “If you think about SQL databases today,” he says, “the SQL database typically has what is called a query planner, which looks at a database query and can understand things like which indexes they need to pull.” Prevoty has built in a query planner that replicates how the planners in all the major SQL databases work. “We can see every field that’s going to be accessed, every function that’s going to be invoked, every table where information is going to be coming from, and we’re also able to do this recursively, so we can see subqueries and joins.” Protection against things like injected JavaScript works in similar fashion.
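Prevoty’s actual planner is proprietary, but the underlying idea–judging input by whether it changes the grammar of the query, rather than by matching signatures–can be sketched in a few lines of Python. The tokenizer and keyword list here are my own toy versions, not Prevoty’s:

```python
import re

# A crude SQL tokenizer: string literals, numbers, words, operators.
TOKEN = re.compile(r"""
    '(?:[^']|'')*'      |   # string literal
    \d+                 |   # number
    [A-Za-z_]\w*        |   # keyword or identifier
    <>|<=|>=|!=|.           # operators and punctuation
""", re.VERBOSE)

KEYWORDS = {"select", "from", "where", "and", "or", "union", "join", "order", "by"}

def skeleton(sql):
    """Reduce a query to its grammatical shape: keywords and operators
    survive, while literals and identifiers collapse to placeholders."""
    out = []
    for tok in TOKEN.findall(sql):
        if tok.isspace():
            continue
        low = tok.lower()
        if low in KEYWORDS:
            out.append(low)
        elif tok[0] == "'" or tok[0].isdigit():
            out.append("<lit>")
        elif tok[0].isalpha() or tok[0] == "_":
            out.append("<id>")
        else:
            out.append(tok)
    return out

def injected(template, user_input):
    # Build the query twice: once with a harmless placeholder, once with
    # the real input. If the input adds grammar (a quote, an OR, a
    # subquery), the two shapes differ -- no signature list required.
    return skeleton(template % "x") != skeleton(template % user_input)

q = "SELECT name FROM users WHERE id = '%s'"
print(injected(q, "42"))            # False - benign input
print(injected(q, "' OR '1'='1"))   # True  - input changed the query's shape
```
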

Like some other RASP products (Arxan’s, for instance), Prevoty can be used in a completely automatic fashion, where programmers don’t have to change a single line of code. “They’re simply including the Prevoty JAR files and adding it to the XML file for execution,” Anand explains.

There’s also an SDK approach where developers directly call Prevoty functions at appropriate moments in the control flow of their applications.

The automatic approach offers both a passive learning mode and a mode that will take action when problems are detected. The passive mode, being asynchronous, does not impact the normal performance of the application. The active mode’s performance impact depends on whether the deployment is using an on-premises engine or the cloud engine. If the engine is on premises, Anand says customers are seeing round-trip times between app and engine in the range of one to two milliseconds. Cloud users see round trips of 20 to 40 milliseconds. Processing at the engine takes seven milliseconds.

I was walked through a demonstration of adding Prevoty to an application, in this case the WebGoat application (OWASP’s sample application that’s intentionally built with a number of significant security flaws). Before installation, it’s trivially easy to inject malicious JavaScript into the application from a text field (in this case, an alert popup that says hello). After installation (which literally takes seconds), the same injection is detected and the response is bracketed so that it’s returned in a non-executable form that the browser simply ignores.

One important distinction between this form of detection and the sort of input validation that developers are generally encouraged to include with every input field is that Prevoty doesn’t perform this function by searching for strings of JavaScript–again, no pattern matching. If static scans for matching expressions were performed, the engine could be fooled by hackers who obscure their code by transforming it into characters that, when evaluated as script, perform the same functions as the original even though the obfuscated version doesn’t look like script at all. There are a number of ways to do this and I won’t dive into them here, but poke around and you’ll find several online tools that do it in various ways. To be fair, most obfuscation approaches have sufficiently characteristic appearances that new signatures for them can be added to a static scanning engine, but there’s always the chance of some new approach to obfuscation that one isn’t scanning for. Anyway, since Prevoty is aware of the context and what’s about to be executed, it recognizes that nothing should be executed from the text field and catches the problem whether it’s obfuscated or not.
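A toy illustration of why signature scanning falls short–the payload and the naive scanner here are mine, not Prevoty’s:

```python
import base64

payload = "alert('hello')"

# One of many obfuscation tricks: deliver the script base64-encoded and
# let the page decode and eval it at runtime. The behavior is identical,
# but the attack string never appears literally in the request.
encoded = base64.b64encode(payload.encode()).decode()
obfuscated = f"eval(atob('{encoded}'))"

def signature_scan(text):
    # A naive signature engine looking for the literal attack string.
    return "alert(" in text

print(signature_scan(payload))      # True  - the raw payload is caught
print(signature_scan(obfuscated))   # False - the encoded form slips through
```

A context-aware engine, by contrast, doesn’t care what the string looks like; it only cares that a text field is about to be treated as executable script.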

February 21, 2014  6:01 PM

Black Hat researcher turns out the lights

Robert Richardson

With Black Hat’s conference in Singapore coming up next month, I found myself chatting with independent security researcher Nitesh Dhanjani, who’ll be giving a presentation at the March 25-28 event. We talked mostly about some things he’d learned about the Philips hue lightbulb system. These LED bulbs, one of the hipster gems lurking in the accessory section of the Apple Store, are a fancy, Internet-connected product, squashed full of wireless circuitry. Among other things, you can use your iPhone to set them to one of 16 million colors (and these days, I should add, you can do this from an Android app as well).

The magic works because the lightbulbs talk in the Zigbee home automation wireless protocol to a base unit that acts as a bridge between Zigbee and Wifi. Your smartphone controls the lightbulb essentially by using an app that browses to a Web server running on the base unit.

There’s a security feature that Philips has incorporated – your smartphone won’t be able to send control commands through the base unit unless it’s been pre-registered. To carry this registration out, you’ve got to use the local Wifi connection and you have to first press a button on the base unit (setting the unit to an open registration mode for a few minutes).

Partway through his research, Dhanjani noticed a quirk in this security feature. He had been forced to completely reload his phone’s operating system and to install a fresh, unconfigured copy of the app. “I was walking over to my bridge, where I needed to press the button, and it just worked. So I thought, how does the app know? Because I’m supposed to press the button. I thought, it just has to be something that ties to the phone.”

Long story short, the app sends a hash of the phone’s Wifi MAC address. If you’ve got access to the Wifi network, you’ve got access to the MAC address and getting the hash value is trivial. So, imagine a virus that attacks a conventional endpoint on your home network, but then turns off your lights and just keeps looping to turn them off.
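As a sketch of why this scheme is weak–assuming, per Dhanjani’s description, an unsalted hash of the Wifi MAC address (MD5 here, as an assumption on my part)–anyone already on the network can recompute the token:

```python
import hashlib

def hue_token(mac):
    # Hypothetical reconstruction of the whitelist token: an unsalted
    # MD5 of the phone's MAC address, which anything on the LAN can
    # read straight out of the ARP table. Nothing secret goes in, so
    # nothing secret comes out.
    return hashlib.md5(mac.encode()).hexdigest()

# A piece of malware on any machine in the house could derive the same
# token and start issuing lights-off commands to the bridge.
print(hue_token("00:17:88:0a:bb:cc"))
```
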

Fatal? That’s probably too dark a view of the matter. But it’s more proof, if you needed it, that the Internet of Things will repeat the security mistakes of ten years ago.

September 27, 2013  9:14 PM

Fortune 1000 companies keep their mouths closed, Willis says

Robert Richardson

Ran across the Fortune 1000 Cyber Disclosure Report, published earlier this month by Willis North America, a unit of Willis Group Holdings. The report found that among the Fortune 501-1,000, 22% remained silent on cyber risk–a “significant” increase compared to the 12% of Fortune 500 firms that remained silent in their disclosures, Willis said.

Other findings:

  • The top three cyber risks identified by the Fortune 1,000 include: privacy/loss of confidential data, reputation risk and malicious acts.
  • Cyber terrorism and intellectual property risks ranked lower than expected among the Fortune 1,000 given the focus of the federal government on these areas of risk and their importance to the health of the U.S. economy overall, the report said.
  • When describing the “extent” of cyber risk exposures, financial institutions and technology companies rise to the top of the list disclosing distinct cyber exposures.  Meanwhile, firms in the energy and utility sector report the fewest distinct exposures.
  • In evaluating loss control measures, the industry groups that disclosed the greatest number of technical protections against cyber risk, including firewalls, intrusion detection, and encryption, include the technology, health care, professional services and financial institution sectors. Within financial services firms, insurance companies refer to technical risk protection 63% of the time.
  • With respect to cyber insurance protection, the funds sector (33%) followed by utilities (15%), the banking sector and conglomerates (14%) reported the greatest levels of insurance.  Insurance and technology sectors both disclosed the purchase of insurance coverage at the 11% level.  However, the report indicated that many companies may be under-reporting the level of cyber insurance coverage based on Willis data and other industry data indicating higher take up rates, particularly for the health care sector.
  • The disclosure of actual cyber events remains at 1%, a seemingly low number given the number of attacks that appear in the press on a regular basis, the report said.

August 30, 2013  3:27 PM

McAfee report summarizes second quarter

Robert Richardson

Out in the last few days is an interesting quarterly update report from McAfee.

Topline findings from the second quarter of the year include the following:

  • Banking Malware. Malicious parties employ mobile malware that captures usernames and passwords, then intercepts SMS messages containing bank account login credentials to directly access accounts and transfer funds.
  • Fraudulent Dating Apps. These apps dupe users into signing up for paid services that do not exist. The profits from the purchases are later supplemented by the ongoing theft and sale of user information and personal data stored on the devices.
  • Weaponized Apps. These threats collect a large amount of personal user information (contacts, call logs, SMS messages, location) and upload the data to the attacker’s server.
  • Ransomware. The number of new samples in the second quarter was greater than 320,000, more than twice as many as the previous period.
  • Attacks on Bitcoin Infrastructure. In addition to disruptive distributed denial of service attacks, victims were infected with malware that uses computer resources to mine and steal the virtual currency.

That last bit, the Bitcoin bit, was kind of interesting, in that the whole Bitcoin kerfuffle has completely fallen off the radar in a remarkably short amount of time. There’s a nice timeline in the report to refresh one’s memory–and I suspect we’ll hear more about Bitcoin someday fairly soon. On the other hand, it didn’t seem like McAfee could offer much information beyond what was already generally reported in the news.

August 23, 2013  8:34 PM

Fun and games with content security policies

Robert Richardson

On a typical Web page, it’s possible to load a script from another file. Typically, that bit of script will be something you, the site developer, will have put there yourself and it will be loaded from your server.

But it doesn’t have to be that way. It’s possible to load that script from just about anywhere and it’s also possible that someone malicious could use an input field on a form at your site to inject some scripting, or at least a call to invoke a script from elsewhere.

Since that injected script is probably up to no good, it would be nice to ensure that it can’t run. And that’s where the HTTP Content-Security-Policy header comes in. As a new entry on the Neohapsis blog puts it:

CSP functions by allowing a web application to declare the source of where it expects to load scripts, allowing the client to detect and block malicious scripts injected into the application by an attacker.

Sounds good, right? But the Neohapsis folks have found that really getting a handle on CSP takes some practice and some experimentation. So to facilitate this, they’ve created the CSPplayground, which includes a number of things, but in particular includes a bunch of examples of CSP screwups.
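For flavor, here’s what serving a basic policy might look like from a minimal Python web server. The directive values and the CDN hostname are placeholders, not a recommendation:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Scripts may only load from this site and one whitelisted CDN;
# plugins are disallowed outright.
CSP = "default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Scripts load only from whitelisted origins.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The browser enforces the policy: injected inline scripts and
        # scripts from non-whitelisted origins are simply not executed.
        self.send_header("Content-Security-Policy", CSP)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it locally:
# HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```

Note that by default CSP also blocks inline `<script>` blocks on your own pages–which is exactly the kind of surprise the Neohapsis playground is there to let you practice on.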

July 16, 2013  9:18 PM

Uniqul offers facial recognition payment scheme

Robert Richardson

I’m fairly sure that the folks at Uniqul are serious–they’ve recently announced a product that uses facial recognition to extract your payments when you buy things. You scan your purchases, it scans your face, and then you or somebody whose face is similar to yours pays for your stuff.

From their PR pitch:

Long gone are the days of having to scour the bottom of your wallet for that missing nickel, having to pick out the applicable bonus and payment card or logging in on mobile phone wallets. Payment is handled simultaneously as your wares are scanned – which means that you will be saving that ca. 30 second payment time every time you pay with Uniqul! In other words the transaction becomes almost instant – all that is required is that you press an OK button on our Point-of-Sale tablet! Face recognition is handled automatically by our algorithms to ensure a high level of security and fast recognition.

I’ll concede, it would be pretty cool not to have to bother with things like mobile phone wallets (as if I had one). It would be cool to save the 30 seconds per transaction. But it would not be cool to have to make the awkward expressions people make when using the product, as shown in their promo video.

May 1, 2013  1:56 PM

North Korean attacks on the rise?

Robert Richardson

Solutionary posted a blog piece late last week that takes a look at incidents originating from North Korean IP addresses. A couple of key findings:

  • North Korea has historically generated 34-200 touches per month against Solutionary clients… until February of 2013 when Solutionary recorded 12,473 touches – an 8445% increase over the average during the previous 12 months
  • It is important to note that just over 11,000 of these touches were directed against a single financial services entity as part of a prolonged attack, but that the remaining spike of around 1,000 was spread across its client base and was still a relevant number
  • North Korea has never been considered a “big player,” BUT things are beginning to change with the new regime in North Korea
  • Coincidence? The last spike in “touches” occurred in November 2012 when North Korea replaced their defense minister with “a more aggressive, hard-line military commander”
  • While the touches span across 13 industries, the financial sector was the top target, and has been for quite some time

The percentage increase statistic strikes me as well-nigh meaningless, given that the base was tiny–a couple hundred incidents a month–but this does seem to indicate that North Korea has found a new toy.
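A quick back-of-the-envelope check makes the point about the tiny base:

```python
# Working backwards from Solutionary's numbers: if 12,473 touches is an
# 8,445% increase over the trailing 12-month average, that average was:
touches = 12_473
pct_increase = 8_445

avg = touches / (1 + pct_increase / 100)
print(round(avg))  # roughly 146 -- comfortably inside the stated 34-200 range
```

With a baseline that small, almost any sustained campaign produces a four-digit percentage headline.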

August 29, 2012  1:21 PM

Trend Micro shoots down Crisis Trojan threat to VMware

Marcia Savage

Last week, when Symantec researchers said they had discovered the Windows version of the Crisis Trojan could spread to VMware virtual machines, it was big news. But Trend Micro doesn’t see Crisis as a major threat for enterprises using VMware. In fact, executives at the company think Crisis’s potential to spread to virtual machines was overblown.

“There was a fair amount of hype,” Harish Agastya, director of product marketing for data center security at Trend Micro, told me in a meeting this week at VMworld in San Francisco.

The Crisis malware only impacts Windows-based Type2 hypervisor deployments, not Type 1 hypervisor deployments, which are what most enterprises use, he said. “It’s specific to Type 2,” he said.

Warren Wu, director of product group management in the data center business unit, wrote a blog post that provided more details on the different deployments and attack scenarios. Here’s his description:

Type 1 Hypervisor deployment – Prime examples are VMware ESX, Citrix Xensource etc. It would help to think of these products as replacing the Host OS (Windows/Linux) and executing right on the actual machine hardware. This software is like an operating system and directly controls the hardware. In turn, the hypervisor allows multiple virtual machines to execute simultaneously.  Almost all data center deployments use this kind of virtualization. This is NOT the deployment this malware attacks. I’m not aware of malware capable of infecting Type 1 Hypervisors in the wild.

Type 2 Hypervisor deployment – Example VMware Workstation, VMware Player etc. In this case the hypervisor installs on TOP of a standard operating system (Windows/Linux) and in turn hosts multiple virtual machines on top. It is this second scenario that the malware infects. First, the host operating system is compromised. This could be a well-known Windows/Mac OS attack (with the only added wrinkle being the OS is detected and the appropriate executable is installed). It then looks for VMDK files and probably instantiates the VM (using VmPlayer) and then uses the same infection as that used for the Host OS. This type of an infection can be stopped with up-to-date, endpoint antimalware solutions.

What makes Crisis unique, Wu wrote, is that it specifically seeks out virtual machines and tries to infect them. It also infects the VM through the underlying infrastructure by modifying the VMDK file instead of infecting the VM through more conventional avenues such as file shares, he said.

Trend Micro has made a name for itself in virtualization security, so what the company is saying about Crisis carries a lot of weight. Trend Micro was the first security vendor to partner with VMware and produce an agentless antivirus product. At VMworld, the company launched the latest version of its Deep Security server security platform, which provides anti-malware and firewall protection, intrusion prevention and integrity monitoring to protect virtual servers and desktops.

The new version features caching and de-duplication functions to reduce file scanning and improve performance and hypervisor integrity monitoring. Deep Security 9 also includes integration with VMware’s vCloud Director and Amazon Web Services. That integration combined with a unified management console will enable customers to manage security of their physical, virtual and cloud servers from a single console, Agastya said.

Trend also launched Trend Ready for Cloud Service Providers, a program that provides certification that Trend Micro’s cloud security products – Deep Security and Secure Cloud– are compatible within a service provider’s environment, said Scott Montgomery, global strategic director of cloud provider business development at Trend. AWS, Dell, HP Cloud Services and Savvis are among the cloud service providers that have received the Trend Ready designation.
