Security Bytes


March 21, 2012  4:28 PM

Mobile device protection: OWASP working on a Top 10 mobile risks list

Robert Westervelt

I came home from RSA Conference 2012, having attended a number of panel discussions about mobile device protection, mobile security threats and ways IT teams can build control and visibility into employees' smartphones, feeling that many of the session panelists had overhyped the risks.

In one session, a few experts warned incessantly about weaponized applications; in another, a security expert dwelled on skyrocketing mobile malware statistics. It was rather off-putting that there was so little discussion of how mobile device platforms are built differently from desktop OSes. In fact, a Microsoft network analyst attempted to compare the evolution of iOS and Android to the evolution of Windows, a comparison several security experts told me is nearly impossible to make. Security capabilities designed to isolate applications from critical processes, including sandboxing, are built right into the mobile platforms themselves.

I spoke to Kevin Mahaffey, CTO and founder of Lookout, which targets security-conscious consumers with a mobile application that provides antimalware protection, device location, remote wipe and secure backup features. Mahaffey was very forthcoming, saying he believes both Google Android and Apple iOS are the most secure OSes ever built.

His comment, which is no doubt debatable, made me seek out good sources of non-hyped potential risks posed by mobile devices to the enterprise. I may have stumbled upon the beginnings of a good list.

Several security experts active with the Open Web Application Security Project (OWASP) are developing a list of mobile risks. OWASP, known for its Top 10 Web Application Vulnerabilities list, has come up with a Top 10 Mobile Risks list. It was released in September and has been undergoing an open review period for public feedback. It is still a work in progress and will follow an annual revision cycle.

List of Mobile Risks:

1. Insecure Data Storage
2. Weak Server-Side Controls
3. Insufficient Transport Layer Protection
4. Client-Side Injection
5. Poor Authorization and Authentication
6. Improper Session Handling
7. Security Decisions Via Untrusted Inputs
8. Side Channel Data Leakage
9. Broken Cryptography
10. Sensitive Information Disclosure

The experts who prepared the list, Jack Mannino of nVisium Security, Mike Zusman of Carve Systems and Zach Lanier of the Intrepidus Group, have been actively researching mobile security issues. They produced an OWASP Top 10 Mobile Risks presentation describing and substantiating the threats posed by each item on the list.

Attackers are going to target the data, so data is at risk both on the back-end systems that mobile applications tap into and in the caches stored on the device itself. Properly implemented server-side controls are essential, according to the presentation. A lack of encryption for data in transit was also cited, reminding me of my earlier post on the NSA's VPN tunneling requirement and its other mobile security recommendations. Properly executed authentication is a must, and many of the garden-variety vulnerabilities that plague desktop software (XSS and SQL injection) are repeatable in mobile applications. The presentation wraps up with a call for developers to use properly implemented key management, along with tips for making a mobile application more difficult to reverse engineer.
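To make the top item on the list concrete, here is a minimal Python sketch (my own illustration, not taken from the OWASP presentation) contrasting plaintext caching of a credential with encrypting it before it hits storage. The file names and token are made up, and a real mobile app would keep the key in the platform keystore rather than in memory:

# Illustrative only: contrasting the "insecure data storage" risk with
# encrypting cached data before it is written out. Requires the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

session_token = b"user-session-token-123"   # hypothetical sensitive value

# Risky pattern: caching a credential in plaintext on local storage.
with open("cache.txt", "wb") as f:
    f.write(session_token)

# Safer pattern: encrypt before writing. On a real device the key would live
# in the iOS Keychain or Android Keystore, not in a variable or a file.
key = Fernet.generate_key()
with open("cache.enc", "wb") as f:
    f.write(Fernet(key).encrypt(session_token))

# Reading the cache back requires the key, so a lost or stolen device (or a
# backup scraped from it) does not expose the token directly.
with open("cache.enc", "rb") as f:
    assert Fernet(key).decrypt(f.read()) == session_token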

I think the list gets to the heart of the issues without overhyping the threats. I hope it gains more visibility. I’d like to see it referred to more in public discussions about the potential weaknesses in mobile devices.

March 19, 2012  8:07 PM

Duqu Trojan written by professional software development team

Michael Mimoso

Researchers at Kaspersky Lab have determined that the authors of Duqu, the remote access Trojan often linked to Stuxnet, used a custom version of the C programming language to write the module that communicates with its command-and-control servers.

Kaspersky, which has done a deep analysis of the Duqu Trojan's code framework, had difficulty identifying the programming language and put out a call to the development community to help identify it. Most malware, said Vitaly Kamluk, Kaspersky Lab's chief malware analyst, is written in simpler languages that allow faster development, such as Delphi. The lab got more than 200 responses and, after further analysis, concluded that the code was written in a custom object-oriented C dialect known as OO C, which was compiled with the Microsoft Visual Studio 2008 compiler, Kamluk said.

“Few [malware writers] write in assembler and C; this is pretty rare,” Kamluk said. “Using custom frameworks is quite specific. We think they are software programmers, not criminals. This is what we call ‘civil code.’”

So what’s the big deal? Well, this likely confirms nation-state involvement in the development of Duqu. No organized band of credit card thieves or hacktivists is going to invest the time and money to build a Trojan using a reusable development framework in a language used for complex enterprise applications. Kaspersky also indicated a level of separation between developers on the team, groups of which could have been developing different components of the Trojan without knowing the full mission—plausible deniability.

The primary mission of Duqu, unlike Stuxnet, is to gather and forward information from its targets. Duqu has nowhere near the penetration of Stuxnet because it has no worming capabilities. Instead, Kamluk said, it is targeted toward specific computers or people. “It has to be sent to a target and the target must execute it,” he said.

Kamluk characterized the authors as “old-school professional developers” with a comfort level in C, which works faster and is more efficient when compiled versus languages such as Delphi. Also, Kamluk said, the framework is reusable.

“This framework could be designed by someone and other developers would use this approach to write code. This is a bigger development team, possibly 20 to 30 people,” he said. “There was a special role too of a software architect who oversaw the project and development of the framework that was reused. Other roles were likely command-and-control operators, others developing zero-day attacks, others in propagation and social engineering.”

“We suspect it could be within different organizations and each responsible for a particular part of the code, not knowing what it would be used for. They didn’t know they were developing malware probably,” Kamluk said.

While he wasn’t ready to identify the authors by name or location, Kamluk said Kaspersky was seeing some Duqu infections in Sudan, Iran and some European countries. Stuxnet, which is widely believed to be a joint U.S.-Israel operation targeting a nuclear facility in Iran, is linked to Duqu because of similarities in code and code structure.

“We are not close to answering which country might be behind Duqu,” Kamluk said. “They try to hide their identities by not using any language constructions in the code. There are no words inside the code, no random names of files or system objects. They stayed language independent.”


March 15, 2012  6:16 PM

NSA mobile security plan could be roadmap for all mobile device security

Robert Westervelt

Security research firm Securosis has started a series of blog posts about how to protect enterprise data on Apple iOS smartphones.  Securosis’ Rich Mogull explains that companies are increasingly feeling pressure from employees to support iOS. But how does the IT security team ensure the protection of sensitive enterprise data on devices they have little control over?

According to Mogull:

The main problem is that Apple provides limited tools for enterprise management of iOS. There is no ability to run background security applications, so we need to rely on policy management and a spectrum of security architectures.

Mogull’s first post in the series lays out the security capabilities in iOS and highlights some of the technical reasons why the iPhone has been relatively immune to malware and other threats.

It’s clear that a tightly controlled mobile device will have to use a combination of external security technologies and internal data protection capabilities. The NSA’s “Mobility Capability Package” (.pdf), a report outlining the first phase of its recommended Enterprise Mobility Architecture, could be the blueprint needed for the private sector, according to some experts I’ve recently talked to.

The NSA unveiled the report during the RSA Conference 2012 and held a session outlining its secure mobility strategy. While it’s extremely restrictive, I think the recommendations appear to be the way most of the security industry is headed.

Among the report's key recommendations:

  • All mobile device traffic should travel through a VPN.
  • All devices should use AES-256 full disk encryption.
  • Use of Bluetooth, WiFi, voicemail and texting should be tightly controlled.
  • GPS should be disabled except for emergency 911 calls.
  • Administrators should be able to prevent users from tethering.
  • Administrators should be able to disable over-the-air software updates.

A virtual private network (VPN) establishes a secured path between the user equipment and the secured access networks, with a second layer of encryption required to access classified enterprise services.

Bruce Schneier recently highlighted the NSA mobile security guidance document in a blog post and eyed the VPN tunnel recommendation. “The more I look at mobile security, the more I think a secure tunnel is essential,” Schneier wrote.

Full disk encryption (FDE) is currently available for Android devices. FDE for Apple devices currently falls short, but DARPA has been working on the problem, and according to Winn Schwartau, chairman of the board of directors at Atlanta-based mobile device security firm Mobile Active Defense, well-implemented FDE for iOS devices is “weeks” away.

Apple introduced data encryption capabilities in iOS 4.0. As part of its data protection feature, Apple enables mobile application developers to store sensitive application data on disk in an encrypted format. The first iteration only encrypted files when the device was in a locked state, with the phone-unlock passcode serving as the encryption key. iOS 5.0 added further security levels for protected files.

 

Under the NSA plan, smartphone users would be required to have an installed initialization program that launches as soon as the smartphone is turned on. The program would check the device's OS and ensure only authorized applications and operating system components are loaded. The device owner would be required to enter a PIN or passphrase to unlock the phone, and then, as a second factor, a password would be needed to decrypt the device's memory.

Once the memory is decrypted, the user then starts the VPN, which establishes a tunnel from the device to the infrastructure. The device is then registered with the Session Initiation Protocol (SIP) server and a TLS connection is tunneled through the VPN connection.
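Pieced together, the paper's boot-to-call sequence reads like a small state machine. The Python sketch below is a rough illustration of that ordering; the class and method names are hypothetical stand-ins for the components the NSA describes, not any real API.

# Hypothetical sketch of the sequence described in the NSA paper.
class SecureHandset:
    def __init__(self, pin: str, storage_password: str):
        self.pin = pin
        self.storage_password = storage_password
        self.unlocked = False
        self.decrypted = False

    def boot_check(self) -> None:
        # Initialization program: verify the OS and allow only authorized
        # applications and operating system components to load.
        print("integrity check: OS and applications verified")

    def unlock(self, pin: str) -> None:
        # First factor: a PIN or passphrase unlocks the phone.
        if pin != self.pin:
            raise PermissionError("wrong PIN")
        self.unlocked = True

    def decrypt_storage(self, password: str) -> None:
        # Second factor: a separate password decrypts the device's memory.
        if not self.unlocked or password != self.storage_password:
            raise PermissionError("cannot decrypt storage")
        self.decrypted = True

    def connect(self) -> str:
        # Only after decryption: start the VPN, register with the SIP server
        # and tunnel a TLS connection through the VPN.
        if not self.decrypted:
            raise RuntimeError("storage still encrypted")
        return "VPN up; SIP registered; TLS tunneled through the VPN"

phone = SecureHandset(pin="1234", storage_password="long-passphrase")
phone.boot_check()
phone.unlock("1234")
phone.decrypt_storage("long-passphrase")
print(phone.connect())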

Phone calls made by a smartphone user would be routed by the cellular carrier to mobility infrastructure maintained by the government. The device must already have established a secure VPN connection to be reachable, according to the paper.

To be clear, some of the capabilities recommended by the NSA will be easier to develop for Android devices, since Google's code base is publicly available. Under its Project Fishbowl, the agency is developing a hardened smartphone that meets its security requirements using a modified version of Android. But other capabilities, including FDE and the VPN requirement, will be feasible and justifiable on any mobile platform. Exactly how this can be implemented, and more importantly, how it can be enforced by IT security teams, is an issue still being addressed by researchers. Mobile device management products typically require software running on the device, and nearly all of these technologies require end-user interaction and can be bypassed.

It’s going to be fun watching more robust mobile device security technologies emerge.


March 15, 2012  2:51 PM

Can a security industry association bring us all together?

Jane Wright

You don’t have to work in the infosec world for long before you hear strands of the unofficial industry anthem: “Let’s work together.” Arthur Coviello, chairman of RSA, the security division of EMC, practically sang the chorus in his keynote address at RSA Conference 2012. “We are in this fight together,” Coviello said. “Knowledge by one becomes power for all of us.”

Can security pros from different organizations really work together?

Andrew Rose, a principal analyst at Forrester Research, doubts it. In a blog post last month, Rose recounted meeting a representative of a European regulatory body. “(She believed) the future lay in open and honest sharing between organizations – i.e. when one is hacked, they would immediately share details of both the breach and the method with their peers and wider industry.”

But Rose believes this view is too idealistic, and organizations will refuse to share such information for fear of reputation or brand damage. “As a security professional, it’s tough to acknowledge in a public forum that you may even have something to share with colleagues at other firms, lest the press get hold of the information and twist it into a fictitious ‘XXXX Corp hacked!’ story,” Rose wrote.

There appears to be some hope for security information sharing between security pros within vertical industries. The Financial Services Information Sharing and Analysis Center (FSISAC) is one of 14 security information-sharing associations formed at the behest of the U.S. federal government. According to its website, FSISAC members receive “timely notification and authoritative information specifically designed to help protect critical systems and assets from physical and cybersecurity threats.”

Sounds good, right? But click on over to the FAQ page of the FSISAC website and read the question, “Why should my firm join?” The answer addresses protecting critical infrastructure, but then adds, “If the private sector does not create an effective information sharing capability, it will be regulated: This alone is reason enough to join.”

Clearly this is not the high-minded perspective Coviello had in mind. But then again, I wouldn’t count on a vendor’s call to action as the foundation for a security industry association. Vendor-neutral associations such as ISSA are probably our best hope.

We may never find a balance between our competitive, and somewhat paranoid, human nature on one hand, and values such as openness and honesty on the other. But it’s good to keep tugging on both ends of the rope, if only to keep the conversation going.


March 13, 2012  1:37 PM

Information security roles and the cloud

Marcia Savage

A recurring theme I hear at conferences is that security teams can’t fight the inevitable shift to cloud computing, and instead need to figure out ways to adapt. This message was echoed at RSA Conference 2012, where a panel of CISOs urged the industry to get ahead of the cloud trend and ensure cloud services are adopted securely.

With its potential to slash IT costs, cloud computing is driving fundamental change in organizations, said Jerry Archer, senior vice president and CISO at Sallie Mae. “Everyone in this room will be impacted by it,” he told attendees.

That got me thinking: How will information security roles change as cloud computing becomes more prevalent in the enterprise? Do security pros need to worry about looking for other lines of work as security responsibilities shift to public clouds?

Industry experts I talked to see security pros continuing to play an important role as cloud adoption accelerates. After the RSA panel, Archer told me that security pros may need to acquire additional knowledge, for example in the area of contracts and law. But security is necessary and those with security expertise become “the gatekeepers” in this new IT environment, he said.

Cloud Security Alliance Executive Director Jim Reavis said security roles will change depending on the organization – whether it’s a cloud provider or cloud consumer. Providers will need to be able to provide the whole stack of security expertise and technologies while consumers will be looking to leverage higher layers of the cloud stack – SaaS and PaaS. For security pros working at organizations that are cloud consumers, this will mean a shift away from operational skills to application skills and closer work with business units, he said.

“I don’t think IT teams or security teams will disappear because of cloud,” Reavis said. “If you’ve got security expertise, you’ll be well employed for many years to come.”

Randall Gamby, information security officer for the Medicaid Information Service Center of New York (MISCNY), told me he sees security’s role falling in the vendor management space when it comes to cloud. Security professionals need to help organizations ask the right legal and technical questions of a cloud provider to ensure their data is protected.

“Being able to set up criteria to judge a cloud vendor and understand not only the services it offers, but the risks it may pose is important,” he said.

How do you think information security roles will change as cloud services become more prevalent? Leave me a comment below.


March 12, 2012  1:02 PM

Getting offensive about zero-day vulnerabilities and exploits

Michael Mimoso

The big, bad, scary zero-day exploit: it sends almost the same kind of shivers down everyone's spine as APT. Yet, like the advanced persistent threat, the zero-day is suffering some hype fatigue. More Web servers are popped by known bugs and exploits than by some shadowy, secretive attack crafted by the Electrical Engineering University of China's People's Liberation Army. Yet companies are still bombarded with marketing FUD about zero-days, despite numbers indicating that exploits hitting unknown vulnerabilities account for less than 1% of all malware.

So do zero-days matter? Like everything else in security, it depends. If you're in the bug-hunting and bug-selling business, they sure do. Last week's CanSecWest hacker, err, researcher conference in Vancouver was a zero-day Lollapalooza, with companies like VUPEN taking dead aim at Google Chrome and Microsoft's IE9 browser with zero-days developed just for the event. The French company, called out by privacy advocate Chris Soghoian at the recent Kaspersky Security Analyst Summit, admits to holding on to certain vulnerabilities and exploits for its customers only, at times refusing to share information with the affected vendors. Soghoian said VUPEN and others sell exploits to governments, which pay a heck of a lot more for what can be turned into a weaponized exploit than, say, a security conference or a bug bounty program such as TippingPoint's Zero-Day Initiative.

VUPEN CEO Chaouki Bekrar told Threatpost that VUPEN's government customers are only trusted democracies, not oppressive countries. Taking him at his word, there's still the argument that while a select few get a fix, the general user population remains exposed. It's silly to think attackers aren't already way ahead of the game, with their own share of unreported bugs and exploits at their disposal, but this level of backroom wheeling and dealing is disconcerting. It casts a poor light on offensive security research, and events like the Pwn2Own contest are probably unwittingly aiding and abetting it.

I had a conversation with Microsoft senior security strategist lead Katie Moussouris recently about zero-days and vulnerability disclosure. Katie has been in the security business a while, including a stint at @Stake back in the day, and she said Microsoft’s experience with the research community is much different. She said that 80% of vulnerabilities found in Microsoft products are disclosed privately, and 90% of those disclosures are made directly to Microsoft. Most researchers, she said, are not motivated by money, but by intellectual curiosity. As a result, Microsoft has shied away from offering a bug bounty, and has instead focused on rewarding defensive security research with initiatives such as its Blue Hat Prize.

These are watershed days for security researchers and vulnerability disclosure. To be honest, the whole disclosure debate probably gives most of you a headache, worst of all if you're a CISO sitting between the researchers, the vendors and the VUPEN-like middlemen while all this wrangling plays itself out. Tim Stanley, former CISO at Continental Airlines, summed it up best a couple of years ago:

“I love the love-fest between the vendors and researchers, but quite honestly, I don’t give a hoot. I’m the consumer, the guy who paid for the product that I expect to be correct in the first place. I’m the guy who paid for the software. When am I gonna know? The issue becomes a matter where the people paying for the product need to be better represented in this process.”

Amen.


March 8, 2012  3:56 PM

Changes to European privacy laws foreshadow serious business impact

Jane Wright

Changes to data protection regulations are on the way for the 27 countries of the European Union, and the fallout in Europe serves as a good case study for U.S. governing bodies and businesses that are also playing tug-of-war over compliance regulations.

Businesses in the U.K. are steaming over the DPA proposals. In fact, our U.K. bureau chief, Ron Condon, described the reaction of the Confederation of British Industry (CBI), a lobbying organization representing more than a quarter-million companies, as “hostile.”  Why such a severe reaction to proposed European privacy laws that, according to the European Commission, will save businesses £2.3 billion (about $3.6 billion) per year? 

As part of the new data protection regime, businesses operating in the EU will need to ask consumers for explicit permission to capture the consumer’s data. Businesses fear just asking for permission will make consumers nervous, and nervous consumers can be miserly consumers.

It appears businesses may be right to worry. Consider what happened to the Information Commissioner's Office (ICO) in the U.K. when it implemented its own PECR regulation, specifically asking all site visitors for permission to place a cookie on their computers. According to the BBC, the ICO website normally received 12,000 visitors per day, but after debuting the cookie request notice, the number of visitors dropped to about 1,400 per day.

Actually, the number of visitors willing to be tracked dropped. The ICO said only about 10% of its visitors accepted the cookie. The other 90% were probably still there; they may have simply declined to be tracked.
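For readers curious what such a consent gate looks like mechanically, here is a minimal sketch using Python's Flask framework; it is purely illustrative and bears no relation to how the ICO actually built its notice, and the cookie names and routes are invented.

# Illustrative consent gate: a tracking cookie is set only after the visitor
# explicitly opts in. Requires Flask (pip install flask).
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    if request.cookies.get("cookie_consent") == "yes":
        resp = make_response("Welcome back (analytics cookie active).")
        resp.set_cookie("visitor_id", "abc123")   # hypothetical tracking cookie
        return resp
    # No consent yet: serve the page with a banner and no tracking cookie.
    return ('This site would like to place a cookie on your computer. '
            '<a href="/consent?choice=yes">Accept</a> or '
            '<a href="/consent?choice=no">Decline</a>')

@app.route("/consent")
def consent():
    choice = "yes" if request.args.get("choice") == "yes" else "no"
    resp = make_response("Preference saved.")
    resp.set_cookie("cookie_consent", choice)     # remembers the visitor's choice
    return resp

if __name__ == "__main__":
    app.run()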

Such a drop could have serious repercussions for the way many businesses operate today. Without knowing which pages visitors look at, how long they study a product page, or the order in which they place products in the online shopping cart, businesses will lose crucial information they need to direct their strategies. Some businesses, I wager, may even go out of business once deprived of customer information.

Where should the line be drawn between visitors who want to be anonymous, and businesses who can’t serve their customers’ needs without fundamental information about those customers?

The ICO holds out hope that, eventually, users won’t be so easily scared off by cookie warnings, but I see this playing out another way.  I got an inkling from an incident at RSA Conference 2012 last week.

A security vendor had a representative standing on Howard Street, flagging down anyone walking by who was wearing an RSA conference badge. In return for handing over a business card, the passerby received a $5.00 Starbucks gift card. Apparently $5.00 is the price this particular vendor was willing to pay for an RSA attendee to share their basic information.

As for me, I’m wondering how many cookies I can buy for $5.00 at Starbucks.  


March 7, 2012  5:14 PM

How CloudFlare’s website security service protected LulzSec

Marcia Savage

When it comes to customer case studies, CloudFlare has one of the most unusual and dramatic I’ve ever heard.

Last summer, the LulzSec hacking group signed up its website for CloudFlare, drawing the website security service and accelerator company into one of the biggest cyber battles ever, as LulzSec created mayhem on the Internet while rivals and others tried to knock it offline. CloudFlare’s CEO and Co-founder Matthew Prince detailed the attacks in a presentation at RSA Conference 2012; I wasn’t able to attend, but he filled me in during a briefing at the show last week.

LulzSec registered for CloudFlare on June 2, 2011, after a substantial DoS attack knocked its newly launched site — LulzSecurity.com — offline for 45 minutes, Prince said. “We had no idea who LulzSec was,” he said. As it turns out, the group had just published information it had allegedly stolen from Sony.

For the next 22 days, LulzSec waged battle on the Web as rivals and white hat hackers launched a volley of attacks against the group’s site. “It was like a gunfight and we were sitting in the middle of it,” Prince said.

The battle proved a mighty test for Palo Alto, Calif.-based CloudFlare, which protects websites against threats like DDoS, XSS and SQL injection attacks while also boosting site performance. “It was the most massive pen test ever,” Prince said. “We learned a ton from the fact that LulzSec was with us.”

He explained that CloudFlare's system automatically looks for anomalies to detect attacks and, once it detects one, adds protection for all the websites on its network. More than 250,000 websites, from Fortune 500 companies to individual blogs, use CloudFlare. Using the service doesn't require any hardware installation, only a change to network settings so site traffic passes through CloudFlare, which operates 14 data centers around the world.

“We’re like a smart, skilled router on your network,” Prince said.

The fact that LulzSec stayed online for the 22 days it was with CloudFlare illustrates the company’s core value proposition, Prince said. “Because we saw these threats our network got smarter,” he added.
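Prince didn't share implementation details, but the "one customer attacked, every customer protected" model he describes can be sketched in a few lines of Python; the detection rules, domains and addresses below are invented for illustration.

# Rough illustration (not CloudFlare code): sources flagged while one customer
# is under attack go into a shared blocklist consulted for every customer.
from collections import defaultdict

shared_blocklist = set()            # attack sources shared across all sites
requests_per_ip = defaultdict(int)  # crude volume-based anomaly counter

def handle_request(site: str, source_ip: str, payload: str) -> str:
    if source_ip in shared_blocklist:
        return f"{site}: blocked {source_ip} (learned from an earlier attack)"
    requests_per_ip[source_ip] += 1
    if "' OR 1=1" in payload or requests_per_ip[source_ip] > 1000:
        # Anomaly seen on one customer's site: every customer now benefits.
        shared_blocklist.add(source_ip)
        return f"{site}: attack detected, {source_ip} added to shared blocklist"
    return f"{site}: served"

print(handle_request("lulzsecurity.com", "198.51.100.7", "' OR 1=1 --"))
print(handle_request("example-blog.com", "198.51.100.7", "normal page view"))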

After those 22 days, the LulzSecurity.com website disappeared. Prince began receiving requests to tell the story of what happened, but the company has a privacy policy of not revealing its customers without permission. He used the contact information LulzSec had provided when signing up for the service and eventually got a single-line reply giving him permission.

Prince said CloudFlare never got a request from law enforcement to take LulzSec offline, but quickly added that it has no mechanism to do that anyway. He noted that CloudFlare wasn’t LulzSec’s hosting provider.

As to whether CloudFlare considered shutting off service for LulzSec – a group linked to a number of attacks on corporate and government sites – Prince said his company's role isn't that of an Internet censor.

“There are tens of thousands of websites currently using CloudFlare’s network,” he said in a blog post last summer. “Some of them contain information I find troubling. Such is the nature of a free and open network and, as an organization that aims to make the whole Internet faster and safer, such inherently will be our ongoing struggle. While we will respect the laws of the jurisdictions in which we operate, we do not believe it is our decision to determine what content may and may not be published. That is a slippery slope down which we will not tread.”


March 6, 2012  9:01 PM

What are the best Android mobile security apps?

Robert Westervelt

Traditional antivirus vendors are doing a good job detecting and blocking known mobile malware, according to Av-Test, a Germany-based independent service provider that tests antivirus and antimalware software.

The firm tested the detection capabilities of a variety of available Android mobile security apps using a malware set of 618 malicious application package (APK) files. Malicious apps that were discovered between August and December 2011 were included in the test set.

Avast, Dr.Web, F-Secure, Ikarus and Kaspersky rated highly, according to the firm's latest analysis, Test: Malware Protection for Android 2012 (.pdf), issued today. Zoner and Lookout, two independent security firms with mobile security apps, also performed well, Av-Test said. Those apps had detection rates of more than 90%.

Products that fell between 65% and 90% included AegisLab, AVG, Bitdefender, ESET, Norton/Symantec, Quick Heal, Super Security, Trend Micro, Vipre/GFI and Webroot. Despite falling below 90%, these mobile security apps are still very good and should be considered, Av-Test said.

“Some of these products just miss one or two malware families, which might be not prevalent in certain environments anyway,” Av-Test said in its report.
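The detection-rate arithmetic behind these groupings is straightforward; in the Python snippet below, only the 618-sample set size comes from the report, and the per-product counts are invented for illustration.

TOTAL_SAMPLES = 618                   # size of Av-Test's malicious APK set

def detection_rate(detected: int, total: int = TOTAL_SAMPLES) -> float:
    return 100.0 * detected / total

print(f"{detection_rate(601):.1f}%")  # about 97%: would land in the top group
print(f"{detection_rate(480):.1f}%")  # about 78%: would fall in the 65%-90% band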

Mobile malware continues to make up about 1% of overall malware, but despite the threat currently being minimal, experts at RSA Conference 2012 have pointed to a variety of attacks, from banking Trojans to SMS fraud, which could pose a threat to enterprise networks. Some say attackers are not too far away from weaponizing applications to perform a variety of functions all aimed at collecting as much data as possible about the device owner.

Judging by the attendance at the mobile sessions during the conference, it’s clear that security professionals are concerned about mobile device security and are looking for ways to gain control and visibility into employee devices at the endpoint. Both Google Android and Apple iOS have been built with security features right into the platform.

“I would go as far as to say they are probably the most secure platforms ever built,” Kevin Mahaffey, CTO of Lookout told me in a mobile security interview at RSA Conference. Sandboxing and granular permissions that limit the device capabilities available to installed mobile applications make it much harder for an attack to be successful, Mahaffey said.

“We haven’t really seen malicious use of vulnerabilities on mobile devices yet, but plenty of researchers have demonstrated that it’s possible. There’s no magic pixie dust in iPhone or Android that makes it somehow immune from all the problems on the desktop,” Mahaffey said.

Anup Ghosh, founder and CEO of browser security vendor Invincea, shares a different view of the Android platform. At RSA Conference, Ghosh told me Android users should be concerned about mobile malware. Apple has done a good job of controlling its platform, keeping its ecosystem closed off to potential malware writers. Meanwhile, Android is using Java as part of its sandboxing strategy. It's highly buggy, Ghosh said, with a lot of native interfaces to the underlying firmware.

According to Ghosh: “When you download an app from the Android store you are giving explicit permissions, giving that app access to all kinds of system resources, which are all holes to that sandbox. It’s a fairly rich environment for adversaries to write malware. We’re still early as far as malicious code development goes, but they will follow the money.”

It doesn’t hurt to have a layer of security for protection. Mahaffey said a good mobile security app can protect device owners from malware or spyware, provide safe browsing capabilities and locate lost and stolen devices.

Av-Test said its testing identified a group of 17 trustworthy mobile security apps. Even if a mobile security app performed poorly in the detection tests, some offer other capabilities, such as remote lock and wipe, backup and phone locating, that may make them useful.

The firm tested the latest version of available mobile security apps using an Android emulator running the Gingerbread version of Android. The results were verified on a Samsung Galaxy Nexus running the latest Android version, Ice Cream Sandwich.


March 2, 2012  6:17 PM

OpenDNS hires Websense CTO to guide enterprise DNS security services

Robert Westervelt

DNS services provider OpenDNS has hired away the chief technology officer of security vendor Websense Inc. and is laying the groundwork for a variety of DNS layer security services and products aimed at enterprises.

Dan Hubbard, who spent 14 years at Websense, is planning to build out OpenDNS’ security product portfolio. Hubbard played a significant role at Websense, building the Websense Security Labs and the company’s classification engine, which is at the heart of its security products. The engine is used to filter out malicious websites, block spam and phishing attacks and is also at the core of Websense’s content filtering technology.

Hubbard confirmed his departure this week. A Websense spokesperson said the company is already reshuffling executives to fill the CTO role. Charles Renert, an expert noted for his work with Symantec Security Labs and founding Determina, was promoted to vice president and will assume Hubbard’s responsibilities in the interim.

It’s going to be extremely interesting to see how OpenDNS’s enterprise security plans unfold under Hubbard’s guidance.

I spoke to Hubbard at a reception at RSA Conference 2012, where he exuded a lot of enthusiasm for his new gig at OpenDNS. Hubbard said there's potential for a whole new range of security technologies that take advantage of operating at the DNS layer. The company, which launched in 2005, already provides malware protection for its users by blocking outbound botnet communications at the DNS layer. It also maintains PhishTank, the largest clearinghouse of phishing information on the Internet. OpenDNS has 12 data centers that handle DNS requests, but it has also been collecting threat intelligence data for years. Combining threat intelligence with the ability to keep track of individual IP addresses opens up an interesting set of capabilities for protecting laptops and mobile devices.

The company already has a broad set of users of OpenDNS Enterprise, which provides inbound and outbound protection and is application-, operating system-, protocol- and port-agnostic since it is essentially cloud-based at the DNS layer. The company has been pushing itself as an extra layer sitting between the Internet and enterprise firewalls and antivirus technology at the endpoint. There are some built-in reporting capabilities providing data on attacks and malicious websites that were blocked by the service.
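OpenDNS hasn't published how its enforcement works internally, but the basic idea of blocking at the DNS layer can be sketched simply; the domains in the blocklist below are placeholders, and a production resolver would obviously do far more than this.

# Illustrative sketch of DNS-layer blocking (not OpenDNS code): a resolver
# consults a threat list before answering, so known command-and-control
# domains never resolve, regardless of OS, application, protocol or port.
import socket
from typing import Optional

BLOCKED_DOMAINS = {"botnet-c2.example", "phish.example"}   # placeholder threat feed

def resolve(hostname: str) -> Optional[str]:
    if hostname in BLOCKED_DOMAINS:
        print(f"{hostname}: blocked at the DNS layer")
        return None
    return socket.gethostbyname(hostname)   # normal upstream resolution

print(resolve("botnet-c2.example"))   # blocked, returns None
print(resolve("www.opendns.com"))     # resolves normally (needs network access)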

Hubbard’s move to OpenDNS and the company’s security strategy caught the eyes of at least two prominent security luminaries: Dan Kaminsky and Paul Vixie, who attended the reception. Last year, Kaminsky briefly shared with me his vision of what DNS-based security technologies can do. He believes a broad range of technologies can be built out leveraging the DNSSEC architecture for authentication and for establishing trust in Internet communications. It could provide a much-needed injection of trust into the Internet, trust that has been evaporating in recent years because of a variety of issues, including breaches at SSL certificate authority vendors and well-known weaknesses in the digital certificate system itself. Vixie has also spoken publicly about the potential of adding security at the DNS layer.

It was hard, however, to find that enthusiasm for OpenDNS among others at RSA Conference. The first thing that comes to mind with OpenDNS is its consumer products, which let parents shield their children from porn and other objectionable websites.

Several industry analysts and other security professionals I spoke to were too wrapped up in their own respective areas of expertise to weigh in, but a few people said they share Kaminsky's passion for the long-term potential of DNS-layer security technologies.

OpenDNS CEO David Ulevitch told me the company already has the foundation in place to provide a wide variety of security services. He said it just has to execute on its strategy and provide a convincing argument that enterprises can get value out of having security at the DNS layer.

