Cloud computing breaches are a frequent topic of conversation at conferences. Legal experts warn that organizations need to prepare for the complications that will come if their cloud provider is breached. However, there's little data on breaches involving cloud providers, at least little that's public.
The 2012 Verizon Data Breach Investigations Report (DBIR) (.pdf) tries to offer some insight on cloud computing breaches. The company, which expanded its cloud services by acquiring Terremark last year, notes there are many definitions for what constitutes cloud, making it difficult to figure out how cloud computing factors into data breaches. But in an interview, Christopher Porter, a principal with Verizon's RISK team, told me the DBIR defines the cloud as something that's externally located, externally managed and externally owned.
“In the past year, there were several breaches of externally hosted environments that weren’t managed by the victim,” he said. “We didn’t see any attacks against hypervisors. It’s really more about giving up control of your assets as opposed to any technology specific to the cloud.”
For cloud proponents, the DBIR’s observation was proof that cloud computing services are secure. However, cloud computing risks involve more than the hypervisor. Giving up control of your assets – and not controlling the associated risks, as Verizon notes – is what makes organizations queasy about cloud services.
According to the Verizon DBIR, 26% of breaches involved externally hosted assets, while 80% involved internally hosted assets (the categories overlap because a single breach can involve both). Forty-six percent of breaches involved externally managed assets, compared with 51% for internally managed assets. The report notes this is the third year the company has seen an increase in the proportion of externally hosted and managed assets involved in data breaches. Porter said the increase is mostly due to economic issues; more organizations are moving to the cloud for the cost savings.
Social networking security threats have taken a back seat to mobile security and targeted attacks directed at corporate networks in recent years. But there is news of two new Facebook attacks targeting users to spread spam and malware, and ultimately steal personal information, including account credentials.
A rogue Facebook application has been detected on the social network that lures victims with the promise of revealing who has viewed their Facebook profile. The application asks for permission to access the profile and, once granted, begins posting to the victim's wall without explicit permission, according to security firm Sophos.
The second attack targets Brazilian users of Facebook. It uses malicious Google Chrome extensions presented as tools to change the Facebook profile color or remove viruses. Like the attack described above, the tool can gain full control of the victim's Facebook account, posting messages to spread spam and malware, according to a researcher at Kaspersky Lab.
The attacks are a reminder that enterprises need to have a social networking policy in place and should educate users about phishing and other threats designed to gain access to their Facebook account. If cybercriminals are attempting to steal account credentials from Facebook users, it’s very likely that a certain percentage of pilfered passwords are used for multiple accounts, including access to the victim’s corporate network.
Tom Cross, manager of threat intelligence and security on IBM’s X-Force team, told me it’s likely that well-funded and organized cyberattackers use social networks to design targeted social engineering attacks against enterprises. “You could get a comprehensive picture of an organization,” Cross said, by just examining an employee’s Facebook profile.
In addition, IBM’s 2011 X-Force Trend and Risk report, issued last week, found automated attacks moving to social networking platforms. “Frauds and scams that were successful years ago via email found new life on the social media forums,” according to the report. Attackers are designing phishing campaigns, typically phony friend requests, made to look like they were sent from social networks.
Malicious activity on Facebook is constantly monitored by security vendors and Facebook's internal security team, but attackers are still slipping through. Last October, Facebook released security data (.pdf) that shed light on malicious activity on the network. The company said it classifies 4% of the content shared on Facebook as spam. Of that spam, a tiny percentage is used to direct users to malicious websites. Facebook says one in 200 users experiences spam on any given day.
The most telling of all the statistics released by Facebook: about 0.06% of the more than 1 billion Facebook user logins each day are compromised. That means roughly 600,000 Facebook users have their accounts compromised each day. Facebook doesn't define a "compromised account," but acknowledged to Ars Technica that the statistic stems from accounts that are blocked when Facebook is not confident the true owner logged in. Those users were likely victims of a phishing scam, the Facebook spokesperson said.
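The figures Facebook released are easy to sanity-check:

```python
# Facebook's stated numbers: more than 1 billion logins per day,
# roughly 0.06% of them compromised.
daily_logins = 1_000_000_000
compromised_rate = 0.06 / 100

compromised_per_day = daily_logins * compromised_rate
print(f"{compromised_per_day:,.0f}")  # 600,000

# The separate spam statistic: one in 200 users sees spam on a given day.
spam_share = 1 / 200
print(f"{spam_share:.2%}")  # 0.50%
```

Small as 0.06% sounds, at Facebook's scale it adds up to a city's worth of hijacked accounts every single day.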
Few people probably realize that Facebook offers a one-time-password service to users, as well as an ID verification service that sends a text message to verify that a login is genuine. Websense is one of several security vendors that partner with Facebook to provide URL filtering. The company also sells Defensio, a Facebook monitoring service, essentially a content filtering engine that can detect spam and malicious content posted to an account.
Charles Renert, the new head of the Websense Security Labs, told me that most attackers are sticking to email, using it as a lure to send victims to malicious webpages. But phishing is shifting to Twitter, Facebook and other social networking platforms. Malicious links posted on Facebook lure the victim into thinking they lead to a popular viral video, but then redirect them to a website hosting malware. Other links are less malicious, but still objectionable, Renert said. They send victims to spam sites peddling porn, pharmaceuticals and other items the victim didn't intend to see, he said. "They're exploiting the trust element," Renert said.
Those of you clamoring for Internet service providers to get proactive about security and malicious activity on their networks got a win late last week from the Federal Communications Commission. The FCC's Communications Security, Reliability and Interoperability Council (CSRIC) won backing for its U.S. Anti-Bot Code of Conduct for Internet Service Providers from most of the leading ISPs.
Known as the ABCs for ISPs, the code is voluntary, but participating providers must take "meaningful action" in educating users about botnet prevention and removal, detecting botnet activity on the ISP network, notifying customers of suspected infections, providing customers with information on how to remediate botnet infections, collaborating with other ISPs on botnet activity, and sharing experiences around the FCC's code of conduct.
AT&T, CenturyLink, Comcast, Cox, Sprint, Time Warner Cable, T-Mobile and Verizon agreed to the code of conduct. Their acknowledgement, or concession, of the problem is a nice public step forward here. There have been many arguments pro and con regarding ISPs and security, and countless debates as to whether an ISP should provide a clean pipe.
ISPs are clearly in the optimal position to see malicious traffic, but choking off what an ISP believes is malicious traffic is a slippery slope: what's the impact on legitimate traffic caught in the crossfire, or on the performance and cost of services? Some ISPs sell security services too, raising conflict-of-interest issues. And then there are the net neutrality advocates, who protest an ISP's ability to restrict access to content or affect network performance, for example by throttling traffic for some and ratcheting it up for others.
The code of conduct solves none of these riddles, but at least it moves the conversation forward without legislation. FCC Chairman Julius Genachowski has been vocal about an industry response to botnets. According to Arbor Networks’ Atlas service, for the 24-hour period starting last Wednesday, there were 951 attacks per subnet carried out over TCP Port 80 (http) and another 284 over TCP Port 445 (used for Microsoft Server Message Block service), accounting for 69% of attacks. Botnets are responsible for denial-of-service attacks, attacks on the DNS infrastructure, Internet routing attacks, spam campaigns and other malware attacks.
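As a back-of-the-envelope check, the Atlas numbers also let you estimate the overall attack count for that window, assuming the 69% figure covers both ports combined:

```python
# Arbor Atlas figures for the 24-hour window.
port_80_attacks = 951    # TCP port 80 (HTTP)
port_445_attacks = 284   # TCP port 445 (SMB)

combined = port_80_attacks + port_445_attacks
print(combined)                      # 1235

# If that pair represents 69% of all attacks, back out the implied total.
implied_total = combined / 0.69
print(round(implied_total))          # roughly 1790 attacks overall
```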
ISPs, to their credit, have been getting better about security. Comcast, for example, has fully implemented DNSSEC for its customers as part of the provider's Constant Guard service. John Schanz, executive vice president of Comcast National Engineering and Technical Operations in Security and Privacy, wrote in a blog post: "The Code recognizes that the entire Internet ecosystem has important roles to play in addressing the botnet threat and ISPs depend on support from the other players like security companies and operating system vendors." PayPal, Microsoft, Symantec and the Online Trust Alliance also took part in developing the code of conduct.
Nothing in the code of conduct, however, really suggests ISPs do much more today than what Comcast and others are already doing—namely monitor, notify and recommend remediation. ISPs still won’t take meaningful action about botnet removal without being forced to, and that’s a lot of lobbying down the road. Stay tuned.
Cloud outages are always big news – and for good reason, because they usually affect many people. Last month’s Microsoft Azure outage was no exception. But at least Microsoft appears to be trying to learn from its mistakes.
The software giant released detailed findings of its root cause analysis of the Azure outage earlier this month and said it would use lessons learned from the incident to improve its cloud service. The analysis, posted by Azure engineering team leader Bill Laing, provides a detailed description of the Leap Day bug that triggered the Feb. 28 outage. It was prefaced by an apology and an offer of service credits to customers, and included a description of the steps Microsoft is taking to improve its engineering, operations and communication in the wake of the outage.
“Rest assured that we are already hard at work using our learnings to improve Windows Azure,” Laing said.
Microsoft’s plans include improved testing to detect time-related bugs, strengthening its Azure dashboard, and improved customer communication during an incident.
Kyle Hilgendorf, principal research analyst at Gartner, said he was impressed with the level of detail in Microsoft’s analysis.
“I encourage all current and prospective Azure customers to read and digest the Azure RCA [root cause analysis],” he wrote in a blog post. “There is significant insight and knowledge around how Azure is architected, much more so than customers have received in the past.”
The 33% service credit offered by Microsoft, he added, is becoming a de facto standard for cloud outages. "Customers appreciate this offer as it saves both customers and providers alike from having to deal with SLA claims and the administrative overhead involved," he said.
In a previous blog post, Hilgendorf summarized Azure customers' concerns after the outage: customers told him Microsoft's communication during the outage was lacking and the company needed to be more transparent, and they said they were looking into options for protecting themselves against future outages.
So while Microsoft is applying lessons learned from the Azure outage, Azure customers got a harsh reminder of the need to plan for service disruptions. At last year's Gartner Catalyst Conference, Richard Jones, managing vice president for cloud and data center strategies at Gartner, advised attendees to prepare for cloud failure by building resilience into their cloud infrastructure and services. Experts have also said organizations need to plan for outages in their cloud contracts.
“Cloud outages are a sad and unfortunate event,” Hilgendorf wrote. “However, if we learn from them, build better services, increase transparency, and guide towards better application design, then we can make something great out of something bad.”
Last week I blogged about security practitioners and other IT pros working together across companies and industries to stem security threats. A new report this week is a positive example of even broader international cooperation to stop IT attacks across national borders.
The number of countries contributing to Verizon’s 2012 Data Breach Investigations Report (DBIR), released today, increased as government agencies and law enforcement officials from three more nations added information about breaches in their countries.
The DBIR started out eight years ago as a report on breaches Verizon had investigated. Eventually, the U.S. Secret Service contributed findings from its breach investigations. Later, the Dutch National High Tech Crime Unit joined in. Now the 2012 edition counts the Australian Federal Police, the Irish Reporting & Information Security Service, and England's Police Central e-Crime Unit among the partners helping to track and analyze data breaches.
This is good news for the security industry. It demonstrates the synergies that can be achieved when key industry stakeholders move past their reticence and (sometimes justified) mistrust to pool their brain power to stop attackers. Let’s pause for a moment to celebrate that progress.
Next year I hope we see even more countries contributing to the DBIR or other global initiatives to work together against security threats. It’s not too late for others to get involved.
I came home from RSA Conference 2012, after attending a number of panel discussions about mobile device protection, mobile security threats and ways IT teams can build control and visibility into employee smartphones, feeling that many of the session panelists had overhyped the risks.
In one session, a few experts warned incessantly about weaponized applications; in another, a security expert discussed skyrocketing mobile malware statistics. It was rather off-putting that there was little discussion about how mobile device platforms are built differently than desktop OSes. In fact, a Microsoft network analyst attempted to compare the evolution of iOS and Android to the evolution of Windows, a comparison several security experts told me is nearly impossible to make. Security capabilities, including sandboxing designed to isolate applications from critical processes, are built right into the mobile firmware.
I spoke to Kevin Mahaffey, CTO and founder of Lookout Security, which targets security-conscious consumers with a mobile application that provides antimalware protection, device locate, remote wipe and secure backup features. Mahaffey was very forthcoming, saying he believes Google Android and Apple iOS are the most secure OSes ever built.
His comment, which is no doubt debatable, made me seek out good sources of non-hyped potential risks posed by mobile devices to the enterprise. I may have stumbled upon the beginnings of a good list.
Several security experts active with the Open Web Application Security Project (OWASP) are developing a list of mobile risks. OWASP, known for its Top 10 Web Application Vulnerabilities list, has come up with a Top 10 Mobile Risks list. It was released in September and has been undergoing an open review period for public feedback. It's still a work in progress and will undergo an annual revision cycle.
List of Mobile Risks:
1. Insecure Data Storage
2. Weak Server-Side Controls
3. Insufficient Transport Layer Protection
4. Client-Side Injection
5. Poor Authorization and Authentication
6. Improper Session Handling
7. Security Decisions Via Untrusted Inputs
8. Side Channel Data Leakage
9. Broken Cryptography
10. Sensitive Information Disclosure
The experts who prepared the list, Jack Mannino of nVisium Security, Mike Zusman of Carve Systems and Zach Lanier of the Intrepidus Group, have been actively researching mobile security issues. They produced an OWASP Top 10 Mobile Risks presentation describing and supporting the threats posed by the issues on the list.
Attackers are going to target the data, so insecure data storage puts at risk both the back-end systems that mobile applications tap into and the data cached on the device itself. Properly implemented server-side controls are essential, according to the presentation. A lack of encryption for data in transit was also cited, reminding me of my earlier post on the NSA's VPN tunneling requirement and its other mobile security recommendations. Properly executed authentication is a must, and many garden-variety vulnerabilities for desktop software (XSS and SQL injection) repeat in mobile applications. The presentation wraps up with a call for developers to use properly implemented key management, as well as tips for making a mobile application more difficult to reverse engineer.
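To make the first item on the list concrete, here's a minimal sketch of the difference between caching a credential in plaintext and storing only a salted, iterated hash. It's written in Python purely for illustration; a real mobile app would use the platform keychain or keystore, and every name below is invented:

```python
import hashlib
import os

# Insecure: caching the raw credential on disk means anyone who reads
# the file (lost device, backup, malware) gets the password itself.
def store_insecure(path, password):
    with open(path, "w") as f:
        f.write(password)

# Better: store only a salted, iterated hash. The app can still verify
# the password later, but the file alone does not reveal it.
def store_hashed(path, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    with open(path, "wb") as f:
        f.write(salt + digest)

def verify_hashed(path, password):
    with open(path, "rb") as f:
        blob = f.read()
    salt, digest = blob[:16], blob[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return candidate == digest
```

The point of the OWASP entry is the same either way: never persist the secret itself where a lost device, a backup or malware can read it.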
I think the list gets to the heart of the issues without overhyping the threats. I hope it gains more visibility. I’d like to see it referred to more in public discussions about the potential weaknesses in mobile devices.
Researchers at Kaspersky Lab have determined that the authors of Duqu, the remote access Trojan often linked to Stuxnet, used a custom version of the C programming language to write the module that communicates with its command-and-control servers.
Kaspersky, which has done deep analysis of the Duqu Trojan code framework, was having difficulty identifying the programming language and put out a call to the development community for help identifying it. Most malware, said Vitaly Kamluk, Kaspersky Lab's chief malware analyst, is written in simpler languages that are faster to develop in, such as Delphi. The lab got more than 200 responses and, after further analysis, concluded that the code was written in a custom object-oriented C dialect known as OO C, compiled with the Microsoft Visual Studio 2008 compiler, Kamluk said.
“Few [malware writers] write in assembler and C; this is pretty rare,” Kamluk said. “Using custom frameworks is quite specific. We think they are software programmers, not criminals. This is what we call ‘civil code.’”
So what’s the big deal? Well, this likely confirms nation-state involvement in the development of Duqu. No organized band of credit card thieves or hacktivists is going to invest the time and money to build a Trojan using a reusable development framework in a language used for complex enterprise applications. Kaspersky also indicated a level of separation between developers on the team, groups of which could have been developing different components of the Trojan without knowing the full mission—plausible deniability.
The primary mission of Duqu, unlike Stuxnet, is to gather and forward information from its targets. Duqu has nowhere near the penetration of Stuxnet because it has no worming capabilities. Instead, Kamluk said, it is targeted toward specific computers or people. “It has to be sent to a target and the target must execute it,” he said.
Kamluk characterized the authors as "old-school professional developers" with a comfort level in C, which produces faster, more efficient compiled code than languages such as Delphi. The framework, Kamluk said, is also reusable.
“This framework could be designed by someone and other developers would use this approach to write code. This is a bigger development team, possibly 20 to 30 people,” he said. “There was a special role too of a software architect who oversaw the project and development of the framework that was reused. Other roles were likely command-and-control operators, others developing zero-day attacks, others in propagation and social engineering.”
“We suspect it could be within different organizations and each responsible for a particular part of the code, not knowing what it would be used for. They didn’t know they were developing malware probably,” Kamluk said.
While he wasn’t ready to identify the authors by name or location, Kamluk said Kaspersky was seeing some Duqu infections in Sudan, Iran and some European countries. Stuxnet, which is widely believed to be a joint U.S.-Israel operation targeting a nuclear facility in Iran, is linked to Duqu because of similarities in code and code structure.
“We are not close to answering which country might be behind Duqu,” Kamluk said. “They try to hide their identities by not using any language constructions in the code. There are no words inside the code, no random names of files or system objects. They stayed language independent.”
Security research firm Securosis has started a series of blog posts about how to protect enterprise data on Apple iOS smartphones. Securosis’ Rich Mogull explains that companies are increasingly feeling pressure from employees to support iOS. But how does the IT security team ensure the protection of sensitive enterprise data on devices they have little control over?
According to Mogull:
The main problem is that Apple provides limited tools for enterprise management of iOS. There is no ability to run background security applications, so we need to rely on policy management and a spectrum of security architectures.
Mogull’s first post in the series lays out the security capabilities in iOS and highlights some of the technical reasons why the iPhone has been relatively immune to malware and other threats.
It’s clear that a tightly controlled mobile device will have to use a combination of external security technologies and internal data protection capabilities. The NSA’s “Mobility Capability Package” (.pdf), a report outlining the first phase of its recommended Enterprise Mobility Architecture, could be the blueprint needed for the private sector, according to some experts I’ve recently talked to.
The NSA unveiled the report during the RSA Conference 2012 and held a session outlining its secure mobility strategy. While it’s extremely restrictive, I think the recommendations appear to be the way most of the security industry is headed.
Among the report's key recommendations:
- All mobile device traffic should travel through a VPN.
- All devices should use AES 256 full disk encryption.
- Tight controls on the use of Bluetooth, WiFi, voicemail and texting.
- GPS disabled except for emergency 911 calls.
- Ability to prevent users from tethering.
- Ability to disable over-the-air software updates.
A virtual private network (VPN) establishes a secured path between the user equipment and the secured access networks with a second layer of encryption required to access classified enterprise services.
Bruce Schneier recently highlighted the NSA mobile security guidance document in a blog post and eyed the VPN tunnel recommendation. "The more I look at mobile security, the more I think a secure tunnel is essential," Schneier wrote.
Full disk encryption (FDE) is currently available for Android devices. FDE for Apple devices currently falls short, but DARPA has been working on this, and according to Winn Schwartau, chairman of the board at Atlanta-based mobile device security firm Mobile Active Defense, well-implemented FDE for iOS devices is "weeks" away.
Apple introduced data encryption capabilities in iOS 4.0. As part of its data protection feature, Apple is enabling mobile application developers to store sensitive application data on-disk in an encrypted format. The first iteration only encrypted the files when the device was in a locked state. The phone-unlock passcode served as the encryption key. In iOS 5.0, security levels were added for protected files.
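The idea of tying file encryption to the unlock passcode can be sketched in a few lines. This is a conceptual sketch only: Apple's actual scheme entangles the passcode with a hardware UID key and uses AES file keys, and every name below is invented for illustration:

```python
import hashlib
import os

def derive_file_key(passcode: str, device_salt: bytes) -> bytes:
    # Stretch the passcode into a 256-bit key; without the passcode
    # (and the device salt), the key cannot be reproduced.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_salt, 100_000)

salt = os.urandom(16)
# Same passcode and salt always yield the same key...
assert derive_file_key("1234", salt) == derive_file_key("1234", salt)
# ...while a different passcode yields a different, useless key.
assert derive_file_key("1234", salt) != derive_file_key("9999", salt)
```

Deriving the key from the passcode is what makes the protection meaningful: on-disk data is unreadable until someone who knows the passcode unlocks the device, which is also why the first iteration protected files only while the device was locked.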
Under the NSA plan, smartphone users would be required to have an installed initialization program, which would immediately launch as soon as the smartphone is turned on. The program would check the device’s OS and ensure only authorized applications and operating system components are loaded. The device owner would be required to enter a PIN or passphrase to unlock the phone and then – as a second factor – a password would be needed to decrypt the device’s memory.
Once the memory is decrypted, the user starts the VPN, which establishes a tunnel from the device to the infrastructure. The device is then registered with the Session Initiation Protocol (SIP) server, and a TLS connection is tunneled through the VPN connection.
Phone calls made by a smartphone user would be routed by the cellular carrier to mobility infrastructure maintained by the government. The device must already have established a secure VPN connection to be reachable, according to the paper.
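The strict ordering the paper describes, where nothing is reachable until every prior step has completed, can be sketched as a small state machine. All names here are illustrative, not from the NSA document; the real enforcement lives in device firmware and the government-run infrastructure:

```python
# Toy state machine enforcing the paper's ordering: integrity check,
# PIN unlock, memory decryption, VPN tunnel, then SIP registration
# over TLS. Every identifier is invented for this sketch.
class SecureBootFlow:
    ORDER = ["integrity_check", "pin_unlock", "memory_decrypt",
             "vpn_tunnel_up", "sip_register_tls"]

    def __init__(self):
        self.completed = []

    def step(self, name):
        expected = self.ORDER[len(self.completed)]
        if name != expected:
            raise RuntimeError(f"'{name}' attempted before '{expected}'")
        self.completed.append(name)

    def call_routable(self):
        # A call can only be routed to the device once the full chain is up.
        return self.completed == self.ORDER

phone = SecureBootFlow()
for stage in SecureBootFlow.ORDER:
    phone.step(stage)
print(phone.call_routable())  # True
```

Trying to bring up the VPN before the device is unlocked, for instance, fails immediately, which is the property the architecture is built around.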
To be clear, some of the capabilities recommended by the NSA will be easier to develop for Android devices, since Google's code base is publicly available. Under its Project Fishbowl, the agency is developing a hardened smartphone that meets its security requirements using a modified version of Android. But other capabilities, including FDE and the VPN requirement, are feasible and justifiable on any mobile platform. Exactly how this can be implemented, and more importantly, how it can be enforced by IT security teams, is an issue still being addressed by researchers. Mobile device management products typically require software running on the device, and nearly all of these technologies require end-user interaction and can be bypassed.
It’s going to be fun watching more robust mobile device security technologies emerge.
You don’t have to work in the infosec world for long before you hear strands of the unofficial industry anthem: “Let’s work together.” Arthur Coviello, chairman of RSA, the security division of EMC, practically sang the chorus in his keynote address at RSA Conference 2012. “We are in this fight together,” Coviello said. “Knowledge by one becomes power for all of us.”
Can security pros from different organizations really work together?
Andrew Rose, a principal analyst at Forrester Research, doubts it. In a blog post last month, Rose recounted meeting a representative of a European regulatory body. “(She believed) the future lay in open and honest sharing between organizations – i.e. when one is hacked, they would immediately share details of both the breach and the method with their peers and wider industry.”
But Rose believes this view is too idealistic, and organizations will refuse to share such information for fear of reputation or brand damage. “As a security professional, it’s tough to acknowledge in a public forum that you may even have something to share with colleagues at other firms, lest the press get hold of the information and twist it into a fictitious ‘XXXX Corp hacked!’ story,” Rose wrote.
There appears to be some hope for security information sharing between security pros within vertical industries. The Financial Services Information Sharing and Analysis Center (FSISAC) is one of 14 security information-sharing associations formed at the behest of the U.S. federal government. According to its website, FSISAC members receive “timely notification and authoritative information specifically designed to help protect critical systems and assets from physical and cybersecurity threats.”
Sounds good, right? But click on over to the FAQ page of the FSISAC website and read the question, “Why should my firm join?” The answer addresses protecting critical infrastructure, but then adds, “If the private sector does not create an effective information sharing capability, it will be regulated: This alone is reason enough to join.”
Clearly this is not the high-minded perspective Coviello had in mind. But then again, I wouldn’t count on a vendor’s call to action as the foundation for a security industry association. Vendor-neutral associations such as ISSA are probably our best hope.
We may never find a balance between our competitive, and somewhat paranoid, human nature on one hand, and values such as openness and honesty on the other. But it’s good to keep tugging on both ends of the rope, if only to keep the conversation going.
A recurring theme I hear at conferences is that security teams can’t fight the inevitable shift to cloud computing, and instead need to figure out ways to adapt. This message was echoed at RSA Conference 2012, where a panel of CISOs urged the industry to get ahead of the cloud trend and ensure cloud services are adopted securely.
With its potential to slash IT costs, cloud computing is driving fundamental change in organizations, said Jerry Archer, senior vice president and CISO at Sallie Mae. “Everyone in this room will be impacted by it,” he told attendees.
That got me thinking: How will information security roles change as cloud computing becomes more prevalent in the enterprise? Do security pros need to worry about looking for other lines of work as security responsibilities shift to public clouds?
Industry experts I talked to see security pros continuing to play an important role as cloud adoption accelerates. After the RSA panel, Archer told me that security pros may need to acquire additional knowledge, for example in the area of contracts and law. But security is necessary and those with security expertise become “the gatekeepers” in this new IT environment, he said.
Cloud Security Alliance Executive Director Jim Reavis said security roles will change depending on the organization – whether it’s a cloud provider or cloud consumer. Providers will need to be able to provide the whole stack of security expertise and technologies while consumers will be looking to leverage higher layers of the cloud stack – SaaS and PaaS. For security pros working at organizations that are cloud consumers, this will mean a shift away from operational skills to application skills and closer work with business units, he said.
“I don’t think IT teams or security teams will disappear because of cloud,” Reavis said. “If you’ve got security expertise, you’ll be well employed for many years to come.”
Randall Gamby, information security officer for the Medicaid Information Service Center of New York (MISCNY), told me he sees security’s role falling in the vendor management space when it comes to cloud. Security professionals need to help organizations ask the right legal and technical questions of a cloud provider to ensure their data is protected.
“Being able to set up criteria to judge a cloud vendor and understand not only the services it offers, but the risks it may pose is important,” he said.
How do you think information security roles will change as cloud services become more prevalent? Leave me a comment below.