Security Bytes

March 23, 2012  7:20 AM

Microsoft vows to improve cloud service after Azure outage

Marcia Savage

Cloud outages are always big news, and for good reason: they usually affect many people. Last month’s Microsoft Azure outage was no exception. But at least Microsoft appears to be trying to learn from its mistakes.

The software giant released detailed findings of its root cause analysis of the Azure outage earlier this month and said it would use lessons learned from the incident to improve its cloud service. The analysis, posted by Azure engineering team leader Bill Laing, provides a detailed description of the Leap Day bug that triggered the Feb. 28 outage. The analysis was prefaced by an apology and an offer of service credits to customers, and included a description of the steps Microsoft is taking to improve its engineering, operations and communication in the wake of the outage.

“Rest assured that we are already hard at work using our learnings to improve Windows Azure,” Laing said.

Microsoft’s plans include improved testing to detect time-related bugs, a strengthened Azure dashboard, and better customer communication during incidents.

Kyle Hilgendorf, principal research analyst at Gartner, said he was impressed with the level of detail in Microsoft’s analysis.

“I encourage all current and prospective Azure customers to read and digest the Azure RCA [root cause analysis],” he wrote in a blog post.  “There is significant insight and knowledge around how Azure is architected, much more so than customers have received in the past.”

The 33% service credit offered by Microsoft, he added, is becoming a de facto standard for cloud outages. “Customers appreciate this offer as it benefits both customers and providers alike from having to deal with SLA claims and the administrative overhead involved,” he said.

In a previous blog post, Hilgendorf summarized Azure customers’ concerns after the outage. Customers told him Microsoft’s communication during the outage was lacking and the company needed to be more transparent, and said they were looking into options for protecting themselves against future outages.

So while Microsoft is applying lessons learned from the Azure outage, it appears Azure customers got a harsh reminder of the need to plan for service disruption. At last year’s Gartner Catalyst Conference, Richard Jones, managing vice president for cloud and data center strategies at Gartner, advised attendees to prepare for cloud failure by building resilience into their cloud infrastructure and services. Experts have also said organizations need to plan for outages in their cloud contracts.

“Cloud outages are a sad and unfortunate event,” Hilgendorf wrote. “However, if we learn from them, build better services, increase transparency, and guide towards better application design, then we can make something great out of something bad.”

March 22, 2012  11:41 AM

Verizon data breach report 2012 edition boasts more new contributors

Jane Wright

Last week I blogged about security practitioners and other IT pros working together across companies and industries to stem security threats. A new report this week is a positive example of even broader international cooperation to stop IT attacks across national borders.

The number of countries contributing to Verizon’s 2012 Data Breach Investigations Report (DBIR), released today, increased as government agencies and law enforcement officials from three more nations added information about breaches in their countries.

The DBIR started out eight years ago as a report of breaches Verizon had investigated. Eventually, the U.S. Secret Service contributed findings from their breach investigations. Later, the Dutch National High Tech Crime Unit joined in. Now, the Verizon data breach report 2012 edition counts the Australian Federal Police, the Irish Reporting & Information Security Service, and England’s Police Central e-Crime Unit among the partners helping to track and analyze data breaches.

This is good news for the security industry. It demonstrates the synergies that can be achieved when key industry stakeholders move past their reticence and (sometimes justified) mistrust to pool their brain power to stop attackers. Let’s pause for a moment to celebrate that progress.

Next year I hope we see even more countries contributing to the DBIR or other global initiatives to work together against security threats. It’s not too late for others to get involved.

March 21, 2012  4:28 PM

Mobile device protection: OWASP working on a Top 10 mobile risks list

Robert Westervelt

I came home from RSA Conference 2012, after attending a number of panel discussions about mobile device protection, mobile security threats and ways IT teams can build control and visibility into employee smartphones, feeling that many of the session panelists had overhyped the risks.

In one session, a few experts warned incessantly about weaponized applications; in another, a security expert discussed skyrocketing mobile malware statistics. It was rather off-putting that there was little discussion about how mobile device platforms are built differently than desktop OSes. In fact, a Microsoft network analyst attempted to compare the evolution of iOS and Android to the evolution of Windows, a comparison several security experts later told me is nearly impossible to make. Security capabilities such as sandboxing, which is designed to isolate applications from critical processes, are built right into the mobile platforms.

I spoke to Kevin Mahaffey, CTO and founder of Lookout Security, which targets security-conscious consumers with a mobile application that provides antimalware protection, device location, remote wipe and secure backup features. Mahaffey was very forthcoming, saying he believes Google Android and Apple iOS are the most secure OSes ever built.

His comment, which is no doubt debatable, made me seek out good sources of non-hyped potential risks posed by mobile devices to the enterprise. I may have stumbled upon the beginnings of a good list.

Several security experts active with the Open Web Application Security Project (OWASP) are developing a list of mobile risks. OWASP, known for its Top 10 Web Application Vulnerabilities list, has come up with a Top 10 Mobile Risks list. Released in September, the list has been undergoing an open review period for public feedback. It’s still a work in progress and will undergo an annual revision cycle.

List of Mobile Risks:

1. Insecure Data Storage
2. Weak Server-Side Controls
3. Insufficient Transport Layer Protection
4. Client-Side Injection
5. Poor Authorization and Authentication
6. Improper Session Handling
7. Security Decisions Via Untrusted Inputs
8. Side Channel Data Leakage
9. Broken Cryptography
10. Sensitive Information Disclosure

The experts who prepared the list, Jack Mannino of nVisium Security, Mike Zusman of Carve Systems and Zach Lanier of the Intrepidus Group, have been actively researching mobile security issues. They produced an OWASP Top 10 Mobile Risks presentation describing and supporting the threats posed by the issues on the list.

Attackers are going to target the data, so insecure data storage is at risk both on the back-end systems mobile applications tap into and in data cached on the device itself. Properly implemented server-side controls are essential, according to the presentation. A lack of encryption of data in transit was also cited, reminding me of my earlier post on the NSA’s VPN tunneling requirement and its other mobile security recommendations. Properly executed authentication is a must, and many of the garden-variety vulnerabilities (XSS and SQL injection) found in desktop software are repeatable in mobile applications. The presentation wraps up with a call for developers to use properly implemented key management, along with tips for making a mobile application more difficult to reverse engineer.
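The injection risks on the list are the same flaws Web developers have fought for years. As a minimal sketch, using Python and an in-memory SQLite database as a stand-in for the local store many mobile apps ship with (the table, rows and queries here are invented for illustration), a concatenated query leaks data while a parameterized one does not:

```python
import sqlite3

# In-memory stand-in for the local SQLite store many mobile apps use.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'meeting at 3')")
conn.execute("INSERT INTO notes VALUES ('bob', 'acquisition draft')")

def notes_vulnerable(owner):
    # String concatenation: the classic injection point, identical on mobile.
    return conn.execute(
        "SELECT body FROM notes WHERE owner = '" + owner + "'").fetchall()

def notes_safe(owner):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT body FROM notes WHERE owner = ?", (owner,)).fetchall()

payload = "alice' OR '1'='1"
print(len(notes_vulnerable(payload)))  # 2 rows: the payload leaks bob's note too
print(len(notes_safe(payload)))        # 0 rows: no owner literally matches
```

The point of the parameterized version is that the malicious input never reaches the SQL parser as code, which is exactly the mitigation the OWASP list calls for on mobile as on the desktop.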

I think the list gets to the heart of the issues without overhyping the threats. I hope it gains more visibility. I’d like to see it referred to more in public discussions about the potential weaknesses in mobile devices.

March 19, 2012  8:07 PM

Duqu Trojan written by professional software development team

Michael Mimoso

Researchers at Kaspersky Labs have determined the authors of Duqu, the remote access Trojan often linked to Stuxnet, used a custom version of the C programming language to write the module used to communicate with its command-and-control servers.

Kaspersky, which has done a deep analysis of the Duqu Trojan’s code framework, had difficulty identifying the programming language and put out a call to the development community for help. Most malware, said Vitaly Kamluk, Kaspersky Lab’s chief malware analyst, is written in simpler languages that allow faster development, such as Delphi. The lab got more than 200 responses, and after further analysis concluded that the code was written in a custom object-oriented dialect of C known as OO C, compiled with the Microsoft Visual Studio 2008 compiler, Kamluk said.

“Few [malware writers] write in assembler and C; this is pretty rare,” Kamluk said. “Using custom frameworks is quite specific. We think they are software programmers, not criminals. This is what we call ‘civil code.’”

So what’s the big deal? Well, this likely confirms nation-state involvement in the development of Duqu. No organized band of credit card thieves or hacktivists is going to invest the time and money to build a Trojan using a reusable development framework in a language used for complex enterprise applications. Kaspersky also indicated a level of separation between developers on the team, groups of which could have been developing different components of the Trojan without knowing the full mission—plausible deniability.

The primary mission of Duqu, unlike Stuxnet, is to gather and forward information from its targets. Duqu has nowhere near the penetration of Stuxnet because it has no worming capabilities. Instead, Kamluk said, it is targeted toward specific computers or people. “It has to be sent to a target and the target must execute it,” he said.

Kamluk characterized the authors as “old-school professional developers” with a comfort level in C, which works faster and is more efficient when compiled versus languages such as Delphi. Also, Kamluk said, the framework is reusable.

“This framework could be designed by someone and other developers would use this approach to write code. This is a bigger development team, possibly 20 to 30 people,” he said. “There was a special role too of a software architect who oversaw the project and development of the framework that was reused. Other roles were likely command-and-control operators, others developing zero-day attacks, others in propagation and social engineering.”

“We suspect it could be within different organizations and each responsible for a particular part of the code, not knowing what it would be used for. They didn’t know they were developing malware probably,” Kamluk said.

While he wasn’t ready to identify the authors by name or location, Kamluk said Kaspersky was seeing some Duqu infections in Sudan, Iran and some European countries. Stuxnet, which is widely believed to be a joint U.S.-Israel operation targeting a nuclear facility in Iran, is linked to Duqu because of similarities in code and code structure.

“We are not close to answering which country might be behind Duqu,” Kamluk said. “They try to hide their identities by not using any language constructions in the code. There are no words inside the code, no random names of files or system objects. They stayed language independent.”

March 15, 2012  6:16 PM

NSA mobile security plan could be roadmap for all mobile device security

Robert Westervelt

Security research firm Securosis has started a series of blog posts about how to protect enterprise data on Apple iOS smartphones.  Securosis’ Rich Mogull explains that companies are increasingly feeling pressure from employees to support iOS. But how does the IT security team ensure the protection of sensitive enterprise data on devices they have little control over?

According to Mogull:

The main problem is that Apple provides limited tools for enterprise management of iOS. There is no ability to run background security applications, so we need to rely on policy management and a spectrum of security architectures.

Mogull’s first post in the series lays out the security capabilities in iOS and highlights some of the technical reasons why the iPhone has been relatively immune to malware and other threats.

It’s clear that a tightly controlled mobile device will have to use a combination of external security technologies and internal data protection capabilities. The NSA’s “Mobility Capability Package” (.pdf), a report outlining the first phase of its recommended Enterprise Mobility Architecture, could be the blueprint needed for the private sector, according to some experts I’ve recently talked to.

The NSA unveiled the report during the RSA Conference 2012 and held a session outlining its secure mobility strategy. While it’s extremely restrictive, I think the recommendations appear to be the way most of the security industry is headed.

Among the report’s key recommendations:

  • Route all mobile device traffic through a VPN.
  • Use AES-256 full disk encryption on all devices.
  • Tightly control the use of Bluetooth, WiFi, voicemail and texting.
  • Disable GPS except for emergency 911 calls.
  • Provide the ability to prevent users from tethering.
  • Provide the ability to disable over-the-air software updates.

A virtual private network (VPN) establishes a secured path between the user equipment and the secured access networks with a second layer of encryption required to access classified enterprise services.

Bruce Schneier recently highlighted the NSA mobile security guidance document on his blog and eyed the VPN tunnel recommendation. “The more I look at mobile security, the more I think a secure tunnel is essential,” Schneier wrote.

Full disk encryption (FDE) is currently available for Android devices. FDE for Apple devices currently falls short, but DARPA has been working on this, and according to Winn Schwartau, chairman of the board of directors at Mobile Active Defense, an Atlanta-based mobile device security firm, well-implemented FDE for iOS devices is “weeks” away.

Apple introduced data encryption capabilities in iOS 4.0. As part of its data protection feature, Apple enables mobile application developers to store sensitive application data on disk in an encrypted format. The first iteration only protected files when the device was in a locked state, with the phone-unlock passcode serving as the encryption key. iOS 5.0 added more granular protection levels for files.

Under the NSA plan, smartphone users would be required to have an installed initialization program, which would immediately launch as soon as the smartphone is turned on. The program would check the device’s OS and ensure only authorized applications and operating system components are loaded. The device owner would be required to enter a PIN or passphrase to unlock the phone and then – as a second factor – a password would be needed to decrypt the device’s memory.

Once the memory is unencrypted, the user then starts the VPN, which establishes a tunnel from the device to the infrastructure. The device is then registered with the Session Initiation Protocol (SIP) server and a TLS connection is tunneled through the VPN connection.
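The sequence above can be sketched in a few lines. To be clear, this is a toy model, not the NSA’s implementation: the PIN, passphrase, iteration count and step names are all invented, and a real device would anchor its keys in hardware rather than application code. It only illustrates the ordering, with each stage gated on the one before it:

```python
import hashlib
import hmac
import os

# Hypothetical enrollment values; on a real device these would live in a
# secure element, not in application code.
SALT = os.urandom(16)
PIN_DIGEST = hashlib.sha256(b"4821").digest()      # factor 1: the unlock PIN
STORAGE_KEY = hashlib.pbkdf2_hmac(                 # factor 2 re-derives this
    "sha256", b"correct horse", SALT, 100_000)

def unlock_sequence(pin, passphrase):
    steps = []
    # 1. The initialization program verifies the OS and loaded components
    #    (stubbed here as always succeeding).
    steps.append("os-verified")
    # 2. First factor: the PIN unlocks the phone.
    if not hmac.compare_digest(hashlib.sha256(pin.encode()).digest(), PIN_DIGEST):
        return steps
    steps.append("unlocked")
    # 3. Second factor: only the right passphrase re-derives the storage key,
    #    so device memory stays encrypted until this point.
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), SALT, 100_000)
    if not hmac.compare_digest(key, STORAGE_KEY):
        return steps
    steps.append("memory-decrypted")
    # 4. Only now is the VPN brought up and the device registered with the
    #    SIP server over TLS inside the tunnel.
    steps += ["vpn-established", "sip-registered-over-tls"]
    return steps

print(unlock_sequence("4821", "correct horse"))  # all five steps complete
print(unlock_sequence("0000", "correct horse"))  # stops after OS verification
```

The design point worth noticing is that the VPN and SIP registration cannot be reached without both factors, which is what makes the layering meaningful rather than decorative.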

Phone calls made by a smartphone user would be routed by the cellular carrier to mobility infrastructure maintained by the government. This device must have already established a secure VPN connection to be accessible, according to the paper.

To be clear, some of the capabilities recommended by the NSA will be easier to develop for Android devices, since Google’s code base is publicly available. Under its Project Fishbowl, the agency is developing a hardened smartphone that meets its security requirements using a modified version of Android. But other capabilities, including FDE and a required VPN, will be feasible and justifiable on any mobile platform. Exactly how this can be implemented, and more importantly, how it can be enforced by IT security teams, is an issue still being addressed by researchers. Mobile device management products typically require software running on the device, and nearly all these technologies require end-user interaction and can be bypassed.

It’s going to be fun watching more robust mobile device security technologies emerge.

March 15, 2012  2:51 PM

Can a security industry association bring us all together?

Jane Wright

You don’t have to work in the infosec world for long before you hear strands of the unofficial industry anthem: “Let’s work together.” Arthur Coviello, chairman of RSA, the security division of EMC, practically sang the chorus in his keynote address at RSA Conference 2012. “We are in this fight together,” Coviello said. “Knowledge by one becomes power for all of us.”

Can security pros from different organizations really work together?

Andrew Rose, a principal analyst at Forrester Research, doubts it. In a blog post last month, Rose recounted meeting a representative of a European regulatory body. “(She believed) the future lay in open and honest sharing between organizations – i.e. when one is hacked, they would immediately share details of both the breach and the method with their peers and wider industry.”

But Rose believes this view is too idealistic, and organizations will refuse to share such information for fear of reputation or brand damage. “As a security professional, it’s tough to acknowledge in a public forum that you may even have something to share with colleagues at other firms, lest the press get hold of the information and twist it into a fictitious ‘XXXX Corp hacked!’ story,” Rose wrote.

There appears to be some hope for security information sharing between security pros within vertical industries. The Financial Services Information Sharing and Analysis Center (FSISAC) is one of 14 security information-sharing associations formed at the behest of the U.S. federal government. According to its website, FSISAC members receive “timely notification and authoritative information specifically designed to help protect critical systems and assets from physical and cybersecurity threats.”

Sounds good, right? But click on over to the FAQ page of the FSISAC website and read the question, “Why should my firm join?” The answer addresses protecting critical infrastructure, but then adds, “If the private sector does not create an effective information sharing capability, it will be regulated: This alone is reason enough to join.”

Clearly this is not the high-minded perspective Coviello had in mind. But then again, I wouldn’t count on a vendor’s call to action as the foundation for a security industry association. Vendor-neutral associations such as ISSA are probably our best hope.

We may never find a balance between our competitive, and somewhat paranoid, human nature on one hand, and values such as openness and honesty on the other. But it’s good to keep tugging on both ends of the rope, if only to keep the conversation going.

March 13, 2012  1:37 PM

Information security roles and the cloud

Marcia Savage

A recurring theme I hear at conferences is that security teams can’t fight the inevitable shift to cloud computing, and instead need to figure out ways to adapt. This message was echoed at RSA Conference 2012, where a panel of CISOs urged the industry to get ahead of the cloud trend and ensure cloud services are adopted securely.

With its potential to slash IT costs, cloud computing is driving fundamental change in organizations, said Jerry Archer, senior vice president and CISO at Sallie Mae. “Everyone in this room will be impacted by it,” he told attendees.

That got me thinking: How will information security roles change as cloud computing becomes more prevalent in the enterprise? Do security pros need to worry about looking for other lines of work as security responsibilities shift to public clouds?

Industry experts I talked to see security pros continuing to play an important role as cloud adoption accelerates. After the RSA panel, Archer told me that security pros may need to acquire additional knowledge, for example in the area of contracts and law. But security is necessary and those with security expertise become “the gatekeepers” in this new IT environment, he said.

Cloud Security Alliance Executive Director Jim Reavis said security roles will change depending on whether the organization is a cloud provider or a cloud consumer. Providers will need to deliver the whole stack of security expertise and technologies, while consumers will look to leverage the higher layers of the cloud stack: SaaS and PaaS. For security pros at organizations that are cloud consumers, this will mean a shift away from operational skills toward application skills and closer work with business units, he said.

“I don’t think IT teams or security teams will disappear because of cloud,” Reavis said. “If you’ve got security expertise, you’ll be well employed for many years to come.”

Randall Gamby, information security officer for the Medicaid Information Service Center of New York (MISCNY), told me he sees security’s role falling in the vendor management space when it comes to cloud. Security professionals need to help organizations ask the right legal and technical questions of a cloud provider to ensure their data is protected.

“Being able to set up criteria to judge a cloud vendor and understand not only the services it offers, but the risks it may pose is important,” he said.

How do you think information security roles will change as cloud services become more prevalent? Leave me a comment below.

March 12, 2012  1:02 PM

Getting offensive about zero-day vulnerabilities and exploits

Michael Mimoso

The big, bad, scary zero-day exploit: it sends almost the same shivers down everyone’s spine as the APT. Yet, like the advanced persistent threat, the zero-day is suffering some hype fatigue. More Web servers are popped by known bugs and exploits than by some shadowy, secretive attack crafted by the Electrical Engineering University of China’s People’s Liberation Army. Yet companies are still bombarded with marketing FUD about zero-days, despite numbers indicating that exploits hitting unknown vulnerabilities account for less than 1% of all malware.

So do zero-days matter? Like everything else in security, it depends. If you’re in the bug-hunting and bug-selling business, they sure do. Last week’s CanSecWest hacker, err, researcher conference in Vancouver was a zero-day Lollapalooza, with companies like VUPEN taking dead aim at Google Chrome and Microsoft’s IE9 browser with zero-days developed just for the event. The French company, called out by privacy advocate Chris Soghoian at the recent Kaspersky Security Analyst Summit, admits to holding on to certain vulnerabilities and exploits for its customers only, refusing at times to share information with the affected vendors. Soghoian said VUPEN and others sell exploits to governments, which pay a heck of a lot more for what can be turned into a weaponized exploit than, say, a security conference or a bug bounty program such as TippingPoint’s Zero Day Initiative.

VUPEN CEO Chaouki Bekrar told Threatpost that VUPEN’s government customers are only trusted democracies, not oppressive countries. Taking him at his word, there’s still the argument that while a select few get a fix, the general user population remains exposed. It’s silly to think attackers aren’t way ahead of the game with their own stash of unreported bugs and exploits at their disposal, but this level of backroom wheeling and dealing is disconcerting. It casts a poor light on offensive security research, and events like the Pwn2Own contest are probably unwittingly aiding and abetting it.

I had a conversation with Microsoft senior security strategist lead Katie Moussouris recently about zero-days and vulnerability disclosure. Katie has been in the security business a while, including a stint at @Stake back in the day, and she said Microsoft’s experience with the research community is much different. She said that 80% of vulnerabilities found in Microsoft products are disclosed privately, and 90% of those disclosures are made directly to Microsoft. Most researchers, she said, are not motivated by money, but by intellectual curiosity. As a result, Microsoft has shied away from offering a bug bounty, and has instead focused on rewarding defensive security research with initiatives such as its Blue Hat Prize.

These are watershed days for security researchers and vulnerability disclosure. To be honest, the whole disclosure debate probably gives most of you a headache, worse still if you’re a CISO sitting between the researchers, the vendors and the VUPEN-like middlemen while all this wrangling plays itself out. Tim Stanley, former CISO at Continental Airlines, summed it up best a couple of years ago:

“I love the love-fest between the vendors and researchers, but quite honestly, I don’t give a hoot. I’m the consumer, the guy who paid for the product that I expect to be correct in the first place. I’m the guy who paid for the software. When am I gonna know? The issue becomes a matter where the people paying for the product need to be better represented in this process.”


March 8, 2012  3:56 PM

Changes to European privacy laws foreshadow serious business impact

Jane Wright

Changes to data protection regulations are on the way for the 27 countries of the European Union, and the fallout in Europe serves as a good case study for U.S. governing bodies and businesses that are also playing tug-of-war over compliance regulations.

Businesses in the U.K. are steaming over the data protection proposals. In fact, our U.K. bureau chief, Ron Condon, described the reaction of the Confederation of British Industry (CBI), a lobbying organization representing more than a quarter-million companies, as “hostile.” Why such a severe reaction to proposed European privacy laws that, according to the European Commission, will save businesses £2.3 billion (about $3.6 billion) per year?

As part of the new data protection regime, businesses operating in the EU will need to ask consumers for explicit permission to capture the consumer’s data. Businesses fear just asking for permission will make consumers nervous, and nervous consumers can be miserly consumers.

It appears businesses may be right to worry. Consider what happened to the Information Commissioner’s Office (ICO) in the U.K. when it implemented the Privacy and Electronic Communications Regulations (PECR) on its own website, specifically asking all site visitors for permission to place a cookie on their computer. According to the BBC, the ICO website normally received 12,000 visitors per day, but after debuting the cookie request notice, the number of visitors dropped to about 1,400 per day.

Actually, the number of visitors willing to be tracked dropped. The ICO said only about 10% of its visitors accepted the cookie. The other 90% were probably still there; they may have simply declined to be tracked.
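For what it’s worth, the two reported figures line up. A quick check, using the approximate daily counts reported by the BBC:

```python
visitors_before = 12_000   # approximate daily visitors before the cookie prompt
visitors_after = 1_400     # approximate daily visitors measurable afterward

acceptance = visitors_after / visitors_before
print(f"{acceptance:.1%}")  # 11.7%, roughly the "about 10%" the ICO cited
```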

This could have serious repercussions for the way many businesses operate today. Without knowing which pages visitors look at, how long they study a product page, or the order in which they place products in the online shopping cart, businesses will lose crucial information they need to direct their strategies. Some businesses, I wager, may even go out of business once deprived of customer information.

Where should the line be drawn between visitors who want to remain anonymous and businesses that can’t serve their customers’ needs without fundamental information about those customers?

The ICO holds out hope that, eventually, users won’t be so easily scared off by cookie warnings, but I see this playing out another way.  I got an inkling from an incident at RSA Conference 2012 last week.

A security vendor had a representative standing on Howard Street, flagging down anyone walking by who was wearing an RSA conference badge. In return for handing over a business card, the passerby received a $5.00 Starbucks gift card. Apparently $5.00 is the price this particular vendor was willing to pay for an RSA attendee to share their basic information.

As for me, I’m wondering how many cookies I can buy for $5.00 at Starbucks.  

March 7, 2012  5:14 PM

How CloudFlare’s website security service protected LulzSec

Marcia Savage

When it comes to customer case studies, CloudFlare has one of the most unusual and dramatic I’ve ever heard.

Last summer, the LulzSec hacking group signed its website up for CloudFlare, drawing the website security service and accelerator company into one of the biggest cyberbattles ever, as LulzSec created mayhem on the Internet while rivals and others tried to knock it offline. CloudFlare CEO and co-founder Matthew Prince detailed the attacks in a presentation at RSA Conference 2012; I wasn’t able to attend, but he filled me in during a briefing at the show last week.

LulzSec registered for CloudFlare on June 2, 2011, after a substantial DoS attack knocked its newly launched site offline for 45 minutes, Prince said. “We had no idea who LulzSec was,” he said. As it turns out, the group had just published information it had allegedly stolen from Sony.

For the next 22 days, LulzSec waged battle on the Web as rivals and white hat hackers launched a volley of attacks against the group’s site. “It was like a gunfight and we were sitting in the middle of it,” Prince said.

The battle proved a mighty test for Palo Alto, Calif.-based CloudFlare, which protects websites against threats like DDoS, XSS and SQL injection attacks while also boosting site performance. “It was the most massive pen test ever,” Prince said. “We learned a ton from the fact that LulzSec was with us.”

He explained that CloudFlare’s system automatically looks for anomalies to detect attacks and, once it finds one, adds protection for every website on its network. More than 250,000 websites, from Fortune 500 companies to individual blogs, use CloudFlare. Using the service doesn’t require any hardware installation, only a change to network settings so site traffic passes through CloudFlare, which operates 14 data centers around the world.

“We’re like a smart, skilled router on your network,” Prince said.

The fact that LulzSec stayed online for the 22 days it was with CloudFlare illustrates the company’s core value proposition, Prince said. “Because we saw these threats our network got smarter,” he added.

After those 22 days, the website disappeared. Prince began receiving requests to tell the story of what happened, but the company has a privacy policy of not revealing its customers without permission. He used the contact information LulzSec had provided when signing up for the service and eventually got a single-line reply giving him permission.

Prince said CloudFlare never got a request from law enforcement to take LulzSec offline, but quickly added that it has no mechanism to do that anyway. He noted that CloudFlare wasn’t LulzSec’s hosting provider.

As to whether CloudFlare considered shutting off service for LulzSec – a group linked to a number of attacks on corporate and government sites – Prince said his company’s role isn’t that of an Internet censor.

“There are tens of thousands of websites currently using CloudFlare’s network,” he said in a blog post last summer. “Some of them contain information I find troubling. Such is the nature of a free and open network and, as an organization that aims to make the whole Internet faster and safer, such inherently will be our ongoing struggle. While we will respect the laws of the jurisdictions in which we operate, we do not believe it is our decision to determine what content may and may not be published. That is a slippery slope down which we will not tread.”
