October 18, 2011 1:32 PM
Posted by: Ben Cole
Data Security, Personally identifiable information
After hackers gained access to the personal information of more than 100 million user accounts last spring, Sony overhauled online security and created a chief information security officer (CISO) position. On Sept. 6, Philip Reitinger joined Sony as its senior vice president and CISO — and he’s already been busy.
In a post to the PlayStation blog last week, Reitinger said Sony detected attempts on Sony Entertainment Network (SEN), PlayStation Network (PSN) and Sony Online Entertainment (SOE) to test “a massive set” of sign-in IDs and passwords against the company’s network database. The attempts appeared to include a large amount of data obtained from one or more compromised lists from other companies, sites or other sources, Reitinger said.
“As a preventative measure, we are requiring secure password resets for those PSN/SEN accounts that had both a sign-in ID and password match through this attempt,” Reitinger wrote in the blog post.
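The matching Reitinger describes, checking a leaked list of ID/password pairs against live accounts and forcing resets only where both match, can be sketched in a few lines. This is purely illustrative, not Sony's actual mechanism; the function names and the use of an unsalted SHA-256 hash are assumptions for the demo (a real system would use a salted, slow hash such as bcrypt):

```python
import hashlib

def hash_pw(password: str) -> str:
    # Illustration only: production systems should use a salted, slow hash (e.g. bcrypt).
    return hashlib.sha256(password.encode()).hexdigest()

def accounts_to_reset(accounts: dict, leaked: list) -> set:
    """Return sign-in IDs whose leaked ID/password pair matches a live account.

    accounts: maps sign-in ID -> stored password hash
    leaked:   (sign-in ID, plaintext password) pairs from a compromised list
    """
    flagged = set()
    for user_id, password in leaked:
        stored = accounts.get(user_id)
        if stored is not None and stored == hash_pw(password):
            # Both the sign-in ID and the password match: require a secure reset.
            flagged.add(user_id)
    return flagged
```

Only accounts where the attacker holds a working credential pair get flagged, which is why the vast majority of PSN/SEN users were untouched.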
Less than one-tenth of 1% of the PSN, SEN and SOE audiences may have been affected by the data security breach, and Reitinger assured users that credit card numbers were not at risk. This was a relatively low-risk data security breach, but perhaps Sony’s reaction was a case of lessons learned: After the April breach, Sony was criticized for waiting a week to notify customers that their personal information might have been compromised. In addition, it took more than two weeks to fully restore the network. Needless to say, Sony users (and federal regulators) were not impressed by what some viewed as a lackadaisical reaction.
There has been much public outcry over Sony’s data security breach, and those of other companies, in the past year. This likely influenced the SEC last week to mandate the “disclosure of timely, comprehensive and accurate information” surrounding cybersecurity risks.
Did Sony’s online security overhaul help detect this breach before it became another fiasco? Although critics have said Sony simply hired Reitinger as an insurance policy to pacify investors and customers after the April data security breach, he showed his value here. At least now the Sony brass and their customers have someone to go to for information about any further breaches — what happened, how it happened, how they are going to handle it in the future. (Unfortunately for Reitinger, it also gives them someone to blame.)
But if nothing else, the reaction to last week’s data security breach might be indicative of a new trend of taking a proactive approach and letting online customers know what they can do to protect themselves and their information. Judging by the comments made to Reitinger’s blog post, people are mostly happy with Sony’s reaction to the potential data security breach. Many praised Reitinger and Sony for keeping them informed.
Perhaps Sony and companies like them have learned their lesson about the futility of trying to keep a breach out of the spotlight, and know now that transparency is the best course of action. If the SEC’s recent mandate is any indication, federal regulators and customers are going to be watching companies closely to ensure cybersecurity is kept above board.
October 17, 2011 7:10 PM
Posted by: Kevin Beaver
information risk assessment, regulatory compliance requirements
Compliance means different things to different people. Indeed, regulatory compliance requirements are — and should be — handled differently based on the unique needs of the business. The ugly reality is that there are so many assumptions being made about compliance that it often skews the perception of what’s really going on.
Here are what I believe to be the most dangerous assumptions we make about regulatory compliance requirements and how they can get us — and our businesses — into hot water:
1. We’re compliant, so our information is safe. The most common assumption is that compliance equals security. It doesn’t. Never has and never will. Your business may be “compliant” at the moment, but odds are you’ve still got tons of low-hanging fruit that needs to be fixed. It’s time to dig deeper.
2. Our lawyer is in charge, so all is well. Lawyers should have the final say-so, but they shouldn’t be calling all the shots. Compliance is much more complex than audit reports and contracts. There are information risk assessments, vulnerability management, incident response, access controls and more. All the right people across the board need to be involved throughout the compliance process.
3. It’s not worth the money to become — and stay — compliant. According to the Ponemon Institute, the cost of noncompliance is 2.65 times the cost of adhering to regulatory compliance requirements. Do what needs to be done, and you’ll save a tremendous amount of money and effort. As time goes on, you’re going to be forced into compliance eventually. Why not get started now?
4. We encrypt our PII — that’s the ultimate security control. Even though data is encrypted, there are numerous ways to exploit known flaws, especially if the encryption wasn’t properly implemented or isn’t being managed the way it needs to be. You need encryption — but don’t assume it’s working as intended.
5. Our tools are telling us that we’re compliant; enough said. Good network and security tools are essential for visibility and control, but you can never rely on them completely. Be it identity management, network monitoring, vulnerability management — you name it — canned reports from such tools most often do not reflect reality. You have to look closer and validate for yourself.
6. We’ve done everything required by the regulations, so that’s all we need to do. Focusing on what’s required doesn’t mean you’ve covered all your bases. The minimum regulatory compliance requirements are often just a baseline, and they may not be what your business really needs. Furthermore, I can’t tell you how many times I’ve seen businesses “become” compliant without ever performing a single information risk assessment. You can’t possibly put the right security controls in place if you don’t even know what needs attention.
7. We had a breach and subsequent compliance sanctions, so we learned our lessons and are much more secure now. Humans often assume they know what others are thinking and that someone else is taking care of what’s needed. This is especially likely after a breach, once we settle back into day-to-day work and grow complacent. Assuming that everything has been uncovered and fixed is a prime opportunity for something else to go awry. Experiencing a data breach means you’ve got to up your game, big time, and stay on top of things without ever letting your guard down.
You cannot change what you tolerate. Fix your oversights and gaps surrounding regulatory compliance requirements now, before they bite when you’re least expecting it.
Kevin Beaver is an information security consultant and expert witness, as well as a seminar leader and keynote speaker at Atlanta-based Principle Logic LLC. Beaver has authored/co-authored eight books on information security, including The Practical Guide to HIPAA Privacy and Security Compliance and the newly updated Hacking For Dummies, 3rd edition. In addition, he’s the creator of the Security On Wheels information security audiobooks and blog.
October 10, 2011 6:56 PM
Posted by: Ben Cole
mobile device security
October is National Cybersecurity Awareness Month, and the overarching theme this year is to spread awareness of every Internet user’s role in securing their information. In other words, YOU are the first line of defense in protecting your information, so pay attention to security vulnerabilities stemming from your devices.
But not everyone who goes online is familiar with these persistent threats. Luckily, it appears some watchdogs are here to help. This was evident when the Android Police blog recently reported a “massive security vulnerability” in HTC’s Android devices.
Android Police researchers found that in recent updates to some of HTC’s devices, the company introduced a suite of logging tools designed to collect user information — way too much information, according to the researchers. Researchers found that on affected HTC devices, any application that requests a single Internet permission (normal for any app that connects to the Web or shows ads) can access:
· The list of user accounts, including email addresses and sync status for each.
· Last-known network and GPS locations, and a limited history of previous locations.
· Phone numbers from the phone log.
· SMS data, including phone numbers.
· System logs likely to include email addresses, phone numbers and other private info.
“If you, as a company, plant these information collectors on a device, you better be DAMN sure the information they collect is secured and only available to privileged services or the user, after opting in,” wrote Android Police blogger Artem Russakovskii when announcing the HTC device security vulnerability. “That is not the case.”
After the flaw was exposed by the Android Police, HTC confirmed that it found in its software a “vulnerability that could potentially be exploited by a malicious third-party application,” and that it was working on a fix. Customers will be notified of how to download and install the security fix, the company said. HTC also urged customers to use caution when downloading, using, installing and updating applications from untrusted sources.
At least HTC moved quickly to correct the problem and inform its customers of the vulnerabilities, right? Well, not so fast. After finding the security lapse, the Android Police contacted HTC on Sept. 24 and received no real response for five business days, after which the Android Police released the information to the public.
Perhaps HTC was waiting to tie in its response to the vulnerabilities with Cybersecurity Awareness Month.
The point to take from the HTC mobile device security story is that companies are not going to come out and announce when there is a huge risk to using their products, especially those designed for consumers. The problem is that the average consumer is not going to know what to look for, and will trust that information is protected during everyday use.
As shown with the HTC mobile device security issue, this is not always the case. How many more security vulnerabilities are there in other mobile devices that have not been exposed yet? And it’s not just individual consumers that need to be concerned: The spread of personal devices (and their associated security risks) in the workplace makes due diligence necessary. People can obviously no longer just assume that they’re protected.
October 3, 2011 6:18 PM
Posted by: Ben Cole
October is National Cyber Security Awareness Month, and this year’s theme is meant to remind individuals of their role in securing information, as well as the devices and the networks they use. Failure to understand this relatively simple cybersecurity message can have embarrassing consequences, as banking giant The Goldman Sachs Group, Inc. learned earlier this week when hackers published the personal information of several employees, including CEO Lloyd Blankfein.
Goldman Sachs was not the only big name in cybersecurity news this week. After news surfaced that Facebook had been gathering information about the websites its users visited even after users logged out of the social network, two congressmen urged the Federal Trade Commission (FTC) to investigate the company’s practices.
In a letter to the FTC, Congressmen Edward J. Markey (D-Mass.) and Joe Barton (R-Texas) said tracking users’ behavior without their knowledge “raises serious privacy concerns.” Facebook says it is working to correct the matter, but Barton and Markey want the FTC to investigate and make sure the practice is stopped. Barton and Markey also urged the FTC to investigate the use of so-called “supercookies” that allow websites to capture personal data about consumers.
Daniel Conroy, CISO and global head of information security at BNY Mellon Corp., says organizations should make clear to employees their role in protecting their own — as well as the company’s — sensitive information. Conroy said protecting data starts with communicating to employees what is acceptable and what is not with regard to risk management. He suggests providing security awareness training to all employees.
“If employees don’t know what information is important to the company, how are they going to know what not to post?” Conroy asked during a presentation at the MIS Training Institute’s IT Governance, Risk and Compliance Summit in Boston last month.
Conroy focused on how the expansion of social media makes sensitive company information especially vulnerable, and noted that it’s important to establish a balance between satisfying business needs and mitigating risk when using such sites. He noted that avoiding practices as simple as posting company organizational charts online could go a long way toward preventing leaks of business info. Most importantly, he suggests companies anticipate evolving risks as part of a cybersecurity strategy, and communicate these risks to all employees. Companies could go so far as to create a security awareness campaign using techniques such as posters, videos and email blasts to get the message out and encourage employees to participate.
The bottom line is that protecting information starts with the individual. With more people incorporating personal technology in their business activities, companies can be hurt when personal information is leaked. As a result, companies would be well served to show employees how they can protect themselves and the information they offer online … which will in turn help protect the business.
September 26, 2011 7:17 PM
Posted by: Ben Cole
cloud computing standards
Recognizing the “significant opportunities” surrounding cloud computing, the Subcommittee on Technology and Innovation held a hearing last week to examine the benefits — and obstacles — of widespread cloud adoption. The hearing could be a first step to more exacting cloud computing standards.
Subcommittee members said cloud computing can provide users with increased computing capability, greater efficiency and lower energy and infrastructure costs. However, cybersecurity remains a major concern for many users, said Subcommittee Chairman Rep. Ben Quayle (R-Ariz.). Quayle pointed out that users must have confidence that their data and applications, as well as their privacy, will be protected. Quayle added that cloud service providers would need to offer users different tiers of security depending on the sensitivity of their data in order to alleviate these concerns.
Nick Combs, federal chief technology officer at EMC Corp., and Dr. Dan Reed, corporate vice president of the technology policy group at Microsoft, were among those testifying at the hearing. In response to Quayle’s concerns, Combs suggested cloud security be driven by a “flexible policy” aligned to the business or mission need, and that a common framework would be needed to ensure that cloud security policies are consistently applied. Reed added that clear policy goals surrounding cloud security are necessary, but regulators need to be careful to avoid rules that will hinder cloud innovation or quickly become outdated.
These cloud security concerns echoed statements recently made by Alan Barnes, director of risk and advisory at Services Assurant Inc., at a GRC training summit in Boston. Barnes noted that cloud computing creates additional third-party security risks, such as hacking, a lack of compliance standards and intellectual property vulnerabilities. Barnes added that the current lack of agreement on cloud computing standards ensures that cloud provider risk evaluation will remain inexact and inconvenient for the next several years.
The National Institute of Standards and Technology (NIST) is spearheading stakeholder efforts to develop cloud data security and interoperability standards, which witnesses at last week’s hearing said are critical to the cloud’s success.
“As an agency considers migrations to cloud computing, NIST must develop the appropriate consensus standards and guidelines to ensure a secure and trustworthy environment for federal information,” according to a statement from the Subcommittee on Technology and Innovation.
Developing such “consensus standards and guidelines” is an appropriate first step to alleviate concerns surrounding the mass migration to the cloud. But until these cloud computing standards are established and implemented, users need to remain cautious moving to the cloud.
September 19, 2011 4:08 PM
Posted by: Ben Cole
Data Security, online privacy
A few weeks ago in this space, I wondered if increased scrutiny of Google’s business practices was just the beginning of the federal government’s efforts to regulate the Internet. Judging by a handful of news stories and announcements last week, online data security and online privacy concerns have shot to the top of at least some lawmakers’ lists of concerns.
For starters, Sen. Richard Blumenthal (D-Conn.) introduced the Personal Data Protection and Breach Accountability Act of 2011. The legislation is designed to protect consumers’ personally identifiable information and improve online data security.
The bill would create a process for companies to establish appropriate online data security, and it would hold companies accountable for failing to comply with those plans. In what may be spurred by Sony’s slow response to a huge data breach earlier this year, Blumenthal’s bill also requires companies to promptly notify consumers after a breach has occurred, and to provide consumers with solutions to alleviate online security threats.
To help prevent future breaches, the bill encourages better information-sharing among federal agencies, law enforcement and the private sector to alert businesses of specific online security threats.
Also last week, an Op-Ed piece in The New York Times highlighted an upcoming Supreme Court case that could have huge ramifications for online privacy. But this time, the concern is how much information the government should have access to.
The case, United States v. Antoine Jones, concerns a GPS device placed on the car of a suspected drug dealer without a warrant, which the man says was a violation of the Fourth Amendment.
“If the court rejects his logic and sides with those who maintain that we have no expectation of privacy in our public movements, surveillance is likely to expand, radically transforming our experience of both public and virtual spaces,” wrote Jeffrey Rosen, a law professor at George Washington University.
Rosen pointed out that technologies such as Facebook’s facial-recognition tool could be used by law enforcement to help identify criminals. Rosen also referenced a 2008 comment from a Google executive saying that, within a few years, public agencies and private companies could be asking Google to post live feeds from public and private surveillance cameras all around the world.
“If the feeds were linked and archived, anyone with a Web browser would be able to click on a picture of anyone on any monitored street and follow his movements,” Rosen wrote in The New York Times piece.
These news items were among a handful reporting on online data security regulations in the past week. Here are some others:
- The Federal Trade Commission announced it is seeking public comment on proposed amendments to the Children’s Online Privacy Protection Rule, which gives parents control over what personal information websites may collect from children under 13. The amendments are aimed at keeping pace with new technology and devices that give children Internet access.
- Connecticut Attorney General George Jepsen announced the creation of a task force to investigate Internet privacy and data breaches while educating the public and businesses about data protection.
- On Thursday, the House subcommittee on Commerce, Manufacturing, and Trade held the first of a series of hearings to address online privacy. The hearing examined the European Union’s privacy and data collection regulations and how they have affected the Internet economy. Some have expressed concern that limiting the tracking of Internet users (as is done in the EU) could dramatically hurt online marketing effectiveness.
Federal online privacy concerns and the increased government involvement in online data security may be warranted, at least according to a new PricewaterhouseCoopers survey of 9,600 security executives. The survey found that 43% of global companies think they have an effective information security strategy in place and are proactively executing their plans. However, only 16% of respondents say their organizations are prepared, with security policies able to confront an advanced persistent threat, a gap that raises further online data security concerns.
It appears that most people with a stake in the game are at least aware of the severity of online security threats. Perhaps a combination of legal regulations and private efforts surrounding online data security could have the movement heading in the right direction.
September 12, 2011 8:21 PM
Posted by: Ben Cole
Dodd-Frank Wall Street Reform and Consumer Protection Act
The rollout of regulations under the Dodd-Frank Wall Street Reform and Consumer Protection Act has been pushed back until at least early 2012, according to the Commodity Futures Trading Commission (CFTC). The delay marks the second time the CFTC has put the brakes on new rules governing the over-the-counter derivatives market.
In June, the CFTC said the rules would be finalized by the end of 2011 — which already would have been six months past the original deadline.
The New York Times’ Dealbook called the derivatives rules delay “another setback for the sweeping overhaul passed in the aftermath of the financial crisis.” But CFTC Chairman Gary Gensler said the federal regulator is “not against a clock” to implement the Dodd-Frank regulations.
“No doubt, as this is a human endeavor, there will likely be changes to this outline down the road,” Gensler said at a public meeting Thursday. “We also will continue to reach out to other regulators, both here and abroad, for their input as we consider the many thousands of comments on these rules.”
To date, the CFTC has finished 12 final rules under Dodd-Frank, and the agency will host a full schedule of public meetings this fall. Compliance represents a “major step” in the CFTC’s efforts to make financial reform a reality and to protect the American taxpayer, Gensler added.
“We are looking to consider external and internal business conduct rules related to risk management, supervision, conflicts of interest, record-keeping and chief compliance officers,” Gensler said.
Gensler also said he supports a proposal to establish schedules to phase in compliance with previously proposed requirements, including one surrounding the swap trading relationship documentation. The proposal would provide greater clarity to swap dealers and major swap participants regarding the time frame for compliance, as well as give them an adequate amount of time to comply, he said.
As SearchCompliance.com Executive Editor Chris Gonsalves pointed out earlier this summer, delays such as the CFTC’s recent announcement could be only the tip of Dodd-Frank’s iceberg of problems. I understand federal regulators want to get things right under Dodd-Frank regulations, but a failure to implement hard and fast compliance rules in a timely manner could end up only further alienating consumers already mired in the current economic malaise. Stay tuned.
September 7, 2011 2:58 PM
Posted by: Ben Cole
federal regulations online, Google settlement, regulating the Internet
In a recent settlement with the Department of Justice (DOJ), Google agreed to forfeit $500 million to resolve questions surrounding its advertising practices. This is one of the largest settlements ever in the U.S., according to the DOJ, and it might have larger ramifications: Could this be just the beginning of the federal government’s involvement in efforts to regulate the Internet?
Google was accused of allowing online Canadian pharmacies to place advertisements targeting U.S. consumers through its AdWords program — which the DOJ called “unlawful importation of controlled and non-controlled prescription drugs.” As a part of the settlement, Google has also agreed to “a number of compliance and reporting measures” to prevent similar violations.
The Google settlement with the DOJ follows other run-ins the company has had with federal regulators in 2011. In March, the company agreed to adopt a privacy program in response to charges that it deceived users and potentially violated user privacy when it launched the social networking service Buzz. In June, Google acknowledged that it was the subject of an FTC investigation examining whether it uses its Internet search dominance to stifle competition in expanding markets. The investigation is ongoing.
Google said in its Online Security Blog last week that it had received reports of attempted SSL “man-in-the-middle attacks” against Google users, “whereby someone tried to get between them and encrypted Google services.” The issue affected users primarily located in Iran, but Google was quick to show transparency surrounding the potential security issue.
As Google expands its reach to areas including advertising and social networking, federal regulators must take increased notice. Data from research firm Nielsen showed that Google was by far the most popular online destination in July among Americans. According to Nielsen, 172.5 million unique visitors in the U.S. visited Google.com in July, and spent an average of nearly 1.5 hours on the site.
The increased scope and use of Google mirrors the increased scope and use of the Internet in people’s everyday lives. This trend will no doubt continue to lead to increased security vulnerabilities, and someone needs to mind the store. By monitoring and regulating the business practices of one of the most recognizable online brands, if not the most recognizable, the government may be putting all Internet businesses on notice. More Internet-based businesses need to be aware of these security vulnerabilities, and hard and fast rules would make protecting consumers much easier.
With the increase in Internet use and time spent online, new compliance regulations will have to be adopted to adapt to the change in habits. The federal government’s active role may be just the beginning.
August 29, 2011 6:10 PM
Posted by: Fohlhorst
Cloud Security
The Cloud Security Alliance is launching a new program for gathering information on how cloud service providers are securing their services and meeting compliance initiatives.
The CSA Security, Trust & Assurance Registry (STAR) program enables cloud service providers to submit self-assessment reports that document compliance regarding best practices published by the alliance. According to the CSA, the searchable registry will allow potential cloud customers to review the security practices of providers and determine the level of compliance offered — or better yet, learn from the best how to secure their own cloud initiatives.
Some may find this a bit disconcerting and will worry that transparency will expose them to attacks and breaches. However, transparency also leads to better understanding and improvements in security by exposing possible flaws and weaknesses.
STAR offers a “major leap forward in industry transparency, encouraging cloud service providers to make security capabilities a market differentiator,” according to a CSA release. CSA STAR will be available in the fourth quarter. Cloud providers can submit two different types of reports — the Consensus Assessments Initiative Questionnaire and the Cloud Controls Matrix.
Find out more at www.cloudsecurityalliance.org/star/.