In a post to the PlayStation blog last week, Sony Chief Information Security Officer Philip Reitinger said Sony detected attempts on Sony Entertainment Network (SEN), PlayStation Network (PSN) and Sony Online Entertainment (SOE) to test “a massive set” of sign-in IDs and passwords against the company’s network database. The attempts appeared to include a large amount of data obtained from one or more compromised lists from other companies, sites or other sources, Reitinger said.
“As a preventative measure, we are requiring secure password resets for those PSN/SEN accounts that had both a sign-in ID and password match through this attempt,” Reitinger wrote in the blog post.
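The check Reitinger describes — flagging only accounts where both the sign-in ID and the password from a leaked list match — can be sketched in a few lines. This is a hypothetical illustration, not Sony’s actual system: the account store, the `pw_hash` helper and the sample credentials are all invented for the example, which assumes passwords are stored as salted PBKDF2 hashes.

```python
import hashlib

def pw_hash(password: str, salt: bytes) -> bytes:
    """Salted PBKDF2 hash, a stand-in for however the provider stores passwords."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical account store: sign-in ID -> (salt, stored password hash)
accounts = {
    "alice": (b"salt1", pw_hash("hunter2", b"salt1")),
    "bob": (b"salt2", pw_hash("correct-horse", b"salt2")),
}

# Credentials harvested from a compromised third-party list
leaked = [("alice", "hunter2"), ("bob", "wrongpass"), ("carol", "anything")]

def flag_for_reset(accounts, leaked):
    """Return sign-in IDs where both the ID and the leaked password match —
    the accounts that would be forced through a secure password reset."""
    flagged = []
    for user, password in leaked:
        record = accounts.get(user)
        if record and pw_hash(password, record[0]) == record[1]:
            flagged.append(user)
    return flagged

print(flag_for_reset(accounts, leaked))  # ['alice']
```

Note that an ID-only match (like “carol” above, who has no account) or a wrong password triggers nothing; only the full credential pair does, which is why Sony could scope the forced resets so narrowly.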
Less than one-tenth of 1% of the PSN, SEN and SOE audiences may have been affected by the data security breach, and Reitinger assured users that credit card numbers were not at risk. This was a relatively low-risk data security breach, but perhaps Sony’s reaction was a case of lessons learned: After the April breach, Sony was criticized for waiting a week to notify customers that their personal information might have been compromised. In addition, it took more than two weeks to fully restore the network. Needless to say, Sony users (and federal regulators) were not impressed by what some viewed as a lackadaisical reaction.
There has been much public outcry over Sony’s data security breach, and those of other companies, in the past year. This likely influenced the SEC last week to mandate the “disclosure of timely, comprehensive and accurate information” surrounding cybersecurity risks.
Did Sony’s online security overhaul help detect this breach before it became another fiasco? Although critics have said Sony simply hired Reitinger as an insurance policy to pacify investors and customers after the April data security breach, he showed his value here. At least now the Sony brass and their customers have someone to go to for information about any further breaches — what happened, how it happened, how they are going to handle it in the future. (Unfortunately for Reitinger, it also gives them someone to blame.)
But if nothing else, the reaction to last week’s data security breach might be indicative of a new trend of taking a proactive approach and letting online customers know what they can do to protect themselves and their information. Judging by the comments made to Reitinger’s blog post, people are mostly happy with Sony’s reaction to the potential data security breach. Many praised Reitinger and Sony for keeping them informed.
Perhaps Sony and companies like them have learned their lesson about the futility of trying to keep a breach out of the spotlight, and know now that transparency is the best course of action. If the SEC’s recent mandate is any indication, federal regulators and customers are going to be watching companies closely to ensure cybersecurity is kept above board.
This is something that CIOs, of course, have known for a long time. IT executives owe their livelihoods to the fact that there is barely a company in the world that doesn’t do business in this material known as information.
Now the rest of us — from computer cave dwellers like me to the oversharing Facebook generation — are on to the fact that our PII comes at a price. And, one way or another, companies will pay up. The uproar over Facebook’s shape-shifting privacy “rules” and the anger in Europe over Google’s collection of private data are two current and noisy examples.
To get a sense of the change in public attitude in a few short years, consider the evolution of the Netflix Prize. Back in 2006, the company that changed the way people consume movies announced an open competition to improve Netflix’s algorithm for predicting which movies its customers might like to watch based on their past viewing habits. In September 2009, to breathless media reviews about tapping into the wisdom of the smart crowd, Netflix awarded the $1 million prize to BellKor’s Pragmatic Chaos, a seven-man (yes, man) multinational team of computer scientists and machine learning experts, and promptly announced a second contest. By March, the Netflix Prize 2 was called off. Netflix’s chief product officer, Neil Hunt, reported that the company had decided not to pursue round 2 after reaching “an understanding” with FTC investigators and settling a class action suit on whether the contest violated customer privacy. The investigation and suit were prompted by a research study by two University of Texas at Austin scientists showing that the anonymity of the Netflix prize data set was not so anonymous.
“You can take somebody’s name off their personal data, but the more personal information you provide, the easier it is to re-identify that person. Anonymized is never truly anonymous,” Helmer said. The FTC started to investigate because it wanted to know what Netflix told its customers their personal data would be used for when they turned it over.
“Most likely, Netflix did not say they would take the name off and give that personal information to the entire world in order to create a better algorithm,” Helmer said.
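The re-identification technique behind the Texas study boils down to a join: take the “anonymized” records, find a named public data set that shares attributes with them, and match on the overlap. The toy sketch below (hypothetical data; the record format and the overlap threshold are invented for illustration, in the spirit of the published study) shows how a handful of (title, date) pairs can tie an opaque subscriber ID back to a real name.

```python
# "Anonymized" release: opaque subscriber IDs mapped to (title, date) rating events
anonymized = {
    101: {("Movie A", "2005-03-01"), ("Movie B", "2005-03-04")},
    102: {("Movie C", "2006-01-10")},
}

# Public data: the same kind of events posted under real names elsewhere
public = {
    "alice": {("Movie A", "2005-03-01"), ("Movie B", "2005-03-04")},
}

def reidentify(anonymized, public, threshold=2):
    """Link an opaque ID to a named user when enough rating events overlap."""
    matches = {}
    for anon_id, records in anonymized.items():
        for name, public_records in public.items():
            # Set intersection counts how many events the two profiles share
            if len(records & public_records) >= threshold:
                matches[anon_id] = name
    return matches

print(reidentify(anonymized, public))  # {101: 'alice'}
```

The more attributes each record carries, the more discriminating the intersection becomes, which is exactly Helmer’s point: stripping the name column does nothing once the rest of the row is distinctive enough to be matched.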
Helmer finds the case a “fascinating example” of the strengths and weaknesses of cloud computing — of the enormous gains that can be realized by making real data available for analysis to large groups of people, along with the obvious dangers of doing that. As a consumer, he said he likes that companies will do the work to tell you which media or products you might like to consume. But the people who brought the class action suit against Netflix are realizing that those services “come with a price.” They are demanding that the price for personally identifiable information be borne by the business, whether that means paying for personal information in exchange for a service, guaranteeing the data will remain private, or ceasing and desisting. But the lawyer says it is still early days for knowing how such transactions will play out.
“The reason why it is such an exciting time is that people really have not decided what they will put up with, and what they like and don’t like about personally identifiable information,” he said.
His bet? Just as technology got us into this quandary, technology will quickly point us the way out. It will likely do so in the form of biometric scans or ways to identify people other than using a Social Security number, an address or your mother’s maiden name, much of which is already widely available on the Internet.
Building more intelligence and efficiency into the network, however, has relevance to more than energy policy. As a working group of information security professionals determined over the course of the summer, there are significant smart grid privacy concerns to consider.
These considerations can be neatly summarized in the following excerpt from the NIST report: “The major benefit provided by the Smart Grid, i.e. the ability to get richer data to and from customer meters and other electric devices, is also its Achilles’ heel from a privacy viewpoint. Privacy advocates have raised serious concerns about the type and amount of billing and usage information flowing through the various entities of the Smart Grid … that could provide a detailed time-line of activities occurring inside the home.”
As privacy expert Rebecca Herold explains on her blog, smart grid privacy needs to be considered as utilities move to a next-generation infrastructure. Those implications were concisely listed by Herold as follows:
Sarah Cortes, a contributor for SearchCompliance.com, was the project manager for the Privacy Sub-group of the NIST’s Cyber Security Coordination Task Group.
Key points in the current release of the smart grid privacy document include the following issues, according to Cortes:
The body of the privacy group’s work may be found in this draft: NISTIR 7628 Smart Grid Cyber Security Strategy and Requirements (PDF).
Social networking and distributed collaboration sped up report writing for infosec team
One aspect of the report’s generation is worth recognizing: the role that the various collaborative technologies and social networking platforms played in gathering, synthesizing and producing the final deliverable for NIST. As Cortes explained in an email, preparing the current release of the Smart Grid privacy document included the following considerations:
According to Christophe Veltsos, a Midwestern-based information security professional who participated in the NIST CSCTG, the team used the suite of collaborative technologies common to many enterprises in late 2009.
“Gal Shpantzer and I used Google Docs to do live edits, both of us working at the same time,” said Veltsos. “We used either a live phone line or GChat to help facilitate the conversation.” The team members, including Herold, also used email, free conference-calling websites and tweets to send quick bursts of info/updates to each other.
Cortes also said NIST involved Twitter users from the start.
UPDATE: Christophe Veltsos wrote to correct the record on the central role that DC-based information security consultant Gal Shpantzer played in organizing the CSCTG. Veltsos points out that “while Sarah was the project manager, Gal was the catalyst and is considered by NIST to be the team leader of the privacy group.”
“When forming the group, NIST staff turned to the industry professionals they most respected across the U.S.: members of Twitter’s online information technology privacy, compliance and security community,” she explained. “One by one, Gal recruited respected members of the IT professional community, met with prospective members in person at times, and sought out suggestions for additional members. All prospective members could quickly and easily be thoroughly checked out as far as qualifications, accomplishments, and references, all informally through common Twitter features. The breadth and depth of advisory group members was substantial compared to similar panels formed with more traditional methods taking far longer.”
According to Cortes, “Twitter has become the medium of choice for networking IT professionals for a few reasons, among them:
If you have thoughts and comments about either smart grid privacy or the utility of social networking for collaboration between compliance and security professionals, please leave them in the comments. Or, if you like, @reply on Twitter. You’ll find SearchCompliance.com there under @ITcompliance, as well as this author as @digiphile.
Specifically, Anthony stated, “We know right now that there’s no widespread technology for encrypting mobile devices, but we know it’s there for laptops.”
Given that the regulation’s language includes a requirement for encryption where “technically feasible,” the issue demanded clarification. I contacted Secretariat CIO Gerry Young, who was involved in drafting the original regulation. He offered the following guidance on mobile encryption:
“This just belies unfamiliarity with the current state of encryption. Even a cursory scan will show that technologies like Snapcell, Navastream, AlertBoot, SecurStar PhoneCrypt, Endoacustica and Babylon nG have carried cell phone encryption to fairly sophisticated stages.
“Encryption for cellular phones has evolved beyond even enterprise-class smartphones, and you are beginning to see robust offerings for 3G phones available at attractive price points.
“European companies like Navastream (Germany) are making inroads in U.S. markets to fill a clear void. This will help to drive competition, and push price points lower for the consumer.
“I would think that once there are free, open source encryption alternatives — along with a plethora of low-cost encryption vendors in the cellular market — that we would be ready to mandate cell phone encryption in the near future.”
In other words, encrypting mobile devices and smartphones remains a best practice, particularly where resident PII is present, but is not mandated for 201 CMR 17.00 compliance — yet.
In an interview with SearchCompliance.com, Undersecretary Barbara Anthony stated that “consumer protections have not been weakened in this amendment. Monitoring, reviewing the scope of security measures – and encryption is still required if you are going to transmit resident PII over public networks. What we’ve tried to do here is to not impose additional burdens which weren’t involved in the consumer protections.”
With the permission of the OCABR, we are posting the FAQ released by the office in full and unedited below. We will post the rest of Anthony’s comments tomorrow, along with further analysis of the changes referenced below.
What are the differences between this version of 201 CMR 17.00 and the version issued in February of 2009?
There are some important differences in the two versions.
First, the most recent regulation issued in August of 2009 makes clear that the rule adopts a risk-based approach to information security, consistent with both the enabling legislation and applicable federal law, especially the FTC’s Safeguards Rule. A risk-based approach is one that directs a business to establish a written security program that takes into account the particular business’ size, scope of business, amount of resources, nature and quantity of data collected or stored, and the need for security. It differs from an approach that mandates every component of a program and requires its adoption regardless of size and the nature of the business and the amount of information that requires security. This clarification of the risk-based approach is especially important to those small businesses that do not handle or store large amounts of personal information.
Second, a number of specific provisions required to be included in a business’s written information security program have been removed from the regulation and will be used as a form of guidance only.
Third, the encryption requirement has been tailored to be technology neutral and technical feasibility has been applied to all computer security requirements.
Fourth, the third party vendor requirements have been changed to be consistent with Federal law.
To whom does this regulation apply?
The regulation applies to those engaged in commerce. More specifically, the regulation applies to those who collect and retain personal information in connection with the provision of goods and services or for the purposes of employment.
The regulation does not apply, however, to natural persons who are not in commerce.
Does 201 CMR 17.00 apply to municipalities?
No. 201 CMR 17.01 specifically excludes from the definition of “person” any “agency, executive office, department, board, commission, bureau, division or authority of the Commonwealth, or any of its branches, or any political subdivision thereof.” Consequently, the regulation does not apply to municipalities.
Must my information security program be in writing?
Yes, your information security program must be in writing. The scope and complexity of the document will vary depending on your resources, and the type of personal information you are storing or maintaining. But, everyone who owns or licenses personal information must have a written plan detailing the measures adopted to safeguard such information.
What about the computer security requirements of 201 CMR 17.00?
All of the computer security provisions apply to a business if they are technically feasible. The standard of technical feasibility takes reasonableness into account. (See definition of “technically feasible” below.) The computer security provisions in 17.04 should be construed in accordance with the risk-based approach of the regulation.
Does the regulation require encryption of portable devices?
Yes. The regulation requires encryption of portable devices where it is reasonable and technically feasible. The definition of encryption has been amended to make it technology neutral so that as encryption technology evolves and new standards are developed, this regulation will not impede the adoption of such new technologies.
Do all portable devices have to be encrypted?
No. Only those portable devices that contain personal information of customers or employees and only where technically feasible. The “technical feasibility” language of the regulation is intended to recognize that at this period in the development of encryption technology, there is little, if any, generally accepted encryption technology for most portable devices, such as cell phones, BlackBerrys, netbooks, iPhones and similar devices. While it may not be possible to encrypt such portable devices, personal information should not be placed at risk in the use of such devices. There is, however, technology available to encrypt laptops.
Must I encrypt my backup tapes?
You must encrypt backup tapes on a prospective basis. However, if you are going to transport a backup tape from current storage, and it is technically feasible to encrypt (i.e. the tape allows it) then you must do so prior to the transfer. If it is not technically feasible, then you should consider the sensitivity of the information, the amount of personal information and the distance to be traveled and take appropriate steps to secure and safeguard the personal information. For example, if you are transporting a large volume of sensitive personal information, you may want to consider using an armored vehicle with an appropriate number of guards.
What does “technically feasible” mean?
“Technically feasible” means that if there is a reasonable means through technology to accomplish a required result, then that reasonable means must be used.
Must I encrypt my email if it contains personal information?
If it is not technically feasible to do so, then no. However, you should implement best practices by not sending unencrypted personal information in an email. There are alternative methods to communicate personal information other than through email, such as establishing a secure website that requires safeguards such as a username and password to conduct transactions involving personal information.
Are there any steps that I am required to take in selecting a third party to store and maintain personal information that I own or license?
You are responsible for the selection and retention of a third-party service provider who is capable of properly safeguarding personal information. The third party service provider provision in 201 CMR 17.00 is modeled after the third party vendor provision in the FTC’s Safeguards Rule.
I have a small business with ten employees. Besides my employee data, I do not store any other personal information. What are my obligations?
The regulation adopts a risk-based approach to information security. A risk-based approach is one that is designed to be flexible while directing businesses to establish a written security program that takes into account the particular business’s size, scope of business, amount of resources and the need for security. For example, if you only have employee data with a small number of employees, you should lock your files in a storage cabinet and lock the door to that room. You should permit access to only those who require it for official duties. Conversely, if you have both employee and customer data containing personal information, then your security approach would be more stringent. If you have a large volume of customer data containing personal information, then your approach would be even more stringent.
Except for swiping credit cards, I do not retain or store any of the personal information of my customers. What is my obligation with respect to 201 CMR 17.00?
If you use swipe technology only, and you do not have actual custody or control over the personal information, then you would not own or license personal information with respect to that data, as long as you batch out such data in accordance with the Payment Card Industry (PCI) standards. However, if you have employees, see the previous question.
Does 201 CMR 17.00 set a maximum period of time in which I can hold onto/retain documents containing personal information?
No. That is a business decision you must make. However, as a good business practice, you should limit the amount of personal information collected to that reasonably necessary to accomplish the legitimate purpose for which it is collected and limit the time such information is retained to that reasonably necessary to accomplish such purpose. You should also limit access to those persons who are reasonably required to know such information.
Do I have to do an inventory of all my paper and electronic records?
No, you do not have to inventory your records. However, you should perform a risk assessment and identify which of your records contain personal information so that you can handle and protect that information.
How much employee training do I need to do?
There is no basic standard here. You will need to do enough training to ensure that the employees who will have access to personal information know what their obligations are regarding the protection of that information, as set forth in the regulation.
What is a financial account?
A financial account is an account to which, if access is gained by an unauthorized person, an increase of financial burden, or a misappropriation of monies, credit or other assets could result. Examples of a financial account are: checking account, savings account, mutual fund account, annuity account, any kind of investment account, credit account or debit account.
Does an insurance policy number qualify as a financial account number?
An insurance policy number qualifies as a financial account number if it grants access to a person’s finances, or results in an increase of financial burden, or a misappropriation of monies, credit or other assets.
I am an attorney. Do communications with clients already covered by the attorney-client privilege immunize me from complying with 201 CMR 17.00?
If you own or license personal information, you must comply with 201 CMR 17.00 regardless of privileged or confidential communications. You must take steps outlined in 201 CMR 17.00 to protect the personal information taking into account your size, scope, resources, and need for security.
I already comply with HIPAA. Must I comply with 201 CMR 17.00 as well?
Yes. If you own or license personal information about a resident of the Commonwealth, you must comply with 201 CMR 17.00, even if you already comply with HIPAA.
What is the extent of my “monitoring” obligation?
The level of monitoring necessary to ensure your information security program is providing protection from unauthorized access to, or use of, personal information, and effectively limiting risks will depend largely on the nature of your business, your business practices, and the amount of personal information you own or license. It will also depend on the form in which the information is kept and stored. Obviously, information stored as a paper record will demand different monitoring techniques from those applicable to electronically stored records. In the end, the monitoring that you put in place must be such that it is reasonably likely to reveal unauthorized access or use.
Is everyone’s level of compliance going to be judged by the same standard?
Both the statute and the regulations specify that security programs should take into account the size and scope of your business, the resources that you have available to you, the amount of data you store, and the need for confidentiality. This will be judged on a case-by-case basis.
This FAQ was posted with the direct permission of OCABR. For more information on the regulation, visit Mass.gov/consumer.