Last month, Sens. Richard Burr and Dianne Feinstein from the Senate Select Committee on Intelligence unveiled a draft of the Compliance with Court Orders Act of 2016 that would require all technology companies — from mobile device manufacturers to application makers — to comply with court orders granting federal officials access to encrypted information. “No one is above the law,” the draft states, adding that tech companies should be able to protect user privacy with strong security while still complying with these legal requirements.
The Compliance with Court Orders Act is the latest development in the continuing battle to protect personal privacy while at the same time maintaining national security. It further brings to light how maintaining that balance presents a particular challenge as companies strive to meet their customers’ expectations regarding user experience and privacy.
One way tech companies try to figure out where this balance between security and privacy lies is through regulations and court rulings. But relying only on laws is problematic, according to John Pescatore, director at the nonprofit cybersecurity training provider SANS Institute.
“Regulations just specify some bare minimum; they don’t define security for anything,” he said at a CompTIA IT professional webinar last month. He added that not only is there no global definition for attaining this security-privacy balance because privacy laws vary by region or by country, but also that regulations and legal precedents can change over time.
Instead, Pescatore advises companies start with three basic principles that can be combined and implemented in various ways:
- Confidentiality, or making sure only the right people have access to information
- Integrity, or ensuring the accuracy of the information and that changes to data are tracked
- Availability, or making sure the information in your systems is accessible when it is needed.
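These three principles often map directly onto code. As a minimal sketch (the key, user list and function names here are hypothetical, not from Pescatore's talk), confidentiality can become an authorization check and integrity a tamper-detection tag, while availability is largely an operational property (replicas, backups) rather than a per-request check:

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"      # assumption: kept out of the data store
AUTHORIZED_READERS = {"alice", "bob"}   # confidentiality: who may read

def seal(record: bytes) -> bytes:
    """Integrity: attach an HMAC tag so any change to the data is detectable."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).digest()

def read_record(user: str, record: bytes, tag: bytes) -> bytes:
    # Confidentiality: only the right people get access.
    if user not in AUTHORIZED_READERS:
        raise PermissionError(f"{user} is not authorized")
    # Integrity: reject the record if it was altered after sealing.
    if not hmac.compare_digest(tag, seal(record)):
        raise ValueError("record failed integrity check")
    # Availability is handled operationally (redundancy, backups), not shown here.
    return record
```

The point of the sketch is Pescatore's: the three principles combine, so a system that only checks authorization but never integrity (or vice versa) satisfies one ingredient while silently failing another.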
These three foundational ingredients should add up to help meet current regulatory requirements, but more importantly satisfy consumer expectations.
“No law came along and told Apple they had to protect things better than Microsoft did. … Those are not laws driving things; those are actually people’s demands for increased security and privacy,” Pescatore said.
Encryption can be an effective tool to enable this increased security and privacy if the aforementioned basic principles have been laid out properly as the foundation, Pescatore said. In the case of passwords, for instance, encryption is useless if users employ easy-to-guess or reused passwords, or are susceptible to phishing attacks.
“The vast majority of attacks would have been foiled if we used … something as simple as a text message to your phone in addition to a password,” Pescatore said. “Once we’ve gotten to the point where we can at least protect the user’s authentication, that’s where encryption becomes very powerful” by allowing companies to be more flexible about where they store their data.
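The text message Pescatore mentions is one form of a one-time second factor. A common alternative that avoids SMS interception is a time-based one-time password (TOTP), standardized in RFC 6238 and used by most authenticator apps; a minimal sketch using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238: HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)
```

A verifying server compares the user-submitted code against `totp(shared_secret)`, typically also accepting the codes for one step before and after to tolerate clock drift. This is the sense in which protecting authentication first makes encryption "very powerful": the stolen password alone no longer unlocks the encrypted data.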
But despite encryption’s security strengths, such as permitting access only when it is explicitly allowed, it’s very easy to implement the tool “badly,” Pescatore added. If keys aren’t managed properly, for example, encryption could prevent or hamper the right people from accessing their own data.
Further complicating matters are countries with laws with language similar to the Compliance with Court Orders draft that include the government in their definition of the “right person” to access data.
Companies such as Apple and WhatsApp, which built their business models on giving consumers sole control of their own encrypted communications, are thus left in a legal quandary, technology advocates recently told Recode.
Law enforcement has also suggested building backdoors into encrypted systems and data as the answer to this issue, but Pescatore disagrees: He equated backdoors to securely locking a house and then leaving the key under the welcome mat.
“In the digital world, sooner or later, someone is going to find that key under that welcome mat, no matter how well that backdoor was hidden,” he said.
In part two of this blog post, Pescatore discusses how companies can wade through the various security standards to get guidance on developing security policies.
Information security has become a vital business driver as the huge data volumes generated by modern companies contain a treasure trove of intellectual property and PII that is enticing to hackers. A variety of security certifications and standards have been developed to help companies navigate the increasingly complicated data security landscape, as well as protect both business and customer data. One such standard is ISO 27001, developed by the International Organization for Standardization to help businesses establish, maintain and improve an information security management system. In this guest post, Kyle Anixter, PMO manager of IT services at Curvature, an IT infrastructure and services provider headquartered in Santa Barbara, Calif., outlines the business benefits of ISO 27001 certification.
The business benefits of ISO 27001 certification
by Kyle Anixter
News regarding the unwanted release of corporate and/or consumer information makes headlines nearly every week. Whether it’s Anthem, AOL or Adobe, the biggest names in corporate America have watched their reputations be sullied by the continuous onslaught of data breaches. According to a data breach report released earlier this year by the Identity Theft Resource Center, the business sector topped the ITRC 2015 Breach List with nearly 40% of the breaches publicly reported last year, compared with about 32% in 2014.
The impact of this spike in breaches only seems to be intensifying, making now the time to ensure your company takes a systematic, proactive and certified approach to managing the security of its sensitive information. For that reason, investing in highly structured and validated security certifications should be a top business priority.
Achieving the ISO 27001 certification, for example, is a solid strategy to ensure proper control over critical information assets. First published in October 2005 and updated in 2013, this standard pertains to internal employee records, financial information and intellectual property, as well as external data from customers and vendors. The ISO 27001 certification also makes sure information shared by and with third parties, such as customers, partners and vendors, is protected.
The ISO 27001 certification is particularly useful because it helps companies develop a stringent information security management system. Most important, it demonstrates to employees, customers and business partners that when it comes to security, your company is prepared.
Here are the five most compelling benefits to investing in ISO 27001 security certification:
Manage risk: ISO 27001 focuses on proactive risk management, which is crucial for building a solid, sustainable security foundation. All companies realize they must invest in security, but having the proper risk management procedures in place goes a long way toward maximizing investment in the areas where it can deliver the biggest benefits while avoiding wasteful spending.
Establish a security management framework: ISO 27001 provides a proven framework and all the general requirements for establishing information security best practices (for example, asset management, access control, cryptography and network security). The framework imposes structure across the entire department, including roles, responsibilities, leadership and decision making. As a result, operations are more efficient, organized and successful. Improving operations has become an increasing priority for most companies, especially given the ongoing desire to keep IT operations lean and functioning optimally amid constant change and greater demands. With ISO 27001, there is proof that systems and procedures are in place to enable the company to be better prepared to meet the known and unknown security challenges ahead.
Concentrate on compliance: The laws, rules and regulations at all levels of government are continually changing, but this is no excuse for IT organizations to fall out of compliance with any of the legal requirements that apply to their operations. Aside from generating front-page news, falling out of compliance can lead to financial penalties, loss of trust and a tarnished reputation. In addition to keeping their own ship on course, companies must remain vigilant regarding all information security-related requirements that originate in customer and supplier contracts and agreements.
Protect suppliers and customers: It’s sad but true: In a troubling number of instances, a company’s biggest security vulnerability comes from its customers and suppliers. The ISO 27001 certification delivers a well-defined structure by which both are made aware of their information security roles and responsibilities. With continual monitoring and measuring, everyone’s data — and reputations — are protected.
Improve customer confidence: It is common knowledge that solution and service providers often introduce and deliver products before fully realized security procedures have been put in place. Having ISO 27001 certification lets your customers know their sensitive and confidential data is protected within your company. Another key benefit is that it will set you apart from competitors. When working with large companies, certifications such as ISO 27001 are often necessary for inclusion on the list of approved partners.
In today’s fast-moving and evolving world of professional and managed services, ISO 27001 is now considered table stakes. Though not mandated by law, this certification ensures the holder is taking advantage of best practices and adheres to a set of proven procedures. Adding ISO 27001 to your corporate resume assures customers and partners that you have the right controls in place and that data is not vulnerable inside or outside your corporate walls. As a result, you can proceed with a high level of confidence that all information and systems are safe and secure.
Details surrounding the updated Payment Card Industry Data Security Standard show that version 3.2 includes new multifactor authentication and encryption requirements. Also in recent GRC news: SEC enforcement actions — or the lack of them — are raising concerns about the agency’s ability to regulate Wall Street, and IBM rolled out security and compliance standards for blockchain technology use.
Multifactor authentication one of the biggest changes in PCI DSS 3.2
The Payment Card Industry Data Security Standard (PCI DSS) version 3.2 was published on April 28, 2016, and includes stronger encryption and multifactor authentication requirements. The new version also provides criteria for PCI DSS compliance programs, as well as specific dates for banks and merchants to implement the changes.
PCI Security Council CTO Troy Leach said the requirement to implement multifactor authentication for any type of administrative access to payment card data and systems is the biggest change in PCI DSS 3.2. Leach added that a password alone is not enough to verify a user’s identity and grant access to sensitive data, even within a company’s own network.
To prepare for this change, Leach recommends that organizations review how they manage data access authentication and examine administrator roles to find the areas that will most likely be affected by the new requirement.
Goldman Sachs cases call SEC’s watchdog role into question
Recent articles by media outlets The New Yorker and Fortune have called into question whether the Securities and Exchange Commission (SEC) is able to adequately regulate Wall Street because it failed to call one firm to task.
The New Yorker highlighted the SEC’s decision not to pursue charges against Goldman Sachs senior executives for their role in a complex deal known as Abacus that the SEC believed involved securities violations by the Wall Street firm. Fabrice Tourre, a low-ranking trader at Goldman, was the only person held liable for any wrongdoing. Although the SEC considers the 2013 case a success, documents provided to The New Yorker by a lawyer who was assigned to the case showed that “SEC officials considered and rejected a much broader case against Goldman.”
Fortune, meanwhile, focused on a Goldman mortgage bond called Fremont Home Loan Trust 2006-e that included more than 5,000 residential subprime mortgages. The SEC had evidence that many of the loans were deficient, including 10% that were classified as EV3s, or “unacceptable risks.” Yet Goldman still waived these deficient loans into the mortgage bond deal, which eventually cost investors more than $500 million when the loans went into default. While the SEC sent Goldman a notice in February 2012 saying that the regulator was planning on pressing civil fraud charges based on the mortgage bonds, Goldman said in a securities filing in August of that year that the SEC had dropped the case. In April 2016, the U.S. Department of Justice reached a nearly $5.1 billion settlement with Goldman for defrauding investors using mortgage-backed securities.
As GDPR grace period approaches, questions abound
The two-year grace period for companies to prepare for the General Data Protection Regulation, the European Union’s overhaul of its data protection laws, is expected to begin in May. The GDPR’s provisions, however, leave room for extensive variation among the 28 EU member countries and create a lot of ambiguity, legal experts told The Wall Street Journal.
These unclear mechanisms include how companies can protect their intellectual property under the regulation and how data use consent is granted.
Companies will likely use the 24-month grace period not only to prepare for the GDPR taking effect, but also to navigate these ambiguities, according to the legal experts. EU officials also said that despite having a privacy working group and a privacy board providing guidance to companies regarding the GDPR mandates, the mandates must still be translated in a manner that accurately conveys the law’s concepts in the 24 working languages used throughout the EU.
IBM releases new blockchain security standards
IBM last week announced a framework to help companies across several industries securely run the blockchain technology that underpins Bitcoin and other digital currencies. Jerry Cuomo, vice president of blockchain for IBM, told Forbes that the standards are aimed at helping companies in industries such as financial services, healthcare and government navigate data security regulations. Under the new standards, companies using IBM’s cloud-based blockchain technology will be able to create comprehensive log data to use for audits and compliance. IBM’s framework could also help companies comply with data privacy regulations such as the Gramm-Leach-Bliley Act, HIPAA and the EU Data Protection Directive, Cuomo added.
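IBM’s framework itself isn’t detailed here, but the property that makes blockchain-style logs attractive for audits and compliance is tamper evidence: each log entry commits to the hash of the previous entry, so any retroactive edit breaks the chain. A minimal sketch of that idea (an illustration only, not IBM’s implementation):

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry includes the hash of the previous one,
    making retroactive edits detectable -- the core idea behind
    blockchain-style audit trails."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, event):
        # Each entry commits to the previous entry's hash.
        body = {"event": event, "prev": self._last_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self.entries.append(body)
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute every hash; any edited or reordered entry breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            body = {"event": e["event"], "prev": e["prev"]}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

An auditor who holds only the latest hash can later confirm that no earlier entry was quietly altered, which is what makes such logs useful evidence for compliance reviews.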
The lack of comprehensive federal privacy legislation leaves not only consumers vulnerable, but also companies frustrated. Many consumers lack information about the many ways their personal data is used, what parties use it, and the ways it could be potentially misused; meanwhile, companies seeking guidance to protect this data and be adequately transparent with their customers are left navigating a patchwork of privacy rules without a clear direction.
At present, companies must rely on various privacy regulations that target only specific industries and types of data (e.g., HIPAA, FERPA), and many must wade through a number of constantly evolving state laws. Moreover, global companies must keep international privacy regulations in mind.
This patchwork of rules and lack of broad legislation such as a federal data breach protection law are a problem, said Sarah Holland, senior analyst of public policy and government relations at Google. And she is not just talking about companies like Google, a global company that has to navigate local, state, federal and global privacy laws as it builds products and services. Rather, she is referring to the risks facing the technology industry as a whole, and to the startup economy in particular.
“If you were only three people, and you were trying to get your company off the ground, how do you deal with that patchwork? How do you understand that and how are you incentivized to comply?” Holland said at a recent privacy forum hosted by the Massachusetts Attorney General’s Office at MIT.
Holland believes that Google is able to manage and keep up with constantly changing privacy regulations because it has created a strong culture of privacy and security across the company. This culture extends to the relationship Google has with its partners and encompasses the entire lifecycle of its products.
“You can have that [culture] before a product has been launched, and then [something like] the right to be forgotten comes down after a product has been launched, and you have to deal with that. We have colleagues around the world that help us deal with that,” she said.
That culture of privacy, Holland added, does not just incentivize compliance, but also earns users’ trust by putting them in control of their data. She advised following Google’s four-pronged approach to building this type of culture:
– Bake security in from the beginning of a product’s development, and also throughout its lifecycle.
– Strive to be upfront and fair about how the company uses customers’ data, and use clear language when informing customers of the ways their data could be used.
– Give users control of their data. Google’s users can manage their data on myaccount.google.com, which functions as a one-stop shop for them to control and secure their data. “It helps you do everything from managing your advertising settings to opting out of interest-based marketing. You can also control your watch history and your location data,” Holland said.
– Demonstrate to users why it’s valuable for them to allow you to use their data. At Google, Holland believes that “our users trust us with their data … and in turn, we use that to power products and services that benefit them.” These services include ones that will likely sound familiar, such as turn-by-turn directions, instant translation applications, and the ability to find flight information on Google Now. But they also include other initiatives that are more under-the-radar, such as Project Sunroof, which analyzes satellite data to encourage solar-powered energy use.
The FCC’s newly proposed privacy protection rules require broadband and wireless providers to obtain consumers’ consent before collecting and sharing their data, but some are concerned this approach is detrimental to innovation. Also in recent GRC news: The U.S. Department of Justice announced a program to incentivize self-disclosure of foreign bribery violations, and the U.S. Securities and Exchange Commission called for hundreds more employees.
WSJ: FCC’s proposed consumer privacy rules could stifle innovation
Late last month, the Federal Communications Commission (FCC) proposed a new set of privacy regulations outlining how Internet service providers (ISPs) collect, use and share consumer data. With these rules, the FCC will take over the majority of the consumer protection enforcement formerly the domain of the Federal Trade Commission — a change that is part of the new net neutrality rules passed last year that reclassified ISPs as common carriers.
Under the rules that specifically target broadband companies, in the majority of situations consumers would need to “opt in,” or give consent to, a cable company seeking to sell their data to third parties. The proposed rules would also require wireless and broadband companies to communicate with consumers about how that data is being collected, used and shared.
But according to a Wall Street Journal opinion column, this approach will be detrimental to innovation because it restricts almost all uses of consumer preferences instead of punishing particular cases of unfair practices. Furthermore, the new rules exclude Google and Amazon, two companies whose business models profit greatly from data collection, according to the WSJ. These tech giants were deemed “edge providers” that are too big to regulate. Instead, they will continue to be monitored by the FTC.
Companies approach U.S. FCPA discount program with caution
The U.S. Justice Department rolled out a one-year pilot program that provides companies who self-disclose foreign corruption violations a discount of up to 50% on the associated sanctions. Assistant Attorney General Leslie Caldwell said that the program aims to encourage companies to self-report Foreign Corrupt Practices Act (FCPA) violations and build up the DOJ’s ability to deliver enforcement actions against individual offenders.
Reductions in sanctions will also be offered to companies who report all known facts and remediate “bad actions” that are outlined by the program.
While the program has garnered praise from experts in the space, they told WSJ that it’s also being viewed cautiously by companies. Companies are reluctant to participate in the pilot program because it does not set a minimum discount, and the amount of the reduction is entirely at the discretion of the DOJ. The program also does not lay out what levels of lenient treatment from the DOJ correspond with specific types of cooperation. This lack of guidance could offset any incentive to report FCPA violations, said Eric Bruce of the Kobre & Kim law firm.
“The steps articulated by DOJ in order to receive ‘full cooperation credit’ are still fairly subjective and subject to varying interpretations,” he told WSJ.
U.S. SEC Chair calls for additional funding and 250 more staffers
Mary Jo White, chair of the U.S. Securities and Exchange Commission, said the regulatory agency needs additional funding to hire 250 additional staffers to strengthen its oversight of today’s marketplaces and better protect investors. White said the additional staffers would also boost the SEC’s IT infrastructure and improve the cybersecurity and risk analyses of areas such as exchange-traded funds.
“Additional funding is imperative if we are to continue the agency’s progress in fulfilling its responsibilities over our increasingly fast, complex, and growing markets,” White said during a budget hearing before the Financial Security Oversight Committee at the Treasury Department on April 12.
Despite the prevalence of consumer data collection and analysis today, there remains a glaring lack of clear policies and legislation around the protection of that data, according to privacy experts at a recent public forum on the topic at MIT in Boston. This gap poses great potential risks not just to consumers and their privacy and safety, but also to industries like the high-tech, financial and healthcare sectors that rely on consumer trust to thrive and innovate, experts said at the forum hosted by the Office of the Massachusetts Attorney General.
Panelist and Twine Health CEO John Moore said that even in the heavily regulated healthcare industry, where legislation such as HIPAA and HITECH places restrictions on how organizations use patient data and heavily favors patients’ right to access their data, there are still considerable risks to customers.
In order to retain patients in their network and generate repeat office visits, the powers that be at many of these healthcare organizations often use these regulations as “an excuse to withhold data from patients even though it was designed to make it accessible to them,” he said.
If the system continues to restrict this access, it could prove detrimental when, in the near future, the majority of patient data is stored on mobile medical devices (e.g., blood glucose monitors) that can produce, in real time, the results currently produced by hospital or office labs.
“Patients haven’t seen and experienced yet where sharing data provides value to them — as soon as you provide them an experience that helps them solve problems in their health and in their life, they exhibit drastically different behaviors,” Moore said.
Twine Health attempts to buck this trend by enabling patients to manage chronic disease by keeping track of personalized goals via a synchronized cloud-based app.
A patient with high blood pressure, for example, has an atypical healthcare experience when using the Twine Health app. Rather than having the condition treated like an acute disease, with a physician simply prescribing medication, the patient encounters a health coach who acts as a bridge to the doctor and collaborates with the patient to assess motivations and life barriers. The coaches also help create actionable goals, such as bringing blood pressure down by 10 points.
“All of this shows up in a patient’s data hub. … When a patient leaves the office, they start communicating with that health coach on a daily basis on medication, diet, exercise, etc.,” Moore said.
Patients using Twine Health see a dramatic decrease in their blood pressure compared with the average patient, Moore said: 70% reach their target blood pressure within one month, while typically only 30% of patients reach their goal within a year.
This approach, Moore said, “instills a great sense of trust and affects customers’ behavior and what kind of data they want to share.”
The conference dived into cloud, IoT, network and mobile security, as well as supply chain risk management and tips for defending against nearly all types of cyberthreats. If there’s one big takeaway from the conference sessions, it’s the importance of partnerships — both internal and external — in helping keep a company secure and compliant in today’s threat-laden IT environment.
“Incident response is a shared activity”
In her keynote presentation, Dawn-Marie Hutchinson stressed the importance of partnerships in incident response, explaining that forming and nurturing key relationships before a breach occurs is the best form of incident response. She broke down her list of important partnerships into three categories:
*The technical team, including the CISO as well as the team’s assigned applications, database, network, analytics and evidence experts. This category also includes physical security, which can often factor into breach response.
*The business team, including top executives, the general counsel, media relations and customer relations. These people are critical in maintaining the company’s reputation following a data breach.
*The outside support team, including outside counsel, forensic providers and insurance providers.
As Hutchinson noted, establishing these relationships now keeps the organization ready for effective incident response when a data breach happens.
Data inventory team
Data inventory is a key component of any compliance initiative, according to Michael Corby, executive consultant at CGI. It helps companies stay within regulatory boundaries and avoid costly investigations into their data management. But a good data inventory project needs a solid team.
Finding the data in question means cooperating with the application development team that manages much of the data, as well as key application managers, said Corby. Analyzing and classifying that data requires a team that includes the project manager, compliance subject matter experts, database architects and business analysts. Compliance subject matter experts advise the rest of the team on data regulations protecting PII, such as HIPAA and PCI DSS.
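The analyze-and-classify step often starts with an automated first pass before the subject matter experts review the results. A hypothetical sketch (the patterns and function names are illustrative, not from Corby’s talk) that flags columns whose sampled values look like common PII formats:

```python
import re

# First-pass heuristics for likely PII; real classification
# still needs review by compliance subject matter experts.
PII_PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "card_number": re.compile(r"^\d{13,16}$"),
}

def classify_column(sample_values):
    """Return the PII categories that match every sampled value in a column."""
    hits = set()
    for label, pattern in PII_PATTERNS.items():
        if sample_values and all(pattern.match(v) for v in sample_values):
            hits.add(label)
    return hits
```

Columns flagged this way would then be routed to the database architects and compliance experts on the team, which is the kind of early IT partnership Corby argues keeps an inventory project from overwhelming any one group.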
One of the reasons why IT teams often struggle to properly complete data inventory projects is because they can get overwhelmed with the data and fail to reach out to their colleagues for help, according to Corby. Establishing those IT partnerships early on expedites the process and makes the task more manageable for all involved, he said.
First you need a team
Partnering with people is difficult when there’s no one to partner with, however. During the “Emerging Threats” panel, Jimmy Ray Purser, technical evangelist at Illumio, labeled a shortage of skills as one of the top emerging threats in IT. Whether understaffed or overworked, IT teams are feeling the effects of the skills shortage, he said.
With today’s advanced threats, companies need a solid team of skilled workers to combat them, he said. That means not just security professionals, he suggested, but other IT professionals who are familiar with the digital landscape. Purser believes companies should let IT teams invest more in finding the right skills to benefit the entire team.
Drawing the line between protecting consumers’ right to data privacy and giving the government access to that data to keep the public safe isn’t as simple as looking at legal cases that have dealt with this issue, various data privacy experts explained at this month’s Forum on Data Privacy hosted by the Massachusetts Attorney General’s Office at MIT.
One major obstacle to drawing this line is a lack of transparency. While there is much publicity regarding the ongoing debate between Apple and the FBI and whether the tech company should build a backdoor into the smartphone of one of the San Bernardino shooters, this is not an isolated case. There have been dozens of cases prior to San Bernardino in which the government has sought to compel private companies to turn consumer data over for public safety reasons, but these incidents have happened behind closed doors and outside the public spotlight, said panelist Carol Rose, executive director of the American Civil Liberties Union of Massachusetts.
This secrecy and lack of a public debate is a huge problem when it comes to strengthening existing laws and policies that address consumer data privacy, said Rose. She added that an open, democratic public debate is the only way to settle where the line should be drawn.
Rose warned that a lack of transparency, combined with the legal precedent the government has set by secretly recruiting private companies to create backdoors to access consumer data, is a recipe for losing consumer trust. This will start a dangerous domino effect for companies.
“When people don’t know, then they really lose trust, and they don’t become early adopters because they don’t think that the tech companies are on their side,” she said. This could result in consumers losing out on the benefits of these technologies and becoming more vulnerable to hackers.
The stance technology companies, lawmakers and other parties should be thinking about when discussing data privacy law reform is not the idea of “technology versus liberty” but rather “technology in the service of liberty,” said Rose, who launched the Technology for Liberty Project in 2013 to promote this point of view.
These are the three major areas privacy law reform needs to address, according to Rose:
- Access and control. This is the notion of giving users the right to access their information and correct it if it’s wrong. “We get those kinds of reports all the time, where the data is wrong and was put in incorrectly, and [users] have no way to access it,” said Rose.
- The “third-party doctrine.” This is the idea of questioning whether users or the third-party providers that hold their data are the rightful owners of that information, whether that third party is a bank, a doctor, a healthcare provider or an Internet service provider.
- Notice. Related to the third-party doctrine, Rose said that lawmakers also need to address, with special rules, consumers’ right to get notified if another party is going to use their data, particularly if that party is the government. This is where U.S. policy varies from other countries. In Europe, for example, data protection laws dictate that consumers technically own their data. “If [an EU] government needs your data, they have to tell you at least, and in some cases get your permission. In the U.S., that does not have to happen,” Rose said.
Most importantly, these policies should distinguish between the government and private sector, said Rose, warning about the dangers of the U.S. government going further down the road of recruiting private companies to do its data collection bidding in secret.
“Invariably, abuse will happen; invariably people will find out about it; [there will be] loss of consumer trust and damage to the high-tech community, and the potential promise of all these innovations could really be lost if we go down that road,” she said.
“Security has transcended from an IT issue to a boardroom issue.” This was how Microsoft corporate vice president and CISO Bret Arsenault opened his panel discussion at last month’s RSA Conference in San Francisco. He made it clear that security is no longer solely the responsibility of the IT department; it’s one that’s shared with business units, users and members of the supply chain.
Yet aside from a very few edge cases, the types of security compromises that have hit high-profile retail brands and other companies over the past two years aren’t really that different from those of the past, argued Arsenault. So why has the security conversation changed? He pointed to the convergence of three factors: the scale of compromises, laws that require the disclosure of breaches involving customer data, and the political motivation of malicious actors.
“We absolutely see [these factors] creating a different news cycle … and a different level of interest for the board,” Arsenault said.
These factors, combined with increased regulatory attention from agencies such as the Federal Trade Commission and the Securities and Exchange Commission, as well as increasing class-action lawsuits filed by consumers and shareholders, have caused today’s boards of directors to pay greater attention to cybersecurity questions.
“One of the big questions I see most boards asking — one of the big differentiators in the maturity of the security program — is, ‘Are you holding IT accountable or are you holding the whole company and the supply chain accountable? And how do you go do that?'” said Arsenault.
The board’s duty now includes assessing previously unimagined cybersecurity risks, but balancing this duty against its other, more pressing responsibilities is no small feat.
“Cybersecurity really isn’t the most important thing in the grand scheme of things — strategic risk, financial risk, operational risk and legal risk — until something goes wrong,” he said.
Arsenault advised boards to focus on the following cybersecurity issues when scoping potential risks:
Security strategy and budget review. It’s not the board’s job to produce a detailed, 40-page budget review, but it does need to make sure it’s aligned with management on the security budget, Arsenault said. A good board will ask one key question during a discussion on cybersecurity with the IT and security teams: Do you have everything you need? “The answer to that would be, ‘Yes, yes I do. Or, I do, but here are the things I see coming,’” he added.
Security leadership. “Most companies now list CISOs as a critical role relevant to the performance of the company,” said Arsenault. Questions the board should ask regarding senior leadership include who security teams report to, and whether the right person is employed for the job.
Incident response plans. Look at whether the company has adequate firewalls and intrusion detection tools in place, as well as how resources are allocated between detection and response tools.
Ongoing assessment. This doesn’t mean the board needs “a ton of KPIs,” said Arsenault, just enough to show them trends and developments in the security industry.
Internal education. Ask about how frequently employees are educated about cybersecurity risks and the steps they should take if the company is breached. “Enlightened boards will ask about this,” Arsenault said. “Users are in control, so what is the user effectiveness of that control?”
When the General Data Protection Regulation — a new EU-wide data protection framework that will replace the 1995 Data Protection Directive — was introduced by the European Union in December 2015, global companies such as Adobe cheered.
“We were hoping for one law that would finally govern all of Europe, instead of differing interpretations,” MeMe Rasmussen, chief privacy officer (CPO) for the software company, said at an RSA 2016 panel session.
The current text of the regulation, however, seems far from a one-stop shop: For starters, the 200-page document is made up of 50 different components. Moreover, data protection authorities (DPAs), or the local authorities of each EU country, have retained interpretive capabilities over each of these components.
“It was written by people who don’t run businesses. [Companies] have to look at it and figure out how we comply. … What they did was leave a lot open to resolution,” Rasmussen said.
GDPR aims to create stronger standards for data protection, but it won’t be finalized until later this year and won’t go into effect until 2018. This hasn’t stopped Adobe, along with Google and Microsoft (which were also represented on the panel), from already starting to consider the risks that could arise once the regulation is in full force.
Not only does the regulation’s sheer breadth mean challenges in deciphering its terms, but it could also create a greater number of compliance obligations that spell significant risk for organizations. Moderator Trevor Hughes, president and CEO of the International Association of Privacy Professionals, cited one sobering statistic: The maximum fining authority given to DPAs and regulators in Europe amounts to 4% of a company’s annual global turnover.
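To put that 4% cap in perspective, here is a back-of-the-envelope sketch of the maximum exposure it implies. The turnover figure below is invented purely for illustration; the final regulation’s fine structure may differ from this draft-era cap.

```python
# Sketch of the fining exposure cited at the RSA panel: under the draft
# GDPR, DPAs and regulators could fine up to 4% of a company's annual
# global turnover. The example turnover figure is hypothetical.

MAX_FINE_RATE = 0.04  # 4% cap cited by the panel moderator


def max_gdpr_fine(annual_global_turnover: float) -> float:
    """Upper bound of a fine under the draft regulation's 4% cap."""
    return annual_global_turnover * MAX_FINE_RATE


# A company with $10 billion in annual global turnover could face
# a fine of up to $400 million under this cap.
print(max_gdpr_fine(10_000_000_000))  # 400000000.0
```

Even at modest revenue levels, a percentage-of-turnover cap scales far beyond the fixed fines most companies had previously budgeted for, which is why the panelists called the statistic sobering.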
So how are organizations like Adobe, Google and Microsoft looking at GDPR and preparing their responses?
Rasmussen said Adobe is still waiting for the dust to settle — which she doesn’t expect to happen soon. “We kind of ended up with a mixed bag and don’t yet know a lot of what we’re going to have to do. … We’re still waiting for guidance on what certain terms mean.”
Microsoft CPO Brendon Lynch said his company has been developing and investing in its privacy management program for several years, and views GDPR as an “incremental step” instead of a huge shift.
“The reality is, yes, there are more obligations and details to work out, but ultimately it feels like [Microsoft has] taken into account all the new requirements,” he said. “We’ll do a gap analysis against what we currently have. I’m sure there will be some places where we have to do some more.”
Lynch added that while he doesn’t intend to play down GDPR, it doesn’t necessarily change Microsoft’s security posture. It does, however, raise the stakes of failing to comply.
Microsoft also expects greater assurances from GDPR. “How can we get more assurance that all the controls we have in place are effective … that everything is working as [it] should?” Lynch said.
Keith Enright, legal director of privacy at Google, said that his company is fully aware of GDPR’s diverse requirements that it will have to decipher for users.
“I don’t think we ever really deluded ourselves, given our experiences in Europe, that we would have absolute uniformity of law,” he said.
Enright added that Google and other global tech industry leaders must actively engage with DPAs and regulators in Europe around data privacy.
“[We need to] negotiate and draw out rationality and application as much as we can so that our interests [are] aligned with the DPAs’. We want to protect user privacy to the greatest extent possible, and GDPR gives us the framework,” he said.
One area in which Google will have more work to do under GDPR, said Enright, is developing its privacy programs so that they not only address protecting users’ data privacy but also strengthen the transparency of the company’s compliance efforts.