Big data analytics have proven extremely beneficial to both companies and consumers across a wide range of industries, producing valuable insight in fields like healthcare, education and transportation. There is also, however, the potential for this data to be used in a way that harms consumers, according to a Federal Trade Commission report published earlier this month. As I covered in Searchlight last week, the FTC report had a clear message: The federal agency will not hold back from investigating unethical big data analytics processes and bringing enforcement actions against businesses that employ them.
But which big data processes exactly does the FTC report say are potentially problematic, and which does it consider acceptable? Highlights include the following:
Problematic: Differentiating products based on population subsets. Any big data analytics practice that limits provision of products or services to certain population subsets based on statistical input is considered unfair by the FTC, “especially if any of the data can be a proxy for poor or minority or underserved populations,” said Brenda Sharton, head of law firm Goodwin Procter LLP’s business litigation division. She offered the following industry examples of such practices: withholding access to credit, housing or employment due to background checks or screenings; and offering different rates or prices on insurance or product delivery based on an individual’s address. The FTC considers collecting several types of data problematic, including zip codes, social media usage/membership, and shopping habits. Sharton also warned against differentiating products based on characteristics such as race, gender and marital status.
Problematic: Making false promises to customers about how data is analyzed. Another big data practice companies should be wary of is making promises to consumers about how their data will be used and whether it will be entered into predictive analytics platforms. “If you’re going to be using [the data] for any statistical analysis, and it’s either you or your third-party vendor, you want to make sure you don’t promise the consumer that you ‘won’t do that,’ or that you’re informing them that you will,” said Sharton, who also serves as co-chair of Goodwin Procter’s privacy and cybersecurity group.
Acceptable: Targeted advertising. In most cases, this practice is OK with the FTC, said Sharton. For example, “A company’s advertisement to a particular community for credit offers that are open for all to apply is unlikely to violate [equal opportunity credit laws],” she said.
Problematic: Failing to reasonably secure consumers’ data. The FTC acknowledges that in the era of big data, companies often collect more data than they immediately need. This means their information security needs to be proportionally robust — leading to another practice the report’s authors say is unacceptable: failure to implement security measures that are sophisticated enough to secure data “commensurate with the amount and sensitivity of the data at issue, the size and complexity of the company’s operations, and the cost of available security measures.” For example, organizations that maintain sensitive data such as Social Security numbers or customers’ medical data must have stronger security safeguards than those that only maintain consumers’ names, they added.
The FTC recommends that companies look at three factors to determine whether their information security is strong enough in proportion to the types of data they manage:
- The amount and sensitivity of data
- The size and complexity of the company’s operations (“What’s right for a massive Fortune 100 company will be different than what’s right for a … small company,” said Sharton.)
- What security measures are available and how much they cost
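As a rough illustration (not legal guidance), the three factors can be treated as inputs to a proportionality check. Every field name, control and threshold in this sketch is an invented assumption for illustration; the FTC report prescribes none of them:

```python
# Toy sketch: encode the FTC's three proportionality factors as a checklist.
# All controls and profile fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SecurityAssessment:
    data_sensitivity: str   # "low" (names only) vs. "high" (SSNs, medical data)
    company_size: str       # "small" vs. "large"
    measures_in_place: set  # security controls currently implemented

def required_controls(assessment: SecurityAssessment) -> set:
    """Return an illustrative baseline of controls for this profile."""
    controls = {"encryption_at_rest", "access_logging"}
    if assessment.data_sensitivity == "high":
        controls |= {"encryption_in_transit", "multi_factor_auth"}
    if assessment.company_size == "large":
        controls |= {"dedicated_security_team"}
    return controls

def gaps(assessment: SecurityAssessment) -> set:
    """Controls the profile calls for that are not yet in place."""
    return required_controls(assessment) - assessment.measures_in_place

profile = SecurityAssessment(
    data_sensitivity="high",
    company_size="small",
    measures_in_place={"encryption_at_rest", "access_logging"},
)
print(sorted(gaps(profile)))  # ['encryption_in_transit', 'multi_factor_auth']
```

The point of the sketch is only that the required baseline scales with sensitivity and company size, which is the proportionality idea behind the report's three factors.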
The bottom line for companies employing big data analytics? Full regulatory compliance will likely require some legal advice. “They should ensure they have the counsel to determine whether they are complying with things like FCRA, ECOA and other laws,” Sharton said.
What’s your take on the downside of big data analytics? Let us know at firstname.lastname@example.org.
Recent data analysis found that despite repeated HIPAA violations, U.S. healthcare providers such as CVS and the VA face few punitive actions. Also in recent GRC headlines: Companies have two more years to meet the TLS requirement under PCI DSS, and experts foresee big changes ahead for the FCPA’s corruption enforcement practices.
Few penalties imposed on frequent HIPAA violators
CVS Health, Kaiser, the U.S. Department of Veterans Affairs and Walgreens were among hundreds of U.S. health providers that repeatedly violated the Health Insurance Portability and Accountability Act (HIPAA) between 2011 and 2014, according to an analysis of federal data by ProPublica, a non-profit newsroom. Despite numerous violations, sanctions against these providers have rarely been imposed, ProPublica’s research found.
The provider with the second-highest number of violations, CVS, pledged to improve its privacy safeguards or was reminded of its HIPAA obligations by the Office for Civil Rights (OCR) more than 200 times, according to ProPublica. The VA was the most persistent HIPAA violator, according to the data: Its clinics, hospitals and pharmacies violated HIPAA compliance 220 times, but the OCR never publicly reprimanded or sanctioned the health provider.
Experts say that while some privacy problems are to be expected among large healthcare providers, persistent complaints are a sign of organizational failures. Deven McGraw, deputy director for health information privacy at OCR, told ProPublica that the agency’s top priority is to investigate breaches that affect at least 500 people. She also acknowledged that the OCR can do more about providers who repeatedly violate HIPAA. Although OCR receives thousands of privacy complaints a year, it has issued fewer than 30 financial sanctions for privacy violations since 2009.
Merchants get two extra years to meet key PCI DSS requirement
The Payment Card Industry Security Standards Council (PCI SSC) announced last month that merchants that need to be compliant with the Payment Card Industry Data Security Standard (PCI DSS) version 3.1 now have until June 2018 to migrate away from vulnerable encryption protocols, two years later than the original date of June 2016.
Under PCI DSS 3.1, which was released in April 2015, organizations must migrate away from older versions of Transport Layer Security (TLS) — versions 1.0 and earlier — and any version of Secure Sockets Layer (SSL) by this date. Furthermore, effective immediately, these organizations are prohibited from implementing new technology that relies on SSL or early TLS. A large body of research has deemed these protocols cryptographically insecure, putting payment data at higher risk of exposure.
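For a concrete sense of what migrating away from these protocols means in application code, here is a minimal client-side sketch using Python's standard ssl module; the controls an actual merchant needs will of course depend on their stack:

```python
import ssl

# Build a client context that will only negotiate TLS 1.2 or newer,
# refusing SSLv3 and TLS 1.0/1.1 as the PCI DSS 3.1 migration requires.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# On Python versions older than 3.7, which lack minimum_version, the same
# effect is achieved with option flags:
# context.options |= ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
```

Any connection made through this context to a server that only speaks SSL or early TLS will fail the handshake rather than silently fall back to the weaker protocol.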
One of the main reasons PCI SSC extended the deadline until June 2018 is that it hasn’t seen criminals access cardholder data through the protocol flaws, PCI SSC international director Jeremy King told eWeek. PCI SSC is trying to balance risks with operational needs, King added, but he also warned that the date change is not an excuse to “do nothing for two years.” Instead, he suggested merchants migrate away from the flawed protocols as early as possible.
Major changes to FCPA enforcement expected for 2016
The Foreign Corrupt Practices Act (FCPA) unit of the U.S. Justice Department will receive more scrutiny and resources this year in light of low enforcement in 2015. The unit plans to add 10 new staff members, and DOJ leaders say they will focus on greater transparency as well as on pursuing “high impact” bribery cases, The Wall Street Journal reported.
The Justice Department’s foreign corruption unit settled only two corporate FCPA cases last year, down 10 cases from 2014 and nine from 2013. The DOJ has indicated that it will put a policy shift into motion this year that will create rewards for companies that report violations and cooperate with the government. It will also put greater priority on prosecuting individuals.
Defense lawyers worry that the DOJ’s policy shift of incentivizing companies to cooperate fully may actually limit voluntary disclosures rather than encourage them, according to The Wall Street Journal. However, they also expect to see more compliance cooperation from companies as a result of the DOJ’s hiring of a compliance expert last year.
Three years in the making, European Union officials finally agreed on a draft of the General Data Protection Regulation. The EU-wide legal framework sets standards for data collection, sharing and privacy that will replace 28 different sets of national privacy laws. Officials promise stronger personal data protection measures for Europeans and an easier compliance process for businesses. While the new rule package won’t go into effect until 2018, cybersecurity experts are already forecasting greater discussion around data privacy, particularly among U.S. companies, because of GDPR.
The first thing about GDPR that struck security experts at a webinar earlier this month is its complexity. During the event, hosted by incident response provider Resilient Systems, three security and privacy specialists — Bruce Schneier, a leading cryptologist and the CTO of Resilient Systems; Jon Oltsik, senior principal analyst at Enterprise Strategy Group; and Gant Redmon, general counsel for Resilient — discussed their security predictions regarding the 201-page document as we enter the New Year.
It’s not only the length of the legislation that makes it intricate — “If you have to say it in 201 pages, it’s going to be complex,” Redmon quipped — but also the fact that EU negotiators are hoping GDPR will create a centralized data protection authority. Upon closer inspection, however, that doesn’t actually appear to be the case, Redmon added.
For example, the rule package stipulates that each member country have its own national data protection authority, called a supervisory authority, to which companies and organizations are required to report data breaches.
“It’s going to start to look more like the U.S. There will be different folks and different bents to compliance that you have in each of these member states,” Redmon said.
He added that even though proponents of GDPR were hoping for more objective data protection standards from the legislation, in the future these codes of conduct will likely be outsourced to different industries.
“Both of these [factors] will have a big effect on industry, and compliance is going to be more complex than people were hoping,” Redmon said.
There is also the danger of over-regulation, which could push data collection activities underground, Schneier warned.
“I’m a big fan of the regulation of privacy … but too little regulation and it’s a free for all; too much regulation and we lose visibility into practices,” he said.
Contributing to this potential over-regulation is data sharing legislation passed by the U.S. Congress last week that takes a different stance from the EU: the Cybersecurity Information Sharing Act (CISA), enacted as part of the Cybersecurity Act of 2015. The bill, which was attached to a must-pass spending bill, provides liability protection and antitrust exemptions for companies that choose to share cybersecurity information with federal agencies such as the NSA. Detractors say the law allows companies to bypass civil rights and privacy mandates, including warrant requirements to conduct surveillance.
“Privacy protections have been weakened and Europe is now 200 pages and counting in complication. This is going to be hard,” Schneier said. “We want one single Internet — one single set of rules — and we’re further and further away from that.”
Oltsik agreed, saying that CISA not only waters down privacy but also creates a complete disconnect between U.S. legislators and their EU counterparts. This gap must be bridged in order to address the global problem of what he calls the “Balkanization” of privacy.
“We’re talking about data privacy on one hand and backdoors … on the other,” Oltsik said. “It’s as if we’re debating these issues with a complete lack of cooperation with each other.”
The governor of New York has introduced new state banking rules designed to curb money laundering and block terrorism funding. Also in recent GRC news: Most healthcare organizations lack HIPAA-compliant messaging apps; the Fed adopts stricter bailout measures; and a former SEC commissioner says the agency faces a “crisis of confidence” due to its controversial use of internal judges.
New York introduces banking rules to stop terrorism funding
Last week, New York Gov. Andrew Cuomo proposed rules that would require New York State banks to follow stringent measures designed to prevent money laundering and terrorism funding. Under the proposed rules, banks must have a chief compliance officer (CCO) who would certify that systems that detect and prevent such illicit activity are in place. CCOs who file false certifications will face criminal charges, according to the rules, which are being written by the New York State Department of Financial Services (NYSDFS).
Cuomo’s proposal to prevent militant groups from using the New York financial system comes as the U.S. government and international authorities intensify efforts to thwart terrorism funding to groups such as ISIL in the wake of recent attacks in Paris.
The proposed rules would also require banks to improve their systems for monitoring and filtering illicit transactions, as well as to bolster software that automatically blocks suspicious transactions, according to NYSDFS.
Survey: 92% of health institutions use non-HIPAA-compliant messaging apps
An October survey found that 92% of healthcare institutions currently use mobile messaging apps that are not compliant with the Health Insurance Portability and Accountability Act (HIPAA).
The survey, conducted by mobile provider Infinite Convergence Solutions, garnered responses from 500 professionals in the finance/banking, healthcare, legal and retail industries about their mobile messaging behaviors. The survey further found that more than half of healthcare organizations (51%) do not have an official mobile messaging platform. Of those that did have one, less than a quarter (24%) employed an internal, corporate-created app.
“Healthcare employees communicate inherently sensitive information, like patient prescriptions, medical information, etc., yet their employers do not have the proper mobile messaging security infrastructure in place to adhere to HIPAA or other regulatory requirements,” Anurag Lal, CEO of Infinite Convergence, said in a statement.
Fed adopts new rule to rein in emergency lending powers
The Federal Reserve Board last Monday approved a rule that curbs the central bank’s ability to bail out firms during a crisis. The rule was required by the Dodd-Frank Act, which mandates that the Fed limit its emergency lending authority to “broad-based programs” instead of specific institutions. It is meant to assuage legislators’ concerns that the Fed holds too much power to pump money into the financial system. Critics say that during the 2008 financial crisis, the Fed used its emergency lending powers for the first time since the Great Depression and operated without adequate restrictions.
Under the adopted rule, the Fed will only be able to use its emergency lending power to help a market or industry sector, and not to bail individual firms out of bankruptcy. The rule requires that at least five firms be eligible for participation in each Federal Reserve lending program, and that each organization have enough collateral to protect taxpayers.
Ex-official says SEC faces ‘crisis of confidence’ over internal judges
The Securities and Exchange Commission defends its practice of bringing cases to its own in-house judges instead of federal court, but a former SEC official disagrees.
Former SEC Commissioner Joseph Grundfest testified before a House Financial Services subcommittee that the agency is undergoing “a crisis of confidence over the fairness of its internal administrative procedures.” The subcommittee, which covers capital markets, recently proposed a bill that would allow defendants to choose a trial by a federal judge or jury instead of before one of the SEC’s internal judges.
In his testimony, Grundfest, now a professor at Stanford Law School, urged the SEC to change its internal policies.
Critics have been challenging the SEC’s increasing use of its in-house court to preside over serious cases such as those involving insider trading. They cite the SEC’s possible “home court advantage” because defendants lack protections that are available in a federal court trial, such as pre-trial discovery rights.
In the wake of the horrific attacks in Paris earlier this month, government and intelligence officials pointed a finger at end-to-end encryption (E2EE) and how it enabled attackers to “go dark” — in other words, become invisible to law enforcement.
This is only the latest development in a years-old encryption debate between intelligence officials and Silicon Valley: Should tech companies give intelligence agencies back-door access to encrypted devices and networks, or hold their ground on strong encryption to protect their customers’ right to privacy? Even before the attacks rekindled the public safety vs. privacy debate, earlier this month a panel of experts from both sides of the argument weighed in on the pros and cons of E2EE at the Advanced Cybersecurity Center’s conference in Boston.
One panelist, FBI General Counsel James Baker, stressed that there is no perfect technical solution to the public safety vs. privacy debate. He added that it’s up to legislators and individual technology companies to decide how far they want to enable government surveillance.
“Under what set of circumstances do the people want that to happen? What do you want us to do, and what risks are you willing to take on all sides of the equation?” Baker asked the audience, which mostly consisted of information security professionals.
He added that Congress remains behind in addressing the problem as well.
“Current legislative thinking is unsatisfactory in balancing all these types of risks,” Baker said. “It’s about creating laws that effectively enable the government to obtain the results of surveillance in a way that’s consistent with our constitutional rights.”
At present, the FBI is offering two solutions for its “going dark” dilemma: Split-key encryption (data can only be decrypted by combining several keys) or encryption via “key escrow” (one key out of many is stored by a government agency).
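To make the split-key idea concrete, here is a toy two-share XOR split in Python. It shows only the arithmetic of "combining several keys" and is nothing like a production escrow scheme, which would involve threshold cryptography, hardware protection and legal process:

```python
import secrets

def split_key(key: bytes) -> tuple:
    """Split a key into two shares; neither share alone reveals the key."""
    share1 = secrets.token_bytes(len(key))              # random one-time pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))  # key XOR pad
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(16)
s1, s2 = split_key(key)
assert combine(s1, s2) == key  # both shares together recover the key
```

In a key-escrow arrangement, one such share would be deposited with a government agency; the policy debate is over the systemic risk that deposit introduces, not over the arithmetic.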
Eric Wenger, director of cybersecurity, privacy and global affairs at Cisco, argued that these solutions are insufficient for tech companies. He said that they gave the public a mixed message: “We want you to use really strong encryption, but we just want a way to break into it,” he said.
Wenger also questioned whether the FBI and other law enforcement entities needed access to encrypted data for every investigation. With kidnapping cases, for instance, unencrypted geolocation data from a suspect’s device could prove to be the most important piece of information to move the investigation forward.
“We need to tease these problems apart and get to the things that are really the most meddlesome for law enforcement,” Wenger said.
Fellow panelist Susan Landau, a security policy professor at WPI, agreed, saying the encryption conversation is complex and needs to be considered on a case-by-case basis. This would require changing how local and federal law enforcement conduct investigations, however, and would likely come with considerable costs, she said.
“It’s complicated. It takes time. But I find myself in the position of actually supporting extra funding and saying, ‘Look, we’re talking about securing everybody and making investigations more expensive — or the reverse,'” she said.
But reservations about split-key encryption, key-escrow encryption and other proposals that facilitate government surveillance don’t mean that technology companies don’t care about public safety, Wenger insisted.
“I want [the FBI’s Baker] to be able to get what he’s entitled to get, but the problem is at what cost?” he said.
Head to SearchCompliance to read more about the panelists’ take on end-to-end encryption.
(This blog post was written by Diane K. Carlisle, executive director of content at ARMA International.)
Day by day, effective information governance (IG) is made more urgent and more complicated by disruptive technologies and new business models that are rocketing throughout organizations. Many companies are still in the early stages of solving such digital-age challenges as big data and the bring-your-own-device (BYOD) model, but advanced technologies continue to emerge. The Internet of Things (IoT), for instance, is a phenomenon that offers profound business opportunities while carrying great risk.
In a 2015 white paper titled “Internet of Things: Privacy and Security in a Connected World,” the FTC defined IoT as “the ability of everyday objects to connect to the Internet and to send and receive data.”
Examples include cameras that permit users to post pictures online with one click; automated systems that let users turn lights on and off remotely; and sensors in storage systems that can detect RFID data in order to manage inventory more efficiently. The IoT umbrella also includes such wearable devices as bracelets that track and share your workout data and heart implants that monitor and transmit health information.
The business benefits — and detriments — of IoT
Many companies hope the IoT will enhance productivity and generate new business models and revenue sources. Typically, IoT devices collect data and pass it along to other devices. Some smart televisions, for example, can detect whether anyone in the room is actually watching the screen and transmit that information to other smart devices. Companies then use the data to negotiate ad rates, or to target products or other programs for that household.
Organizations are eager to pursue IoT because they stand to benefit dramatically from using the information they collect through it. For example, smart meters can help utility companies reduce the costs of manual meter reading and can monitor and predict resource usage at peak times. This information helps them ensure adequate supplies to meet customer demand. It might also help them justify rate hike requests to public utility commissions. The technology helps customers understand their power usage so they can make beneficial changes.
But there are down sides as well.
Such data generation and aggregation present big challenges for information governance because the IoT multiplies the volume of data, the variety of its sources and the extent to which that data is dispersed. IoT devices spread immeasurable volumes of data to other connected devices, some of which may be external to an organization’s infrastructure and therefore beyond its sphere of information security. Thus, organizations that turn to the IoT for business advantage must be prepared for the associated information risks — especially when it comes to privacy and security.
Chief among the IG concerns and potential risks is the organization’s duty to protect customer privacy. Customer identification and banking information is linked to that smart meter, and the customers may be skeptical of the organization’s ability to protect that information from unauthorized use. And of course, managing and protecting such large volumes of information are challenging for most companies.
Providing adequate security for the information throughout its capture, transmittal, and storage requires financial resources that management may not have anticipated. Information is vulnerable at any of these points. And as the sensitivity of the data and functionality increases, consumers’ concerns about privacy protection may increase as well. Consumers may be more sensitive about banking information transmitted through Apple Pay than they are about their consumption of electricity. Take it a step further and think of the “smart home.” The security measures must be extremely effective to prevent unauthorized access.
These are but a few examples. In short, the IoT world is changing rapidly and it’s easy to foresee an overwhelming volume of data entering the corporate environment.
Planning for IoT initiatives
With that said, organizations must address the full scope of IG considerations as they implement IoT applications. In the following list are steps they can take to plan for IoT initiatives:
- Convene an IoT implementation team: It should include representatives from IT, RIM (records and information management), legal and the relevant business units. If your organization has an IG steering committee, this responsibility is a natural fit for it. Otherwise, convene a collaborative team to understand the benefits and risks and to make decisions on the implementation, considering all the factors involved.
- Conduct a risk assessment: Depending on the specific application, the organization may be taking on a greater degree of information-related risk.
- Make a plan: An IoT initiative must be taken seriously. Build the plan around specific business goals and strategies. Establish benchmarks and metrics to evaluate the success or failure of the initiative.
- Integrate regulatory and compliance requirements: These requirements will continue to apply, regardless of how information is captured.
- Assess the impact of IoT on the retention/disposition policy and schedule: You will be greatly lengthening the retention period for some types of information. Make sure you can delete the information when the retention period has expired and that the necessary retention schedule modifications are made.
- Ensure that IT has the capacity to deal with the additional volumes of information: The growth in data that requires storage can quickly overwhelm an organization. Such volumes can hinder business efficiencies, make e-discovery more costly and jeopardize the defensibility of legal holds.
While the IoT trend can be overwhelming, a sound IG program can give your organization a head start on addressing the challenges that come along with the opportunities.
The building blocks for a sound IG program are the Generally Accepted Recordkeeping Principles® developed by ARMA International, a thought-leader on IG. These Principles — Accountability, Transparency, Integrity, Protection, Compliance, Availability, Retention, Disposition — work together to foster a collaborative approach that ensures information is treated as an asset, protected in compliance with all regulations, and disposed of according to a legally defensible retention plan.
Accompanying the Principles is the IG Maturity Model, which defines characteristics of various levels of recordkeeping programs. It’s an assessment tool you can use to evaluate your IG program against the Principles. It helps you identify the gaps between your current situation and your desirable level of maturity for each principle. More information on both the Principles and the Maturity Model is available on the ARMA website.
The collaborative IG approach and conformity to the Principles will ensure maximum information security and a sound, holistic IG program that will prepare your organization for virtually any information-related challenge.
Diane K. Carlisle, IGP, CRM, is executive director of content at ARMA International, a not-for-profit professional association and authority on governing information as a strategic asset.
In recent regulatory compliance news, the Federal Reserve Chairwoman testified before a House panel that very large U.S. banks still experience “substantial” GRC management failures; recent research casts doubt on the effectiveness of new compensation “clawback” rules proposed under the Dodd-Frank Act.
Fed Chair: Big firms still face regulatory compliance issues
The leader of the Federal Reserve has rebuked very large U.S. banks for persistent regulatory compliance and risk management breakdowns, but also suggested legislation to lighten the regulatory burden on midsized firms.
Last week, during three hours of testimony in front of the House Financial Services Committee, Fed Chairwoman Janet Yellen said that although the largest financial firms the Fed regulates have improved governance, internal controls and risk management since the 2008 financial crisis, they still face “substantial compliance and risk management issues.” Yellen said the Fed is prepared to require very large firms to make considerable changes to their businesses if these banks’ “living wills” — or plans that detail how they would dismantle operations during bankruptcy — don’t pass muster with the Fed.
Yellen also said the Fed is open to tweaking its regulatory regime to help regional banks with more than $50 billion in assets. Under the Dodd-Frank Act, these banks are accountable to more stringent rules than those with fewer assets. However, she pushed back against a bill proposed by the House that would dictate criteria for which of these firms would face tougher rules. Instead, Yellen requested that the Fed have the flexibility to modify the rules.
Study: Execs tend to refuse restatements if their pay is incentive-based
New research shows that compensation clawback rules proposed under the Dodd-Frank Act might not be as effective as proponents anticipate in influencing companies to fix faulty financial statements. Under the new rules, which will likely be adopted later this year, issuing these restatements will initiate the “clawback,” or return, of financial executives’ inappropriate bonuses.
The research, published by Accounting Review this month, concluded that senior executives (mainly CFOs, controllers and treasurers) are less likely to agree to fix faulty financial statements when most of their compensation is incentive-based. Jonathan S. Pyzoha, an assistant professor of accountancy at Miami University, conducted a study to determine whether executives from 112 public financial companies negotiate more firmly with auditors to fully avoid a restatement if their incentive-based pay is at stake.
Attorneys who work with companies considering restatements believe that clawbacks are not among their clients’ principal concerns, according to MarketWatch. However, Pyzoha’s research shows that this is not the case for executives whose pay is mostly incentive-based. He found that these executives were less amenable to fixing financial statements if the restatement was proposed by a “low quality” auditor — with quality based on the auditor’s time and experience in the field. Conversely, executives were more open to restatements if the proposal came from a high-quality auditor.
Pyzoha advised companies to have their audit committee’s financial experts play a greater role in the restatement process to counterbalance these executives’ influence.
This week, Goldman Sachs agreed to pay a $50 million fine to settle a case in which a former employee leaked confidential information from the New York Fed. Also in the news: Bristol-Myers Squibb and other pharma companies face foreign bribery probes; a study found that earnings misstatements are “contagious”; and an extensive investigation of Wal-Mart’s operations in Mexico has found little wrongdoing.
Goldman Sachs faces $50 million fine, criminal charges for ex-banker
A former Goldman Sachs banker is pleading guilty to federal criminal charges, a rarity on Wall Street. Last year, the banker allegedly obtained confidential documents from an employee at the Federal Reserve Bank of New York, one of Goldman’s regulators, and shared that information with his team. Both the Goldman banker and the New York Fed worker will accept a plea deal that could put them behind bars for up to a year, anonymous sources briefed on the matter told The New York Times.
Both men were fired after the leak. Goldman Sachs representatives said that once the company discovered the leak, it immediately notified regulators and began an investigation. Still, under a settlement with the Department of Financial Services, the bank is expected to pay a penalty of $50 million and come up against new constraints for handling sensitive regulatory information. According to the NYT, Goldman will also have to acknowledge that it failed to sufficiently supervise the former banker.
More pharma companies to be probed for foreign bribery
In the wake of Bristol-Myers Squibb’s settlement of foreign bribery charges with the federal government earlier this month, more pharmaceutical companies may be put under the microscope.
New York-based pharmaceutical company Bristol-Myers Squibb agreed to pay a $14 million penalty to settle U.S. Securities and Exchange Commission (SEC) charges that it violated the Foreign Corrupt Practices Act (FCPA) by bribing healthcare providers in China in exchange for prescription sales.
Now, according to Forbes, AstraZeneca, Eli Lilly, GlaxoSmithKline, Novartis, Novo Nordisk, Sanofi, Teva Pharmaceutical Industries Ltd., UCB and possibly other pharmaceutical companies will be investigated for FCPA violations. The U.S. Department of Justice also plans to beef up its enforcement staff and resources dedicated to “high-impact” foreign bribery cases.
Study: Earnings misstatement is infectious
A study that examined 2,376 financial restatements made by companies between 1997 and 2008 found that firms are more likely to misstate their own earnings after another company in their industry or region publicly announces a restatement. However, when a misstating firm was penalized by the SEC, faced lawsuits or saw media reports of its malpractices surface, its peers did not imitate the misconduct, the study discovered. This finding, the authors said, suggests the “deterrent effects of enforcement activity.”
The study, which was published by the American Accounting Association, did not identify particular companies, but it found that when larger and higher-profile firms manipulated their earnings, the misconduct was more likely to be copied by others in their industry. The study also found that imitation stopped between 2003 and 2005, likely due to enforcement actions related to the Sarbanes-Oxley (SOX) Act. The trend resurfaced between 2006 and 2008, possibly because “the sting associated with SOX has worn off,” the authors said.
Wal-Mart bribery probe turns up little proof of major violations
A high-profile federal investigation of Wal-Mart Stores’ operations in Mexico will likely end up becoming a smaller case than investigators had anticipated, sources familiar with the matter told The Wall Street Journal.
While the three-year probe of corruption allegations remains ongoing, the work is approaching completion and the case could be settled with a fine and no criminal charges. The investigation was launched by the U.S. Department of Justice after articles by the NYT described alleged bribes paid by the retailer to get permits to build stores in Mexico. The articles also detailed how company executives allegedly terminated an internal inquiry into the questionable payments. The federal investigation, however, found evidence that contradicted some of the claims made in the NYT articles.
In part one of this blog post, we unpack the drivers behind the surge in demand for compliance investment and skilled staff, including new agencies that take a behavior-based approach to regulation, as well as an expansion in regulators' powers. In part two, we talk about how compliance officers can help transform their organization into one that is conduct-risk-aware.
Compliance functions now have considerable influence on the board and its decision-making process, according to Roger Miles, behavioral risk lead at Thomson Reuters. The majority of boards (74%) now have an increased focus on conduct risk, and the chief risk officers or heads of compliance in 70% of organizations directly report to the board on conduct risk.
Compliance practitioners should seize this opportunity to lead “the transformation that regulators are looking for, to help build and promote a responsive business culture that encourages intelligent, behaviorally aware risk taking and decision making,” wrote Miles.
To jumpstart this transformation, Miles advises compliance officers to encourage all staff to work “risk-aware.” This means educating everyone in the organization about why good conduct is good for the business, and that poor conduct comes with a wide range of costs beyond fines — including negative effects on customers.
“Conduct breaches are not just about paying fines in your local jurisdiction. They have wider business impacts on capital (prudential reporting, capital adequacy, brand value, share premium, cost of borrowing) and ultimately on the ability of the business to maintain self-determination (strategic governance and control),” he said in an email.
While conduct breaches come with obvious business costs such as the possibility of a senior manager getting suspended as a result of a violation, they may also bring unexpected damages.
“Businesses hit by a major conduct-related enforcement may also find themselves the targets of shareholder activism, boardroom coups and hostile takeover,” Miles said.
Miles also encourages compliance leaders to review the current state of their compliance training programs and make sure that training content is up to date. They should also add new training programs on behavioral risk awareness and on new conduct regulations in the company’s jurisdiction.
While this could involve requesting more resources from the board, Miles said that “the signs are this will be more sympathetically heard than in the past.”
Boards of directors are increasingly seeing the value of regulatory compliance, as the past year has seen a worldwide spike in compliance spending and the hiring of skilled compliance staff, according to data collected by intelligence firm Thomson Reuters.
In North America, 60% of firms report that they expect a “significant increase” in compliance investments from 2015 to 2016. For instance, one of these firms, HSBC, expects year-over-year spending on compliance to increase by 300%, to $750 million.
Firms also expect to dedicate a considerable amount of time and staff to compliance processes and procedures. Twenty percent anticipate committing between four and seven hours per week to compliance (up 1% from 2013), and 21% expect more than seven hours (up from 18% in 2013).
Where is the pressure coming from?
One driver of increasing demand for compliance specialists is the influx of new regulatory initiatives created after the 2008 financial crash, according to Roger Miles, behavioral risk lead at Thomson Reuters. Regulators are looking beyond the transaction data organizations produce internally and are instead defining violations based on human behavior.
“A key feature of this revolutionary approach … is that it looks beyond the dry theory of economic utility toward a real-life, empirical view of human interactions, the ‘what actually happens’ view of financial markets,” wrote Miles in a whitepaper titled “What’s Compliance Worth?”
Regulators that follow this behavior-based regulation approach examine firms’ processes, decision making and how they design systems for employees. Moreover, they look at how these organizations behave in financial markets and how they interact with their customers in real time.
This regulatory approach has not only increased compliance costs, but regulatory fines as well. According to research by Thomson Reuters, cumulative fines for conduct-related offenses are projected to surpass $20 billion globally — and will continue to grow.
Another factor is that regulators are expanding their powers. Local agencies, for example, are extending their reach beyond their jurisdictions and target sectors. Additionally, there has been an increase in regulatory initiatives that affect multiple sectors or territories, such as Basel III, the Foreign Account Tax Compliance Act and the Foreign Corrupt Practices Act.
Furthermore, there has been a rise in local regulatory schemes that are subsequently copied by agencies in other jurisdictions, such as “clawbacks,” or the recovery of inappropriate compensation and bonuses, and scrutiny of senior managers’ personal responsibility for criminal behavior. In the U.S., for example, “the SEC is currently staffing up with behaviorally aware enforcers headhunted from other jurisdictions,” Miles said over email.
In response to this increase in enforcement actions, compliance staffs’ dockets are getting longer. Their tasks must now include, at the very least, the following:
- Protecting senior management against regulatory risk and managing regulatory relationships;
- Providing evidence to management and the board on appropriate compliance actions and developing reporting mechanisms;
- Managing the convergence of compliance, internal audit and risk functions; and
- Keeping abreast of new requirements of conduct risk regulations and creating the firm’s own definition of what “good conduct” is.
In part two of this blog post, find out how compliance practitioners should take the lead in transforming their organization into one that is conduct-risk-aware.