TotalCIO


March 15, 2018  4:24 PM

AI’s exponential curve: More from my interview with ISACA’s Rob Clyde

Nicole Laskowski
Artificial intelligence, cybersecurity

I recently talked about the use of AI in the enterprise with Rob Clyde, vice chairman of the board of directors at ISACA, an organization focused on IT governance. The conversation focused on a new report about the threats AI poses to society and floated possible responses. Remedies ranged from building security into AI chips to delaying publication of new AI technology until it could be vetted for security vulnerabilities.

The 26 authors wanted the 101-page report to act as a conversation-starter and motivate policy makers, academics and corporations to start thinking not just about the benefits of AI, but also about the drawbacks that come with its use.

Clyde did just that, providing SearchCIO with a cogent analysis of the report as well as with a number of his own suggestions for protecting ourselves against malicious AI — from beefing up training for AI skills to figuring out how to audit AI applications. Not everything Clyde and I talked about made it into the published articles, which you can find here and here. Here is a snapshot of his views on the validity of the report.  

On what the report is lacking, Rob Clyde said:

“This issue of self-training — where an AI learns from itself. I don’t know if you know this but [AlphaGo Zero] has tipped the master Go players’ games upside down. The [masters] have learned new ways to play Go [from Google’s robot] after a thousand years or more playing in human history. For the first time ever we have radically new ways to play and win the game of Go. And it came from AlphaGo Zero playing itself.”

On whether the report left out any important voices, Rob Clyde said:

“This is a good group. It’s an impressive group. I always look at something like this and I don’t ask who was left out but were enough people included that you start to say there’s a good chance this had some balance to it. And I do feel like that.”

On whether the report’s focus on available technologies or technologies that are plausible in the next five years was too narrow, Rob Clyde said:

“It is the right timeframe because we are in the exponential part of the curve. So I’m all for that. … This is one of the greatest times for AI. And unlike many others of the so-called golden periods in computer science, I’m thrilled that at this point — we’re at the beginning of that exponential curve of very rapid growth — that we’re having conversations like this to ensure that we’re at least putting some thought and questions around the use of AI, allowing the community to rally around it and have a better chance of protecting ourselves as we go into the future.”

March 9, 2018  7:21 PM

Who needs a tech futurist? In today’s world, you do

Jason Sparapani
Blockchain, Digital transformation, Technology

Pat Ryan’s job is to think about tomorrow.

He fills a new role as tech “futurist” at tech consulting company SPR. His job is to study budding technologies he projects will catch on in the next few years and “make them concrete” for clients.

“What we’re trying to do is be much more proactive and much more deliberate on what we think our clients are going to be asking us for instead of being reactive,” said Ryan, whose proper title is executive vice president of emerging technologies for the Chicago-based company. “We’re now trying to get ahead of that curve because things change really fast, and they’re changing faster.”

Pat Ryan, SPR

Though Ryan is in the consulting business, the role he plays — as tech futurist — has increasing relevance for practically every organization in an age when digital technologies are enabling entirely new businesses (ridesharing, for example) and threatening old ones (the taxi industry). Companies may not have a dedicated role like Ryan’s, but anticipating what’s to come and then taking action in the present needs to be someone’s job.

“If they don’t do that, their competitor will do that — that’s an existing competitor, or it’s a competitor that they haven’t even heard of today,” he said.

A tech futurist at work

Forecasting which way the tech winds will blow is no simple task. Ryan watches technologies that show real business promise — blockchain, for example, for steeling transactions against fraud. He then learns their ins and outs — “I probably still code every single day,” he said — so he can train his co-workers, consultants who field questions from eager clients.

Lots of companies are on the cutting edge, too — they just need some guidance to push ahead in the right direction. Ryan gave the example of a client he’s working with to figure out how to put blockchain to use. (He didn’t identify the client for confidentiality reasons.) The company’s CTO is also a tech futurist: He envisions a world reshaped by the distributed ledger technology. “Part of his role as the CTO is to have this futurist hat,” Ryan said. “That’s exactly why we’re working with him now.”

Just a few years ago, blockchain was a curiosity piece with an odd name, and there was little interest in it beyond the financial services industry. Now law firms, manufacturers and healthcare facilities are exploring the technology, one that Ryan says will someday be a platform companies will use for a whole range of applications.

Preparing for what’s to come

Now, with the technology still green, is when the real work begins, Ryan said. For example, he’s looking beyond the business implications of a smart contract, a computer program that automatically executes the terms of an agreement between parties.

“How do you unit test it? How do you put it into the continuous integration pipeline?” he said. “If you’re going to really make it stick, we’ve got to get through some of that blocking and tackling of the technology.”
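
To make that blocking and tackling concrete, here is a minimal sketch — not SPR’s code — of what unit testing a smart contract’s logic might look like. The contract is modeled as a plain Python class with hypothetical names (SimpleEscrow), so the same tests can run in any standard CI pipeline.

```python
# A minimal sketch, not SPR's code: the "contract" is a plain Python class with
# hypothetical names (SimpleEscrow), so the tests can run in a standard CI pipeline.
import unittest


class SimpleEscrow:
    """Toy escrow: the buyer deposits funds; the seller is paid only on release."""

    def __init__(self, price):
        self.price = price
        self.deposited = 0
        self.released = False

    def deposit(self, amount):
        if amount != self.price:
            raise ValueError("deposit must equal the agreed price")
        self.deposited = amount

    def release(self):
        if self.deposited < self.price:
            raise RuntimeError("cannot release before the full deposit is made")
        self.released = True
        return self.deposited


class TestSimpleEscrow(unittest.TestCase):
    def test_release_requires_full_deposit(self):
        escrow = SimpleEscrow(price=100)
        with self.assertRaises(RuntimeError):
            escrow.release()

    def test_happy_path_releases_funds(self):
        escrow = SimpleEscrow(price=100)
        escrow.deposit(100)
        self.assertEqual(escrow.release(), 100)


if __name__ == "__main__":
    unittest.main()
```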

Another technology that’s bound for prime time, Ryan said, is natural language processing, which facilitates conversations between humans and machines. SPR’s clients aren’t yet asking for it, but the technology is improving at a rapid pace — and demand will take off. So companies should act now.

“You could and should be incorporating voice user experiences into whatever application you’re making,” Ryan said. “So if you have a mobile app, you should have a voice application.”
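
As a rough illustration of that advice — again, not SPR’s code — here is a small Python sketch of routing a transcribed voice command to the same handlers a mobile app’s buttons would call. The intent names and keyword matcher are hypothetical stand-ins for a real speech and NLP service.

```python
# A rough sketch, not SPR's code: route a transcribed voice command to the same
# handlers a mobile app's buttons would call. Intent names and the keyword matcher
# are hypothetical stand-ins for a real NLP service.
def handle_check_balance():
    return "Your balance is $42.00"


def handle_pay_bill():
    return "Which bill would you like to pay?"


INTENT_HANDLERS = {
    "check balance": handle_check_balance,
    "pay bill": handle_pay_bill,
}


def route_utterance(transcript: str) -> str:
    """Naive keyword matching stands in for a real intent classifier."""
    text = transcript.lower()
    for phrase, handler in INTENT_HANDLERS.items():
        if phrase in text:
            return handler()
    return "Sorry, I didn't catch that."


print(route_utterance("Could you check balance on my account?"))
```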


March 9, 2018  9:58 AM

Security best practices to guide your digital transformation journey

Mekhala Roy

With more companies embarking on a digital transformation journey, enterprise data is being spread across devices, systems and even the cloud. As a result, focusing security efforts on protecting network perimeters is no longer enough.

“In digital transformation, as we move from the physical into the digital world, information security becomes harder, particularly for those of us who are operating in businesses in which we have personal, medical or confidential information,” Bill Packer, CIO at American Financial Resources, said. “It becomes harder to know where that information is, to plug those [information] leakages, and becomes more difficult to know whether that information is being compromised or stolen.”

During a recent webinar highlighting the best practices that organizations should adopt during their digital transformation journey, panelists were asked to discuss ways to address constantly changing threat vectors while staying innovative and transformative.

Packer noted that the issue is an ongoing debate for companies: Do they decrease speed to market to ensure they have addressed all potential threats during their digital transformation journey?

“It continues to be an everyday dialogue where we try to think security, and sometimes we accept the risk, sometimes we mitigate the risk … but it’s the methodology of risk identification, the dialogue, the conversation, and then deciding what we are going to do about it,” he said.

When it comes to security, organizations should assume that they are always under attack or can be attacked anytime, according to Inder Sidhu, executive vice president of global customer success and business operations at Nutanix. He stressed that AI technology has come a long way to help companies solve security problems.

“When I was at Cisco, we implemented a system for context-aware monitoring where … each day we cracked three billion events from 14,000 servers. We had profiles for the users, profiles for the types of data, and we had AI looking at it to figure out when somebody was accessing data that seemed a little bit out of the ordinary.”
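
To make that pattern concrete, here is a toy Python sketch — not the Cisco system Sidhu describes — of context-aware monitoring: build a per-user baseline of data-access counts, then flag a day that deviates sharply from that baseline. The users, counts and threshold are illustrative assumptions.

```python
# A toy sketch, not the Cisco system Sidhu describes: build a per-user baseline of
# data-access counts, then flag a day that deviates sharply from that baseline.
# The users, counts and threshold below are illustrative assumptions.
from statistics import mean, stdev

# Daily counts of sensitive-record accesses per user (hypothetical history).
history = {
    "alice": [12, 9, 14, 11, 10, 13, 12],
    "bob": [3, 2, 4, 3, 2, 3, 4],
}


def is_anomalous(user: str, todays_count: int, z_threshold: float = 3.0) -> bool:
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold


print(is_anomalous("bob", 45))    # True: far outside bob's normal behavior
print(is_anomalous("alice", 13))  # False: within alice's usual range
```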

It is equally important to think about what happens before, during and after a cyberattack, Sidhu said. Organizations should not only have the capabilities to discover vulnerabilities before an attack happens, but also have the appropriate detection, blockage and defense mechanisms in place when it does occur. After an attack, steps like scope checking, containment and remediation come into play, he added.

“Having clear responsibilities and clear processes for the before, during, and after phases has been very helpful for us,” Sidhu said.


March 5, 2018  7:30 AM

Eying lower costs of cloud computing, Informatica to curb overbuying resources

Jason Sparapani
Cloud Computing, Cloud economics, Cloud infrastructure, Predictive Analytics

Alec Chattaway is committed to getting the most out of cloud — and lowering the costs of cloud computing. The director of cloud infrastructure operations at Informatica is increasing the efficiency of cloud infrastructure use at the data management software company. He’s doing that by almost eliminating overprovisioning, or buying more computing resources in case they’re needed — a bad habit that could have a drastic impact on the bottom line.

“It’s not 100%. It’s such a large percent that it may as well be,” Chattaway said. He reckons about 95%. For Informatica, getting any closer wouldn’t be worth it.

Alec Chattaway, Informatica

To unwind what sounds like a counterintuitive argument, start with overprovisioning itself: On its face, it makes sense. If there’s an unexpected surge in visitors to a website, for example, there will be no shortage of processing power, memory and storage to handle it.

But if there isn’t, those resources are wasted — and so is the cash that bought them. Cue the increase in costs of cloud computing. (Amazon Web Services, for one, isn’t complaining, Chattaway said. The No. 1 cloud infrastructure provider is “relying on people being inefficient so that they can make money.”)

Predicting out of a predicament

Chattaway’s plan is to run predictive analytics on all of Informatica’s cloud use to determine how much computing power he’ll need at any given time — and then supply just the right amount. He won’t entirely eliminate the need to overprovision, because he can’t predict everything — a massive surge in demand for the company’s integration tool, for example.

And he can’t underprovision, or supply too conservatively. If he does, and the company’s web tools slow down and customers flee in frustration, “that’s a really bad thing.”

To avoid having to err toward not enough resources, Chattaway needs a “fudge factor,” or a margin of safety, and expects that to be approximately 5%. So with the use of predictive analytics, 95% of supplied computing resources will meet the need.
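
As a rough illustration of the approach — not Informatica’s actual system — here is a minimal Python sketch: forecast demand from recent usage (a simple moving average stands in for real predictive analytics), then provision the forecast plus a roughly 5% fudge factor. The usage samples are hypothetical.

```python
# A minimal sketch, not Informatica's system: forecast demand from recent usage
# (a moving average stands in for real predictive analytics), then provision the
# forecast plus a ~5% fudge factor. The usage samples below are hypothetical.
def forecast_next_period(recent_usage):
    """Naive forecast: simple moving average of the most recent samples."""
    return sum(recent_usage) / len(recent_usage)


def capacity_to_provision(recent_usage, fudge_factor=0.05):
    predicted = forecast_next_period(recent_usage)
    return predicted * (1 + fudge_factor)


recent_cpu_cores_used = [410, 395, 420, 430, 405]  # hypothetical samples
print(round(capacity_to_provision(recent_cpu_cores_used)))  # ~433 cores, not a blanket 2x overbuy
```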

‘A pipe dream’

As Informatica gets better at the analysis, Chattaway plans to edge down from 5% overprovisioning. But getting to 1% is “basically a pipe dream.” It’s too costly. Chattaway thinks of the calculus as a modified 80/20 rule. The original says 80% of the effects come from 20% of the causes.

“It’s really easy to do the 80%,” he said, slashing the costs of cloud computing. “The 20% is 10 times more expensive, and then that last 1% is 100 times more expensive.”

Some can do it. Multinationals with many billions of dollars in revenue — say, Johnson & Johnson or Coca-Cola — can afford to do the kind of predictive analytics that would be needed to get to 1%. And for those companies, the difference between 5% and 1% is colossal.

For Informatica, a midsize company with $1.05 billion in revenue, that kind of investment is out of reach: “We’ve got some good resources, but not enough to make those final few percentages.”

For more on Alec Chattaway’s efforts at cutting the costs of cloud computing at Informatica, read this SearchCIO column.


March 2, 2018  3:49 PM

Gartner: Machine learning platforms are in demand, market is in flux

Nicole Laskowski
Artificial intelligence

The market for data science and machine learning platforms is in flux. According to a new report by the research outfit Gartner, the more established players are being overtaken by nimbler vendors capable of adapting quickly to growing market demands.

Indeed, the Gartner 2018 “Magic Quadrant for Data Science and Machine-Learning Platforms” ranked smaller players such as H2O.ai and KNIME above more established players such as IBM, Microsoft, SAP and even SAS, a software company that practically invented the term “advanced analytics.”

And the analysts fully expect changes to continue, saying they’re seeing a maturing of analytics programs and a demand for data science tools from non-data scientists such as application developers. Indeed, investment in the market for data science and machine learning platforms is on the rise: Revenue grew by 9.3% in 2016 to $2.4 billion. Compare that to the overall business intelligence market, which saw 4.5% revenue growth in 2016.

The uptick of interest in data science and machine learning platforms is driving an increasingly crowded market. Cloud vendors such as Google and Amazon didn’t make the cut for one reason or another but are developing competing technology in this market. Nontraditional players are expanding their analytics portfolio or are edging their way in. The analysts pointed to a couple of examples, including Salesforce with the launch of its Einstein technology, and Workday, the human capital management and financial software vendor, with its acquisition of Platfora, a big data analytics company.

The rapid changes in the market are a signal to CIOs to keep a close eye on things. Gartner advises that CIOs stay abreast of how their current vendors are keeping up with this fast-moving market. If their IT organizations are considering an investment with new vendors, they need to investigate how new data science and machine learning platforms would complement rather than compete with existing platforms, according to the report.

The report analyzed 16 vendors that integrate the basic tools needed to build a data science solution, such as accessing and preparing data, and that offer platforms for embedding a data science solution into business processes, infrastructure and products. Unlike last year, this year’s report teases out machine learning, calling it a “subset of data science” that requires “specific attention when evaluating these platforms.”

The Magic Quadrant, an actual two-by-two chart, ranks vendors on their completeness of vision and ability to execute. It then labels each vendor as a challenger, niche player, visionary or leader. Vendors deemed the most robust are categorized as leaders in the market. And the leaders are Alteryx, SAS, KNIME, RapidMiner and H2O.ai. Even in this field, though, those leaders almost certainly are looking over their shoulders at challengers TIBCO Software and MathWorks.


February 28, 2018  9:48 PM

Forrester to CIOs: Blockchain networks are not truth machines

Linda Tucci
Blockchain, CIOs, Fraud

Blockchain networks are not only not truth machines, as the headline notes; they are also a “game of tradeoffs” that — in order to work — requires a high degree of governance. That, CIOs, is according to a recent (and refreshing) report from Forrester Research: “Blockchain technology, A CIO’s Guide to the Six Most Common Myths.”

Written by analyst Martha Bennett, a blockchain and business intelligence expert in the research firm’s CIO practice, the report lays out common misconceptions about blockchain-based networks, including claims of immutability, decentralization and transparency. The piece ends with some common sense advice to CIOs on how to navigate the blockchain hype.

And, hype it is. Blockchain, a type of distributed ledger technology for maintaining a tamper-proof record of transactional data, has been heralded as a transformative force that will revolutionize everything from banking and healthcare to the seafood industry.

Much of the appeal of blockchain networks is in their purported ability to prevent fraud and ensure the provenance of goods, digital and physical. And to some degree, this primary claim to fame is correct, says Bennett: Blockchain transactions are hard to mess with, and when they are messed with, it’s obvious; plus, the “trackability” of blockchain networks may also work as a deterrent to fraud, she said. But they cannot prevent fraud. Truth machines, blockchains are not.

“Just because it’s on a blockchain doesn’t mean it’s true. If somebody makes a false declaration or issues a fraudulent certificate or prescription and enters this into a blockchain-based system, it’s still fraud. Or if somebody registers ownership of a property, the system can’t tell whether this really is the rightful owner,” Bennett asserts.
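
To make Bennett’s distinction concrete, here is a toy Python hash chain — not any production blockchain — showing why tampering with an already-recorded entry is detectable, while a fraudulent claim entered in the first place is recorded just as faithfully as a true one.

```python
# A toy hash chain, not a production blockchain: altering a recorded entry breaks
# the chain and is detectable, but a fraudulent claim entered in the first place
# is stored just as faithfully as a true one.
import hashlib
import json


def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)


def is_valid(chain):
    prev_hash = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True


chain = []
add_block(chain, "tuna lot #7 caught by pole and line")  # could be a lie; the chain can't tell
add_block(chain, "tuna lot #7 sold to distributor")
print(is_valid(chain))   # True -- the fraudulent entry looks perfectly valid

chain[0]["record"] = "tuna lot #7 caught by trawler"      # tampering after the fact
print(is_valid(chain))   # False -- the alteration is immediately detectable
```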

Guaranteeing the provenance of physical goods can’t be done by blockchains alone, Bennett argues, but will require “off-chain” mechanisms in conjunction with regulatory and enforcement measures. “As long as the incentive to commit fraud is greater than the likelihood of being caught, let alone punished, blockchain-based networks cannot provide proof that your tuna fish has been ethically caught or your coffee really is organic.”

Indeed, many of the mechanisms and policies that will make blockchain networks transformative — for consumers and enterprises — have yet to be designed and will require unprecedented levels of cooperation and collaboration among public and private sectors and a variety of industries, Bennett said.

Her main advice for CIOs is that blockchain-based networks in the enterprise will involve tradeoffs. One compromise will be between the blockchain’s virtue of transparency and the enterprise’s need to limit visibility of some data. Participants in blockchain networks will have to create governance models that satisfy confidentiality requirements but ensure that the information not open to everyone is nonetheless valid.

Another piece of advice won’t come as a surprise to CIOs: The technology will be ready long before businesses, governments and regulators catch up. And finally, as with so many automation technologies, the focus so far has been on how blockchain networks can improve upon existing business processes. The real value will come in using blockchain to reinvent business as usual.


February 28, 2018  7:37 PM

Quest for resilience turns up ransomware backup strategy

Jason Sparapani
cybersecurity, Disaster Recovery, malware, Ransomware, Resilience

When Hubert Barkley, vice president of information and technology at Waste Industries, bought copy data management software Actifio, he didn’t know he was also buying a ransomware backup strategy.

The Raleigh, N.C., waste and recycling collection company is digitizing fast, installing sensors on its fleet of trucks to supply information on the vehicles’ maintenance and help drivers find the best routes for pickup. With all that information generated by the trucks passing through the company’s blend of data center and cloud architecture, Barkley needed to make sure it was always available and accessible in case of a disruption. The vogue term today, instead of disaster recovery, is resilience, as SearchCIO’s sister site SearchSecurity points out.

Hubert Barkley, Waste Industries

Copy data management software gets him there. It stores one master copy of data that absorbs changes, reducing the proliferation of copy data — the duplicate copies that are often kept on servers for recovery purposes. The software reduces the risk of losing wide swaths of data and hastens recovery time, Barkley said.
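
As a highly simplified illustration — not Actifio’s implementation — here is a Python sketch of the golden-copy idea: keep one master dataset plus a log of incremental changes, so recovery means replaying the log to a known-good point rather than hunting through scattered duplicates. The GoldenCopyStore class and the truck data are hypothetical.

```python
# A highly simplified sketch, not Actifio's implementation: one master copy plus a
# log of incremental changes, so recovery replays the log to a known-good point.
import copy


class GoldenCopyStore:
    def __init__(self, initial_data):
        self.golden = copy.deepcopy(initial_data)  # single master copy
        self.changes = []                          # incremental change log

    def record_change(self, key, value):
        self.changes.append((key, value))

    def restore(self, up_to=None):
        """Rebuild state from the golden copy plus the first `up_to` changes."""
        data = copy.deepcopy(self.golden)
        for key, value in self.changes[:up_to]:
            data[key] = value
        return data


store = GoldenCopyStore({"truck_17": "route A"})
store.record_change("truck_17", "route B")
store.record_change("truck_17", "ENCRYPTED-BY-RANSOMWARE")   # simulated bad write
print(store.restore(up_to=1))   # {'truck_17': 'route B'} -- recovered pre-attack state
```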

But it has also had an unexpected side benefit: It has given him a ransomware backup strategy, practically abolishing the specter of ransomware. That’s the malicious software that locks up computer systems, turning data into encrypted gibberish, until money is paid. WannaCry — which targeted computers in 150 countries in 2017 and continues to terrorize — and other attacks frayed no shortage of nerves on boards and in IT.

“I can’t really divulge a lot of information, but let’s just say we had another company in the waste space that got their environment compromised — and they had to pay a ransom,” Barkley said.

He wondered if he could be in the same situation. He wouldn’t, not with Actifio’s software. He laid out what he’d do if Waste Industries were attacked.

“I’d find out where the leak is. I’d tell the people to go stuff it, and in about 15 minutes I can have my data back from my golden copy of data.”

Malware attacks are still something to be concerned about, Barkley said, but the ransomware backup strategy his new software installation equips him with also gives him newfound confidence.

“I care if somebody ransomwares me; I care about our standing in the community,” he said. “But the point is, I don’t have to pay a ransom to get my data. That was really an unintended consequence of the project.”

Hubert Barkley discusses more about Waste Industries’ copy data management project and how his IT organization aims to deliver value to the business in a two-part SearchCIO interview.


February 28, 2018  6:54 PM

AI attacks are coming soon to a network near you

Nicole Laskowski

The rapid development of artificial intelligence and machine learning is a double-edged sword. The technologies are becoming cheaper and easier to apply to the enterprise, which is also making it easier for bad actors to utilize the emerging tech.

Twenty-six researchers, policy and industry experts from institutions such as Oxford, Cambridge and Yale and non-profit organizations such as OpenAI and the Electronic Frontier Foundation are sounding the alarm bell on AI attacks. In a new report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation,” which was published last week, the authors warn that AI will pose serious threats to the safety and security of systems, people and governments in the next few years. They advise that researchers and policymakers take matters into their own hands now by collaborating closely and by discussing and implementing new policies — including possibly delaying the publication of new AI breakthroughs — as an IT security strategy.

The report states that, as AI technologies become more commoditized, the low cost and fast-paced evolution of these tools will add purpose and power to existing security threats and give rise to new threats, the likes of which CIOs have never faced before.

Specifically, the authors predict that the kinds of attacks companies face today will come at a faster rate, have a broader focus, be harder to attribute and involve more attackers. For example, spear phishing — the practice of sending individuals bogus emails that look like they came from a trusted source — is done manually today and could be sped up with AI, threatening a company’s cybersecurity strategy.

The authors also predict new kinds of attacks will emerge as AI is used to automate tasks and even analyze human behavior. Attackers will also exploit weaknesses in AI technologies, for example by synthesizing a person’s speech to gain access to a system. These attacks won’t just threaten today’s digital security practices but could also threaten a person’s physical security by, for example, causing autonomous vehicles to crash. AI attacks could also threaten a country’s political security through targeted propaganda, as well as through the use of automated surveillance or deception techniques such as manipulated videos.

The 101-page paper, which came out of a two-day workshop at Oxford a year ago, included recommendations on how to prepare for the impending future of AI attacks. But the recommendations are more a call to action than actual solutions, including such anodyne advice as the need for close collaboration between policy makers and researchers, continued discussion, and curating and sharing best practices on how to handle the exploitation of technology.

A section for “open questions and potential interventions” provides a forum for more radical ideas, including the suggestion to delay the publication of some findings so that they can undergo a “risk assessment” to “determine what level of openness is appropriate.” Another suggestion was to borrow techniques from the more mature domain of cybersecurity, such as red teaming — where a group of AI developers would seek out and fix security vulnerabilities — and apply such best practices to other targeted areas. The authors also suggest developing a code of ethics that holds AI researchers to standards of social responsibility.


February 28, 2018  5:36 PM

Unlockable iPhones, leaked code among Apple’s security woes

Brian Holak

Security on Apple devices might not be as impenetrable as many thought.

Forbes reported this week that Cellebrite, an Israel-based vendor and major U.S. government contractor, is now able to unlock new iPhones. Bryce Austin, CEO at Minneapolis-based IT consulting company TCE Strategy, said if the reports are true, it’s a major blow to the security that all iPhone users assume Apple has built into their devices.

“Ever since the showdown between Apple and the FBI in February 2016, it was assumed that Apple was trying hard to make extremely secure mobile devices,” he said.

Cellebrite reportedly has developed techniques to get into devices through operating systems as recent as iOS 11 and is advertising these techniques to law enforcement and private forensics experts across the globe.

Austin’s advice for consumers and IT leaders in response to this news is two-fold:

  1. “It’s critical for consumers and for businesses to keep mobile devices patched, and to retire those devices that can no longer be patched.
  2. It’s critical to adopt a cybersecurity posture that assumes that end point devices can be compromised, and to limit the amount of damage that any compromised end point can do.”

Add that to an earlier leak…

But that’s just the most recent example of Apple’s security woes. Earlier this month, a chunk of Apple’s iOS source code was leaked online. The leak could’ve led hackers and others to comb it for vulnerabilities in iOS for nefarious purposes or to make iPhone jailbreaks easier. The leaked code was quickly removed from GitHub, but by that time the damage may have already been done.

“I’d love to know if this was a result of individuals examining the recent iOS source code that was leaked or if the timing was just coincidence,” Austin said.

Either way, it’s clear that the leaked code wasn’t good news for the tech giant.

“Anytime proprietary or intellectual property is leaked publicly, it tends to erode trust in the organization,” said Shane Whitlatch, enterprise vice president at security solutions provider FairWarning.

Apple has always been very careful to safeguard against leaks, which makes this situation unusual. Jonathan Levin, who writes books on iOS and macOS programming, went so far as to tell Motherboard that this was “the biggest leak in [Apple’s] history.” If reports about the ability to unlock iPhones are true, it could prove to be an even bigger security incident than the leaked code.

Advice: Get to the root of the problem

Time will tell the extent to which Apple’s security and reputation have been undermined — and what repercussions will follow — but it’s not too late for your organization to take stock of its security practices.

How should companies respond to leaks like this? It’s all about access, Whitlatch said.

“It’s a possibility that an insider threat could have leaked this source code,” he said. “Organizations need to monitor user access to the crown jewels of their organization, whether it’s a privileged user or a third party vendor who has access to the company network.”

In today’s data-driven business environment, that’s sometimes easier said than done.

“Due to the nature of modern business and the size of a business’s network where data is stored and transferred, it’s hard to keep track of users’ access to this type of proprietary information,” said Whitlatch. “Moving forward, organizations need to develop a birds-eye view of their network so they can properly address access rights management and mitigate risk associated with insider threats who can leak this type of proprietary information.”


February 28, 2018  5:10 PM

Survey: Attorneys still lack proficiency in e-discovery technology

Ben Cole

E-discovery technology has become an integral — and essential — element of the modern legal process, but a new report suggests attorneys are still struggling to embrace the technology.

A survey of 30 current or recently retired federal judges found that 60% said the lack of cooperation between parties causes e-discovery problems, compared with 10% who said the issues stemmed from either a lack of defensible compliance policies or a lack of e-discovery technology education.

Jeffrey Ritter, the founding chair of the ABA Committee on Cyberspace Law, noted that while the survey sample size was very small, it does make clear an unfortunate truth about e-discovery: Attorneys are still not up to speed despite the dramatic increase in digital evidence and amendments to the Federal Rules of Civil Procedure (FRCP) that provide guidance on the discovery process.

“Twelve years after the first e-discovery amendments, and over two decades since the ABA first encouraged revisions to the rules of civil procedure and evidence to adapt to the shift toward digital records, professional lawyers are failing to responsibly discharge their duties as officers of the court,” Ritter said.

The report, “The Federal Judges Survey: Judicial Perspectives on the State of E-Discovery Law and Practice,” was conducted by legal software provider Exterro, Inc. in conjunction with BDO Consulting and EDRM/Duke Law. It was designed to examine the legal community’s e-discovery technology proficiency, identify areas to improve e-discovery processes and outcomes, and study how 2015 FRCP changes are influencing how electronically stored information (ESI) is handled during legal cases.

The trend could continue to cause problems: According to the National Law Review, there are currently over 3,000 case opinions involving ESI and ESI-related issues nationwide. The report’s authors suggest that the biggest obstacles facing attorneys are their mindset and approach to using e-discovery technology, and Ritter agreed that this is a big factor.

“Most attorneys have realized that digital records, when properly preserved and produced, often serve as the best evidence of the truth,” Ritter said. “ESI deprives lawyers of their historic role as advocates to shape alternative versions of the truth from the subjective, oral testimonies of human witnesses. To become competent on ESI is to step away from the skills lawyers believe they are paid to perform.”

Amendments were made to the FRCP in both 2006 and 2015 to reflect the increased use of digital evidence and electronic discovery, but have done little to improve the e-discovery process, Ritter said. For example, Ritter pointed to the Civil Procedure Rule 37(e), which was designed to prevent failures to preserve ESI for court cases.

“In many respects, the revisions only provided counsel reasons to avoid positive collaboration,” Ritter said. “The increased judicial discretion granted under Rule 37(e) only invites greater chances for attorneys to oppose, drag one’s feet, or otherwise avoid the consequences of full and responsible preservation and production.”

Attorneys are also slow to develop their e-discovery competency, the survey found: Only 23% of judges strongly or somewhat agree with the statement “the typical attorney possesses the legal and technical subject matter knowledge required to effectively counsel clients on e-discovery matters.”

Almost half (46%) of the surveyed judges said that e-discovery education should be mandatory, either in law school or through continuing legal education courses.

Ritter agreed that mandatory training and e-discovery certification are both reasonable ideas to offset the trend, especially given that ESI will only continue to be essential to legal cases.

“E-discovery is complex; its proper execution is vital to sustaining the rule of law in the digital age in which we now live,” Ritter said. “The simple truth is that the evidence of the truth will increasingly be found only in the machines.”

