Yottabytes: Storage and Disaster Recovery


December 30, 2016  10:19 AM

‘Rogue One’ Data Storage — a Cautionary Tale

Sharon Fisher
Encryption, Storage, User interface design

The problem with being a storage nerd is it makes it pretty difficult to enjoy the movies. In this day and age, too many movie plots hinge on computers and data, and moviemakers typically aren’t geeks, so it’s way too easy to lose your suspension of disbelief based on bonehead errors.

Take Rogue One. And yes, here be spoilers; as a good nerd, I saw it opening night, but waited till after Christmas to write this so that most reasonable people had already seen it.

Sneakernet is alive and well. Yes, here we are in a galaxy far, far away, with space travel and holograms and planet destroyers, and we’re still exchanging data on tape cartridges, CD-ROMs, and USB sticks, and watching a meter as we upload data? And tape cartridges have a handy-dandy loop on them so you can hook them to your belt? Not to mention the fact that, although all the plans are in electronic data storage, there’s no way to gain access to the file other than flying out to that base.

Encryption isn’t a thing? Okay, maybe the reason you couldn’t gain access to the file other than by flying out to the base was for physical security. I’ll buy that. But then they have that entire library stored out there, with all the seeekrit plans for the Death Star, in unencrypted files? See what happens, FBI, when you outlaw encryption?

Data centers powered by renewable energy. Perhaps the Scarif data center was also protected by a moat. With a waterfall. But perhaps the waterfall was there to power the library using renewable energy. It’s nice to know that even the Empire wants to be green.

User interfaces still need work. Okay, it’s not “It’s Unix! I know this!” but apparently all the user interface designers got killed off early in the war, because all the technology seems incredibly hard to use. That tape library is pretty snazzy – but there are no instructions or intuitive interface, and access to the tapes has to be done manually? And uploading data means you need to walk out on a catwalk to manually adjust the satellite dish? Or hook up a manual data transmission with a big, fat (yet still incredibly flexible) cable that nonetheless requires you to walk out to a control panel to flip a master switch? (And has a port that just anybody can walk up and plug said cable into?) Not to mention, how screwed are you if, while using those instructionless mechanical hands, you happen to drop the tape cartridge down twenty stories?

We still don’t have a good system for filenames. Admittedly, Erso couldn’t call the blueprints Secret_Plan_to_Destroy_Death_Star, but really, we’re reduced to having Jyn read the filenames manually until she finds the one that’s her nickname? So if Jyn hadn’t been around, the Alliance never would have stumbled on the plan? Knowing a secret about the designer worked to find the back door in WarGames, but shouldn’t we have advanced beyond that by now?

Back doors don’t work. If nothing else, perhaps Rogue One will point out to law enforcement and the federal government why encryption back doors are a bad idea. Erso’s back door allows a couple of rebels with bombs and teeny planes to destroy an $852 quadrillion investment. At least it’s reassuring to find out, after almost 40 years, that someone had created that flaw in the design on purpose.

We still don’t have backups? Rebels are at the Scarif archive? No problem, says Governor Tarkin; we’ll just blow it up – a decision that caused no small amount of hand-wringing among librarians. “Did the Empire have a data backup plan?” worries Gabriel McKee, librarian for collections and services for New York University’s Institute for the Study of the Ancient World, who still hasn’t gotten over the destruction of the library at Alexandria. “What else was stored there, and was any of that data backed up elsewhere? Did Tarkin have authorization to, for lack of a better word, deaccession the entire archive? And could anything of Scarif’s archive have survived such apocalyptic weeding?”

If nothing else, perhaps Rogue One can be used as a cautionary tale of how not to set up a storage archive.

December 22, 2016  2:07 PM

Scientists Copy Data to Save It From Trump

Sharon Fisher
government, Storage

Co-founder of the Electronic Frontier Foundation John Gilmore was known for his saying, “The Internet interprets censorship as damage and routes around it.” With Donald Trump’s election as President, the Internet is getting a little help.

Concerned that a Trump administration might delete decades of weather data in an effort to make it more difficult to demonstrate climate change, scientists are reportedly frantically making copies of weather databases in Canada, writes Brady Dennis in the Washington Post.

“Something that seemed a little paranoid to me before all of a sudden seems potentially realistic, or at least something you’d want to hedge against,” Nick Santos, an environmental researcher at the University of California at Davis who started copying government climate data onto a nongovernment server after the election – where it will remain available to the public – told Dennis. “Doing this can only be a good thing. Hopefully they leave everything in place. But if not, we’re planning for that.”

Paranoid? Maybe not, writes Weston Williams in the Christian Science Monitor. “This would not be the first time access to climate research was restricted by a US president,” he writes. “During President George W. Bush’s administration, many Environmental Protection Agency libraries were shut down, and there were multiple accusations that government publications on climate change had been edited to change their meaning.”

Trump’s appointment of climate change deniers, his attempt to identify climate change supporters in federal agencies, and his suggestion that NASA should no longer do weather research lead scientists to believe that a Trump administration could try to alter or dismantle parts of the federal government’s repository of data on everything from rising sea levels to the number of wildfires in the country, Dennis writes.

“To be clear, neither Trump nor his transition team have said the new administration plans to manipulate or curtail publicly available data,” Dennis notes. “The transition team did not respond to a request for comment. But some scientists aren’t taking any chances.”

It all started with a Twitter crowdsourcing request from meteorologist Eric Holthaus, Dennis writes. “What are the most important .gov climate assets? Scientists: Do you have a US .gov climate database that you don’t want to see disappear?”

Within hours, responses flooded in from around the country – enough that Holthaus created a Google spreadsheet to keep track of them all. In addition, investors offered money, attorneys offered legal help, and database specialists offered expertise and storage, Dennis writes. There’s now a GitHub repository as well.
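The mechanics of this kind of grassroots archiving are simple enough to sketch. Here is a minimal mirroring script in Python – with placeholder URLs standing in for the crowdsourced list – that downloads each dataset and records a SHA-256 checksum, so copies on different servers can later be verified against one another:

```python
# Minimal dataset-mirroring sketch. The URLs are placeholders, not the
# actual crowdsourced list; swap in real dataset locations to use it.
import hashlib
import pathlib
import urllib.request

DATASETS = [
    "https://example.gov/climate/sea-level.csv",
    "https://example.gov/climate/wildfire-counts.csv",
]

mirror = pathlib.Path("mirror")
mirror.mkdir(exist_ok=True)

for url in DATASETS:
    dest = mirror / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, dest)           # download the dataset
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    print(f"{digest}  {dest}")                      # keep as a manifest
```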

“Within two days, more than 50 key data sets had been identified, and six of them have already been archived on publicly available nongovernment servers,” reports Holthaus. “Complementary efforts at the University of Pennsylvania and the University of Toronto are merging resources to attempt to avoid duplication of effort, and the Penn Program in the Environmental Humanities put the data refuge online Tuesday afternoon. On Twitter, the most common response to the project was, ‘I can’t believe it’s come to this.’”

In some cases, they’re even making a party out of it. At the University of Toronto, researchers held a “guerrilla archiving” event to catalogue key federal environmental data ahead of Trump’s inauguration, Dennis writes.

Presidential Transition

The work is associated with the End of Term Presidential Harvest 2016, an effort by the Internet Archive to ensure that useful federal government data isn’t lost during the transition between Presidents. “With the arrival of any new president, vast troves of information on government websites are at risk of vanishing within days,” writes Jim Dwyer in the New York Times. “The fragility of digital federal records, reports and research is astounding.”

The Presidential Harvest project – “a volunteer, collaborative effort by a small group of university, government and nonprofit libraries to find and save valuable pages now on federal websites,” Dwyer writes – began before the 2008 elections and returned in 2012.

Moreover, the Internet Archive, which purports to make a copy of everything on the Internet, is also hoping to set up a copy of itself in Canada lest something happen to its American data, writes Michael Hiltzik in the Los Angeles Times. The Internet Archive includes copies of 279 billion web pages, 2.2 million films and videos, 2.5 million audio recordings and 3 million digital books, as well as software and television programs, he writes.

“The Internet Archive has stepped up its plans to back up its entire data hoard in Canada, out of reach of what might be efforts under a Trump administration to block public access to the material,” Hiltzik writes. Internet Archive founder and chairman Brewster Kahle is seeking donations to cover the estimated $5-million cost of the project by Jan. 20, Inauguration Day, he adds. Kahle is doing this because he is concerned by some of the things Trump said while campaigning.

On the other hand, the EFF is calling for the opposite tactic: The computer civil liberties organization ran a full-page ad in Wired encouraging sysadmins to, among other things, delete log files so they couldn’t be used in the future against people. “EFF’s open letter outlines four major ways the technology community can help: using encryption for every user transaction; practicing routine deletion of data logs; revealing publicly any government request to improperly monitor users or censor speech; and joining the fight for user rights in court, in Congress, and beyond,” the organization explains in a press release. “President-Elect Trump has promised to increase surveillance, undermine security, and suppress the freedom of the press. But he needs your servers to do this. Join us in securing civil liberties in the digital world, before it’s too late.”


December 11, 2016  11:05 PM

Amazon Snowmobile Issues Need Solving

Sharon Fisher
Amazon, Glacier, Storage

Amazon has upped the ante on that old question of the bandwidth of a station wagon full of backup tapes hurtling down the highway: For one thing, instead of a station wagon, it’s a semi.

The data storage device hauled by the semi is a 45-foot shipping container called Snowmobile, and it could let people send up to 100 petabytes of data to Amazon Web Services – much, much faster than it would take to upload it. A petabyte is 1 million gigabytes. The data can be stored in either Amazon’s regular S3 service, or its “Glacier” cold storage service, which is less expensive.

The notion of doing the initial upload of data to a cloud service by shipping a physical hard drive isn’t new; the major cloud vendors have all supported this for a while. This is, in fact, considered an upgrade to Amazon’s “Snowball” service, which uses a mere 80TB suitcase.

But a semi? That’s new.

“Amazon plans to drive Snowmobiles to its customers’ offices, extract their data, then cruise to an Amazon facility where the information can be transferred to the cloud-computing network in far less time than it would take for so much data to travel over the web,” write Jay Greene and Laura Stevens in the Wall Street Journal. “Ten Snowmobiles would reduce the time it takes to move an exabyte from on-premises storage to Amazon’s cloud to a little less than six months, from about 26 years using a high-speed internet connection, by the company’s calculations.”

Amazon announced the new service at its annual customer conference. It is actually already available and costs half a cent per gigabyte per month of use, or about $500,000 a month to use its full capacity, Greene and Stevens write.

Physically, Snowmobile attaches to your network and appears as a local, NFS-mounted volume, writes Jeff Barr, AWS chief evangelist, in a blog post (which includes all sorts of awesome Legos showing how it works). “Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide,” he adds. “It is water-resistant, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center. Each Snowmobile consumes about 350 kW of AC power; if you don’t have sufficient capacity on site we can arrange for a generator.”

So how do you attach it to your network? “Each Snowmobile includes a [fiber] network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections,” Barr writes, adding that you can use your company’s existing backup or archiving tools. “Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.”
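Those figures check out on the back of an envelope. Here’s a quick sketch in Python; the 10 Gb/second baseline for the “high-speed internet connection” is my assumption (Amazon doesn’t state one), but it roughly reproduces the 26-year claim:

```python
# Back-of-the-envelope check of the Snowmobile transfer figures.
PB = 10**15                          # bytes in a petabyte (decimal)
EB = 10**18                          # bytes in an exabyte

# Fill one 100 PB Snowmobile through its 1 Tb/second switch.
fill_seconds = 100 * PB * 8 / 10**12
print(f"Fill one Snowmobile: {fill_seconds / 86_400:.1f} days")    # ~9.3

# Move an exabyte over an assumed 10 Gb/second internet connection.
net_seconds = EB * 8 / (10 * 10**9)
print(f"Exabyte over the wire: {net_seconds / (86_400 * 365):.0f} years")  # ~25

# Amazon's "less than six months" for ten Snowmobiles presumably adds
# trucking and unloading time on top of ten parallel ~9-day fills.
```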

Of course, the question of security comes up. You don’t want your company’s entire data record to be hijacked by some guy with a CDL. “Snowmobile uses multiple layers of security designed to protect your data including dedicated security personnel, GPS tracking, alarm monitoring, 24/7 video surveillance, and an optional escort security vehicle while in transit,” writes Amazon. “All data is encrypted with 256-bit encryption keys managed through the AWS Key Management Service (KMS) and designed to ensure both security and full chain-of-custody of your data.”

Exactly when and where this encryption happens isn’t specified. Can you load already-encrypted data onto Snowmobile, so that even Amazon doesn’t know what it is? Or does Amazon encrypt the data, meaning the company has access to it at some point?
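One way to make the question moot is to encrypt everything client-side, before it ever reaches the truck. Here’s a minimal sketch of that general technique in Python, using the cryptography package’s AES-256-GCM; to be clear, this illustrates the concept, not Amazon’s actual KMS workflow, which it hasn’t detailed:

```python
# Client-side encryption sketch: data is sealed before it touches the
# provider's hardware, and only the customer holds the key.
# Illustrative only; not Amazon's documented Snowmobile/KMS workflow.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # never leaves your side
aead = AESGCM(key)

def encrypt_blob(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    return nonce + aead.encrypt(nonce, plaintext, None)

def decrypt_blob(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, None)

sealed = encrypt_blob(b"company crown jewels")
assert decrypt_blob(sealed) == b"company crown jewels"
```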

There’s also the matter of securing the trucks themselves while in transit, writes Daniel Stoller in Bloomberg. “Researchers recently said that trucks, much like the ones Amazon would use, are prone to the same kind of hacking attacks that have disabled some connected cars,” he writes. “The researchers showed that there is a real possibility of ‘safety critical attacks that include the ability to accelerate a truck in motion, disable the driver’s ability to accelerate and disable the vehicle’s engine brake.’”

That also raises the question of what happens to a Snowmobile full of data after the data is uploaded. How do you wipe a Snowmobile, how long does it take, and what assurance do you have that Amazon actually does it? Assuming it does, that is; Amazon doesn’t say.

The question also arises, why bother transferring the data at all? Why not just truck it to Amazon, plug it in, and leave it? Though having all the data on a physical device would sort of defeat the purpose of having it in the cloud.

Getting the data back out again may be more of an issue (a problem Amazon has also had with Glacier). “The initial launch is aimed at data import (on-premises to AWS),” Barr writes, though he adds, “We do know that some of our customers are interested in data export, with a particular focus on disaster recovery (DR) use cases.”

It will be interesting to see whether other cloud vendors, such as Google and Microsoft, follow suit, or if Amazon will have the long-haul data-trucking field to itself.


November 30, 2016  6:34 PM

Microsoft-DoJ Irish Data Case Headed for Supremes

Sharon Fisher
government, law, Microsoft, privacy, Security

As you may recall, in July the Second Circuit Court of Appeals ruled that Microsoft did not, in fact, have to turn over data it owned that was stored in Ireland, in response to a Department of Justice (DoJ) search warrant. At the time, supporters were glad but said it was possible that the DoJ would appeal the decision, which would mean it would go to the Supreme Court.

That has indeed happened, as of October. Sort of.

Technically, the appeal is to the Second Circuit Court of Appeals, and asks for an en banc hearing. In other words, the DoJ wants to present the information to all the judges, not just a subset, in hopes of finding enough of them to agree with it. In practice, though, the Second Circuit rarely grants en banc hearings, and the previous ruling was unanimous anyway, so even if the court did grant one, the verdict would likely be the same, writes Jeff John Roberts in Fortune.

The upshot, then, is that the case is likely to go to the Supreme Court.

This all gets pretty down in the weeds. “In filing for the appeal to the US Supreme Court, the DoJ has claimed that the Court of Appeals misinterpreted the law as to when companies are obliged to disclose data stored on servers in foreign jurisdictions,” writes William Fry in Lexology. “The Court of Appeals ruled that in order to rebut the presumption against extra-territoriality of legislation, the statute under which the warrant was issued (the Stored Communications Act) would have to contain an ‘affirmative indication’ of an intention to apply outside the US. The court determined that enforcement of the warrant constituted an unlawful extra-territorial application of the Stored Communications Act.”

In other words, because the Stored Communications Act didn’t specifically mention international communications, it shouldn’t apply, the Second Circuit ruled.

What some in the U.S. would like to do is, instead of making law through the court system, make it through legislation that specifically addresses the issue of international data searches, Fry writes. And that is indeed the mechanism the appeals court suggested in July. The Congressional sponsors of the International Communications Privacy Act (ICPA) have also written to the DoJ asking it to work with them on fine-tuning the legislation.

The ICPA is an updated version of the Electronic Communications Privacy Act (ECPA), which dates from the 1980s and includes such things as a free pass to search data as long as it’s more than six months old, writes Eric Peters in InsideSources. “The International Communications Privacy Act has been written to deal with ECPA’s shortcomings — including rescinding the ‘180-day loophole’ for data mining without a warrant — and to tamp down the international kerfuffle over whose laws apply,” he writes, calling on the current lame-duck Congress to pass the bill before the next session.

There hasn’t been much indication of this happening, and it seems unlikely that the incoming Congress and Administration will do much about protecting users from government data searches.

If the case does end up in the Supreme Court, that’s a whole new kettle of fish. Recall that, at least for the time being, there are only eight justices. What that means is, if they have a tied decision, it would apply only to this single case, not as a precedent. On the other hand, now that the election is over and the Senate is staying in Republican hands, it’s conceivable that there could be a ninth Justice by October, when the new Supreme Court year starts. In any event, Fortune’s Roberts believes that the Supreme Court would agree to hear the case because of its importance.

Another nuance is that different Internet companies have different policies for how they store their data, with some of them stashing it wherever’s convenient at the moment, anywhere in the world, and some of them choosing a location in or out of the country where the user resides. The Second Circuit’s decision makes it difficult for the DoJ to follow any procedure to get that data, writes Orin Kerr in the Washington Post. “I didn’t expect that major domestic providers would respond to a ruling that they can’t be compelled to disclose foreign-stored emails pursuant to a warrant by refusing to disclose foreign-stored contents voluntarily when the target was domestic and the only reason that particular e-mail was foreign-stored at that instant was the fluid nature of the network’s architecture,” he writes.

In case this is all a blur to you, Microsoft reportedly had email messages from one of its customers stored on a server in Ireland. The DoJ wanted access to those email messages while pursuing an unspecified case, claiming that since the email messages were controlled by Microsoft, an American company, the DoJ had jurisdiction over them even though they were stored in Ireland.

This viewpoint was fraught for a number of reasons, as I described in July.

  • Because so many computer companies are American, it would mean an awful lot of data worldwide would be subject to access by the U.S. government.
  • Computer companies worried that worldwide customers would stop using them out of fear that their data would be accessed.
  • Having the data subject to U.S. access could mean that the company – Microsoft in this case, but any company – could be violating data privacy laws in force in the second country. (For that reason, dozens of companies and civil liberties organizations – as well as the government of Ireland itself – filed amicus curiae briefs supporting Microsoft.)
  • If this precedent were set with the U.S., all the other countries in the world could declare that all their data laws applied to any company doing business in their countries, which could create an incredibly complicated, contradictory mess.

For its part, the DoJ said that if users could stash their data overseas, it would be hard for the DoJ to catch bad guys. While there were other methods that would give the U.S. government the ability to request data stored in a foreign country, the DoJ said they were hard to use.

At this point, it’s up to the Second Circuit – and after that, the Supremes.


November 29, 2016  12:29 AM

What Will Be the Trump Effect on Data Storage?

Sharon Fisher
Encryption, government, privacy, Security, Storage

The election of Donald Trump as President of the United States could have some really interesting implications for data storage, data sovereignty, and encryption.

If by “interesting” you mean Really Scary and Bad.

The Internet Association, a group of about 40 Internet vendors including Amazon, Google, and Netflix, has already called on Trump to support Internet technologies. “Some of the policy goals stated in the letter may align with Trump’s priorities, including easing regulation on the sharing economy, lowering taxes on profits made from intellectual property and applying pressure on Europe to not erect too many barriers that restrict U.S. internet companies from growing in that market,” writes Reuters’ Dustin Volz. “Other goals are likely to clash with Trump, who offered numerous broadsides against the tech sector during his campaign.”

Here are some possible aspects of the Trump effect on data storage.

Storage manufacturing. Trump has proposed imposing a 35 percent tax on companies that manufacture their products outside the U.S. While he originally said this in the context of Apple making iPhones, he has since extended it to any company that lays people off in the U.S. to outsource manufacturing to other countries. Because the vast majority of storage hardware is manufactured outside the U.S., this could have a significant impact – either by raising prices, reducing availability, or, perhaps, prompting vendors to move their manufacturing to the U.S. This is especially true because he has also said he intends to lower the U.S. corporate tax rate from 35 percent to 15 percent and provide a tax holiday for companies like Apple, which have large offshore cash holdings, to bring their money back to the U.S.

Encryption. Remember when Apple refused to help the FBI break into an encrypted iPhone? At the time, Trump reportedly called for a boycott of Apple products, according to Reuters. But as President, and based on his Cabinet picks thus far, he could do much more. Reuters also pointed out that Senator Richard Burr – who spearheaded last year’s effort to mandate encryption “back doors” – had also been re-elected, and as chair of the Senate intelligence committee was likely to reintroduce his legislation next year.

Trump’s pick for Attorney General, Jeff Sessions, also argued that government must have access to the phone, writes Anita Balakrishnan for CNBC. “Rep. Mike Pompeo, nominated for new CIA director, has called the use of strong encryption in personal communications a ‘red flag,’” she adds. “Michael Flynn, tapped as national security adviser, supported the Digital Security Commission Act, whose sponsors supported a commission to examine cases like Apple’s.”

In response, vendors of end-to-end encrypted messaging applications, such as Signal, have seen a 400 percent increase in their downloads since the election, write Stephanie Hughes and Ben Johnson for Marketplace. “Signal uses end-to-end encryption, so that no one — not even the people at Signal — can read the messages you send to others,” they write. “It’s not the only encrypted app that’s seen an uptick in use — another messaging app, Wickr, also told us they have also seen a noticeable increase in downloads.”
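Notably, “end-to-end” means the keys live only on the users’ devices, and the service in the middle just relays ciphertext. Here’s a conceptual sketch using PyNaCl’s public-key boxes; this illustrates the basic idea only, not Signal’s actual protocol, which layers forward secrecy on top of it via the Double Ratchet:

```python
# Conceptual end-to-end encryption sketch with PyNaCl public-key boxes.
# Not Signal's real protocol; just the core idea that the relay in the
# middle never sees plaintext or private keys.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()        # stays on Alice's device
bob_key = PrivateKey.generate()          # stays on Bob's device

# Alice encrypts to Bob's public key; the server only relays this blob.
to_bob = Box(alice_key, bob_key.public_key)
ciphertext = to_bob.encrypt(b"meet at the usual place")

# Only Bob, holding his private key, can open it.
from_alice = Box(bob_key, alice_key.public_key)
assert from_alice.decrypt(ciphertext) == b"meet at the usual place"
```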

Data sovereignty. Trump campaigned against the Trans-Pacific Partnership (TPP), a trade deal with 11 other Pacific Rim countries – in fact, he called it “rape” – and with his election it is assumed to be dead. However, it included a number of components having to do with exchanging data between countries.

For example, the TPP would have banned forced localization – “the discriminatory requirement that certain governments impose on U.S. businesses that they place their data, servers, research facilities, and other necessities overseas in order to access those markets,” according to the White House. Countries that have implemented these laws, such as Russia, have blocked social networking sites like LinkedIn for storing data on citizens outside the country.

It also would have preserved free international movement of data, “ensuring that individuals, small businesses, and families in all TPP countries can take advantage of online shopping, communicate efficiently at low cost, and access, move, and store data freely,” the White House writes. “On the most fundamental level, TPP grants data, for the first time, the same legal protections in international trade law as goods,” writes Carter Dougherty for the International Business Times. “If it’s from a TPP member country, it’s treated as data flowing within the country would be treated.”

Surveillance. Trump has already indicated his interest in setting up a registry or database of Muslims or Muslim immigrants. Some people are concerned that this presages a general increase in government surveillance, Reuters writes. And Trump said during the campaign that he was interested in restoring provisions of the Patriot Act to allow for bulk data collection, writes The Hill.

Trump’s inauguration is scheduled for January 20.


November 24, 2016  12:58 AM

Bumps in the Road to Self-Driving Car Storage

Sharon Fisher
Storage

Everybody’s all excited about self-driving cars, but not too many people are talking about self-driving cars’ data. What sort of data will be collected, how much will be stored on the car, where it will be sent from the car, and what sort of security that data will have from hackers, law enforcement, and the government are all open questions.

There isn’t any question that, one way or another, self-driving cars are going to generate and process a lot of data – on the order of 1 GB per second, according to Intel, which is investing $250 million in self-driving car technology. That works out to 3.6 TB for every hour of driving.
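The arithmetic is worth a quick check, since gigabits and gigabytes are easy to confuse. A short sketch in Python:

```python
# Data volume from a car generating roughly 1 GB (gigabyte) per second.
GB, TB = 10**9, 10**12

rate = 1 * GB                                  # bytes per second
print(f"{rate * 3600 / TB:.1f} TB per hour")   # 3.6 TB per driving hour

# A 1 TB onboard store (the "terabyte or so" mentioned below) would
# fill in about a quarter of an hour at that rate.
print(f"{TB / rate / 60:.0f} minutes to fill 1 TB")   # ~17 minutes
```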

As someone who has trouble keeping up with the data collected by the dashboard cam, this seems like a lot of data. Where is the car going to put it?

“Some data will have to be stored online,” writes Paresh Dave in the Los Angeles Times. “That means automakers have to prepare for boosting spending on storage, said Intel’s Doug Davis, senior vice president and general manager of a division dealing with cutting-edge mobile devices.”

But the vast majority of it will be used in real time and then thrown away, automakers reassure us. The vehicle itself might store just a terabyte or so. Otherwise, it’ll be just like our phones — what’s kept will mostly go up in the cloud, according to Richard Barrett, senior product marketing engineer, automotive wireless technology at Cypress Semiconductor, writes Ann Steffora Mutschler in Semiconductor Engineering.

I don’t know about Barrett, but my phone has 54 gigabytes of storage, and I’ve used almost half of it.

But okay, fine. Most of the data will go into the cloud.

Hold on. Which data will go into the cloud? And who will get it, exactly? To what degree will you know, and how much can you limit this?

“Automakers already collect and store location and driving data from millions of cars on the road today,” writes Pete Bigelow in AutoBlog. “In the approaching era of self-driving vehicles, privacy advocates fear the data collection will grow more intrusive. The more sensitive the data, the more lucrative it might be for a company like Google, says John Simpson, an advocate with California-based Consumer Watchdog. ‘Once this technology is widely adopted, they’ll have all sorts of information on where you’re driving, how fast you’re going, and there’s no control over what Google might do with it,’ he said.”

In fact, automakers are reportedly salivating over the amount of data that self-driving cars could generate and how that data could be sold, Dave writes. “The $100-billion app economy built on data from smartphones would look small compared with the $750 billion in revenue produced around cars,” he writes – an estimate from McKinsey for 2030. “The forecast has automakers buzzing. As they accelerate spending on developing self-driving cars, they’re devoting enormous attention on what to do with data that those high-tech devices generate — beyond making the drive automated. Among the possibilities: selling details about driving patterns to real estate developers or using it in personalized insurance calculations.”

Oh goody.

Note that some of this data is already being collected now. A number of auto loan companies, insurance companies, and car rental agencies put GPS units on their vehicles that collect information ranging from location to “black box” data about your driving. The Government Accountability Office did a report on this in 2013.

Similarly, the state of Oregon is piloting a program where, instead of paying a gas tax, you’d pay a “mileage tax” based on how many miles in the state you drive. This is important, the state explains, because as cars become more efficient and are more likely to use electricity, they use less gas, and the gas tax no longer adequately funds road maintenance, repair, and construction. Which makes sense, but it still means you have a little box attached to your car keeping track of where you’ve been.

Not to mention, how else might this data be used? “How often do you happen to drive your car to a liquor store, and will that information be provided to your insurance company?” Bigelow quotes Consumer Watchdog executive director Carmen Balber as saying. “Will information on where you spend your Saturday nights be subpoenaed in a divorce proceeding?”

Keep in mind that in 2011 people had conniptions because they realized their smartphones were saving their location data. Now we’re going to let cars do it?

And those are just the legitimate data users. Can your car get hacked to get a nice list of your past movements for blackmail purposes? Can the real-time data be monitored so the burglars know when to hit your house?

That brings us to the courthouse. Will the government have access to the data – with or without a warrant – to know if you’ve been visiting a house associated with terrorist activities, or a neighborhood associated with drugs or prostitution?

Self-driving cars offer a lot of potential – but a lot of potential problems as well.


November 18, 2016  6:21 PM

These 5 SMART Statistics Predict Disk Failure

Sharon Fisher
Backblaze, Failure, SMART

When will a disk drive fail? The people at Backblaze, who run 67,814 disk drives, have been looking at Self-Monitoring, Analysis and Reporting Technology (SMART) disk drive statistics to try to predict this.

They’ve been looking at this for a while, at least since 2014. As I wrote then:

“SMART defines — and measures, for those vendors that support it — more than 70 characteristics of a particular disk drive. But while it’s great to know how many High Fly Writes or Free Fall Events a disk has undergone, these figures aren’t necessarily useful in any real sense of being able to predict a hard drive failure.

“Part of this is because of the typical problem with standards: Just because two vendors implement a standard, it doesn’t mean they’ve implemented it in the same way. So the way Seagate counts something might not be the same way as Hitachi counts something. In addition, vendors might not implement all of the standard. Finally, in some cases, even the standard itself is…unclear, as with Disk Shift, or the distance the disk has shifted relative to the spindle (usually due to shock or temperature), where Wikipedia notes, ‘Unit of measure is unknown.’”

At that point, Backblaze had determined that out of the 70 statistics SMART tracked, there was really only one that mattered: SMART 187, or Reported_Uncorrectable_Errors. At the time, Backblaze wrote: “Drives with 0 uncorrectable errors hardly ever fail. Once SMART 187 goes above 0, we schedule the drive for replacement.”

Since then, the company has been looking at the SMART statistics some more, and it has now added four other statistics that it’s determined have a correlation to failed drives, writes senior marketing manager Andy Klein (a minimal version of the resulting screening rule is sketched in code after the list):

  • SMART 5 Reallocated Sectors Count
  • SMART 188 Command Timeout
  • SMART 197 Current Pending Sector Count
  • SMART 198 Uncorrectable Sector Count
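That screening rule – replace a drive once any of the five stats rises above zero – is simple enough to sketch. A minimal illustration in Python, leaving out the actual reading of SMART attributes (smartctl or similar):

```python
# Backblaze's five predictive SMART attributes, per the post above.
PREDICTIVE_STATS = {
    5:   "Reallocated Sectors Count",
    187: "Reported Uncorrectable Errors",
    188: "Command Timeout",
    197: "Current Pending Sector Count",
    198: "Uncorrectable Sector Count",
}

def replacement_warnings(smart_values: dict) -> list:
    """Names of any predictive stats with a nonzero raw value."""
    return [name for attr, name in PREDICTIVE_STATS.items()
            if smart_values.get(attr, 0) > 0]

drive = {5: 0, 187: 2, 188: 0, 197: 0, 198: 0}   # example readings
warnings = replacement_warnings(drive)
if warnings:
    print("Schedule drive for replacement:", ", ".join(warnings))
```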

The company didn’t say why it started looking at other SMART statistics if it had already determined that one statistic was correlated with failure. (It also makes its raw statistics available in case you want to play with correlations yourself.) Klein also points out that not all vendors report all the statistics, or report them in the same way – an issue two years ago as well.

Another factor is the period of time in which the errors occur, Klein writes. “For example, let’s start with a hard drive that jumps from zero to 20 Reported Uncorrectable Errors (SMART 187) in one day,” he writes. “Compare that to a second drive which has a count of 60 SMART 187 errors, with one error occurring on average once a month over a five year period. Which drive is a better candidate for failure?” He doesn’t actually say, though he implies that it’s the first one.

Incidentally, Backblaze has even started looking at High Fly Writes as a possible indicator of future disk failure. “This stat is the cumulative count of the number of times the recording head ‘flies’ outside its normal operating range,” Klein explains, noting that while 47 percent of failed drives have a SMART 189 value of greater than zero, so do 16.4 percent of drives that work. “The false positive percentage of operational drives having a greater than zero value may at first glance seem to render this stat meaningless. But what if I told you that for most of the operational drives with SMART 189 errors, that those errors were distributed fairly evenly over a long period of time?” he asks. “For example, there was one error a week on average for 52 weeks. In addition, what if I told you that many of the failed drives with this error had a similar number of errors, but they were distributed over a much shorter period of time, for example 52 errors over a one-week period. Suddenly SMART 189 looks very interesting in predicting failure by looking for clusters of High Fly Writes over a small period of time.”
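Klein’s point about timing suggests a simple heuristic: look for a large jump in the cumulative error count within a short window. A hypothetical sketch, with thresholds invented for illustration (Backblaze hasn’t published specific cutoffs):

```python
# Flag a drive whose cumulative SMART 189 count jumps by `threshold`
# within `window_days`. Samples are (day_number, cumulative_count).
def clustered(samples, window_days=7, threshold=20):
    for i, (day_i, count_i) in enumerate(samples):
        for day_j, count_j in samples[i + 1:]:
            if day_j - day_i <= window_days and count_j - count_i >= threshold:
                return True
    return False

steady = [(week * 7, week) for week in range(52)]   # one error a week
burst = [(0, 0), (3, 30), (6, 52)]                  # 52 errors in a week
print(clustered(steady))   # False: spread out, probably fine
print(clustered(burst))    # True: a candidate for failure
```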

That’s not to say that any of the statistics, or even a combination of them, is a perfect predictor of when a disk drive is going to fail – or when it isn’t. “Operational drives with one or more of our five SMART stats greater than zero – 4.2 percent. Failed drives with one or more of our five SMART stats greater than zero – 76.7 percent,” writes Klein. “That means that 23.3 percent of failed drives showed no warning from the SMART stats we record.” But if nothing else, it sounds like the appearance of these particular SMART errors is the way to bet.

Disclaimer: I am a Backblaze customer.


October 31, 2016  3:31 PM

Future of eDiscovery: ‘Things’

Sharon Fisher
ediscovery, Internet of Things

Technology-assisted review electronic discovery products are growing more sophisticated. On the other hand, as in any other industry, big data is giving attorneys much more data to deal with, including data from the Internet of Things, as everyday items such as televisions and refrigerators collect information about their users. That’s according to a recent session on the future of ediscovery, held partly to commemorate the tenth anniversary of the ediscovery amendments to the Federal Rules of Civil Procedure. The session was held in late October as part of the Masters Conference for Legal Professionals at the Capital Hilton in Washington, D.C.

For example, predictive coding enabled Sidley Austin partner Robert Keeling to determine that, in one particular Foreign Corrupt Practices Act investigation, participants used “chocolate” whenever they meant “cash” in the context of bribes, writes Zoe Tillman in Legaltech News.

(Apparently equating chocolate with cash in bribes isn’t even all that uncommon; it has happened with a Malaysian company as well as with the Mexican company Keeling was likely referring to. “A subsidiary paid routine bribes referred to as ‘chocolates’ to Mexican officials in order to obtain lucrative sales contracts with government hospitals,” wrote the Securities and Exchange Commission about the latter 2012 case. “The ‘chocolates’ came in the form of cash, laptop computers, televisions, and appliances that were provided directly to Mexican government officials or indirectly through front companies that the officials owned. The bribery scheme lasted for several years and yielded nearly $5 million in illegal profits.”)

Such predictive coding tools fall, in general, into the realm of technology that can help attorneys do their jobs more easily, Tillman writes. Other examples include:

  • Programs that can analyze audio files and images, pieces of information that lawyers typically had to review manually
  • Tools that allow lawyers to more easily redact records
  • More sophisticated coding that goes beyond text and uses other internal data such as pixels to categorize data
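At bottom, predictive coding is supervised text classification: attorneys hand-code a sample of documents, a model learns the pattern, and the software ranks the rest of the collection for review. A toy sketch using scikit-learn, with invented documents riffing on the “chocolate” example above:

```python
# Toy predictive-coding (TAR) sketch: train on hand-coded documents,
# then rank unreviewed documents by predicted responsiveness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_docs = [
    "please wire the chocolate to the usual account",   # responsive
    "chocolate delivered to the official as agreed",    # responsive
    "quarterly sales report attached for review",       # not responsive
    "lunch meeting moved to thursday at noon",          # not responsive
]
train_labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(
    vectorizer.fit_transform(train_docs), train_labels)

new_docs = [
    "send two boxes of chocolate before the contract signing",
    "staff picnic is cancelled due to rain",
]
scores = model.predict_proba(vectorizer.transform(new_docs))[:, 1]
for doc, score in sorted(zip(new_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")   # review the high scorers first
```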

In addition to finding ways to deal with the firehose of data from the Internet of Things, attorneys also have to pay attention to the security implications of using particular categories of data, Tillman writes. Whether the accumulation of such data is considered helpful or creepy depends on how convenient it is, panelists noted.

“It can be a bit off-putting, the panelists quipped, when a married couple is merely discussing the possibility of having children and coupons appear in the mail for diapers,” writes Tera Brostoff in Bloomberg BNA. “Your phone, your car or even your refrigerator is creating a ‘pattern of life’ on you. But the question of who is looking at that data and how it’s being stored is something you might not consider until you face litigation.”

IoT data is particularly useful in labor and employment law, Brostoff writes, including data collected from sensors such as the global-positioning system (GPS) in a car, a FitBit or other wearable technology, or the RFID chip in an employee’s ID card. “For example, we now have your GPS data saying that you were at McDonald’s for four hours during the middle of the work day,” she writes.

But as always, the panelists said, the biggest challenge in ediscovery is convincing organizations to use the tools, Tillman writes. “A big challenge for practitioners continues to be convincing corporate clients to get on board with new [technology aided review] (TAR) products,” she writes. In particular, some are concerned about the security implications of sending data to the cloud, she adds. Vendors are looking to forestall these concerns by integrating ediscovery tools into their existing cloud-based products, she writes.


October 29, 2016  11:09 AM

LDiscovery Acquires Kroll

Sharon Fisher
ediscovery, Kroll

Apparently OpenText’s June acquisition of Recommind has reminded everyone that the e-discovery M&A market is alive and well: LDiscovery has just acquired Kroll Ontrack for a reported $410 million.

The all-cash transaction is expected to close in the fourth quarter. The combined organization is said to have 12,000 clients worldwide, “including some of the world’s largest financial institutions, Fortune 1000 companies and Am Law 100 firms,” with offices in 19 countries comprising 1,300 employees. Kroll CEO Mark Williams will be CEO of the combined companies.

As you may recall, Kroll had been dropped from the Leaders to the Challengers quadrant in the 2015 Gartner E-discovery Magic Quadrant, due to what Gartner felt was a lack of vision.

And just who is LDiscovery? Identified as “a provider of eDiscovery and Information Governance solutions,” the company has typically been associated with e-discovery vendor kCura, which never fails to mention that it was named a Leader in the 2015 Magic Quadrant.

But if LDiscovery was in the market for an e-discovery vendor, what kept it from acquiring the one it already worked with? As recently as early this month, LDiscovery was named a finalist for the Relativity (kCura’s product) Innovation Awards for Best Service Provider Solution; the announcement added that “LDiscovery is the only provider to be a finalist three years in a row.” So what will happen to its relationship with kCura? Especially after LDiscovery CEO Chris Weiler called Ontrack “best in class”?

(Incidentally, has anyone else noticed that Gartner hasn’t yet released its E-discovery Magic Quadrant for 2016? Typically it comes out in early summer. Could it be that they’re frantically rewriting it to accommodate these acquisitions?)

On the other hand, the acquisition may not be that weird, writes Gabe Friedman for Bloomberg Law. “Both companies operate in the review space, rather than the software space,” he writes. “Both companies are partners with kCura, and employ dozens of certified experts and administrators of its Relativity software review platform.”

And it’s not out of the question that LDiscovery – or, rather, its parent, the Carlyle Group – might pick up kCura in the future. “The Carlyle Group, working with Revolution Growth, purchased LDiscovery for $150 million this past January,” Friedman writes. “LDiscovery itself had purchased six companies in the past two years. Will Darman, the managing director at Carlyle who led the purchase of LDiscovery, has predicted there will be continued consolidation in the e-Discovery review market.”

Weiler had noted during the January acquisition that LDiscovery intended to use the money for further acquisitions, particularly in other geographic areas. Kroll is based in New York, while LDiscovery is based in Tysons Corner, Va.

Interestingly, Moody’s has cited “concerns regarding the company’s weak liquidity and the sustainability of its capital structure given its very high debt leverage” at Kroll’s parent company, Corporate Risk Holdings, and plans to re-examine Corporate Risk’s ratings in light of the acquisition.


October 25, 2016  10:39 PM

DNA Data Storage Hopeful

Sharon Fisher
research, Storage

Scientists and researchers love finding new and interesting ways to store data, especially as the traditional spinning disk and even flash drive technologies are starting to run into the laws of physics. The newest one? DNA data storage.

Actually, the notion started out as a joke, writes Andy Extance in Nature. In 2011, biological information scientists were trying to figure out how they could afford to store the amount of genome sequences they had, and were so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives like using DNA. “Then the laughter stopped,” he writes. “It was a lightbulb moment.”

Scientists at the European Bioinformatics Institute and at Harvard University each demonstrated a proof of concept of the idea in 2013, storing just 0.74 megabytes. What’s different now is that Microsoft, which has been taking the lead in DNA storage research, was able to save 200 megabytes in DNA, roughly a 270-fold improvement over 2013.

In fact, at one point, you could even buy a DNA storage device on Amazon, which stored 512 KB—enough for a small photograph or a document, according to the product description.

The downside? The cost, writes Tobi Ogunnaike in SingularityHub. “The current cost of DNA data storage is not attractive,” he writes. “Storing digital data in DNA involves both reading and writing DNA. While the price of reading DNA (DNA sequencing) has fallen sharply, the price of writing DNA (DNA synthesis) currently remains prohibitively high for data storage.”

The Microsoft experiment might have cost on the order of $150 million, writes Andrew Rosenblum in MIT Technology Review. “Microsoft won’t disclose details of what it spent to make its 200-megabyte DNA data store, which required about 1.5 billion bases,” he writes. “But Twist Bioscience, which synthesized the DNA, typically charges 10 cents for each base. Commercially available synthesis can cost as little as .04 cents per base. Reading out a million bases costs roughly a penny.” On the other hand, costs are dropping rapidly, he adds. “It would have cost about $10 million to sequence a human genome in 2007 but close to only $1,000 in 2015.”
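The arithmetic behind those estimates is straightforward; here’s a quick sketch. (The two-bits-per-base figure is the theoretical ceiling, before the redundancy and indexing overhead real encodings need.)

```python
# Rough cost and capacity math for Microsoft's 200 MB DNA store.
bases = 1.5e9                    # bases synthesized, per the article

print(f"At $0.10 per base:    ${bases * 0.10:,.0f}")    # ~$150,000,000
print(f"At .04 cents a base:  ${bases * 0.0004:,.0f}")  # ~$600,000

# Four possible bases means at most 2 bits per base, so the raw ceiling
# is well above the 200 MB actually stored; the rest is overhead.
print(f"Theoretical ceiling: {bases * 2 / 8 / 1e6:,.0f} MB")   # ~375 MB
```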

Another downside? The speed, Extance writes. “DNA storage would be pathetically slow compared with the microsecond timescales for reading or writing bits in a silicon memory chip,” he writes. “It would take hours to encode data by synthesizing DNA strings with a specific pattern of bases, and still more hours to recover that information using a sequencing machine.”

But DNA storage offers one big advantage: Its density, or the amount of data it could store in a small space, Extance adds. “By 2040, if everything were stored for instant access in, say, the flash memory chips used in memory sticks, the archive would consume 10–100 times the expected supply of microchip-grade silicon,” he warns. Similarly, a data center holding an exabyte (one billion gigabytes) on tape drives would require $1 billion over 10 years to build and maintain, as well as hundreds of megawatts of power, he writes. “If information could be packaged as densely as it is in the genes of the bacterium Escherichia coli, the world’s storage needs could be met by about a kilogram of DNA.”

For example, the 200-megabyte Microsoft store occupied a spot in a test tube much smaller than the tip of a pencil, writes the company, which performed the experiment with the University of Washington.

DNA storage could also last longer and be more stable than existing data storage methods, writes John Markoff in the New York Times. “Magnetic disks, tape and even optical storage systems safely store information at most for only a handful of decades,” he writes. “The recent advances suggest there may be a new way to store the exploding amount of computer data for centuries rather than decades.” Keep in mind that readable DNA has been found in fossils thousands of years old.

Of course, DNA storage isn’t a panacea. There are a number of open questions about a new kind of storage system, Ogunnaike writes, such as what sort of security and user interface it will have. “Will we all have DNA sequencers, DNA synthesizers, and algorithms that translate digital data into biological data in our phones, our homes, or our local community biohacker spaces? Or will these capacities be restricted to companies?” he writes. “In either scenario, how easily we can interact with DNA data storage technology might affect how quickly we adopt this technology.”

Other complications include error correction, retrieving just portions of the data, and avoiding the sort of base sequences that cause DNA strands to fold up on themselves, Extance writes.

And what if something you save accidentally creates life? Hmm. It could be the next Jurassic Park movie.

