The Department of Homeland Security announced, in a very low-key way, on November 19 that it was planning to delete “Master files and outputs of an electronic information system which performs information technology infrastructure intrusion detection, analysis, and prevention.” It gave people until December 19 to ask for copies of the plan, following standard National Archives and Records Administration protocol. After requesters receive their copies, they have 30 days to comment.
According to Nextgov, what the agency is looking to delete are records more than three years old from its Einstein network monitoring system, which is intended to help DHS cybersecurity experts look for malware such as Heartbleed in government networks. That pleases some security people, who are concerned about the government keeping all these records, and worries others, who wonder whether the government is trying to hide something by deleting them.
“As a general matter, getting rid of data about people’s activities is a pro-privacy, pro-security step,” Nextgov quoted Lee Tien, senior staff attorney with the Electronic Frontier Foundation, as saying. But “if the data relates to something they’re trying to hide, that’s bad,” he continued.
DHS says it wants to delete the data because, at three years old, it’s no longer useful. (The agency still keeps incident reports.) Others disagree. “Some security experts say, to the contrary, DHS would be deleting a treasure chest of historical threat data,” writes Nextgov’s Aliya Sternstein. “And privacy experts, who wish the metadata wasn’t collected at all, say destroying it could eliminate evidence that the governmentwide surveillance system does not perform as intended.”
What’s causing some people to feel suspicious is that the rationale the agency is using to delete the data is the cost, which it estimates at $50 per month per terabyte. Given that you can get a 1-terabyte drive from Staples for less than that these days (yes, we know, there’s more to it than the hardware cost), this seems…excessive. On the other hand, some people are wondering just how much data DHS must be holding for storage to add up to a significant amount of money.
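For a rough sense of scale, here’s a back-of-the-envelope sketch at the agency’s stated rate. The petabyte figures below are purely hypothetical illustrations; DHS hasn’t said how much Einstein data it actually holds.

```python
# Back-of-the-envelope: what DHS's $50-per-terabyte-per-month estimate adds up to.
# The data volumes used here are hypothetical illustrations, not DHS figures.

COST_PER_TB_MONTH = 50  # DHS's stated estimate, in dollars


def annual_cost(terabytes: float) -> float:
    """Annual storage cost in dollars at $50 per terabyte per month."""
    return terabytes * COST_PER_TB_MONTH * 12


# One petabyte (1,024 TB) kept for a year:
print(f"1 PB for a year: ${annual_cost(1024):,.0f}")        # $614,400

# Ten petabytes -- enough to make deletion start looking attractive:
print(f"10 PB for a year: ${annual_cost(10 * 1024):,.0f}")  # $6,144,000
```

At that rate, the money only becomes “significant” once the holdings run well into the petabytes, which may itself say something about how much Einstein collects.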
Data to be deleted includes email, contact and other personal information of federal workers and public citizens who communicate concerns about potential cyber threats to DHS; intrusion detection data; intrusion prevention data; analysis data such as files from the U.S. Computer Emergency Readiness Team (CERT); and a catch-all “information sharing” category that includes data from white papers and conferences, Nextgov reports.
So what is Einstein? It is the result of automated processes that collect, correlate, analyze, and share computer security information across federal U.S. civilian agencies, according to BiometricUpdate. “By collecting information from participating federal government agencies, ‘Einstein’ builds and enhances cyber-related situational awareness,” writes Rawlson King. “The belief is that awareness can assist with identifying and responding to cyber threats and attacks, improve the government’s network security, increase the resiliency of critical, electronically delivered government services, and enhance the survivability of the Internet. The program provides federal civilian agencies with a capability to detect behavioral anomalies within their networks. By analyzing the data and detecting these anomalies, the ability to detect new exploits and attacks in cyberspace are believed to be greatly increased.”
That said, this is all happening against a background of other changes in DHS involving cybersecurity that are making some people nervous.
- Brendan Goode, the director of the Network Security Deployment division in the Office of Cybersecurity and Communications (CS&C) who built the Einstein system, announced earlier in November that he was leaving for the private sector, according to Federal News Radio. While his last day was scheduled to be November 21, he hadn’t yet announced where he was going, nor has he updated his LinkedIn page.
- After its initial setup in 2004, Einstein is now on its third implementation. It has agreements with 15 of the 23 agencies expected to sign up for it (out of nearly 600 agencies, according to RT.com), and completed implementations with nine of them, all at a cost of hundreds of millions of dollars.
- Due to incidents such as Heartbleed — where DHS had to wait up to a week for agency approvals, all while news of the vulnerability was out in the wild — the DHS now has the authority, as of October, to proactively monitor federal networks for vulnerabilities without having to wait for agency permission. “Agencies must provide DHS with an authorization for scanning of Internet accessible addresses and systems, as well as provide DHS, on a semiannual basis, with a complete list of all internet accessible addresses and systems, including static IP addresses for external websites, servers and other access points and domain name service names for dynamically provisioned systems. Agencies must give DHS at least five days advanced notice of changes to IP ranges as well. Further, agencies must enter into legal agreements for the deployment of DHS’s EINSTEIN monitoring system, provide DHS with names of vendors who manage, host, or provide security for Internet accessible systems, including external websites and servers, and ensure that those vendors have provided any necessary authorizations for DHS scanning of agency systems,” summarized FedWeek.
- On the other hand, contractor vendors aren’t exactly leaping to be included.
It isn’t clear how much DHS was hoping that this would all be lost in the shuffle around the holidays. Presumably organizations such as the EFF and Nextgov have filed requests for the plans, and will follow up. If it’s the sort of thing you might feel the need to comment on, however, it might be a good idea to make your own request, if comments are limited to people who request the documents.
You may recall that in June, the Interwebs were burning up with the story about former director of exempt organizations for the IRS Lois Lerner, and how something like two years’ worth of email messages — conveniently covering a period of time under Congressional investigation — were unavailable because employees could only store 500MB of email, backup tapes were only saved for six months, and her computer had crashed, wiping out her hard disk drive. While not everyone thought it was a coverup on the order of the missing 18 minutes on the Watergate tapes, few would argue that it was no way to run a railroad.
Now, it turns out that the IRS might have backup copies of the email messages after all — but retrieving them is likely to take a lot of time and money.
We’re not going to get into the politics of the investigation. As before, we’re just interested in this as a government IT problem — and it’s a dilly.
In the fine tradition of Taking Out the Trash Day — and like the original announcement of the missing email messages itself — this news was released on the Friday afternoon before Thanksgiving.
“The U.S. Treasury Inspector General for Tax Administration (TIGTA) informed congressional staffers from several committees on Friday that the emails were found among hundreds of ‘disaster recovery tapes’ that were used to back up the IRS email system,” reports the Washington Examiner. As many as 30,000 email messages could be found.
Finding them might take a while, and technical details of exactly what’s going on are sketchy. Most of the coverage is in the mainstream or right-wing media, which isn’t necessarily all that tech-savvy to begin with. Moreover, they’re also quoting Congressmen and their staffs, who aren’t exactly technical experts either. And while there may be technical people explaining more detail in comments on the stories, finding those comments among the hundreds railing about “libtards” and “Obummer” and “Benghazi” is more difficult than finding Lerner’s messages on the tapes themselves.
So here’s what’s happened since the original story in June.
In August, a representative from a watchdog organization called, appropriately, Judicial Watch told Fox News that it had heard from a Justice Department official that there were backup tapes “in case something terrible happened in Washington” and that Lerner’s email messages might be on those tapes. Congressional representatives wrote to the IRS in September asking about those. But court documents filed in October said there was no such thing beyond the standard disaster recovery tapes that were overwritten every six months, although it did agree there were server backups, which were being examined by TIGTA.
(Lerner also had a Blackberry that was replaced in February 2012, and while Judicial Watch felt that some of the email messages might be on that older Blackberry, it had been destroyed when it was replaced.)
Now, apparently some backups have been found. Where exactly these tapes came from is not clear. Are they different from the tapes that are supposedly recycled every six months? If so, where did they come from? Or did that recycling not occur? If not, why not?
Wherever the tapes themselves came from, here are some of the problems in finding the missing messages.
- The 30,000 email messages are scattered among 250 million email messages on 744 disaster recovery tapes, according to the Washington Examiner.
- Moreover, finding the actual messages could take a while because it could take weeks to learn their content “because they are encoded,” according to Fox News, quoting Frederick Hill, a spokesman for Republicans on the Oversight committee. Does “encoded” mean “encrypted”? Or is this simply referring to the encoding the email messages have to work with the email program?
- Before the messages can be released, any personally identifiable information in them about individual taxpayers has to be redacted.
- Even when the messages are tracked down, investigators may find that they’re simply duplicates of the 24,000 messages already located by other means, such as by getting copies from the people with whom Lerner had exchanged email, reports The Hill.
Ironically, what might have saved the messages was budget cuts. The Washington Examiner reported in September that some 760 “exchange servers”[sic; do they mean Microsoft Exchange email servers?] — which were supposed to have been destroyed two years previously — might have been spared due to budgetary constraints. It isn’t clear whether these tapes come from those servers, or if the examination of those servers is complete; there could be further revelations forthcoming.
Anyone who’s had a hard drive fail just as they were about to back it up (honest!) will understand how much we’d all like to know when our hard disks are about to fail.
Some time ago (between 1995 and 2004, depending on how you count), a standard was developed called Self-Monitoring, Analysis and Reporting Technology (SMART, get it?) that was intended to help with this problem.
Unfortunately, like many other technologies, its user experience was not the best. SMART defines — and measures, for those vendors that support it — more than 70 characteristics of a particular disk drive. But while it’s great to know how many High Fly Writes or Free Fall Events a disk has undergone, these figures aren’t necessarily useful in any real sense of being able to predict a hard drive failure.
Part of this is because of the typical problem with standards: Just because two vendors implement a standard, it doesn’t mean they’ve implemented it in the same way. So the way Seagate counts something might not be the same way as Hitachi counts something. In addition, vendors might not implement all of the standard. Finally, in some cases, even the standard itself is…unclear, as with Disk Shift, or the distance the disk has shifted relative to the spindle (usually due to shock or temperature), where Wikipedia notes, “Unit of measure is unknown.”
That’s not going to be helpful if, for example, one vendor is measuring it in microns and one in centimeters.
There have been various attempts at dealing with this problem of figuring out which of these statistics are actually useful. One in particular was a paper presented at 2007 Usenix by three Google engineers, “Failure Trends in a Large Disk Drive Population.” What was interesting about Google is that it used enough hard drives to be able to develop some useful correlations between these 70-odd (and some of them are very odd) measurements and actual failure.
Now there’s sort of an update to that paper, but it uses littler words and is generally more accessible to people. It’s put out by Brian Beach, an engineer at BackBlaze; we’ve written about them before. Like Google, their insights into commodity hard disk drives are useful, simply because they use so darn many of them.
What BackBlaze has done this time is look at all of its drives that have failed, pull their SMART statistics, and correlate the two. The company also looked at how different vendors measure these different statistics, so it has a good idea of which statistics are relatively consistent across vendors. This gives us a better idea of which statistics we should actually be paying attention to.
As it turns out, there’s really just one: SMART 187 – Reported_Uncorrectable_Errors.
“Number 187 reports the number of reads that could not be corrected using hardware [Error Correcting Code] ECC,” BackBlaze explains. “Drives with 0 uncorrectable errors hardly ever fail. Once SMART 187 goes above 0, we schedule the drive for replacement.”
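That replace-at-nonzero rule is simple enough to automate. Here’s a minimal sketch that assumes the plain-text attribute table printed by smartmontools’ `smartctl -A`; the sample output below is illustrative, not real drive data.

```python
# Sketch: flag a drive for replacement when SMART attribute 187
# (Reported_Uncorrectable_Errors) is nonzero, per BackBlaze's rule of thumb.
# Assumes the attribute-table format printed by `smartctl -A` (smartmontools);
# the sample output here is illustrative, not captured from a real drive.

def uncorrectable_errors(smartctl_output: str) -> int:
    """Return the raw value of SMART attribute 187, or 0 if it isn't reported."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric ID; the raw value is the last column.
        if fields and fields[0] == "187":
            return int(fields[-1])
    return 0


def should_replace(smartctl_output: str) -> bool:
    """BackBlaze's rule: schedule replacement once SMART 187 goes above 0."""
    return uncorrectable_errors(smartctl_output) > 0


sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   099   099   000    Old_age   Always       -       3
"""

print(should_replace(sample))  # True: three uncorrectable reads logged
```

In practice you’d feed this the output of something like `smartctl -A /dev/sda` run via subprocess (or use a wrapper library); the parsing above is just the stdlib-only core of the check.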
Interestingly, this particular statistic isn’t even mentioned in the Google paper, nor is it called out in the Wikipedia entry for SMART as being a potential indicator of imminent electromechanical failure.
BackBlaze also discusses its results with several other statistics, and explains why it doesn’t find them useful. Finally, for the statistics wonks among you, the company also published a complete list of SMART results among its 40,000 disk drives. (And for some, that’s still not enough; in the comments section, people are asking BackBlaze to release the raw data in spreadsheet form.)
In addition to giving us one useful stat to look at rather than 70 un-useful ones, this research will hopefully encourage hardware vendors to work together to report their statistics more meaningfully, and for software vendors to develop better, more useful tools to interpret the statistics.
Disclaimer: I am a BackBlaze customer.
While courts are still arguing back and forth about whether people can be compelled to give up the encryption key for their laptops and other devices, it looks like they may have decided that it’s okay to force you to use your fingerprint to unlock smartphones with that capability.
Judge Steven C. Frucci, of Virginia Beach, Va., ruled that David Baust, who was charged in February with trying to strangle his girlfriend, had to give up his fingerprint so prosecutors could check whether his cellphone had video of the incident.
The distinction that courts draw in general is that a physical thing, like a key to a lockbox, is not protected by the Fifth Amendment. But the “expression of the contents of an individual’s mind,” such as the combination to a safe, is protected. Courts have been debating for a couple of years now about whether an encryption key is something you have or something you know. A fingerprint, however, is something you have, similar to the way that you can be compelled to give up a blood sample to test for alcohol, ruled the judge.
Phones that include fingerprint detectors include the Apple iPhone 5S and the Samsung Galaxy S5, according to the Wall Street Journal. In fact, when phones with fingerprint capability came out last year, organizations such as the Electronic Frontier Foundation and other legal experts warned that this could happen. “It isn’t hard to imagine police also forcing a suspect to put his thumb on his iPhone to take a look inside,” Brian Hayden Pascal, a research fellow at the University of California Hastings Law School’s Institute for Innovation Law, told the Journal last fall. Ironically, fingerprint scanners were supposed to make the phones more secure.
This also fits in with recent moves from companies such as Apple to make encryption the default on smartphones so the companies can’t be compelled to reveal information on the phones. If the phone is protected only by a fingerprint, then police could use the fingerprint to decrypt data on the phone. “One of the major selling points for the recent generation of smartphones has been that many of them don’t save their data in a way accessible to anyone without the phone itself,” writes Eric Hal Schwartz in In the Capital. “It’s something that has annoyed law enforcement like FBI director James Comey, but it chips away at some of that much-touted privacy if police can get into a phone with your fingerprint without your permission.”
Actually, Frucci made a distinction between Baust giving up his fingerprint, which he could be forced to do, and giving up a password for the phone, which the judge said he could not be forced to do. In other words, if the smartphone was protected by both a fingerprint and a password — for example, if the phone had been turned off — prosecutors would still be out of luck. If you’re concerned about this, some people are recommending turning off your phone when police approach, or deliberately failing the fingerprint unlock several times, to force the phone to require a password.
With this being the centennial of the start of World War I, and with what’s going on in the storage industry lately, it isn’t surprising if you’re also being reminded of the decline and fall of the Ottoman Empire.
Well, okay. Maybe only if you’re a history buff.
In case you were dozing in the back row during world history class in tenth grade (or, if, like me, your history teacher was actually a repurposed Latin teacher and you spent all but the last two weeks of the school year on Greece and Rome, meaning you covered a millennium a day those last two weeks), the Ottoman Empire lasted in one way, shape, or form for more than 500 years. It spanned three continents — Europe, Asia, and Africa — and contained 29 provinces and many other states. But it fell during World War I, and nations such as Britain and France carved up the pieces willy-nilly into ways that made sense to them, without paying much attention to cultural boundaries or what the people in those states might actually want to do. (In fact, some of the current conflict in the Middle East dates directly back to those actions. But I digress.)
Any of this ringing a bell yet?
So at this point, in the storage and e-discovery industry that this blog covers, we have not one but three Ottoman empires potentially in the process of dissolving, with a bunch of people on both the outside and the inside watching and speculating about how the pieces might all eventually fit together.
We’ve already talked about EMC, which is under pressure from shareholders to break itself up so the pieces can be worth more — a case of the whole being worth less than the sum of the parts. It isn’t clear yet exactly what’s going to happen with EMC, though there’s been plenty of speculation. (To further complicate things, EMC and Cisco are breaking up their partnership, which resulted in the converged-infrastructure joint venture VCE, with EMC taking control of it. More pieces to juggle.)
In the meantime, both HP and Symantec have announced their intentions to split in two. HP’s pieces are going to be one for its printer and PC business, and one for its corporate computer hardware and software business. Symantec’s pieces are going to be one for its security management products and one for its information management products.
And while the Britains and the Frances of the computer industry are arguing over the bigger pieces and how they will best fit together, other people — especially in e-discovery — are talking about some of the other pieces that haven’t gotten as much love lately and how this could all work out for them.
The HP split, for example, could result in new support for Autonomy, which HP bought for what everyone — including HP — agrees was way too much money. Not only was it not great for HP, but it hasn’t been too great for the Autonomy people either, who are kinda HP’s red-headed stepchildren.
The HP split, in fact, is “probably good news for long-suffering customers of the former Autonomy products,” writes Tony Byrne of Real Story Group. “You know why? Because things couldn’t get much worse for them.”
Meanwhile, Gartner pointed out this summer in its e-discovery Magic Quadrant that although it still positioned Symantec in the Leaders quadrant, its Clearwell product — one of the first big acquisitions in the 2011 e-discovery land grab — had languished under Symantec’s control. Or, as Gartner puts it, “The innovation pipeline for the eDiscovery Platform has slowed during Symantec’s acquisition and integration of Clearwell Systems, resulting in the product’s lack of growth and new releases.”
(Keep in mind that Autonomy and Clearwell had both individually been listed in the Leaders quadrant in the original 2011 e-discovery Magic Quadrant. Almost makes you wish that some company that really had a great vision for e-discovery would buy both pieces, integrate them, and really do it right.)
At the same time, some people are looking at some of the less-loved, neglected pieces of EMC, such as Documentum, and thinking that maybe there’s some way these could get involved, too.
“[Documentum] doesn’t seem to play a role in EMC’s survival,” writes Virginia Backaitis in CMSWire, before going on to suggest that HP buy it and integrate it with Autonomy. “In EMC’s quarterly call with investors last week, neither EMC CEO Joe Tucci nor his lieutenants (David Goulden, CEO of EMC Information Infrastructure and CFO Zane Rowe) uttered the name of its spawn at all.”
It remains to be seen how the various pieces of all three companies will combine (hopefully not in some e-discovery version of Iraq, with different factions battling for control). If nothing else, it could mean that next year’s Gartner e-discovery Magic Quadrant, which has been pretty much a snore the last couple of years, has the potential to be a lot more interesting.
Periodically, people take the new capacity of storage media — not to mention the increasing size of motor vehicles — and use it to recalculate that lovely statistic, “what is the bandwidth of a station wagon full of tapes speeding down the highway?” So now we have a new one — how much data goes in and out of major cities, especially via public transit?
We now have that data courtesy of Mozy, a cloud backup service that describes itself as the “most trusted.” (Exactly how they figured out it was the “most trusted,” they don’t say.) According to the company, when you add up laptops, smartphones, personal hard drives, thumb drives, and so on, you end up with a pretty horrendous amount of data leaving the office every day:
- The average commuter takes 470GB of company data home with them at the end of every day — 2,500 times the amount of data they’ll move across the Internet in the same timeframe
- Every day, 1.4 exabytes of data moves through New York City alone – that’s more data than will cross the entire Internet in a day
- As much as 33.5PB of data will travel over the Oakland Bay Bridge every day
- As much as 49PB of data will travel through the Lincoln Tunnel each day
- Up to 328PB of data travels in the London Tube network every day
- Up to 69PB of data leaves Munich’s Hauptbahnhof on a daily basis
- The Paris Metro carries as much as 138PB of data every day
(There are also some really cool maps showing where the data is coming from.)
There is, however, one flaw in the Mozy description, which is that it refers to this phenomenon as a “data drain.” That’s not really accurate. A “brain drain,” for example, typically refers to people leaving an area. Their brains are therefore gone from the area. But this data isn’t actually leaving the area, in the context of it being gone. Instead, the data is copied. This leads to its own issues, such as version control, security, and simply taking up much more storage space than is really required. (Good thing storage is so cheap these days, amirite?)
And certainly one could quibble with the figure. Mozy doesn’t explain the methodology, but presumably it’s adding up the storage in each of the devices that people carry back and forth. And who knows, really, how much of it is actually corporate data, and how much of it is cat pictures? That said, it’s certainly a fun back-of-the-envelope statistic to calculate.
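Mozy’s arithmetic is easy enough to reproduce. Here’s a rough reconstruction of the presumable methodology; the per-car capacity and city ridership used below are our own illustrative assumptions, not Mozy’s inputs (only the 470GB-per-commuter average comes from the company).

```python
# Rough reconstruction of Mozy-style "data in transit" arithmetic.
# Only GB_PER_COMMUTER comes from Mozy; the ridership numbers are
# illustrative assumptions, since Mozy hasn't published its methodology.

GB_PER_COMMUTER = 470      # Mozy's stated average per commuter
SUBWAY_CAR_CAPACITY = 250  # assumed riders in one packed rush-hour car


def data_per_car_tb(riders: int = SUBWAY_CAR_CAPACITY) -> float:
    """Terabytes of data riding in one full subway car."""
    return riders * GB_PER_COMMUTER / 1000


def city_total_pb(daily_riders: int) -> float:
    """Petabytes carried through a transit system, given daily ridership."""
    return daily_riders * GB_PER_COMMUTER / 1_000_000


# One packed car lands near Mozy's "over 100TB" subway-heist figure:
print(f"{data_per_car_tb():.1f} TB per car")         # 117.5 TB

# A system moving an assumed 3 million riders a day comes out around
# the 1.4 exabytes Mozy cites for New York:
print(f"{city_total_pb(3_000_000):.0f} PB per day")  # 1410 PB
```

In other words, the headline numbers are consistent with simply multiplying average device storage by ridership — which is also why the cat-pictures caveat matters.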
Anyway, it’s the security issue that is particularly catching Mozy’s interest. “With 41.33 percent of people having lost a device that stores data in the past 12 months, huge amounts of business data is put at risk every rush hour,” the company writes. “There isn’t a CIO we know who would risk sending massive volumes of data over the internet without protecting it first.”
Well, we have to say, Mozy must not know very many CIOs. That aside, the company has a point: with all the evidence we have of companies and governments behaving badly with personally identifiable data, there’s an awful lot of data at risk every day.
“A thief holding up a New York subway car at rush-hour capacity could walk away with over 100TB of data,” the company notes. (Which actually sounds like an interesting premise for a movie. Starring Denzel Washington? Jeff Goldblum? Sandra Bullock?)
This commuting data is vulnerable in two ways, Mozy notes. First, bad guys could get access to the data. Second, the person with whom the data is riding could lose access to the data, if that data is the only copy. “It’s also the most-critical data; the edits to the contract that we’ve just worked through in today’s meeting, the presentation that we’re giving tomorrow morning, the tax forms that you’re halfway through filling in,” the company writes. “Losing this data can have an immediate impact on a company’s success.”
Mozy, however, doesn’t go far enough. Let’s go to the root cause: Why are people taking so much data home with them? And if this is something we don’t want to have happen, what is the alternative? There’s already been any amount of hand-wringing over the notion of people setting up Dropbox and similar accounts to make copies of corporate data. Is carrying the data on a device more or less secure, or desirable, than saving it to a public cloud service?
Either way, device or cloud, it boils down to the same issue: People are making copies of the corporate data, by and large, because they feel they need to do that to do their jobs. So either there isn’t a reliable way for them to gain access to the corporate data they need any other way, or, if there is, they don’t know about it.
The point being, if people feel they have to do this to do their jobs, then you need to give them a better way. Simply issuing an edict that Thou Shalt Not is not going to work, even if you put teeth in it. Because, ultimately, they’re not as afraid of you as they are of their boss.
Depending on whom you ask, either everything is fine or we should be locking up the country to protect us all from Ebola. And frankly, at this point we don’t really know for sure. But there is something you can do: Make sure your company is prepared.
Whether it’s Ebola, enterovirus 68, or the flu, it’s possible for an illness to affect your company and the world it operates in. In a sense, preparing for a pandemic of any sort is no different from preparing for any other natural disaster, whether it’s a flood, hurricane, or tornado. (Except that a pandemic can last for weeks or months, while natural disasters are typically over in a few days.)
So, think about the same sorts of preparations you’d make for, say, a blizzard, and adapt them a bit.
- Make sure staff can work from home, whether it’s because they can’t use the roads or because there’s a quarantine. Do they have the access they need? Computer equipment? Passwords? Is there a virtual private network set up to help protect the company when people are dialing in over a public network? Check with everyone and get these things set up now to make sure they’ll be available at a moment’s notice should you need them. And have everyone test their systems periodically to make sure they can still get online.
- Make sure that there isn’t any single point of failure in the daily processes. Are there passwords or procedures that only one person knows? What happens if that person gets sick or can’t make it into the office, for whatever reason? The disaster recovery manual — you have one, right? — also needs to be accessible from a remote location.
- While there’s not really an Ebola vaccine yet, there are vaccines for other illnesses with the potential to become pandemics, such as the flu. To ensure that employees get vaccinated, arrange to have someone from the health department come in to administer vaccines — and if necessary, have the company pay for it, to ensure that everyone gets vaccinated. The cost is minimal compared with the cost of the lost productivity if employees get sick.
- It never hurts to stock up on hand sanitizer and alcohol wipes, and talk with staff — including the custodial staff — about how to keep from spreading germs. And while you’re at it, make sure people know that they should stay home if they or a family member is sick. If your company doesn’t currently offer paid sick leave, it might be a good time to add it. Again, think you can’t afford it? How well could you afford having half the office sick?
Hopefully, none of these plans will be needed. But in case they are, you’ll want to be ready, and in the meantime, it’ll give you something practical to do that could be useful sometime. As the saying goes, prevention is the best medicine.
Now that Apple and Google have announced that they will incorporate encryption in smartphones by default, the question is how long law-abiding Americans will be allowed to continue to have encryption at all.
In case you missed it, Apple announced on September 17 that future editions of the iPhone would have encryption turned on by default in a way that no longer allows Apple to have access to encrypted data. “Apple’s new move is interesting and important because it’s an example of a company saying they no longer want to be in the surveillance business — a business they likely never intended to be in and want to get out of as quickly as possible,” writes Chris Soghoian, Principal Technologist and Senior Policy Analyst for the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “Rather than telling the government that they no longer want to help the government, they re-architected iOS so they are unable to help the government.” The following day, Google announced that future versions of the Android smartphone operating system would have encryption turned on by default as well.
Predictably, the FBI and law enforcement had kittens. “The notion that someone would market a closet that could never be opened – even if it involves a case involving a child kidnapper and a court order – to me does not make any sense,” said FBI director James Comey. He also went on to invoke the notion of the terrorism that could surely befall the U.S. when this happens. “Two big tech providers are essentially creating sanctuary for people who are going to do harm,” agreed Ron Hosko, a former assistant director of the FBI’s criminal investigative division, speaking to Marketplace. And pulling out the big guns, “Apple will become the phone of choice for the pedophile,” John J. Escalante, chief of detectives for Chicago’s police department, told the Washington Post. “The average pedophile at this point is probably thinking, I’ve got to get an Apple phone.”
Yep, terrorism, kidnapping, and pedophiles. They got the trifecta.
Executives at Apple and Google told the New York Times that the government had essentially brought this on itself with the Edward Snowden revelations, and that the perception that the U.S. government had its fingers in everything was making it increasingly difficult for American companies to compete overseas.
“The head of the FBI and his fellow fear-mongers are still much more concerned with making sure they retain control over your privacy, rather than protecting everyone’s cybersecurity,” writes Trevor Timm in the U.K. paper The Guardian, after offering a line-by-line critique of Comey’s statement. Security experts pointed out that the government still had many other options by which it could legally request access to people’s electronic data, that the FBI didn’t cite any examples of cases where encryption would have prevented it from solving a case, and that one case cited by Hosko turned out to be irrelevant.
So the next question becomes, at what point might the federal government attempt to outlaw encryption again? Or mandate a back door?
In case you were born sometime after MTV, at one point anything more powerful than 40-bit encryption was actually classified as a munition — you know, like bombs and missiles — and illegal to export from the U.S., and not all that easy to get hold of even inside the U.S. In fact, a guy named Philip Zimmermann got himself into a peck of trouble when he developed Pretty Good Privacy (PGP), intended to be Everyman’s data encryption. While it was fine inside the U.S., copies of it surfaced internationally, and for several years it looked like Zimmermann might face charges, which made him a cause célèbre in the computer community.
In 1993, the Bill Clinton White House went further and proposed the Clipper Chip, an encryption system that included a back door so that law enforcement organizations could still read any data encrypted by the device. Which, of course, they’d only use if you were a bad guy. But by 1996, partly due to the enormous wave of protest against the notion — and partly due to technical issues, such as bugs that were found in it (by a guy named Matt Blaze, who’s still around these days, commenting on the Apple/Google encryption flap) — the government had dropped the project. At the same time, the Clinton White House relaxed the export rules on encryption stronger than 40 bits.
These days, encryption is readily available, but generally you have to know about it and how to turn it on. What Apple and Google are doing is selling devices with it already turned on — and, in response to the increasing number of government requests for user data, they will no longer even have access to the user’s data.
(This does, of course, mean that if you lose your encryption key, you’re hosed.)
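That all-or-nothing property is easy to demonstrate. The sketch below is a deliberately simplified toy cipher in Python (real device encryption uses AES with hardware-bound keys, not this), meant only to illustrate that with the exact key the data comes back, and with anything else it stays garbage; all keys and data here are made up.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a toy keystream by chaining SHA-256 over the key (illustration only, not secure)."""
    out = b""
    block = key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; the same function decrypts, since XOR is its own inverse.
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

secret = b"tax records and baby photos"
ciphertext = toy_encrypt(b"correct horse battery staple", secret)

# With the right key, the data comes back intact...
assert toy_encrypt(b"correct horse battery staple", ciphertext) == secret
# ...with any other key, you get unrecoverable noise.
assert toy_encrypt(b"wrong key", ciphertext) != secret
```

There is no third party holding a spare copy of the keystream, which is exactly the position Apple put itself in: no key, no plaintext, nothing to hand over.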
So what other effects can we expect from the Apple/Google decision?
- Courts are still trying to figure out whether an encryption key is like a key or a combination to a safe — something you have or something you know — so they can decide whether you have to give it up. So law enforcement organizations are still taking people to court to force them to reveal encryption keys, and sometimes they win. Conceivably, with encryption turned on by default, this could happen a lot more often.
- Having encryption be the default could also eliminate the “why are you encrypting it if you don’t have anything to hide?” presumption.
- Of course, bad guys have pretty much figured encryption out, even when it’s not the default. To be blunt, Apple and Google’s actions simply mean that regular people will have the same capabilities as the bad guys. And in an era where companies and individuals alike are regularly losing laptops, disk drives, and smartphones with personal data in them — and then getting fined for losing the data and having it not encrypted in the first place — having encryption as the default simply makes sense.
So what would happen if the government were to outlaw encryption, or mandate a back door? The Electronic Frontier Foundation, which lives for this sort of thing, has a nice long list of possible repercussions.
Realistically, though, would outlawing encryption even be practical in this day and age? Look at it this way — if encryption were made illegal, all the personal data on all the devices that get lost or stolen would then be accessible. It would make the Target incident look like a picnic. (The ACLU’s Soghoian pointed out that in 2011, the FBI was encouraging people to encrypt their data to keep it out of the hands of criminals.)
And everyone who believes that bad guys wouldn’t continue to be able to use encryption or would have to have a back door to their communications, please poke out your right eye. In the same way that bad guys continue to get access to illegal firearms today, bad guys would still get access to encryption, one way or another. Sorry, FBI, but that genie is out of the bottle.
It isn’t clear whether the government is going to try to outlaw encryption again, or try to mandate a law enforcement back door. There is some talk about Congress enacting a law, but due to the Edward Snowden revelations, few Congressional representatives want to touch it, according to Bloomberg Businessweek. Still, it’s something we need to watch out for — though it looks like computer vendors are increasingly unlikely to help, according to Iain Thomson in the Register. “It’s unlikely Apple or Google is going to bow down to the wishes of government and install backdoors in their own products,” he writes. “This would be disastrous to sales if found out, and there are increasing signs that the tech sector is gearing up for a fight over the issue.”
You may recall that an activist investor called Elliott Management Corp. bought a $1 billion chunk of EMC earlier this summer — about two percent, big enough that it could start throwing its weight around and make suggestions about how EMC could make even more money for it, such as by selling VMware. EMC CEO and chairman Joe Tucci, who’s scheduled to retire in February for the nth time, said he’d meet with the company but wouldn’t make any promises.
Now it’s starting to look as though the EMC we all know and love may not survive Tucci’s departure.
As nearly everyone suspected, it all started with Tucci saying he wouldn’t sell VMware, of which EMC owns 80 percent. Why exactly he’s so enamored of the virtualization company isn’t clear, other than the fact that it accounts for up to 75 percent of EMC’s value these days.
Instead, it’s looking like Tucci would rather sell EMC itself — or, at least, break it up and sell some of the parts. In the process, the storage company is being partnered (or, as my teenage daughter would say, “shipped,” as in “relationship”) with just about every other major computer vendor out there. And some of them are willing and some of them aren’t.
Cisco. While there was talk about a potential merger with Cisco, Cisco Chief Executive Officer John Chambers himself said there was nothing to it, and if there had been anything, it would have been a year ago.
HP. Because, after all, HP acquisitions always go so well. Look at Autonomy. And Compaq. Anyway, reportedly the notion of a “merger of equals” with HP went pretty far, with the notion that HP’s Meg Whitman would be CEO and Tucci would be chair, “but there were disagreements over price and the next layer of management,” according to Bloomberg. Incidentally, according to the Wall Street Journal, EMC and HP had been talking for the past year. The merged companies would be bigger than IBM, notes Business Insider. But it isn’t clear what HP would gain from the deal, according to Mike Wheatley of Silicon Angle. “Its storage organization is unlikely to relish the prospect of taking on EMC’s people and products, and EMC’s VNX and VMAX arrays compete with HP’s 3PAR products,” he writes. “In addition, HP’s various object, backup and virtual SAN products all overlap with equivalents from EMC.”
Dell. In addition to HP, EMC had also been talking to Dell about buying at least part of the company, the Journal added. But sites such as Silicon Angle are dubious due to the disparity in size between the two companies, as well as overlap between their product lines. “For example, Dell’s Compellent and EqualLogic storage arrays compete with EMC’s VNX line,” Wheatley writes.
Oracle. Forbes‘ Peter Cohan is pushing Oracle, noting that the company would likely pay more than HP and that Oracle doesn’t have much of a storage presence. This was actually rumored in 2010. But some other analysts now find it unlikely.
IBM or SAP. These companies were also mentioned by Forbes, but IBM probably doesn’t have the budget for it and SAP probably wouldn’t make such a big non-European acquisition, writes Sarah Cohen.
All of this raises the question of to what degree massive storage array companies like EMC even have a future in an era of cloud computing. Yes, mainframe company IBM survived the PC era, but most of the other mainframe companies didn’t, and it’s speculated that EMC might have similar troubles surviving the cloud era. “The question is why anyone would want EMC gear at all,” Jon Toigo, of IT consultancy Toigo Partners, told Silicon Angle. “With VMware and Microsoft pushing server-side direct attached storage topologies to replace centralized SAN storage, I think of the monolithic storage products of EMC as niche players in a shrinking market.”
In fact, Arik Hesseldahl of Re/code thinks that EMC has already run out of potential suitors, and nothing is going to happen at this point. Bloomberg thinks that EMC should be acquiring more companies, and helpfully provided a laundry list.
In any event, EMC stock has been hitting new one-year highs, which has to make Elliott happy.
EMC has traditionally been a fairly drama-free company, but it looks like it’s making up for it now.
It’s said that one of the major advantages of cloud storage is that you can add storage on the fly as you need it. But what if it went further? What if cloud storage were even more disintermediated than it is today?
An electric utility, for example, typically offers power generated several different ways, with each way having a particular cost associated with generating it and a certain amount of time it takes to crank it up and shut it down again. If the utility suddenly needs a lot of power, it can buy power – typically more expensive – on the spot market, but it ensures that the utility doesn’t have a brownout or a blackout. Having a utility also means that individuals and businesses don’t have to worry about generating their own power and making sure they have enough when they need it; they just trust the utility to provide it.
So what if cloud storage worked like that? What if you just used it as you needed it rather than buying it and holding it, and contracted with a provider who might get it from Amazon, Microsoft, or some other vendor at any moment, depending on who had capacity available at a particular time?
That’s a notion being discussed in connection with a couple of recent announcements. First is the announcement by Accellion that its kiteworks content connectors would now include Box and Dropbox, as well as Google Drive and Microsoft OneDrive. Kiteworks reportedly provides the management capabilities, such as presenting the same interface regardless of which cloud service is being used, as well as security settings and access control. Second is the announcement that cloud company 6Fusion had signed a deal with the Chicago Mercantile Exchange, after originally signing a non-binding agreement last fall, and is expected to offer a beta product later this year.
“If all works out, the deal will mean that buyers and sellers of cloud computing services can do business on a spot exchange and, in a few years, trade derivatives too,” writes Jeff John Roberts at GigaOm. “The exchange will be a place to buy hours of ‘WAC,’ a term invented by 6Fusion that stands for Workload Allocation Cube. The idea behind the WAC is to create a standard unit of cloud computing infrastructure that can be bought and sold by the thousands.”
Basically, 6Fusion hopes that the WAC will become the cloud computing equivalent of the watt of power or the barrel of oil, Roberts writes. That is, of course, predicated on whether the rest of the industry accepts it, he warns.
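The core mechanic of such a spot market is simple: each purchase is filled by whichever provider quotes the lowest price per standard unit at that moment. Here is a minimal Python sketch of that idea; the provider names and prices are entirely hypothetical, and a real WAC bundles multiple infrastructure metrics rather than a single storage price.

```python
def cheapest_quote(quotes):
    """Return (provider, price) for the lowest current spot price.

    `quotes` maps provider name -> price per unit (e.g. dollars per TB-hour).
    """
    provider = min(quotes, key=quotes.get)
    return provider, quotes[provider]

# Hypothetical spot prices at two moments in time.
morning = {"Amazon": 0.052, "Microsoft": 0.048, "OtherVendor": 0.061}
evening = {"Amazon": 0.045, "Microsoft": 0.050, "OtherVendor": 0.061}

# The broker routes each purchase to a different vendor as prices move.
print(cheapest_quote(morning))  # → ('Microsoft', 0.048)
print(cheapest_quote(evening))  # → ('Amazon', 0.045)
```

The buyer never chooses a vendor directly — the broker arbitrages on their behalf, which is exactly what makes the underlying unit tradeable like a commodity.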
Eventually, users and organizations might not even know what company is providing their storage, in the same way that we aren’t typically aware of whether our power is coming from hydroelectric, natural gas, coal, solar, or other sources – which will be easier with developments like Accellion’s, which provide the same interface to multiple providers’ storage. Moreover, because 6Fusion is working with the commodities markets, people could invest in “storage futures,” in the same way that they buy pork bellies now.
“The IT infrastructure of a company like JP Morgan could soon consist of private cloud servers for sensitive data, supplemented by public cloud supplies purchased from an ever-changing roster of third party cloud computing providers,” Roberts continues. “At the same time, such purchases of cloud computing ‘by the bushel’ would also mean lower prices as traders, rather than vendors, start to set the price of key ingredients of IT infrastructure.”
The concept isn’t exactly new; Forbes points out that such “spot markets” have been discussed and tried before, and failed. Even 6Fusion itself had been talking about the notion publicly since last spring.
Just think – if they expand it to compute power itself, they could reinvent timesharing.