Government officials have been using recent terrorist attacks to try to justify limiting the use of encryption. You may recall, for example, that the Federal Bureau of Investigation (FBI) recently attempted to force Apple to develop a different version of the iPhone operating system to make it easier for the agency to break into encrypted phones thought to be owned by the perpetrators in last December’s San Bernardino attack.
Similarly, legislators in states such as California and New York have introduced bills that would outlaw the sale of cell phones with unbreakable encryption, while agencies such as the FBI have been recommending a mandated “back door” into encrypted phones for law enforcement.
These efforts have been continuing even though there’s been very little indication that terrorists are actually using encryption. For example, the FBI used last fall’s terrorist attacks in Paris to justify their long-held position that governments should mandate a “back door” into encryption, even though there’s no evidence the attackers used encryption — and, in fact, quite a lot of evidence that they didn’t.
One of the most recent incidents was the March bombings in Brussels. Rep. Adam Schiff, the California representative who’s the top-ranking Democrat on the House Intelligence Committee, suggested the same day they occurred that encryption might have been involved, writes Cory Bennett in The Hill.
Since then, law enforcement has been studying the laptop of one of the suicide bombers, Brahim El-Bakraoui, who blew himself up at Brussels airport, writes Lucy Clarke-Billings in Newsweek. “The bomber referred to striking Britain, the La Défense business district in Paris, and the ultra-conservative Catholic organization, Civitas, in a folder titled ‘Target,’ written in English, according to the source,” Clarke-Billings writes. “The laptop was found in the trash by police in Brussels shortly after the suicide bombings on March 22 that killed 32 people at the city’s airport and on a Metro train.”
So let’s get this straight. The data was not only unencrypted, but in English. And on top of that, it was located in a file folder. Labeled TARGET.
That’s right up there with Jurassic Park’s “It’s a Unix system! I know this!”
Security experts who are following the incidents believe there’s no indication that terrorist organizations have some sort of overarching encryption plan. “The clear takeaway from this list is that: 1) ISIS doesn’t use very much encryption, 2) ISIS is inconsistent in their tradecraft,” writes an information security researcher known as “the grugq” in Medium. “There is no sign of evolutionary progress, rather it seems more slapdash and haphazard. People use what they feel like using and whatever is convenient.”
The laptop discovery fits in with what appears to have been the strategy used thus far, writes Quartz. “ISIL’s strategy in last year’s Paris attacks and others was simple: avoid trackable electronic communications like email and messaging apps in favor of in-person meetings and disposable devices, or ‘burner phones,’ that are quickly activated, used briefly, and then dumped,” the organization writes. “Communications from the Paris attacks were reportedly (paywall) largely unencrypted, and investigators have found much of their intelligence through informants, wiretaps, and device-tracking rather than by trying to decipher secret messages. That’s not to say that terrorists won’t use encryption to carry out heinous acts. They will. But encryption is by now a fact of life: your apps, credit cards, web browsers and smartphones run encryption algorithms every day.”
Of course, to some people, the TARGET folder discovery was almost too good to be true. Skeptics on social media have been suggesting that the folder was planted by a group such as the CIA, that the folder was a decoy, and so on.
On the other hand, there doesn’t seem to have been much question about who carried out the Brussels attacks, especially since they were suicide attacks. If the folder was really planted, wouldn’t it have made more sense for the government agency involved to have used some sort of encrypted – though easily breakable – code? That way, the agency could have used it to justify its attempts to outlaw encryption. If the FBI planted the TARGET folder, it missed an opportunity.
It turns out that the reason people keep poking USB sticks into things isn’t necessarily because they’re stupid. It’s because they’re nice.
A recent study by researchers at the University of Illinois found that almost half of the people who picked up USB sticks scattered by the researchers ended up plugging them into their computers. That behavior isn’t new. What is new is the reason – ostensibly, so the people who found them could return them to their owners.
“We dropped nearly 300 USB sticks on the University of Illinois Urbana-Champaign campus and measured who plugged in the drives,” writes Elie Bursztein, one of the researchers, who heads Google’s anti-abuse research team. “We find that users picked up, plugged in, and clicked on files in 48 percent of the drives we dropped,” he writes. “They did so quickly: the first drive was connected in under six minutes.” The full study will be published in May 2016 at the 37th IEEE Security and Privacy Symposium, he adds.
“We dropped five types of drives on the University of Illinois campus: drives labeled ‘exams’ or ‘confidential,’ drives with attached keys, drives with keys & return address label, and generic unlabeled drives,” Bursztein writes. “On each drive, we added files consistent with the drive’s label: private files for the sticks with no label, keys or a return label; business files for the confidential one; and exam files for the exam drives.”
In fact, researchers found that they could make people even more likely to perform this altruistic behavior by personalizing the stick, Bursztein adds. “Attaching physical keys to elicit altruistic behavior was most effective,” he writes. “Keys with an attached return label were the least opened, likely because people had another means to find the owner.”
So what makes all this a problem?
- USB sticks can spread viruses and malware.
- In particular, USB sticks can include code that reprograms any other USB device, including the keyboard.
- And this isn’t just ransomware. We’re talking about malware that can actually set your computer on fire.
For that matter, the researchers’ USB sticks essentially had malware on them. “All the files were actually HTML files with an embedded image on our server,” Bursztein writes. “This allowed us to detect when a drive was connected and a file opened without executing any unexpected code on the user’s computer. When a user opened the HTML file, we asked them if they wanted to opt out or to answer a survey about why they plugged in the drive in exchange of a gift card. 62 users (~20 percent) agreed to respond.”
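The phone-home mechanism Bursztein describes is essentially a tracking pixel. Here is a minimal sketch in Python of how such a file could be generated; the file contents, URL scheme, and drive IDs are all invented for illustration, not taken from the study’s actual code:

```python
# Sketch of the study's phone-home technique: each file on a dropped drive
# is plain HTML whose only payload is an <img> tag pointing at a unique URL
# on a server the researchers control. Opening the file fetches the image,
# logging the drive and a timestamp server-side without executing any code
# on the user's machine.

def make_beacon_html(drive_id: str,
                     server: str = "https://research.example.org") -> str:
    """Build one tracking HTML file for a dropped drive (hypothetical URL)."""
    return (
        "<!DOCTYPE html>\n"
        "<html><body>\n"
        "<h1>Winter break photos</h1>\n"
        f'<img src="{server}/beacon/{drive_id}.png" alt="" width="1" height="1">\n'
        "</body></html>\n"
    )

print(make_beacon_html("drive-042"))
```

Because the file itself contains no executable code, the researchers could measure opens without putting actual malware in circulation.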
And that’s where the being-nice part comes in. “When asked why they plugged in the drive, most survey respondents claimed it was for the altruistic purpose of returning the drive to its owner (68 percent),” Bursztein writes. “Only 18 percent said they were motivated by curiosity.”
Of course, that’s what they said. “The self-reported motivation is not consistent with which files were accessed,” Bursztein notes. “For example for the drives with physical keys attached, users clicked on winter break pictures more often than on the resume file, which would have contact information of the owner. Interestingly the same behavior is observed for the drives with a return label, but not for the drives with no marking.”
The other interesting aspect is how quickly this all happened. “Not only do many people plug in USB devices, they connect them quickly,” writes Bursztein. “20 percent of the connected drives were connected within the first hour and 50 percent were connected within 7 hours.” What makes this a problem is that if such sticks did contain a virus, it could spread before anyone could deal with it. “The windows of time available to detect that this attack is occurring is very short,” he warns. In fact, the first report of the presence of “weird USB keys” on the campus only started to surface on Reddit roughly 24 hours after the first wave – which still didn’t keep people from continuing to plug them in, he writes.
What’s particularly interesting is that this behavior is universal. “We found no difference between the demography, security knowledge and education of the users who plugged USB drives and the general population,” Bursztein notes.
Between this and all the other security flaws inherent in USB devices – even Captain America, ATMs, and the International Space Station are vulnerable — Bursztein actually suggests getting rid of USB sticks altogether. “You can enforce a policy to forbid the use of USB drives,” he writes. “On Windows this can be done by denying users access to the Usbstor.inf file. With the advent of cloud storage and fast internet connections, this policy is not as unreasonable as it was a few years back.”
What’s going to be interesting is what sort of ramifications there are going to be from this research. For example, it sounds like telling people not to be stupid isn’t a very successful strategy, because they don’t think they’re being stupid. “Why being more security savvy is not negatively correlated with being less vulnerable is everyone’s guess,” Bursztein writes. “It raises the question of the effectiveness of security education at preventing breaches.”
Not to mention, what other kinds of things might people be convinced to do because they think they’re being nice?
Amid all the attention paid to Hillary Clinton’s email server, it’s easy to overlook the fact that a number of federal agencies have been looking for ways to delete email messages.
The Central Intelligence Agency (CIA) recently decided that it was dropping a plan it had made in 2014 that would have deleted the email messages of everyone in the agency – other than the top 22 people – within three years of their departure from the agency, “or when no longer needed, whichever is sooner.”
“A representative for the National Archives confirmed to The Hill on Monday that the agency backtracked on its proposal last month, following efforts to reorganize its structure,” writes Julian Hattem.
According to The Hill, the restructuring, announced in October, involved creating a fifth “directorate” at the CIA, Digital Innovation, tasked specifically with cybersecurity issues.
The CIA wasn’t the only agency to come up with a plan to scrub old email messages. The Department of Homeland Security announced a similar plan in November 2014. That proposal covered more than just email messages; it also included surveillance data, so there was some security rationale for deleting it.
The theory behind the deletion was that any important CIA email messages would have been retained in some other way, such as by being sent to or from one of the 22 senior officials, wrote Hattem in The Hill at the time. In addition, the original request had noted that messages from lower-level staff who were implementing programs on behalf of senior staff were also intended to be retained.
The CIA plan was swiftly criticized by a number of transparency organizations and Congressional representatives, including the heads of the Senate Intelligence and Judiciary committees. This led to the National Archives temporarily changing its mind. The CIA has now permanently withdrawn the program. Opponents of the CIA’s proposal also pointed out that the organization had already destroyed records in 2005, such as waterboarding videotapes.
So what’s leading all these agencies to want to delete information? In the case of DHS, it was supposedly due to storage costs, a justification that some found disingenuous given the low cost of storage these days. Overall, though, it stems from a request from the Obama administration intended to help agencies keep track of just the important material.
“The National Archives has been pushing all federal agencies for better management of the avalanche of email they generate daily,” writes David Welna for NPR. “The Obama administration has issued a directive giving those government entities until the end of 2016 to propose policies to winnow out important email, store it electronically, and discard the rest.”
That plan is called Capstone, a program that helps retain email messages without requiring user input. Based on a 2011 White House directive, Capstone is intended to make it easier to find federal government email messages.
For example, the CIA had previously been preserving email messages by printing them out and filing them, wrote Ali Watkins for HuffPost Politics. “The CIA’s current system involves printing and filing away emails that are deemed important, a determination that is left largely to the discretion of individual agency employees,” she writes. “It is not clear what the timeframe is for how long those printed emails and any remaining electronic archives are supposed to be retained, though it appears there is currently no official requirement.”
What’s surprising is that the National Archives approved the plan in the first place. One would think that the Archives, of all places, would feel strongly about preserving records such as email messages, knowing that, in many cases, the value of a particular message might not be recognized until years later.
You may not know it, but if you’re a computer technician, you may have the same obligation to report child pornography as a doctor, day care facility, or film processor.
Utah just passed the Reporting of Child Pornography law, HB155, which puts this into place. It reportedly joins 12 other states, as well as the province of Manitoba in Canada, in passing such legislation; those states include Arkansas, Illinois, Missouri, New Jersey, North Carolina (where there has been at least one such case), Oklahoma, South Carolina, and South Dakota. Oregon and Florida have also at least considered such laws.
Under the Utah law, if computer technicians encounter what they consider to be child pornography in the course of their jobs, they are required to report it to a state or local law enforcement agency, the Cyber Tip Line at the National Center for Missing and Exploited Children, or an employee designated by the employer to receive and forward any such reports to one of the aforementioned agencies, writes the law firm of Fabian Van Cott.
It isn’t clear whether the legislation is also intended to apply to, say, sysadmins or technicians who work on computer storage, but it’s a safe bet. The law also allows technicians to disregard confidentiality agreements with clients, protecting them from being sued by the company or its clients as a result of the report, writes the Daily Universe.
The legislation is derived from similar legislation that had been enacted for film processors in a number of states. Back when there was such a thing as Fotomat, film technicians were also obligated to report it when they encountered child pornography in film left for them to develop.
As with film technicians, computer technicians are specifically told that “baby in the bathtub” pictures shouldn’t count. And technicians are told they aren’t obligated to go hunting for child porn on computers under their purview.
On the other hand, under the Utah law, they could themselves have charges filed against them for failing to report. “Computer technicians could face a $1,000 fine or six months in jail if they don’t report child pornography to police,” warns Fox 13. “Proving that the technician saw something illegal, but didn’t report it would fall on the shoulders of the prosecutor.”
Plus, the bill provides “immunity against civil lawsuits for technicians who report in good faith but make a mistake,” writes the Salt Lake Tribune. The combination certainly seems intended to encourage people to err on the side of reporting.
What’s most surprising, actually, is how many Utah legislators – 13 out of 75 — actually debated and voted against the bill. Critics called its language “fuzzy” and were concerned about the implications.
“The idea of somebody from Geek Squad or somebody who is helping someone else at home with their computer, if you find something that you find offensive or that you think is pornographic, you must report that? That concerns me,” Rep. Johnny Anderson, R-Taylorsville, who owns several child care facilities, told City Weekly. “I don’t want us going down the road of telling someone that if you think your neighbor is doing something wrong, you are required by law to go to the police about it. I don’t want to immediately compare it to Nazi Germany, but it feels that way.”
Some computer technicians were also worried about the ramifications of the bill. Though several of them pointed out that they already report such findings, they were concerned about the legal penalties should prosecutors decide that they had seen the images.
Lawmakers also expressed concerns about the possibility of false accusations, writes the Deseret News. “Rep. Ed Redd, R-Logan, said he could see people being blackmailed and their reputations sabotaged. ‘Anyone of us in this room could be accused of this,’ he said.”
And indeed, at least one computer technician in Missouri has already attempted to blackmail one such client, instead of reporting the files to the authorities.
This wasn’t the only pornography legislation passed by the Utah state government during its most recent session. The state became the first in the country to name pornography a public health crisis through a bill that was passed unanimously by both houses of the Utah Legislature.
The organization behind the bill, the National Center on Sexual Exploitation – which considers the American Library Association a “pro-pornography organization” — is reportedly writing similar legislation for eight states so far. The intention behind the bill is said to be to encourage the federal government to more decisively enforce its anti-obscenity laws.
“This ought to be seen like a public health crisis, like a war, like an infectious fatal epidemic, like a moral plague on the body politic that is maiming the lives of our citizens,” Elder Jeffrey R. Holland, member of the Quorum of the Twelve Apostles of the Church of Jesus Christ of Latter-day Saints, reportedly told some 2700 attendees at the 14th annual conference of the Utah Coalition Against Pornography. “We do need to see this (pornography) like avian flu, cholera, diphtheria or polio.”
Ironically, despite its reputation, Utah has been considered a hotbed of porn use, ranking #1 in the country in 2009 for subscriptions to porn sites, according to one study. While some have criticized the methodology of that statistic, pointing instead to a different one in which Utah ranks 40th, that study, too, has its flaws, as it’s limited to a single site, PornHub.com, and measures “pageviews per capita.” Not to mention, “Mormon porn” is apparently a thing, as are other ways of getting around the porn restriction.
And despite Rep. Steve Eliason, R-Sandy, telling the Salt Lake Tribune that the bill “makes clear that in Utah, we don’t stand for child abuse,” Utah also ranks highly on another statistic: actual sexual abuse of minors, according to the U.S. Department of Health and Human Services’ 2013 report. One study last November found Utah first in the country for child sexual abuse, though one organization claimed this was because Utah had “tougher laws than other states and may pursue child abuse more vigorously.”
With child pornography being one of those sure-fire issues that few people would be caught dead opposing, the computer technician legislation is likely to spread to other states. So be aware.
It sounds like a noble cause: A company, Ambry Genetics, is making a database of information it’s collected about 10,000 people with breast and ovarian cancer freely available, in the hopes that other researchers can use it to help develop preventions and cures for such diseases. But while the company no doubt has great intentions, release of medical data like this can create health data privacy concerns.
“The 10,000 people all have or have had breast or ovarian cancer and were tested by Ambry to see if they have genetic variants that increase the risk of those diseases,” writes Andrew Pollack in the New York Times. “Ambry returned to the samples from those customers and, at its own expense, sequenced their exomes — the roughly 1.5 percent of a person’s genome that contains the recipes for the proteins produced by the body. Since proteins perform most of the functions in the body, sequencing just that part of the genome provides considerable information, and is less expensive than sequencing the entire genome.” The company spent $20 million on the project, he adds.
What makes this whole story particularly poignant is that Ambry founder and CEO Charles Dunlop suffers from cancer himself, which he attributes to a genetic mutation, and recently stepped down as CEO. “I would not be resigning if it weren’t for having stage four prostate cancer, which is now in remission,” he writes. “Cancer sucks. The stress of the job coupled with my gene mutation leaves a high likelihood of bringing the cancer back.”
This isn’t the first time such a database of anonymous medical data has been collected. Icelandic company deCODE is working to develop a database of health data covering as much as two-thirds of the country’s population. Because the Icelandic population is relatively insular, it is a treasure trove for researchers, writes Emma Jane Kirby for the BBC.
“With little significant immigration since the Norsemen first settled here in the 9th Century, Iceland is among the most homogeneous nations on earth,” Kirby writes. “With so little background noise to filter in the small population of just 320,000 people, it’s much easier for scientists to isolate faulty genes than it is in larger multi-ethnic countries such as Britain or the US. Iceland also has a database containing the genealogy of the entire nation dating back 1,100 years.”
The Ambry Genetics database, known as AmbryShare, is nominally anonymous, Pollack writes. “AmbryShare will not contain the actual exome of each person, because that would pose a risk to patient privacy,” he writes. “Rather it will contain aggregated data on the genetic variants. For example, a researcher could look up how frequently a particular mutation occurs among the 10,000 people. Ones that occur frequently in these 10,000 patients, but not among healthy people, could raise the risk of developing those cancers.”
But health data privacy research has shown that “anonymous” medical data isn’t necessarily so, and that individuals can be identified from a remarkably short list of data points. In fact, the combination of gender, birth date, and ZIP code alone is unique for 87 percent of the U.S. population, wrote Seth Schoen for the Electronic Frontier Foundation in 2009.
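Schoen’s point is easy to demonstrate: count how many records in a dataset are unique on that quasi-identifier triple. A toy sketch (the records below are invented):

```python
# Toy illustration of re-identification risk: count how many rows are
# unique on the quasi-identifier triple (gender, birth date, ZIP code).
# A unique row can be linked to a named person via any outside list
# (e.g. a voter roll) that carries the same three fields.
from collections import Counter

records = [
    ("F", "1984-03-12", "61820"),
    ("M", "1990-07-01", "61820"),
    ("F", "1984-03-12", "61821"),
    ("M", "1990-07-01", "61820"),  # duplicates the second row, so neither is unique
    ("F", "1975-11-30", "61801"),
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)} of {len(records)} records are unique "
      "on (gender, birth date, ZIP)")  # 3 of 5 here
```

In a real dataset with thousands of distinct birth dates and ZIP codes, the proportion of unique rows climbs quickly, which is how the 87 percent figure arises.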
“The notion of ‘anonymized’ or ‘sanitized’ data is then problematic; researchers habitually share, or even publish, data sets which assign code numbers to individuals,” Schoen wrote. “There have already been conspicuous problems with this practice, like when AOL published ‘anonymized’ search logs, which turned out to identify some individuals from the content of their search terms alone.”
Also recall that law enforcement agencies have been doing what they can to mine genetic information from private companies that collect it, such as 23andMe. While the Ambry database includes only people with breast or ovarian cancer, that doesn’t mean it could only help law enforcement track down people with those conditions. Certain components of DNA are passed down through the father and mother, so a relative of a criminal could be tested and appear in the database, which would help narrow down a search.
Health data privacy is likely to become even more of an issue in light of President Barack Obama’s Precision Medicine Initiative, which is intended to create a database of medical information for a million people and is expected to cost as much as $1 billion over the next four years.
“When information from one million people is brought together, it would make an attractive target for a hacker working to link the data back to individuals,” writes Dina Fine Maron in Scientific American. “Such a breach could rob both patients and their families of their privacy. Data for research are typically scrubbed of identifying factors like a patient’s name and birth date, but someone with enough information about an individual’s family tree may be able to connect some dots.”
In fact, health data privacy concerns have been enough to keep some people from participating in studies, Maron notes. But the PMI database could also include existing databases with participants who didn’t consent to this specific sort of aggregation, but who agreed that their data could continue to be used for research.
The downside of such privacy concerns is that not making the data accessible is a loss to research. “Admittedly, there’s not much loss to society if IMS Health can’t sell prescription data to marketers,” wrote the late tech journalist Steve Wildstrom in 2011, in response to a legal case on the issue of “anonymous” health databases that turned out not to be. “But there could be a considerable loss if researchers lose access to great masses of aggregated data. We are just at the point where the collection and analysis of vast amounts of data is becoming routinely practical. While there may be considerable risks in assembling that data, there is also a wealth of information about ourselves and our society that could be obtained from them. The debate must weigh both benefits and risks.”
Remember a couple of years back, when people realized that their Android phones actually stored their location data, and how uptight everyone got? It turns out that private companies and governments are doing the same thing with your car using license plate readers, building gigantic databases of everywhere you’ve been. It’s perfectly legal, they don’t need a warrant, and they can even make money selling the data.
“These readers, which are situated at intersections, scan license plates and cross-reference them with state, federal and Department of Motor Vehicles records,” writes Jaxon Axelrod in American City & County of one such system, in Freeport, N.Y. “Police are alerted at a command center that is open 24 hours a day, seven days a week when a plate is connected with an infraction.”
The town of 43,000 paid $750,000 for the system, which has tracked 17 million plates in three months. In that time, Freeport has impounded more than 548 vehicles, issued 2,008 summons, returned 15 stolen vehicles to their owners, and arrested 28 people, Axelrod writes.
In fact, the system is so successful that the police chief wants to expand his staff of 95 by seven more officers to keep up, after overtime costs increased by 20 percent. Dissenters say the officers are being kept busy by writing up minor offenses such as expired tags.
In Pennsylvania, the state plans to eliminate registration stickers entirely in favor of license plate readers. Ironically, in that state, some police officers are against the idea, primarily due to cost concerns.
In other cases, private companies are collecting the data, writes Conor Friedersdorf in The Atlantic. “Throughout the United States—outside private houses, apartment complexes, shopping centers, and businesses with large employee parking lots—a private corporation, Vigilant Solutions, is taking photos of cars and trucks with its vast network of unobtrusive cameras,” he writes. “It retains location data on each of those pictures, and sells it.”
As of January, Vigilant Solutions has taken roughly 2.2 billion license-plate photos to date, and adds about 80 million more each month, Friedersdorf writes, noting the company has 3,000 law enforcement agencies, comprising approximately 30,000 police officers, among its clients.
Between 2007 and 2012, the U.S. Department of Homeland Security distributed more than $50 million in federal grants to law-enforcement agencies for automated license-plate readers, write Julia Angwin and Jennifer Valentino-Devries in the Wall Street Journal, adding that a 2010 study estimated that more than a third of large U.S. police agencies use automated license-plate readers.
It’s a lot of data that can infringe on people’s privacy by recording their comings and goings about sensitive locations, and which is readily accessible. “Police can generally obtain it without a judge’s approval,” Angwin and Valentino-Devries write. “By comparison, prosecutors typically get a court order to install GPS trackers on people’s cars or to track people’s location via cellphone.”
These systems are catching the attention of civil liberties organizations such as the American Civil Liberties Union and the Electronic Frontier Foundation. Aside from the whole issue of whether it’s a violation of our civil rights to use license plate readers to collect this data in the first place, the organizations are concerned about the safety of the data. In 2015, EFF learned that more than a hundred automated license plate reader cameras were exposed online, “often with totally open Web pages accessible by anyone with a browser,” the organization writes.
In response, some states are considering legislation, such as limiting how much data can be stored or the length of time it can be stored. The vendors, for their part, claim that such laws are a violation of their rights to free speech.
As with other privacy debates, such as the FBI’s attempt to get Apple to develop software to break into an iPhone used by the San Bernardino shooters, supporters of the systems point to their ability to fight crime. But only a tiny fraction of the captured plates are actually associated with a crime, according to the 2013 ACLU report, You Are Being Tracked.
“In Maryland, for every million plates read, only 47 (0.005 percent) were potentially associated with a stolen car or a person wanted for a crime,” write James R. Healey, Greg Toppo and Fred Meier in USA Today. “In one Sacramento shopping mall, private security officers snapped pictures of about 3 million plates in 27 months, identifying 51 stolen vehicles — but that’s a success rate of just 0.0017 percent.”
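Those figures are straightforward to check: a hit rate is just hits divided by plates scanned, expressed as a percentage.

```python
# Reproducing the hit-rate arithmetic from the figures quoted above.
def hit_rate_pct(hits: int, scans: int) -> float:
    """Hits per scan, as a percentage."""
    return 100.0 * hits / scans

maryland = hit_rate_pct(47, 1_000_000)    # 0.0047%, which rounds to the quoted 0.005%
sacramento = hit_rate_pct(51, 3_000_000)  # 0.0017%, as quoted
print(f"Maryland: {maryland:.4f}%  Sacramento: {sacramento:.4f}%")
```

Put another way, well over 99.99 percent of the plates photographed belong to cars with no connection to any crime.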
People who aren’t criminals are also concerned. “Through the ‘stakeout’ feature, the NYPD may learn who was at a political rally, at an abortion clinic, or at a gay bar,” writes the ACLU. “Through the predictive analysis, the NYPD may learn that a person is likely to be near a mosque at prayer time or at home during certain hours of the day. Through the ‘associate analysis,’ the NYPD may come to suspect someone of being a ‘possible associate’ of a criminal when the person is simply a family member, a friend, or a lover.”
On the bright side, technology is already being developed to solve the problem: “license plate reader blocker” has almost 100,000 hits on Google.
The rest of the world has the Sports Illustrated swimsuit issue. The storage world has the Backblaze annual hard drive status report, which has been drooled over and argued over just as passionately ever since the company started releasing the data a couple of years ago.
To give you some idea of the scale we’re talking about, Backblaze had 56,224 spinning hard drives containing customer data as of the end of 2015, located in 1,249 storage pods. In comparison, a year earlier the company had 39,690 drives running in 882 pods. That’s an increase of about 65 petabytes, the company writes.
The company uses 18 different types of hard drives in its data center, ranging from 45 HGST 8TB drives to 29,084 Seagate 4TB drives. It also still has 222 Seagate 1.5TB drives, the smallest it still uses.
Failure rates range from 0.44 percent for some models of HGST 4TB drives to 10.16 percent for the aforementioned Seagate 1.5TB models, which are also the oldest, with an average age of more than 68 months. It’s not terribly surprising that the oldest drives are the ones most likely to fail; in fact, the company had said last year that it intended to migrate away from those Seagate drives.
Altogether, Seagate makes up 56 percent of the drives in the data center, compared with 41 percent for HGST and 3 percent for Western Digital. (Backblaze uses only a smattering of Toshiba drives.)
On the other hand, the company notes, when you look at the number of days drives have been in use, that statistic flips – HGST is 56 percent while Seagate is 41 percent. Why the distinction? “The HGST drives are older, as such they have more drive hours, but most of our recent drive purchases have been Seagate drives,” writes Andy Klein, director of product marketing. “Case in point, nearly all of the 16,000+ drives purchased in 2015 have been Seagate drives. Of the Seagate drives purchased in 2015, over 85 percent were 4TB Seagate drives.”
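Drive-days are also what Backblaze’s failure-rate figures are built on: the company describes its annualized failure rate as failures divided by total drive-days, scaled to a year. A minimal version of that calculation, with illustrative numbers rather than Backblaze’s actual counts:

```python
# Annualized failure rate as Backblaze describes it:
# failures divided by total drive-days, scaled to a full year.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Percent of drives expected to fail per drive-year of operation."""
    return failures / drive_days * 365 * 100

# Illustrative: 60 failures across 5,000 drives running a full year.
afr = annualized_failure_rate(failures=60, drive_days=5_000 * 365)
print(f"{afr:.2f}% annualized failure rate")
```

Counting drive-days rather than drives is what keeps old, long-running models and freshly installed ones comparable.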
Consequently, Backblaze has largely migrated to 4TB drives, which now comprise 75 percent of the hard drives the company uses, for a total of 42,301. Of those, 70 percent are Seagates, 30 percent are HGST, and Western Digital and Toshiba together make up a sliver of less than 1 percent.
Of those 4TB drives, the HGST ones have less than a third of the failure rate of the other ones. So why not use just them? Because they’re not around any more. “The HGST 4TB drives, while showing exceptionally low failure rates, are no longer available having been replaced with higher priced, higher performing models,” Klein writes. “The readily available and highly competitive price of the Seagate 4TB drives, along with their solid performance and respectable failure rates, have made them our drive of choice.”
Klein also notes that while the Seagates do have a higher failure rate than HGST, it was possible to predict impending failures through SMART statistics, unlike disk drives from other manufacturers.
Backblaze has also begun using 6TB drives, which it began testing in 2014, and now uses nearly 2,400 of them, Klein writes – 1,882 from Seagate and 485 from Western Digital. However, the Western Digital ones have a failure rate more than five times higher than the Seagate ones, he notes.
In fact, Backblaze would love to buy more 6TB drives, but they are more expensive to buy and operate, and not nearly as available as the 4TB ones. “There was a time during our drive farming days when we would order 50 drives and be happy, but in 2015 we purchased over 16,000 new drives,” Klein writes. “The time and effort of purchasing small lots of drives doesn’t make sense when we can purchase 5,000 4TB Seagate drives in one transaction.”
The company also has just a few 5TB Toshiba drives and 8TB HGST Helium drives, but didn’t say why it had such a small number – most likely because it was still testing them to see whether they are cost-effective.
On the other hand, there are some models Backblaze no longer uses:
- 1TB drives, having replaced them all with 4TB and 6TB drives to increase the capacity of its pods. It now uses the 1TB drives to “burn in” storage pods. “The burn-in process pounds the drives with reads and writes to exercise all the components of the system,” Klein writes. “In many ways this is much more taxing on the drives than life in an operational Storage Pod.”
- Seagate 2TB drives, because the company didn’t have very many, their failure rate was higher (10.1 percent), and it chose to upgrade those pods to 4TB drives. However, the company is still using more than 4,500 HGST 2TB drives because their failure rate is only 1.55 percent; eventually they, too, will be upgraded to 4TB or 6TB drives.
- Seagate 3TB drives, which had a failure rate ranging from two to three times that of the closest other drive of that size. The company had said last year that it intended to migrate away from these drives, as well as the Seagate 1.5TB drives, due to their high failure rate.
If you’re just dying to get your hands on the raw data itself, it’s available online.
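The download is published as daily CSV files with one row per drive per day; the column names below follow the published schema as I understand it (a `failure` flag plus raw SMART attributes alongside each drive’s model and serial), but check the download itself for the exact layout. A minimal sketch of tallying drive-days and failures per model, using a toy in-memory sample in that shape:

```python
import csv
import io
from collections import Counter

# Toy two-row sample shaped like Backblaze's daily CSVs
# (the real files carry many more columns, mostly SMART attributes).
sample = io.StringIO(
    "date,serial_number,model,capacity_bytes,failure\n"
    "2015-12-31,S300Z1,ST4000DM000,4000787030016,0\n"
    "2015-12-31,S300Z2,ST4000DM000,4000787030016,1\n"
)

failures = Counter()
drive_days = Counter()
for row in csv.DictReader(sample):
    drive_days[row["model"]] += 1          # one row = one drive-day
    failures[row["model"]] += int(row["failure"])

for model in drive_days:
    print(model, failures[model], "failures in", drive_days[model], "drive-days")
```

Summing over a full year of these files gives the drive-day and failure counts behind the annualized rates in the report.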
Disclaimer: I am a Backblaze customer.
A cloud company has finally figured out a way to get people to read its terms of service: Put a zombie reference in there.
“It’s hard to imagine the kind of person who would read all the way through Amazon Web Services’ massive terms of service agreement,” writes Jacob Brogan in Slate. “At more than 26,000 words, the document is denser and more digressive than Tristram Shandy, a veritable post-apocalyptic wasteland of legalese that dictates how users can and cannot employ products from the e-retailer’s massively profitable cloud computing division. Formidable as it is, however, someone managed to make it through.”
So where do the zombies come in? The service now includes a new Lumberyard gaming engine, which is free, open-source software intended to help developers write games. Normally, users are barred from integrating it “with life-critical or safety-critical systems,” including medical or military equipment, Brogan writes. “Basically, that means you can’t use the software to program robot doctors or control weaponized drones,” which he says is pretty darn unlikely anyway. But just in case, all bets are off in the event of a zombie apocalypse, Amazon writes:
“However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.”
In the process, the zombie reference also helped promote Amazon’s new gaming engine. That’s likely to help the company’s cloud service, because the gaming engine must be hosted on either Amazon’s Web servers or the developers’ own, notes Elizabeth Weise in USA Today. In other words, no cloud service competitors, writes Chris Morris in Fortune. “And there’s no zombie apocalypse exemption there, so if the worst does happen, and Amazon’s servers go offline too, you’ll be out of luck,” notes Samuel Gibbs in The Guardian.
It’s not the first time that zombies have come up in the context of data protection and disaster recovery. In 2011, the CDC issued its own emergency preparedness and response circular about zombies.
“You may laugh now, but when it happens you’ll be happy you read this,” the circular warned. It went on to describe the zombie threat, and what people could do to be prepared in the event of a zombie apocalypse.
No, it wasn’t issued on April Fool’s Day, and no, it wasn’t a joke. Well, sort of. Did the CDC really expect a zombie apocalypse anytime soon? No, probably not. But it got attention, it made people laugh, and if it got people to read the circular, the information in it worked just as well protecting them against hurricanes and floods as it did against zombies.
As it turns out, the CDC may have been inspired by the Defense Department, which in 2011 released its own plan for “Counter-Zombie Dominance.” “Planners … realized that training examples for plans must accommodate the political fallout that occurs if the general public mistakenly believes that a fictional training scenario is actually a real plan,” the plan reads, as quoted by Foreign Policy in 2014. “Rather than risk such an outcome by teaching our augmentees using the fictional ‘Tunisia’ or ‘Nigeria’ scenarios used at [Joint Combined Warfighting School], we elected to use a completely-impossible scenario that could never be mistaken for a real plan.”
Cornell University also used zombies to help study the spread of disease, and in the process, figured out the safest places to be in the event of a zombie apocalypse. Similarly, last December, the British Medical Journal published a peer-reviewed study on the upcoming zombie apocalypse, to call attention to preparing for infectious diseases.
“Using zombies in lieu of real diseases gives researchers, public health professionals, policy makers, and laypeople the ability to discuss these heavy issues without getting bogged down in one specific outbreak or pathogen, because many of the problems we’d face during the zombie apocalypse are similar to those that come up in any serious epidemic: coordination. Funding. Communication. Training. Access to treatment or prevention,” writes Tara Smith, the author of the paper. “In short, it’s way more fun for the average person to shoot the shit about zombies than to have a more serious discussion about influenza, or Ebola, or whatever the infectious disease du jour may be – and maybe even learn a bit of science and policy along the way.”
In fact, the CDC zombie circular worked so well that the agency expanded the program into a variety of other content, including a graphic novel. The Amazon reference to the CDC may have been a nod to the program.
Considering that people have actually given up their first-born children through not reading terms of service carefully, vendors can’t be blamed for putting all sorts of weird things into them, just to get people to pay attention.
Newsweek, for example, pointed out that Tumblr’s community guidelines state: “While you’re free to ridicule, parody, or marvel at the alien beauty of Benedict Cumberbatch, you can’t pretend to actually be Benedict Cumberbatch.” And Tumblr also tells children younger than 13 in its terms of service to “ask your parents for a Playstation 4, or try books” instead of using Tumblr, writes David Goldman in CNN.
In that context, the amount of attention paid to the zombie reference in the Amazon terms of service worked pretty well – according to Google, it resulted in about 400 articles.
Users of some of Verizon’s cloud services were left with two months to move their virtual servers to another, more expensive, cloud platform after the company told them it was shutting the services down.
Verizon Public Cloud and Reserved Public Cloud services will be shut down on April 12. The company told Bloomberg it intends to sell those businesses, which it built through its $1.3 billion acquisition of Terremark in 2011 and a later acquisition of Cloudswitch. Reuters had reported in November that the company had retained the services of Citigroup to help it sell the assets.
However, Verizon says it will keep its on-site Verizon Private Cloud (VPC) and Verizon Cloud Storage services active, writes Leo Sun for The Motley Fool.
Sun blamed two factors for Verizon’s decision. First, the company was having trouble competing on size with larger public cloud vendors such as Amazon and Microsoft. Second, it was having trouble competing on price with those vendors, as well as Google, which have been dropping costs for a couple of years now. “That move flushed many second-tier players out of the market,” he writes.
“It has become almost impossible to compete with AWS, Azure, and to a lesser extent with Google Cloud Platform in the market for renting virtual compute power over the internet and charging by the hour,” concurs Yevgeniy Sverdlik in Data Center Knowledge. “In competing with each other, these giants have made the cost of using cloud [virtual machines] so low and built out global infrastructure so big, no one can really manage to keep up.”
Because Verizon said it remains committed to supporting enterprise and government customers, Sun speculates that the company intends to provide more-lucrative private cloud services that don’t require it to support its own infrastructure.
Verizon government customers use a different cloud service platform, according to Frank Konkel in Nextgov. Verizon Enterprise Cloud Federal Edition is a public, private and hybrid cloud platform that has met the Federal Risk and Authorization Management Program’s standards, which are the government’s standardized cloud security requirements, he writes.
This isn’t the first time that a cloud provider has shut down with little notice, leaving its customers scrambling to find other options – and facing the logistical challenge of getting their data from one cloud provider to another. Cloud-based disaster recovery provider Nirvanix gave its users just two weeks when it shut down. Vendors such as HP have also announced that they are shutting down public cloud services.
In this particular case, Verizon is at least giving its users options, reports Barb Darrow in Fortune. “Customers on Verizon Public Cloud Reserved Performance and Marketplace can move their work to the company’s Virtual Private Cloud (VPC), which, according to Verizon, offers ‘the cost effectiveness of a multi-tenant public cloud but includes added levels of configuration, control, and support capabilities …’.”
On the other hand, these options are typically more costly. “These are dedicated, physically isolated cloud environments,” Sverdlik writes. “They are usually a lot more expensive than public cloud services, where many customer VMs run on shared physical servers.”
And in any event, moving virtual machines (VMs) takes a lot of work, Darrow quotes one user as saying. “It’s ‘a total pain’ that can take minutes to hours per VM because of a dearth of good migration tools,” she writes. Moreover, the hardware and application programming interfaces (or APIs) of the two kinds of cloud service are different, she adds.
Coincidentally, Terremark hit the news again this week, this time in connection with a post-mortem report on the botched Obamacare launch, for which it was a contractor. Five days before the launch, the company was ordered to double capacity within three days, but it proved not to be enough.
Okay, it’s another government vs. encrypted smartphone situation. But this one is different.
Syed Rizwan Farook and his wife Tashfeen Malik, who last December killed 14 people and injured 22 others in San Bernardino, had an Apple iPhone 5c. The Federal Bureau of Investigation (FBI) wants to see what’s inside the phone, and it’s asking Apple for help.
So far, this sounds like your standard encryption case – Apple says it doesn’t have the password, and can’t decrypt the phone, so the FBI is out of luck.
What’s different in this case is that that’s not what the FBI is asking for. Instead, the FBI is asking Apple to write a new version of the phone’s operating system that will make it easier for the FBI to break into the phone.
The iPhone in question has several security features to help protect it against attacks, such as wiping the phone after 10 incorrect password attempts in a row, forcing passwords to be entered via the phone screen, and implementing a pause in-between password attempts. The FBI wants Apple to write software for that phone – and, it claims, only that particular phone – to eliminate those restrictions, so the FBI can more easily implement a brute-force attack against the device.
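The arithmetic behind those restrictions is easy to sketch. The per-attempt time and the flat one-hour pause below are illustrative assumptions, not Apple’s actual figures (in practice the delay escalates across attempts), but they show why the enforced delays matter far more than the size of the passcode space:

```python
# Worst-case time to try every passcode, with and without an
# enforced pause between attempts. Timings are assumptions for
# illustration, not Apple's real numbers.
def brute_force_hours(num_codes: int, per_try_s: float = 0.08,
                      pause_s: float = 0.0) -> float:
    """Hours to exhaust the passcode space at one guess per cycle."""
    return num_codes * (per_try_s + pause_s) / 3600

# A 4-digit passcode has 10,000 combinations.
no_limits = brute_force_hours(10_000)                 # well under a day
with_limits = brute_force_hours(10_000, pause_s=3600) # over a year

print(f"no delays: {no_limits:.1f} h, one-hour pauses: {with_limits:,.0f} h")
```

With the software limits stripped out (and the 10-try wipe disabled), exhausting a 4-digit passcode becomes a matter of minutes rather than years – which is exactly the capability the FBI is asking for.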
(Now, if the shooters had used a fingerprint rather than a passcode to lock the phone, the FBI would be in the clear. In fact, agents could have even used the fingerprint from the dead shooter to open his phone.)
For the policy wonks, the FBI is using an ancient law called the All Writs Act of 1789, which is intended to compel a third party to help with a criminal investigation. Let’s say you stole something and put it in my safe. All Writs can be used by law enforcement to make me open up my safe to retrieve the stolen property.
Apple, though, is refusing, claiming that were it to write such an operating system hack, it could get out into the wild and be applied against any iPhone. (Including, Apple now says, against more modern iPhones that have even more security features built in.) “World War II, especially in the Pacific, turned on this sort of silent cryptographic failure,” writes Ben Thompson in Stratechery. “And, given the sheer number of law enforcement officials that would want their hands on this key, it landing in the wrong hands would be a matter of when, not if.”
Moreover, Apple is concerned about the implication of using the All Writs law in this fashion. “If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone’s device to capture their data,” writes Apple CEO Tim Cook in an open letter. “The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge.”
Also, having once let the genie out of the bottle, what’s to keep the FBI from coming back and requesting this software hack again, in a different case? Or even, writes Farhad Manjoo in the New York Times, prophylactically? “Once armed with a method for gaining access to iPhones, the government could ask to use it proactively, before a suspected terrorist attack — leaving Apple in a bind as to whether to comply or risk an attack and suffer a public-relations nightmare,” he writes.
Apple could also be subjected to the same pressure from other governments, Columbia University computer science professor Steven M. Bellovin (who has just been appointed the first technology scholar for the NSA’s Privacy and Civil Liberties Oversight Board) told CNN.
Naturally, the FBI is using one of the most heinous recent cases on record to force the issue. Terrorism is right up there with child pornography in terms of being one of those crimes that of course you don’t want to be seen supporting. “For the administration, it was perhaps the perfect test case, one that put Apple on the side of keeping secrets for a terrorist,” writes Matt Apuzzo in the New York Times.
One could even speculate that the FBI doesn’t actually need the information on the iPhone, but is simply using this case to establish the precedent.
But having once established the precedent, the software could be used again. Already, notes the New York Times in an editorial supporting Apple, another federal magistrate judge in New York is considering a similar request to unlock an iPhone, this time in a narcotics case. The editorial also pointed out that Apple had already given the FBI data from the phone’s iCloud backup, and that the All Writs Act has a provision against unreasonable burdens. (Manjoo also notes that future versions of the iPhone could potentially close any such loophole.)
At this point, the usual suspects are all lining up on one side or the other on the situation, with some agreeing with Apple and others saying that the company is overreacting. For example, Apple is calling the FBI’s request a “back door,” but is it really? It depends on the definition you use, Thompson writes. “Cook is taking a broader one which says that any explicitly created means to circumvent security is a backdoor,” he writes. But to some, a back door is a way to bypass encryption specifically, which is not what the FBI is asking for, he explains.
Some observers believe that, thus far, Google is equivocating in its support for Apple. What makes it interesting is that Google, along with Apple, was the other company that announced in 2014 that it was turning encryption on in phones by default. Does that mean, if criminals used a Google phone, Google might be more likely to cooperate in breaking the phone’s encryption?
Apple may also feel freer to take a stand on the issue because, unlike Facebook, Google, and Twitter, its business model isn’t as strongly predicated on gathering data from users, write Nick Wingfield and Mike Isaac in the New York Times. In addition, Apple has fewer government contracts that could be at risk than do some of its competitors, they added.
Apple has received an extension from the original February 23 deadline and now has until February 26 to agree to comply.