People used to joke about the notion of “write-only memory,” where data could be written to it but not retrieved again. To at least one user, that’s what’s happening with Amazon’s Glacier service.
As you may recall, Glacier was announced in August 2012 as low-cost storage for long-term archiving, in return for customers being willing to wait several hours to retrieve their data. That trade-off resulted in a cost of about $10 per terabyte per month.
But a fellow who used the service found out that retrieving his data wasn’t nearly as easy as putting it in there. Marko Karppinen writes on Medium that he used the service to back up some 150 CDs, or 63 gigabytes, soon after the service became available. Recently, he decided to migrate the data.
“The culprit was the same neat freak tendency that had me toss all those CDs in 2012,” Karppinen writes. “I simply no longer wanted to have that 51¢ AWS bill appear, each and every month, in my email inbox and on my AmEx statement. Here in present-day 2016 I’m paying for a one-terabyte Dropbox account and, as a part of my Office 365 subscription, a 1TB OneDrive. Why would I keep a convoluted 51¢-a-month archival setup when I already have all the cloud storage I could need, on two diverse–yet–incredibly–convenient providers?”
But Karppinen found out it wasn’t as easy – or as cheap – as he might have thought. First of all, it was technically complicated to do, with limited tools to support it. Moreover, it is – as advertised – glacial, he writes. “Before you try it, it’s hard to appreciate how difficult it is to work with an API that typically takes four hours to complete a task.”
(Kind of like working with punch cards in the old days, grasshopper.)
Karppinen writes that he ended up spending most of the weekend trying various tactics – with a requisite four-hour wait after each new attempt.
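That weekend-long slog reflects Glacier’s asynchronous job model: you initiate a retrieval job, poll until it completes (historically about four hours), then download the output. Here is a minimal sketch of that pattern using a toy stub client rather than the real AWS API – the genuine boto3 Glacier client does expose `initiate_job`, `describe_job`, and `get_job_output` calls, but every name, delay, and return value below is illustrative only:

```python
import time

class StubGlacierClient:
    """Toy stand-in for a Glacier client: jobs complete after a fixed delay."""
    def __init__(self, job_delay_seconds):
        self.job_delay = job_delay_seconds
        self.jobs = {}  # job_id -> monotonic time at which the job "completes"

    def initiate_job(self, vault, archive_id):
        job_id = "job-%d" % len(self.jobs)
        self.jobs[job_id] = time.monotonic() + self.job_delay
        return job_id

    def describe_job(self, vault, job_id):
        return {"Completed": time.monotonic() >= self.jobs[job_id]}

    def get_job_output(self, vault, job_id):
        return b"archive bytes"

def retrieve(client, vault, archive_id, poll_seconds=0.01):
    # Against the real service, each pass through this loop represents hours
    # of waiting -- which is why every failed attempt cost most of a day.
    job_id = client.initiate_job(vault, archive_id)
    while not client.describe_job(vault, job_id)["Completed"]:
        time.sleep(poll_seconds)
    return client.get_job_output(vault, job_id)

data = retrieve(StubGlacierClient(job_delay_seconds=0.05), "music", "archive-1")
print(data)  # b'archive bytes'
```

The point of the stub is the shape of the workflow, not the details: any tooling built on top of an API like this has to be designed around multi-hour round trips, which is exactly what made Karppinen’s trial-and-error approach so painful.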
Second, it was expensive. “Here I was, working on a full retrieval of the archive, something that Glacier was explicitly not designed for,” Karppinen writes. “Glacier’s disdain for full retrievals is clearly reflected in its pricing. The service allows you to restore just 5% of your files for free each month. If you want to restore more, you have to pay a data retrieval fee.”
When Karppinen had originally researched the fee, he noted that the description said it “started at” $0.011 per gigabyte, and assumed that that was what he would be charged, for a total of 86 cents. But as it turns out, it ended up costing him more than $150.
“Glacier data retrievals are priced based on the peak hourly retrieval capacity used within a calendar month,” Karppinen explains. “You implicitly and retroactively ‘provision’ this capacity for the entire month by submitting retrieval requests. My single 60GB restore determined my data retrieval capacity, and hence price, for the month of January, with the following logic:
- 60.8GB retrieved over 4 hours = a peak retrieval rate of 15.2GB per hour
- 15.2GB/hour at $0.011/GB over the 744 hours in January = $124.40
- Add 24% VAT for the total of $154.25.
- Actual data transfer bandwidth is extra.
Had I initiated the retrieval of a 3TB backup this way, the bill would have been $6,138.00 plus tax and AWS data transfer fees.” [All emphasis his.]
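The peak-rate formula in that quote is easy to sanity-check. Here is a small sketch of the legacy pricing model as the article describes it – the per-gigabyte rate, VAT rate, and month length are all taken from Karppinen’s numbers, and this is an illustration of his arithmetic, not AWS’s official calculator:

```python
# Sketch of Glacier's legacy (pre-2016) peak-rate retrieval pricing,
# following the worked example quoted in the article.

HOURS_IN_JANUARY = 744        # 31 days * 24 hours
RETRIEVAL_FEE_PER_GB = 0.011  # USD -- the "starts at" rate Karppinen saw
VAT_RATE = 0.24               # the VAT applied to Karppinen's bill

def retrieval_cost(gb_retrieved, retrieval_hours, hours_in_month=HOURS_IN_JANUARY):
    """Peak-rate billing: your fastest hour sets the rate for the whole month."""
    peak_gb_per_hour = gb_retrieved / retrieval_hours
    return peak_gb_per_hour * RETRIEVAL_FEE_PER_GB * hours_in_month

cost = retrieval_cost(60.8, 4)             # Karppinen's ~60GB restore
print(round(cost, 2))                      # 124.4
print(round(cost * (1 + VAT_RATE), 2))     # 154.25

print(round(retrieval_cost(3000, 4), 2))   # 6138.0 -- the hypothetical 3TB restore
```

Note the trap: the billable quantity is not how much you retrieve but how fast you retrieve it, extrapolated across every hour of the month – which is how a 60GB restore that “starts at” $0.011/GB becomes a three-digit bill.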
Remember, to add insult to injury, Karppinen still hadn’t gotten his music back – but he did eventually figure out how to do that. And he includes all the gnarly details.
Interestingly, when we wrote about Glacier in 2012, we noted two points:
- “The service is intended not for the typical consumer, but for people who are already using Amazon’s Web Services (AWS) cloud service. Amazon describes typical use cases as offsite enterprise information archiving for regulatory purposes, archiving large volumes of data such as media or scientific data, digital preservation, or replacement of tape libraries. ‘If you’re not an Iron Mountain customer, this product probably isn’t for you,’ notes one online commenter who claimed to have worked on the product. ‘It wasn’t built to back up your family photos and music collection.'” [Emphasis mine.]
- “There is also some concern about the cost to retrieve data, particularly because the formula for calculating it is somewhat complicated.”
Not to say “I told you so” or anything, of course. And Karppinen sounds like he’s figured that out already – and has a lesson for all of us as well. “More and more, we expect cloud infrastructure to behave like an utility,” he writes. “And like with utilities, even though we might not always know how the prices are determined, we expect to understand the billing model we are charged under. Armed with that understanding, we can make informed decisions about the level of due diligence appropriate in a specific situation. The danger is when we think we understand a model, but in reality don’t.”
California and New York are each attempting to pass bills that they claim will help protect people against crime, but that in reality would likely just eliminate a source of sales tax revenue for the state: each would forbid the sale of smartphones with unbreakable encryption within its borders, levying a fine of $2,500 per infraction.
As you may recall, this all started in the fall of 2014, when Google and Apple each released smartphones with encryption that even the respective vendors couldn’t break. Much handwringing on the part of law enforcement ensued, warning us of dire consequences such as pedophilia, terrorism, and so on. After the furor died down a bit, it has come up again in light of recent terrorist attacks and the concern (likely incorrect, as it turned out) that terrorists were using encrypted smartphones.
Now, one of the handwringers, Manhattan District Attorney Cyrus Vance Jr., has encouraged 62 New York district attorneys to ask the New York Assembly to address the issue, because the federal government has failed to do so, writes Seung Lee in Newsweek.
Democratic Assemblyman Matthew Titone, of Staten Island, put forth such a bill last June, but because the Assembly didn’t address it then, he has re-introduced A8093 during this legislative session. The goal of the legislation is to encourage the federal government to act on the issue, a spokesman for the Assemblyman told Lee.
Interestingly, according to some reports, it’s retroactive to January 1 of this year, meaning that Apple and Google could theoretically be on the hook for fines for smartphones they’ve sold legally.
At the same time, California Assemblymember Jim Cooper (D-Elk Grove) put forth a similar bill, Assembly Bill 1681. It is specifically intended to help law enforcement investigate and prosecute suspected criminals and criminal organizations that are found to be involved in human trafficking and other serious crimes, writes Hannah Albarazi for CBS. (One difference – the California bill doesn’t take effect until January 1 of 2017.)
Keep in mind what these bills actually purport to do. They don’t keep people from using encrypted smartphones in New York and California. They simply specify that vendors can’t sell (or lease) those phones in New York and California. The implication is that we could soon expect encrypted smartphone stores, like fireworks stands, to pop up around the borders of whatever states enact such regulations.
“I never thought I’d see an Apple Store in Newark (or Hoboken), but legislation to ban sales of secure smartphones will do exactly that,” noted one Twitter commenter.
Also recall that case law on whether people can be forced to surrender their phone’s encryption key is not yet settled. It all depends on whether an encryption code is something that is the expression of one’s mind, like the combination to a safe, which is protected under your Fifth Amendment rights not to incriminate yourself, or a physical key, something you possess, which is something you can be forced to produce. Courts have been going back and forth on the issue.
Meanwhile, members of the technology community ranging from Apple CEO Tim Cook to security expert Bruce Schneier have pointed out that encryption back doors don’t just open for the government or law enforcement, and would weaken security for everyone. “You can’t build a backdoor that only the good guys can walk through,” Schneier wrote when Apple first announced its policy. “Encryption protects against cybercriminals, industrial competitors, the Chinese secret police and the FBI. You’re either vulnerable to eavesdropping by any of them, or you’re secure from eavesdropping from all of them.”
“There have been people who suggest we should have a backdoor,” Cook reportedly told 60 Minutes. “But the reality is, if you put a back door in, that back door is for everybody, for good guys and bad guys.”
Ironically, while all this is going on, National Security Agency (NSA) Director Adm. Michael Rogers was telling the Atlantic Council, an international affairs think tank, that encryption was here to stay and that attempts to legislate it, as in California and New York, were misguided. “Spending time arguing about ‘Hey, encryption is bad and we ought to do away with it,’ that’s a waste of time to me,” Rogers said, writes Cory Bennett in The Hill.
Of course, to the paranoid, that simply is confirmation that the NSA already knows how to read our encrypted smartphones.
Whether it’s arguing about the definition of “is” or just what constitutes a “clean” room, kids and politicians love to debate about definitions. Now the people taking over a federal research station in Oregon are doing the same thing.
When the group – and it’s up to you whether to call them “militia” or “terrorists” or something in between – first took over the Malheur National Wildlife Refuge near Burns, Oregon, on January 2, concern was expressed about the government computers, and the data on them, located at the site.
Not to worry, the group said; they weren’t using the computers. Honest, Mom. Never mind that Oregon Public Broadcasting staff saw them doing so, and saw that they had access to other personally identifiable information at the station.
“After [member of the occupying group’s security team LaVoy] Finicum realized he shouldn’t have allowed OPB to access the room, he quickly picked up lists of names and Social Security numbers by the computers, and hid government employee ID cards that were previously in plain sight,” John Sepulvado of OPB writes. “We haven’t touched any of the computers, we haven’t tried to log on – we haven’t done anything,” group leader Ryan Bundy told him.
As it turns out, their not using the computers is only sort of true, depending on your definition of “using” – and the technique involved is something we should all be aware of, should we also find ourselves hosting uninvited guests around our hardware.
According to another article by Sepulvado and Amanda Peacher, the group is indeed using the computers – via thumb drives with Linux on them. The workaround comes courtesy of occupier David Fry, who said he drove in from Ohio to help the group and that he knows “a little bit” about computers. But what he was doing was okay, he told OPB, because he wasn’t actually using the computers or the data on them.
“I am using any computer I can use,” Fry told OPB. “Their data is perfectly preserved… you can’t access any of that, it’s got encryption on it.”
Thank goodness for small favors.
What are the occupiers using – for whatever definition of “using” you prefer – the computers for? Making a website for the occupation, OPB writes, which Fry identifies in one of his videos as defendyourbase.net. In fact, he’s even posted videos of himself “using” the computers and explaining the use of the thumb drives.
Reuters reporters also observed the occupiers using office printers. And whether the data came from paper files or the computers, employees who live in the area report that they have been harassed in the vicinity of their homes by people they didn’t recognize, which they attribute to the occupiers releasing their personal information. (Perhaps the next job the office might take on is digitizing their paper files and putting them, as well, under encryption.)
But remember, the occupiers aren’t really “using” the computers.
It remains to be seen if the law will agree.
The last couple of years have been pretty quiet ones in the world of e-discovery. No multimillion-dollar sanctions for people not following the rules, a few small acquisitions (we’re still dealing with the HP-Autonomy one), a couple of changes in the market but nothing radical. Zzz.
Next year, though, is likely to be different, because of something that actually happened last year. So you can think of 2015 as the calm before the storm.
In case you’ve forgotten, amendments to the Federal Rules of Civil Procedure, which govern e-discovery, were approved in September 2014 and were slated to take effect on December 1.
“Twitter is abuzz with messages about today’s effective date for the changes to the Federal Rules of Civil Procedure that read more like birth announcements (‘It’s finally here!’),” writes Karin Jenson in Discovery Advocate. “But figuring out what to do once you get that baby home is another matter – despite having a long time to prepare. Moreover, while there is as much commentary about the rules changes as there are parenting books, it’s hard to really figure out what to do until you are doing it.”
“When the 2006 FRCP amendments came out that initially addressed e-discovery, Windows Vista was only a few weeks old, and Apple released its MacBook Pro with a 17-inch screen, 2GB of RAM, and a 160GB hard drive for $2,799,” notes Jeff Bennion at Above the Law. “Facebook was a few weeks old, and the idea of storing thousands of photos online was still years away. And that was when lawmakers were concerned that electronic data was exploding. It makes it scary to think about how much more data will be created between now and the next update.”
In particular, the new FRCP rules are expected to address many of the e-discovery issues that came up the most in court in 2015, notes Kroll Ontrack’s report, Year in Review: Top Ediscovery Cases in 2015. “As courts grapple with the impact of the new rules, 2016 has the potential to be momentous,” Michele Lange, Kroll’s director of thought leadership, told Amanda Ciccatelli of Inside Counsel. “Will courts hold parties accountable when they only hold ‘drive-by’ meet and confers without raising real, substantive discovery issues at the scheduling conference? To what extent will the new emphasis on proportionality limit the scope of discovery above and beyond current practices? How will courts define reasonableness and good faith when determining if sanctions are appropriate?” Ciccatelli writes. “These are meaty issues – with real impacts for requesting and producing parties – that will need to be worked out in judicial opinions in the coming months and years,” Lange told her.
Other e-discovery issues that are likely to crop up? One of them, as always, will be new sources of data – in this case, sensors used in the Internet of Things, writes Edward Sohn for the Association of Corporate Counsel. We’ve already seen some of that with attorneys subpoenaing people’s Fitbits and so on; imagine having to produce all the data from every sensor in a manufacturing plant.
Another continuing issue from 2015 is the matter of data sovereignty and what control various governments have over companies that operate in their regions, even if they aren’t located there. Microsoft has continued fighting efforts by the U.S. government to force it to turn over data located in Ireland, for example, and with the European Union trying to control data exchanges with the U.S., the potential for it affecting other legal cases is high.
The interesting thing about looking over an entire year’s worth of stories at once is that things that seemed like a single event at the time turn out to be part of a larger arc.
Yes, the storage industry is like a Game of Thrones season. Who knew.
If you’ve been around reporters at all, you’ll know that they have a rule of thumb: Anything that happens three times is a Trend. By that measure, here are a number of trends in the storage industry from 2015, and where they might go in 2016.
Storage meets public policy. The whole concept of storage has really gone beyond the basic functionality of keeping track of bits and bytes. It now extends into public policy and business operations as well. What data can companies store? How can they exchange it? In particular, what kind of data can governments store or access? And, most important of all, how can that data be protected from hackers?
We saw many examples of governments directly or indirectly building databases, ranging from faces to spit. What will happen once the government has a database of what everyone looks like so they can use it to scan faces at a demonstration or compare DNA collected from a protest against a database of genetic material? Especially if the police have their own database of everyone who simply knows criminals, under the theory that friends of criminals must be criminals too? Not to mention Republican Presidential candidate Donald Trump’s potential database of Muslims.
At the same time, governments are gaining access to other organizations’ databases through the courts, whether it’s Facebook’s collection of data, data stored offshore by a cloud vendor, or Dropcam video files. Courts are being used against individuals, too, such as by accusing people of hiding data simply by clearing their browser or using encryption – assuming, of course, that governments don’t succeed in their quests to require back doors and allow surveillance.
Courts are also still trying to decide whether people are required to submit their electronics for search within 100 miles of the border, give up their encryption keys, or let law enforcement search their phones. Moreover, governments themselves are increasingly making it harder for citizens to look at government data, such as by setting up private email servers, “losing” data, or protecting police body camera footage.
What it means for 2016: It seems likely that law enforcement organizations, courts, and governments are going to continue using every opportunity they can to collect data, and yet at the same time not do a great job protecting it nor giving individuals the tools they need to protect it themselves. Expect invocations of Paris and 9/11 to be used to justify ever more government access to our data.
What it means for 2016: It’s never a bad time to migrate data on old media into a more up-to-date storage system. And while you’re at it, make sure that the data is readable by modern software as well. Migrating all the old spreadsheets to a new hard disk isn’t going to help if they’re still all in VisiCalc format.
What it means for 2016: They’re still bad. Stop picking up thumb drives and sticking them into your computer. You don’t know where they’ve been.
Finally, there are acquisitions. Obviously, the biggest one of these is Dell buying EMC (assuming the deal actually gets finalized), but we also had Western Digital buying SanDisk in the ongoing trend of storage company consolidation. And HP-Autonomy is still hanging in there – in court, at least, even if HP did write down most of the acquisition.
What it means for 2016: You can always count on companies buying and selling each other in the industry. In particular, it will be interesting to see how the Dell-EMC acquisition shakes out, especially if Dell ends up having to divest itself of some pieces to be able to keep the other pieces.
A U.S. appellate court has recently ruled that violating rules about the use of databases at work isn’t subject to criminal penalties, which opens the potential for all sorts of interesting possibilities.
Of course, this particular case was interesting enough on its own: a police officer who used a police department database to look up information about women he wanted to kill, cook, and eat.
“Former New York City Police Officer Gilberto Valle was found guilty at trial in March 2013 of conspiring to kidnap women and illegally accessing a police database to collect information on potential victims,” writes Joseph Ax for Reuters. But the 2nd U.S. Circuit Court of Appeals in New York vacated his conviction for using the database, “finding that federal law does not prohibit individuals from accessing a computer they are normally authorized to use, even if they do so for an improper purpose,” he continues.
“As an NYPD officer, Valle had access to the Omnixx Force Mobile (“OFM”), a computer program that allows officers to search various restricted databases, including the federal National Crime Information Center database, which contain sensitive information about individuals such as home addresses and dates of birth,” the court writes. “It is undisputed that the NYPD’s policy, known to Valle, was that these databases could only be accessed in the course of an officer’s official duties and that accessing them for personal use violated Department rules. In May 2012, he accessed the OFM and searched for Maureen Hartigan, a woman he had known since high school and had discussed kidnapping with Aly Khan. This access with no law enforcement purpose is the basis for the CFAA charge.”
Prosecutors also used Valle’s illicit research in the database as evidence that he was actually planning to carry out some of his fantasies, for which he was also charged and which the appellate court also threw out because they felt he was simply expressing fantasies. “Valle was not accused of harming any women,” Ax writes. “Instead, prosecutors said he discussed with other online enthusiasts his intention to abduct, torture, cook and eat women.”
Not men, though, because, you know, that would be weird.
The computer charge hinged on the Computer Fraud and Abuse Act (CFAA), and the court reversed the conviction because it was concerned that upholding it would give the government too much power, writes Justin William Moyer in the Washington Post. “While the Government might promise that it would not prosecute an individual for checking Facebook at work, we are not at liberty to take prosecutors at their word in such matters,” he quotes from the opinion. “A court should not uphold a highly problematic interpretation of a statute merely because the Government promises to use it responsibly.”
The problem with the prosecutor’s initial argument, writes Orin Kerr for the Volokh Conspiracy, is that various parts of the law and other courts had used the CFAA to make largely artificial distinctions between the notion of illegal “access” vs. illegal “use” – which, taken to their logical extreme, could make playing Freecell or using Facebook on a work computer a criminal offense. “Playing solitaire or using Facebook plainly satisfies this element,” he writes. “When you play solitaire, you enter in commands to see cards. You therefore obtain information about your cards from the computer accessed. And when you spend time on Facebook, you’re constantly seeing new text, pictures, and videos that you hadn’t seen before you logged in. You are ‘obtaining information’ for purposes of the statute.”
Valle did have access to the National Crime Information Center database in the normal course of his job, and the way the CFAA is written, he could only be charged under it if he was gaining access to information he was not entitled to in any way, writes the Electronic Frontier Foundation in its amicus curiae on the case. (If you’re just dying to read the argument yourself, it’s pages 28-38 in the court’s ruling, and 24-34 in the dissent.)
Consequently, Kerr didn’t feel that Valle was guilty under that charge. “If violating a written restriction on a computer is an unauthorized access, then pretty much everyone is a criminal,” he writes. “That includes me, as I have even testified to Congress about one of my many violations of written restrictions on computers: My Facebook account says I live in Washington, DC, although I actually live in Arlington, VA.”
On the other hand, one wonders, what sorts of shenanigans with work computers are now considered legal due to this ruling? Are there people (other than the mom who created a fake MySpace account for the purposes of harassing one of her daughter’s classmates) who have been charged under this who should now go free? Is there any activity with a work computer that can now be considered criminal, or is it at this point only a matter of workplace discipline?
Ultimately, the case could go to the Supreme Court, writes Noah Feldman, a professor of law at Harvard University and a columnist for Bloomberg View. “This issue has split the federal courts of appeal, with four adopting the government’s view, and now three saying that under the rule of lenity, an ambiguous criminal statute ought to be read restrictively and in favor of the defendant,” he writes. “The 2nd Circuit’s worry is that a broad reading of the statute turns every violation of an employer’s computer rules into a violation of federal law. That would certainly be an overreading of the statute, not to mention bad policy. The split means the Supreme Court should resolve this issue — possibly even in an appeal in this case.” It also seems likely that the CFAA should be modified to be more clear.
First it’s the cassette. Now it’s Betamax. What’s a tech geezer to do?
Sony – which, surprisingly, was still making them more than ten years after it quit making Betamax players – recently announced that it would stop manufacturing Betamax tapes by March 2016, more than 40 years after introducing the machine in 1975.
“Assumed already dead by many, the final Betamax cassette will roll off the production line in March 2016 as its maker concedes defeat to the march of time, 20, maybe 30, years late,” writes Samuel Gibbs in the Guardian.
Now, for all you whippersnappers out there who don’t recognize the significance of this action: Betamax – essentially the first popular consumer video recording device – is what gave you the privilege of still watching The Walking Dead or Game of Thrones if you’re busy when it airs. It was a seminal Supreme Court case, filed against Sony over the Betamax, that determined that making recordings of TV shows for yourself was fair use.
“For the first time ever TV fans weren’t tied to their local stations’ schedules,” reminisces Douglas Perry in the Oregonian. “No longer would kids have to run home from the school bus to avoid missing the first five minutes of Scooby-Doo. No longer would their parents have to cut out of the neighbor’s cocktail party after only two drinks so they could catch Hill Street Blues. They could watch their favorite shows whenever they wanted to.”
The court ruled that, even if some copyright infringement might occur, the right of the consumer was paramount. “Even if it were deemed that home-use recording of copyrighted material constituted infringement, the Betamax could still legally be used to record noncopyrighted material or material whose owners consented to the copying,” says the decision. “An injunction would deprive the public of the ability to use the Betamax for this noninfringing off-the-air recording.”
“The ruling opened the door for TiVo and other digital gadgetry in the home, then helped defend an assortment of Web-based services with both infringing and non-infringing uses, such as YouTube and other user-generated content sites and Dropbox and other online storage services,” wrote the Los Angeles Times in an editorial on the 30th anniversary of the case.
Of course, even if the Supreme Court had ruled the other way, the horse was out of the barn by then, writes the Museum of Broadcast Communications. “Although the U.S. Court of Appeals reversed the lower court’s decision in October 1981, the decision, if it were to stand, would have been impossible to enforce,” the organization writes. “The home video market had been expanded enormously since the start of the case; VCR sales had increased from 30,000 sets a year in 1976 to 1,400,000 a year in 1981.”
Sadly, Sony never got to enjoy the fruits of its success. A year after the Betamax was introduced, JVC introduced VHS (for Video Home System, who knew?), which within another year was eating Betamax’s lunch, and eventually killed it. This was even though Betamax reportedly offered better features – “more accurate colour replication, superior resolution and smaller tapes,” writes Stuff. (In fact, the format is still used in the industry, such as for making aircheck tapes.)
“In its first year of sales, VHS took 40% of Sony’s business,” writes Marc C. Scott in The Conversation. “By 1987, 90% of the US$5.25 billion VCR market sales were VHS. Furthermore by 1988, 170 million VCRs had been sold worldwide, of which only 20 million (12%) were Betamax.”
Sales of Betamax videocassettes reached their peak in 1984 (coincidentally, the year the Supreme Court ruled on the case), when Sony shipped 50 million units, writes Robert Hackett in Fortune. “Soon after, they became obsolete, living fossils.” By 2002, the company stopped making the machines, after having manufactured 18 million of them, he adds.
What was the fatal blow? Some people like to blame the porn industry.
There are two theories. The first is the claim that, as with the Blu-ray format (also produced by Sony), the pornography industry killed Betamax by preferring the VHS format. The second is that Sony indirectly killed Betamax by not allowing porn producers to use the format.
As it turns out, it’s more correlation than causation. First of all, some porn was produced for Betamax, though it is true that Sony frowned on it. Second, to the extent that porn producers picked VHS, they did so for the same reason the majority of consumers were choosing it – and it wasn’t because consumers preferred lousier pictures.
A number of factors contributed:
- As in the PC vs. Apple battles, Sony was more restrictive in its licensing policies, so there were fewer Betamax than VHS manufacturers, meaning machines were more expensive and less common. (Especially with the first model, which was actually built into a television and cost $2,295, while a one-hour cassette cost $15.95.)
- Because VHS machines were more plentiful, providers were also more likely to make their content available on VHS, and consumers became more interested in renting or buying content than recording it themselves.
- Because content was more available on VHS, stores (remember Blockbuster?) tended to stock more VHS than Betamax tapes for rental or purchase.
And this became a vicious circle. The more people bought VHS systems, the more content was available for them, and consequently the less space was devoted to Betamax systems and content.
“VHS became a more open and widely adopted format for the video cassette, which resulted in a larger economy of scale, allowing VHS to beat Betamax on price,” Gibbs writes. “That greater adoption and lower cost saw the pornography industry pick VHS as the format of choice for its home videos, which is largely considered the turning point that propelled VHS to victory.”
Moreover, while Betamax tapes offered better quality, that quality came at a price: the tapes could record only 90 minutes at most. This meant you couldn’t go out while recording a two-hour movie or sporting event, which defeated the whole purpose of having a VCR in the first place.
“Betamax was the first successful consumer video format, and at one time it had close to 100% of the market,” writes Jack Schofield in the Guardian. “All of the video machines in use and all of the pre-recorded movies were Betamax. It had a de facto monopoly, and an element of lock-in (because of tape incompatibilities). It lost because, at the time, it could not do what consumers wanted: record a whole movie unattended.”
Meanwhile, VHS tapes continue to be produced, according to Gibbs, even though they, too, have been superseded by other technology.
Still have a Betamax machine for which you need blank tapes or recorded content? Or do you still have Betamax content you’d like to be able to watch? Cheer up. There’s always eBay.
Republican Presidential candidate Donald Trump recently indicated that he would at least consider setting up a database to track Muslims. While he has since backed away from the idea to some degree, it still makes an interesting thought experiment in the context of database design and public policy – if only to point out how very, very fraught such a thing would be.
Needless to say, the whole notion of such a database is problematic. Any student of American history, from the Japanese internment to McCarthyism, can explain why. But simply as a technical matter, here are all the reasons it’s impractical.
- How do we define “Muslim”? Self-defined? Your parents were? What if one parent was? How devout do you have to be to “count”? (Theoretically, the U.S. could use the definition of Jewish that the Nazis used, but that might be politically unpopular.)
- Similarly, which people “in” the U.S. would need to register? Citizens? Students? (That should go over well with the colleges and universities that count on foreign student tuition.) Visitors? How long do you have to be in the country before you sign up? Do they get removed from the database when they leave?
- Just what information is going to be tracked? And how does it get updated when it’s changed? Keep in mind how challenging it is even to ensure that voter rolls are kept up to date.
- How do you get people to sign up? If it’s voluntary, do we really think that people with terrorist leanings are going to meekly put their names on a database? If not, how do you enforce signups? Where do you get the data to begin with to find the people you want to sign up? The census, for example, no longer tracks religion.
- On the other hand, how do you keep non-Muslims from signing up as an “I am Spartacus” act of protest? Echoing the (sadly untrue) story that King Christian of Denmark wore a yellow star during World War II to show solidarity with Jews, a number of people have already indicated that they plan to identify as Muslim should any such system be implemented. Do we just shrug and say OK, if you want to say you’re Muslim, you are?
- If you don’t just register yourself, how do you deal with false positives? Remember that even Senator Ted Kennedy has been put on a terrorism no-fly list.
- Who’s going to provide this database? While companies such as IBM reportedly worked with the Nazis during World War II, many vendors these days consider themselves progressive. It’s difficult to believe, for example, that Google or Facebook would cooperate with such an effort.
- Similarly, who’s going to set this up and work on it? Presumably this would be a government effort, perhaps through the Department of Homeland Security. But how many techies are actually going to consent to work on this? It doesn’t seem like the sort of project where outsourcing is going to be a good idea, you know?
- More to the point, how do you ensure that protesting techies don’t sabotage it in some way? Does anyone think that Anonymous – which is doing its own work to help reduce terrorism – is going to let this database stay up and functioning properly for more than ten seconds? Won’t an effort like this spawn a dozen Edward Snowdens who want people to know what their country is doing?
- Aside from the politically motivated hackers, how is the database going to be secured, both for the amount of personally identifiable information it would have and from the people who might decide to use it to take out their Muslim neighbors?
- How much is this going to cost? And where’s the money going to come from? Michigan, for example, has paid HP $33 million to develop a replacement for its Secretary of State’s system. The state’s population is about 9 million, right in the 5 to 12 million range estimated for the number of Muslims in the U.S.
- How long is this going to take? Going back to Michigan, the state is now suing HP for $49 million after the company took more than ten years and still didn’t deliver a working product.
- Finally, keep in mind that every organization from the ACLU to the EFF would be taking the government to court on this, which would mean development would take even more time.
In short, even if this database were a good idea, it would be years before the data could be used. Hopefully, by then, we’ll have wised up.
Smile. You’re on the government’s camera.
Increasingly, governments are able to identify people using facial recognition software, and are collecting databases of people’s faces – not just criminals, but regular, ordinary people. Generally, there are no laws against it.
And what’s more, we’re helping them do it.
Collectors of such images range from border security to law enforcement organizations to even retailers. “Before taking her away, Officer Rob Halverson paused in the front yard, held a Samsung Galaxy tablet up to the woman’s face and snapped a photo,” writes Ali Winston, of a program in San Diego. “Halverson fiddled with the tablet with his index finger a few times, and – without needing to ask the woman’s name or check her identification – her mug shot from a previous arrest, address, criminal history and other personal information appeared on the screen.”
Photos used in the system come from the statewide law enforcement database, which includes 32 million driver’s license photos, Winston writes. The county is also looking at using mug shots from statewide gang and parolee databases, he adds.
Similarly, Australia announced earlier this year that it was spending $18.5 million to create a database of facial photos – including photos from Facebook and Instagram — for use in federal law enforcement. “The images can come from drivers’ licences, passport photos or security cameras in your local shopping center,” write Margot O’Neill and Amy Sherden for ABC Australia.
The FBI has a similar program. Incidentally, that system has a 20 percent failure rate in identifying individuals.
There’s also the security aspect. “If your passport, credit card, PIN or tax file number are compromised due to a security breach, they can be replaced fairly easily,” writes Adam Molnar in The Conversation. “Not so with your facial features. If a biometric database is hacked, the information can potentially be abused by criminals over your entire life.”
Coincidentally, there’s suddenly a swarm of games out there that seem to have the goal of collecting facial photographs. Earlier this year, Microsoft’s “How Old Do I Look” analyzer swept through Facebook. Were the results right? Were they wrong? Who cares? The point is, within a few hours, Microsoft had tens of thousands of new facial photographs.
“Proposed uses include verifying whether two faces in separate photos belong to the same person, or using one person’s photos to find him or her in multiple other photos,” writes CBC News. You know, like searching photos of a demonstration to identify protesters.
For what it’s worth, the developers now say the site doesn’t save the photos. “No we don’t store photos, we don’t share them and we only use them to guess your age and gender,” write Corom Thompson and Santosh Balasubramanian, engineers in information management and machine learning at Microsoft, in a blog post about it. “The photos are discarded from memory once we guess. While we use the terms of service very common in our industry, and similar to most other online services, we have chosen not to store or use the photos in any way other than to temporarily process them to guess your age.”
But even assuming that’s true, how many people even thought about that aspect before trying to find out how young they looked? Even without saving the pictures, the system now has a lot more practice identifying people. And just because this app doesn’t save photos, what about other apps?
More recently, there’s the “My Most-Used Words on Facebook” app, which not only looks at the words you’ve posted in the past year but every picture that’s been posted – which most people didn’t even notice, writes Paul Bischoff. “Over 16 million people have agreed to give up almost every private detail about themselves to a company they likely know nothing about just to play a quiz,” he writes. In addition to a boatload of information about yourself and your friends, it also has access to all the photos you’re tagged in.
Like Microsoft, the word cloud app vendor, Vonvon, said it didn’t save the data, and later allowed people to edit the permissions for their personal information. But again – how many people even thought to look at the permissions?
(And now there’s a new one, Which is Your Most-Liked Photo On Facebook?)
Or there’s the recent trend toward “gigapixel” super-high resolution photos of enormous sporting events, where the more than 100,000 attendees are not only perfectly identifiable, but are encouraged to helpfully tag themselves and their friends. It takes only 2 minutes and 40 seconds to photograph an entire stadium, and the company specializing in the process says it typically has eight such projects every weekend.
It may be that all these apps are perfectly innocent. But we don’t know, and until we do, it behooves us to be careful – at least until we find out who’s on the other side of the camera.
The British government is pushing for a law that would require Internet service providers to keep, for a year, a list of all the websites their users visit – an action that has already been ruled a violation of privacy by the European Court of Justice. And this new law was drafted in response to the previous set of Paris terrorist attacks, not even the most recent ones.
The Investigatory Powers Bill would order communications companies, such as broadband firms, to hold basic details of the services that someone has accessed online, explains the BBC. “This duty would include forcing firms to hold a schedule of which websites someone visits and the apps they connect to through computers, smartphones, tablets and other devices,” the BBC writes. “Police and other agencies would be then able to access these records in pursuit of criminals — but also seek to retrieve data in a wider range of inquiries, such as missing people.”
While the government already has some of these powers, it doesn’t have historical information about the websites people visit, reports the BBC.
“This isn’t a license for the police to simply prowl over everything you have been doing, but I quite accept that a lot of data is being kept by these service providers and under the government’s proposals it would be kept for a very long time,” David Anderson, described as the “government’s terror watchdog,” told the BBC.
Predictably, some members of the UK government are using the most recent Paris attacks to justify accelerating adoption of the Investigatory Powers Bill. “Lord Carlile says Theresa May’s Snooper’s Charter should be rushed through Parliament within the next month, to prevent terrorist attacks in the UK,” writes Mikey Smith for the Mirror. “Speaking in the wake of the Paris terror attacks, the Lib Dem peer warned: ‘It could have been London.’”
What might end up stopping the whole plan is less a matter of privacy or personal liberty and more a matter of money. Though the cost of performing universal surveillance has gotten a lot more affordable lately, thanks to cheaper storage, tracking all these websites still adds up, reports the BBC. The British government had allocated 175 million pounds – about $267 million – but that might not be enough, the BBC writes.
Part of the cost, of course, is protecting all that data. It could end up being a treasure trove for hackers, after all, because it could provide all sorts of juicy blackmail material such as which porn sites people visit. “Making sure there’s no way the hackers can get in is a challenge for any company, and that is hard work,” Adrian Kennard, director of Andrews & Arnold, a Bracknell-based internet provider, told the BBC. “This is sensitive personal information, even if you are just holding the websites people went to and not the specific pages. That makes it a very valuable target for criminals to go after — they may even try to infiltrate employees into companies to try to access it.”
Ironically, this is all happening despite findings that such broad-based surveillance actually doesn’t do much to help prevent terrorist attacks. “Court documents lodged in the US and UK, as well as interviews with involved parties, suggest that data-mining through Prism and other NSA programmes played a relatively minor role in the interception of the two plots” that governments claimed were prevented, write Ed Pilkington and Nicholas Watt for the Guardian. “Conventional surveillance techniques, in both cases including old-fashioned tip-offs from intelligence services in Britain, appear to have initiated the investigations.”
That said, other law enforcement organizations such as the FBI are also using the Paris attacks to justify their long-held position that governments should mandate a “back door” into encryption, even though there’s no evidence the attackers used encryption — and, in fact, quite a lot of evidence that they didn’t.