Five years ago this month, I wrote about Western Digital announcing a 10-terabyte (TB) hard disk drive. Which makes it appropriate that, this month, Western Digital has now announced a *20*-TB hard disk drive.
“Western Digital announced that it would ship 18 TB conventional magnetic recording (CMR) and 20 TB shingled magnetic recording (SMR) HDDs in the first half of 2020,” writes Tom Coughlin in Forbes. “This helium-sealed 9-disk HDD platform is said to have ‘…leveraged energy-assisted recording technology to deliver areal density leadership at the highest capacity available.’”
Well, at least until five years from now.
Incidentally, 10-TB hard disk drives are now considered to be “commodity” hard disk drives, at least according to Backblaze, which is now using 10-TB Seagate drives – as well as other hard disk drives up to 14 TB – in its storage pods. It doesn’t use the Western Digital ones because it has trouble getting them in quantity, director of compliance Andy Klein wrote earlier this year.
Western Digital will sample the 18-TB Ultrastar DC HC550 CMR HDD and the 20-TB Ultrastar DC HC650 SMR HDD to customers by the end of 2019, Coughlin said.
Like the 10-TB hard disk drive, the new 20-TB hard disk drive uses shingled magnetic recording technology, which puts more data in the same space, though it is slower. For the 10-TB hard disk drive, Western Digital came right out and said it was mainly intended for “cold storage” facilities, and that is presumably true of the new one as well.
That means “slow,” because you put stuff in cold storage that you don’t need all the time, and so you spin down the drives rather than keep them running all the time because it saves energy. So every time you retrieve something from cold storage, you have to go kick the drive to start it up again. It’s like keeping the beer in the fridge out in the garage. You can put a lot more beer out there, but you have to traipse out to the garage every time you want a beer.
Coughlin did cite Western Digital as saying that the drives are targeted at data center applications and that the company estimated that 50 percent of its hard disk drive exabytes shipped will be on SMR by 2023. The press release also quoted the vice president of engineering at Dropbox.
Another big customer is likely to be DDN, which Western Digital recently announced would be buying the company’s IntelliFlash business as part of its plan to exit from storage systems. At the same time, though, the two companies agreed to a multi-year strategic sourcing agreement, under which DDN will increase its purchase of Western Digital’s HDD and SSD storage devices, according to a Western Digital press release.
According to Coughlin, Western Digital holds 37 percent of the hard disk drive market, while Seagate holds 40 percent and Toshiba holds 23 percent.
The company didn’t say how much the new hard disk drives would cost.
Disclaimer: I am a Backblaze customer.
We’ve written more than once about government efforts – in the name of fighting crime, of course – to track down everyone in a particular area where the crime was committed. But there’s a new wrinkle.
The Department of Justice and Immigration and Customs Enforcement (ICE) are now working to obtain a court order to force Apple and Google to give them the names, as well as phone numbers and IP addresses, of everyone who’s downloaded a particular application since August 2017, plus whenever they’ve used it. The application in question is called Obsidian 4 and is put out by night-vision specialist American Technologies Network Corp. (ATN). Its purpose is to let gun owners get a live stream, take video, and calibrate their gun scope from an Android or iPhone device, according to Thomas Brewster in Forbes, who broke the story.
While other governments have made similar requests in the past – Brewster describes one non-U.S. government that asked one global technology company for the names and addresses of 58 million users of a single app so it could trace a suspected terrorist cell plotting a suicide bomb attack – this is the first time the U.S. government has tried this, he writes.
The issue is illegal exports of ATN’s scope, which is controlled under the International Traffic in Arms Regulation (ITAR), though the company itself isn’t under investigation, Brewster writes.
Likely this would be considered a “third party” request because the government isn’t requesting the information from the users themselves, but from third parties to which the users had given their information. In the Carpenter case, the Supreme Court ruled that such requests about individuals required a warrant.
However, the Supreme Court left open the question of “tower dumps” – in other words, a download of information on all the devices that connected to a particular cell site during a particular interval. When ruling on Carpenter, it didn’t rule that those weren’t okay. It could be that the court will see these as similar situations, or will at least allow the Department of Justice access to some of the information, with the ability to issue a warrant to get the remaining data for any individual who looks particularly suspicious.
Needless to say, conservative websites are having kittens about this, because they see it as a back door to track down gun owners.
“Allowing this request to go through would create dangerous precedents,” writes Beth Baumann in Townhall, which describes itself as the leading source for conservative news and political commentary and analysis. “The most dangerous aspect though is the government being able to pinpoint every single person and every single firearm a person has. It’s a form of a gun registry…without calling it that.”
According to American Military News, ATN wasn’t contacted about the court order, and intends to protect its customer data to the extent it is allowable under law.
If the idea is actually to track down overseas users of the application, the government should do that, writes Jazz Shaw for HotAir, a sister publication of Townhall. “Why ask for the data for all users?” he writes. “Surely Google and Apple could provide the user data for just those users outside the United States, right? Why not just ask for that if there are no issues with people using the scopes or the app in America? Seems like a reasonable compromise that both the government and the tech giants could see eye to eye on.”
But the problem with this request is actually broader than that. Keep in mind that, at various times, the government has made certain computer things illegal, ranging from public-key encryption to online gambling. What would it be like if the government could then track down everyone who had downloaded such applications and bust them for owning them?
For example, if the FBI succeeds in outlawing encryption, will the government be able to track down everyone who’s ever downloaded WhatsApp, even if they don’t realize it uses encryption and didn’t download it for that purpose? Or, in a hypothetical oppressive U.S. government that outlawed social media or the use of the Internet altogether, what if the government could get the names and contact information for everyone who ever downloaded Twitter or a TCP/IP stack?
A security analyst quoted by Brewster also notes that people don’t necessarily download an app to use it; they could be checking it for security vulnerabilities, among other reasons. People could also become less willing to use the Google and Apple app stores in response to this, the analyst added.
Another concerning aspect is that the court order was supposed to have been sealed, and Forbes apparently managed to get a copy of it beforehand. One would assume future court orders won’t make that mistake, and people won’t even be able to find out about them.
Presumably the various civil liberties organizations, not to mention Google and Apple themselves, are lawyering up to stop this. Because the precedent, if this passes, is pretty scary.
Over the past few years, there have been a number of incidents regarding law enforcement and attorneys wanting access to recordings made by smart devices such as the Amazon Alexa and copies of data from devices such as Fitbits. Now, there’s some indication that the government may be asking for such data as well, in the name of preventing mass shootings.
“The proposal is part of a larger initiative to establish a new agency called the Health Advanced Research Projects Agency or HARPA, which would sit inside the Health and Human Services Department,” writes Jacqueline Alemany in the Washington Post. “Its director would be appointed by the president, and the agency would have a separate budget, according to three people with knowledge of conversations around the plan. HARPA would be modeled on DARPA, the highly successful Defense Advanced Research Projects Agency that serves as the research arm of the Pentagon and collaborates with other federal agencies, the private sector and academia.”
The lead scientist on the project emphasized that this would be a voluntary program, but nonetheless, people are having kittens about the concept. Because, seriously, how likely is it that someone who exhibits the tendencies of a person who goes on to become a mass shooter will allow the government to collect data about them?
“The idea is for the agency to develop a ‘sensor suite’ using advanced artificial intelligence to try to identify changes in mental status that could make an individual more prone to violent behavior,” Alemany writes. “The document goes on to list a number of widely used technologies it suggests could be employed to help collect data, including Apple Watches, Fitbits, Amazon Echo and Google Home. The document also mentions ‘powerful tools’ collected by health-care providers like fMRIs, tractography and image analysis.”
Opponents noted that mental illness isn’t necessarily a predictor of being a mass shooter, and expressed concern about what would happen to people who were identified by the system as potentially being violent and whether the consent would be buried in some multipage terms of service document.
In addition, governments may be able to collect this data on people whether they volunteer or not, using a policy known as “third party doctrine,” which the Supreme Court has been deciding in connection with what data government and law enforcement can retrieve from cellphone service providers. “The U.S. Supreme Court has long held that when private individuals surrender personal information to third parties and the government subpoenas that information from the third parties, there’s no Fourth Amendment violation,” writes Frank Camp in the Daily Wire, quoting a Cornell Law School professor.
Recall that in a number of legal cases involving smartphone data, the cases have had to do with crimes such as terrorism and child pornography – you know, things that are so heinous that no reasonable person wants to be associated with supporting them, and of course we all want to do whatever is possible to stop them. No doubt, wanting to prevent mass shootings could be seen as falling into that same category.
As you may recall, in January I wrote a blog post about the large number of small e-discovery company acquisitions in the legal space, and pointed out that this was likely to continue because there were still a lot of dinky e-discovery companies out there.
Indeed, that appears to be the case.
Legility has announced a deal to acquire Dallas-based e-discovery provider iControlESI, writes Frank Ready in Law.com. iControlESI will be rechristened as Legility Data Solutions, though its products will keep their names for the time being.
This isn’t the first Legility acquisition, Ready points out, noting that in September, 2017, the company also acquired DSIcovery. The company also changed its name last fall, he writes. “The company already offers e-discovery related platforms such as Relativity, Catalyst and Everlaw, and just last fall underwent a rebranding that saw it ditch its original name—Counsel on Call—in favor of the more streamlined moniker Legility.”
In addition, information governance and digital forensics provider KLDiscovery announced in July that it had acquired e-discovery providers Strategic Legal Solutions and Compiled, writes Victoria Hudgins in Law.com.
KLDiscovery itself was the product of other mergers, Hudgins writes. “To be sure, rebranding isn’t a new concept for KLDiscovery,” she writes. “Originally, the company operated as KrolLDiscovery after the 2016 merger of LDiscovery and Kroll Ontrack. By January 2018, the company decided to drop the ‘Kroll’ name altogether. Later in 2018, KLDiscovery announced it received a ‘significant’ investment from WestView Capital Partners, The Carlyle Group and Revolution Growth, according to a press release.”
Kroll Ontrack was actually one of the granddaddies of the e-discovery marketplace. It had been dropped from the Leaders to the Challengers quadrant in the 2015 Gartner E-discovery Magic Quadrant, due to what Gartner felt was a lack of vision.
Then, KLDiscovery (which ranked 1832 on the Inc. 5000 list of the fastest-growing companies) announced that it was going public, writes John Jannarone in IPO-Edge. “Meet KLDiscovery, an electronic discovery and data recovery provider which is going public through a merger with Pivotal Acquisition Corp., a special purpose acquisition company or SPAC,” he writes. “Pivotal raised money in an IPO to find a target and recently announced a deal with KLDiscovery that will result in a public company with an enterprise value of $800 million. The deal will be put to a shareholder vote later in the third quarter, after which Pivotal will change its name to KLDiscovery.”
(That’s not the same Pivotal that VMware recently announced it was acquiring. That was Pivotal Software. Hard to keep track of the players without a scorecard.)
There have been other e-discovery acquisitions in recent months, with Xact Data Discovery (XDD) acquiring fellow e-discovery provider QDiscovery in early July, Ready writes. In January, HaystackID acquired eTERA Consulting and legal services provider Driven Inc. acquired e-discovery company Omnivere, he writes in a different article. In addition, “only a mere two months after being sold, e-discovery company Litera absorbed document manager Workshare in early July,” Hudgins writes.
What’s behind all of this? Venture capital money, apparently. “The pace of acquisitions in that space continues to move at a steady clip, spurred in part by the interest of venture capitalists and a desire by companies operating in the market to bolster scale,” Ready writes.
“An influx of private capital may be behind XDD’s expansion,” agrees Zach Warren in Law.com. Private equity firm JLL Partners acquired XDD in early 2018, in a deal that represented JLL’s first venture into e-discovery, he writes.
Plus, both XDD and QDiscovery already had a history of acquisition, Warren continues. “XDD is certainly no stranger to M&A, having acquired QUiVX’s e-discovery service in early 2019,” as well as F1 Discovery and Orange Legal Technologies in 2016, while QDiscovery had acquired northeast U.S. e-discovery provider Evidox last November.
In the book Fail-Safe, a really scary 1960 novel about nuclear war, six planes accidentally get sent to the USSR. Five of them carry atomic bombs; the sixth carries a load of reflective material that it releases as the planes approach the USSR, confusing radar systems by creating a whole lot of extraneous data. This is called “jamming” or “chaffing” (as in, chaff, not wheat).
Similarly, if you ever saw the movie Spartacus – also released in 1960, oddly enough — there’s a scene where the Romans threaten to kill all the slaves if Spartacus doesn’t give himself up, and so Spartacus stands and says “I am Spartacus.” But then all the other guys also stand up and say “I am Spartacus,” so the Romans can’t tell who the real Spartacus is. (Sadly, they kill all the slaves anyway. And yes, I just spoiled this movie for you.)
Now, people concerned with license plate surveillance are fighting back with what’s called “adversarial fashion” – clothes intended to confuse license plate readers by overwhelming them with information.
“The patterns on the goods in this shop are designed to trigger Automated License Plate Readers, injecting junk data in to the systems used by the State and its contractors to monitor and track civilians and their locations,” notes the website of one such company. The clothes are covered with pictures of license plates – and, in one especially poetic touch, the license plates spell out the text of the Fourth Amendment. You know, the one about unreasonable search and seizure.
The vendor, Kate Rose – who presented the products at the DEFCON security conference in Las Vegas earlier this year – also released information about how to design such fabrics yourself.
“To an automatic license plate reader (ALPR) system, the shirt is a collection of license plates, and they will get added to the license plate reader’s database just like any others it sees,” writes Alex Hern in the Guardian. “The intention is to make deploying that sort of surveillance less effective, more expensive, and harder to use without human oversight, in order to slow down the transition to what Rose calls ‘visual personally identifying data collection.’”
On her website, Rose actually shows ALPRs interacting with the fabric and recording the various license plates into their systems.
Hern also notes that in 2016, Berlin-based artist and technologist Adam Harvey worked with international interaction studio Hyphen-Labs to produce HyperFace, a fabric printed with abstract patterns intended to trigger facial recognition systems.
It’s similar to “dazzle ships,” a way of painting military vessels during the World Wars with zebra stripes to make it harder for enemy rangefinders to gauge their size, speed, and heading.
In fact, one anti-surveillance technology is even called CV Dazzle, and is intended to defeat facial recognition systems, writes Courtney Linder in Popular Mechanics.
“It messes with the pattern of a face that an algorithm may be designed to look for while detecting people,” she writes. “Usually, those algorithms are scanning for the spatial relationship between features. You can block detection, then, by creating what Harvey calls an ‘anti-face.’”
Other techniques include wearing a picture of someone else’s face, or some other picture, to confuse facial recognition systems. Oddly, pictures of umbrellas seem to work, Linder writes. And, of course, there are always masks and such that block a person’s face from the facial recognition system.
In addition, some designers are using shiny, reflective materials to confound such systems, writes Jane Hu in Slate.
Finally, if you’re upset that I spoiled a couple of 59-year-old movies and books for you, Rosebud is his sled.
About a year ago, it was discovered that a DNA database was hacked. At least, sort of. It was just email addresses of the users of the DNA database, not any of the DNA itself. And everyone heaved a huge sigh of relief at that, because losing data like that would be really bad.
Now, some data like that has been stolen.
“BioStar 2 is a web-based biometric security smart lock platform. A centralized application, it allows admins to control access to secure areas of facilities, manage user permissions, integrate with 3rd party security apps, and record activity logs,” writes vpnMentor, an organization that reviews VPNs, particularly their security. “Our team was able to access over 1 million fingerprint records, as well as facial recognition information. Combined with the personal details, usernames, and passwords, the potential for criminal activity and fraud is massive. Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.”
Well, that’s a bummer.
The security software, produced by a company called Suprema, is used by a variety of organizations worldwide, including the UK Metropolitan Police, defense contractors, and banks, according to Josh Taylor in the Guardian newspaper.
The problem with stealing biometrics, such as fingerprints and faces, as opposed to credit card numbers, is that while people can always get a new credit card if one gets compromised, they can’t get new fingerprints or faces. This is related to the problem of medical identity theft: It’s not something you can change.
The breach was discovered on August 5, reported on August 7, and closed on August 13, the organization writes. Altogether, the researchers said they were able to access more than 27.8 million records, a total of 23 gigabytes of data. It isn’t clear how long the vulnerability was there, according to Chris Baraniuk of the BBC.
As with many other breaches, this one happened because the security for the system was so bad, the organization writes. Some people had really poor passwords, and even the good passwords were stored in plain text in a database, meaning that anyone who hacked into the database could have access to the data.
“The unsecured manner in which BioStar 2 stores this information is worrying, considering its importance, and the fact that BioStar 2 is built by a security company,” the organization writes. For example, instead of saving a hash of the fingerprint (that can’t be reverse-engineered) the company saved people’s actual fingerprints, which could be copied for malicious purposes, it warns.
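The principle vpnMentor is pointing to is standard practice: store only a salted, deliberately slow hash of a secret, never the secret itself. Here is a minimal sketch of that idea using Python’s standard-library PBKDF2 implementation (the function names are mine; note too that raw biometric templates generally can’t be protected this simply, since two scans of the same fingerprint never match bit-for-bit, which is why dedicated template-protection schemes exist):

```python
import hashlib
import hmac
import os

def hash_secret(secret, salt=None):
    """Derive a salted, slow hash; only the (salt, digest) pair is stored."""
    salt = salt if salt is not None else os.urandom(16)  # random per-record salt
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return salt, digest

def verify_secret(secret, salt, stored_digest):
    """Re-derive the hash from the candidate secret and compare in constant time."""
    _, digest = hash_secret(secret, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_secret(b"correct horse battery staple")
print(verify_secret(b"correct horse battery staple", salt, stored))  # True
print(verify_secret(b"hunter2", salt, stored))                       # False
```

A database holding only `(salt, digest)` pairs can still verify logins, but an attacker who dumps it learns nothing directly usable, which is exactly the property BioStar 2’s plain-text storage threw away.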
So what sorts of things could hackers do with the stolen data?
- Take over a high-level account, with user permissions and security clearances, and make changes to the security settings in a network
- Change user permissions and lock people out of certain areas
- Create new user accounts to give people access to secure areas
- Change the fingerprints of existing accounts to their own and hijack a user account to access restricted areas undetected
- Gain access to activity logs, so they can delete or alter the data to hide their activities
The one bit of good news, according to one security researcher on Twitter, is that perhaps the data was just test data and not actual data. But it isn’t clear yet, and, not surprisingly, Suprema isn’t talking; there’s been very little new information since the initial report.
Here we go again. Governments are talking about encryption back doors.
Oh, excuse me. The latest term of art is “exceptional access,” which actually makes it sound sort of cool. But it’s a back door just the same.
As you may recall, governments have been concerned about encryption for as long as it’s been around. At one point in the U.S., it was actually classified as a munition. It came up again in the fall of 2014, when Google and Apple each released smartphones with encryption that even the respective vendors couldn’t break. Much handwringing on the part of law enforcement ensued, warning us of dire consequences such as pedophilia, terrorism, and so on.
Never mind the fact that plenty of bad guys, including terrorists in Brussels and France, don’t seem smart enough to use encryption in the first place.
Most recently, it came up in late July, when US Attorney General William Barr, followed the next day by US Attorneys Geoffrey Berman and Richard P. Donoghue, again called for government access to encrypted data.
“Although the cast of characters is new, Barr’s arguments echoed the same points Justice Department officials have been making for years: The government needs access to encrypted data, he says, or else devices are ‘law-free zones’ that hinder law enforcement officers,” writes Patrick Howell O’Neill in MIT Technology Review.
It’s not like this is a surprise. People have been expecting this since, oh, mid-November 2016.
People with more sense, like German prosecutor Markus Hartmann, disagreed with his US counterparts, pointing out that criminals and terrorists will simply turn to different services if a country like the US passes a law to bypass encryption, noting GitHub has plenty of examples, O’Neill writes.
Even former National Security Agency director Michael Hayden weighed in. “Not really,” he tweeted in response to a tweet quoting Barr as saying that Americans should accept the security risks of encryption back doors. According to Politico reporter Eric Geller, a number of three-letter government agencies have differing views of the proposal.
The most recent suggestion, from Ian Levy of the U.K.’s equivalent to the NSA, is that an encryption system between two people simply add a “ghost user” – that is, the government – to their conversation, which would give the government access to the conversation should it deem that necessary.
Security expert Jon Callas has a long (four part) series on the American Civil Liberties Union (ACLU) website explaining all the technical issues wrong with the proposal, while other security experts such as Bruce Schneier and Matthew Green have also weighed in on the proposal. The Electronic Frontier Foundation has issued at least three such rebuttals as well.
When the ACLU and Reason are both on the same side of an issue, you know it’s got to have problems.
Security experts such as Schneier have also pointed out that there’s no such thing as a back door that only good guys can use, and that any back door, no matter what you call it, is likely to be exploited by bad guys as well. That argument has worked in the past, and they are trying it on this technique as well, but it is unclear whether it will work this time.
Ironically, a number of government representatives, including President Donald Trump’s son-in-law Jared Kushner, Australian politicians, and members of Britain’s Parliament have all been said to use the encrypted messaging application WhatsApp to conduct government business. It is unclear whether they would continue to be able to do so if WhatsApp were made illegal or given a back door.
But people such as Matthew Green, who teaches cybersecurity at Johns Hopkins, pointed out that likely all we need is the cybersecurity equivalent of the Reichstag fire for legal encryption to go bye-bye. “But what they do have is time, and the inevitability that given enough of it, something terrible will happen to America on their watch,” he wrote on Twitter, which is apparently the place people make pronouncements these days. “And they’ll be able to push these proposals without the need for debate. That’s where we are, and it should scare you.”
We’ve talked before about DNA storage, or the ability to store large amounts of data in DNA. Now there’s another company that’s taking a stab at it.
While DNA storage is incredibly dense and is thought to last longer than traditional magnetic storage, it’s expensive and slow.
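The basic idea is straightforward: DNA has a four-letter alphabet (A, C, G, T), so each base can carry two bits of data. Here is a toy round trip in Python using that naive mapping – a textbook illustration only, not Catalog’s actual scheme, which (as the company describes) works by combining premade fragments rather than synthesizing bases one at a time:

```python
# Map each pair of bits to one of the four DNA bases, and back.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data):
    """Turn bytes into a DNA strand string, two bits per base."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand):
    """Turn a DNA strand string back into the original bytes."""
    out = bytearray()
    for i in range(0, len(strand), 4):  # four bases per byte
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))            # CAGACGGC
print(decode(encode(b"Hi")))    # b'Hi'
```

At two bits per base, 16 gigabytes of Wikipedia works out to roughly 64 billion bases – which hints at why synthesis speed and cost, not density, are the bottleneck.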
So now there’s a Boston-based company, Catalog Technologies, that was founded in 2016 and is working on the technology. It got a flurry of attention last year when it raised $9 million from investors.
Most recently, the company said that it had put all 16 gigabytes of Wikipedia onto DNA strands to demonstrate its technology. Since the largest previous demonstration was the 200 megabytes that Microsoft showed in 2016, this was quite an improvement.
“We encoded the English text version of Wikipedia into synthetic DNA molecules using printer technology, our groundbreaking encoding scheme and chemical protocols,” the company writes. “The total amount of data came to 16 gigabytes, significantly more digital information than has ever been captured into DNA previously – not to mention orders of magnitude faster and cheaper than chemical synthesis approaches.”
The company uses a DNA-building enzyme instead of traditional chemical approaches to rapidly synthesize DNA, wrote Jeff Bauter Engel in Xconomy last year. “The startup says the key to its approach is separating the process of synthesizing DNA molecules from the process of encoding the digital data,” he writes. “Catalog’s method involves purchasing large quantities of small DNA fragments—about 20 to 30 base pairs long—from synthetic DNA suppliers. Catalog designed a machine that can dispense and stitch the DNA fragments together in programmable ways. The idea is that Catalog’s process uses a relatively small number of DNA molecules—fewer than 200—which can be combined in an exponential number of ways.”
“Essentially, it’s like a language: in English, there are only 26 letters, but through various arrangements we can, theoretically, make an infinite number of different words,” wrote Katherine Ellen Foley in Quartz last year. “Catalog estimates that it will cost less than three thousandths of a cent to store one MB of data. For context, on Spotify, a minute of stereo sound is about 2.4 MB at the highest quality.”
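The arithmetic behind that “exponential number of ways” claim is easy to check: if each position in a strand can hold any one of the premade fragments, the number of distinct strands grows as the fragment count raised to the number of positions. A quick back-of-the-envelope in Python (the 25-position strand length is my hypothetical for illustration, not a Catalog specification):

```python
import math

n_fragments = 200   # premade DNA fragments, per Catalog's description
positions = 25      # hypothetical number of fragments stitched per strand

# Each position independently chooses one of n_fragments possibilities.
distinct_strands = n_fragments ** positions
bits_per_strand = positions * math.log2(n_fragments)

print(f"distinct strands: {distinct_strands:.2e}")                     # ~3.36e+57
print(f"information capacity: ~{bits_per_strand:.0f} bits per strand") # ~191 bits
```

That combinatorial blow-up is what lets a library of fewer than 200 molecules address an enormous data space without synthesizing new DNA for every byte.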
The company had said at the time that by next year – that is to say, this year – it would be able to encode 1 terabyte of information per day in DNA, for several thousand dollars, Engel wrote.
Now, 16 GB isn’t 1 TB, but it’s certainly better than 200 MB. “By comparison, a silicon-based portable hard drive with 1 terabyte of storage capacity typically costs less than $100, and the process of saving 1 terabyte of data on it would only take a few hours,” Engel writes. “The bottom line is even if Catalog’s system performs as well as advertised, the company and its rivals are still a long way from being able to compete with the lower costs and faster data transfer speeds of hard drives.”
Nonetheless, it’s a start. The company told MIT Technology Review last year that it would have a commercial system of a single machine or a group of them able to store a petabit of data per day by 2021.
“Let’s face it, this thing is huge,” wrote Antonio Regalado. “It’s no flash drive. The rendering shows a door and room enough inside for a couple of technicians. Inside there will need to be a hundred bags or bottles of ready-made DNA, and then an automated laboratory to mix the strands together and perform billions of reactions. You’ll also have to squeeze in a DNA sequencing machine—maybe a couple of them—to retrieve the data.”
In addition to raising money, Catalog is also said to have done a good job assembling talent. Funny how those two things go together.
Meanwhile, Microsoft hasn’t yet announced the DNA storage engine it promised to have by the end of the decade.
In 2016, the Federal Bureau of Investigation’s (FBI) facial recognition database was big and broken, according to the Government Accountability Office (GAO). Now, it’s not much better, a whole lot bigger, and it’s being used by Immigration and Customs Enforcement (ICE) as well.
Previously, the FBI’s Facial Analysis, Comparison, and Evaluation program (FACE, get it?) had the driver’s license photos of the residents of 16 states, while another 18 states were negotiating with the FBI over the use of driver’s license images, giving it a database of more than 411 million pictures. Now, 21 states give the FBI such access, with access to more than 641 million pictures, writes Drew Harwell in the Washington Post.
More recently, a report from Georgetown Law’s Center on Privacy and Technology revealed that ICE officials requested access to DMV databases in Utah, Washington State, and Vermont, with the intention of using facial-recognition technology to scan drivers’ photos and match them against criminal and residency databases without their knowledge, writes Sidney Fussell in the Atlantic. “Vermont and Utah both complied with ICE’s request, The New York Times reported; in Washington, it’s unclear whether the searches happened after being authorized,” he adds.
What makes those three states significant is that they are among more than a dozen that grant driver’s licenses to undocumented immigrants, on the reasoning that it’s safer than having them drive around unregulated. “What may have seemed like an olive branch to allow easier access to driving and identification now could be an invitation for investigation, arrest, or deportation,” Fussell writes.
In fact, in Vermont, undocumented immigrants were apparently targeted after applying for state driver’s licenses, according to Vermont Public Radio. One migrant advocacy organization in Vermont contends that its members were targeted by ICE.
And in Utah, an immigration attorney “noticed what he described as an ‘undeniable statistical pattern’ of ICE agents detaining people after they renewed their state-issued driving privilege cards,” writes Dennis Romboy in the Deseret News. The state received 49 search requests from ICE between October 2015 and November 2017, about 10 percent of which resulted in a positive hit, he writes.
Vermont officials stopped sharing facial-recognition information with federal immigration authorities in May 2017, and in Washington state, as of 2018, all requests must be court ordered, according to CBS News.
According to the National Conference of State Legislatures, 12 states and the District of Columbia (California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Maryland, New Mexico, Nevada, Utah, Vermont, and Washington) have enacted laws allowing unauthorized immigrants to obtain driver’s licenses. In 2019, legislators in several more states, including Florida, Kansas, Massachusetts, Minnesota, New Jersey, New York, North Carolina, and Texas, introduced legislation to provide driver’s licenses to undocumented residents. Several of those states already have agreements with the FBI, Harwell writes. In fact, in Florida, 17 federal agencies have access to the driver’s license database, writes Joey Roulette in the Orlando Sentinel.
In addition, the organizations aren’t required to get a warrant or subpoena to perform such searches, Harwell writes. “While some of the driver photo searches were made on the strength of federal subpoenas or court orders, many requests for searches involved nothing more than an email to a DMV official with the target’s ‘probe photo’ attached,” he writes. “The official would then search the driver’s license database and provide details of any possible matches.” Moreover, this wasn’t just to help identify criminal suspects, but also to detect possible witnesses, victims, bodies, and innocent bystanders and other people not charged with crimes, he adds.
The GAO has updated its 2016 report, noting that the Department of Justice and the FBI have taken some actions to address three of its recommendations (the FBI fully implemented one of them) but have taken no action on the other three.
So what’s the problem with the FBI, or ICE, using facial recognition databases? First, the state driver’s license databases aren’t full of criminals; they are full of largely law-abiding people who have no reason to be suspected of or investigated for a crime. Second, there are no restrictions on which law enforcement personnel can look at the databases, or why. Third, facial recognition is no panacea, particularly for minorities, for whom it is more likely to produce false positives.
While the accuracy rate has increased from 80 percent in 2016 to 86 percent now, that’s predicated on there being at least 50 pictures for comparison, which doesn’t always happen, Harwell writes. “The FBI said its system is 86 percent accurate at finding the right person if a search is able to generate a list of 50 possible matches, according to the GAO,” he writes. “But the FBI has not tested its system’s accuracy under conditions that are closer to normal, such as when a facial search returns only a few possible matches.”
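A rough base-rate calculation shows why even a highly accurate matcher is worrying at this scale. The numbers below are illustrative assumptions only (neither the article nor the GAO reports a per-comparison error rate), sketched in Python:

```python
# Illustrative base-rate arithmetic, NOT official FBI figures:
# even a very accurate matcher produces many false hits against a huge database.

database_size = 641_000_000    # searchable photos, per the Washington Post figure
false_positive_rate = 0.001    # assumed: a 0.1% chance any one photo wrongly "matches"

expected_false_matches = database_size * false_positive_rate
print(f"Expected false matches per search: {expected_false_matches:,.0f}")
```

Under these assumed numbers, a single probe photo would surface hundreds of thousands of spurious candidates, which is why trimming the results to a 50-person list says little about whether the right person, or only look-alike innocents, made the cut.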
The result is that someone minding their own business can suddenly find themselves the target of an FBI or ICE investigation because their face happens to match the face of a criminal or an undocumented immigrant.
A 2016 Supreme Court decision, which turned on how much damage a person actually suffers when a database holds incorrect information about them, continues to be used as a precedent in legal cases far removed from the original one.
The original case revolved around the data aggregation site Spokeo, which has been around for a while. It uses publicly available data to collect information about a person, some of which it provides for free and some of which you pay for. Because of how it collects and aggregates the data, it can sometimes be laughably inaccurate.
But one person, Thomas Robins, didn’t find the inaccuracies laughable. In fact, he said they had caused him harm. The site said he had a graduate degree and was married with children, and he was concerned that this inaccurate information would make it harder for him to find a job, though he had no evidence that had happened or that anyone had even looked at his file in the first place. And so he sued Spokeo, not because its collection of data was creepy and an invasion of his privacy, but because it was inaccurate.
Spokeo supporters warned that finding in favor of Robins would mean that practically anybody could file a class-action suit on practically any tiny technical detail that some company screwed up, potentially costing millions or billions. Robins’ supporters warned that finding in favor of Spokeo would mean that nobody would ever be able to file a class-action suit ever again, unless each member could point to specific, enumerated injuries.
But instead of making either of those two decisions, the Supreme Court ruled that it wasn’t an issue because Robins couldn’t prove any concrete damages caused by the errors in the Spokeo database. The computer industry heaved a great sigh of relief and went on about its business.
That said, courts have continued using Spokeo as a precedent, one way or another, ever since. In fact, because the federal appeals courts have taken different views of it since it was decided, the case could end up back at the Supreme Court.
And that’s where we stand now: Lower courts continue to make decisions based on their interpretations of what the Spokeo case actually meant, and they don’t always agree.
For example, the Sixth and the Seventh Circuit Courts have disagreed on two cases that are essentially identical, writes Maurice Wutscher in Lexology: A debtor wanted to sue a debt collector for failing to notify her in its debt validation letter that to trigger the federal Fair Debt Collection Practices Act’s protections she had to communicate a dispute in writing. According to the Seventh Circuit, the only harm the debtor suffered was receiving the incomplete letter. In fact, the first sentence of the decision literally said, “No harm, no foul.”
But according to the Sixth Circuit, the complaint in that case alleged a concrete injury because depriving a consumer of this information put them at a greater risk of future harm, Wutscher writes. A similar case in 2016, with the Eleventh Circuit, found the same, he wrote in a separate Lexology article – though the court’s decision in that case wasn’t nearly as entertainingly written.
Even in spam cases, courts – such as the Second Circuit, earlier this year – have used Spokeo to rule on whether the person receiving the spam was actually harmed by it. In the particular Second Circuit case, the court ruled that the person getting the spam text messages was actually harmed, writes Shari Clare Lewis in the New York Law Journal.
“The circuit court noted that although text messages were different in some respects from the receipt of calls or faxes specifically mentioned in the [Telephone Consumer Protection Act], they presented the same ‘nuisance and privacy invasion’ problems envisioned by Congress when it enacted the TCPA,” Lewis writes. In addition, the Second Circuit pointed to similar decisions made by the Third and Ninth Circuits, she adds.
Spokeo is also coming into play in a case about whether Facebook users can sue the company in a $30 billion class action suit claiming that their facial data was harvested without their consent in 2015. The district judge said the users had a right to sue, and Facebook appealed to the Ninth Circuit. (In another wrinkle, the data was stored outside the state in question, Illinois, which the company contended put it outside the state’s jurisdiction.)
The circuit courts haven’t been consistent, either, on whether receipts that print too many digits of a person’s credit card number cause actual harm. Earlier this year, the Third Circuit decided that having too many digits wasn’t an actual harm, agreeing with the Second and Ninth Circuits, writes Patrick Ryan in Ahead of the Class, a class action defense blog. The Eleventh Circuit, on the other hand, had ruled in similar cases that there could be harm, he added.
It just goes to show how picky some of these cases can be. How often do you check how many digits of your credit card number were printed on a receipt, and how likely would you be to sue if too many appeared? But apparently people do.