We’ve had this in the U.S. for a while, but now it’s happening elsewhere: Enter New Zealand, and either be willing to hand over your smartphone and password, or pay a fine of up to NZ$5,000 (about US$3,200).
“New laws that came into effect in New Zealand on October 1 give border agents ‘…the power to make a full search of a stored value instrument (including power to require a user of the instrument to provide access information and other information or assistance that is reasonable and necessary to allow a person to access the instrument),’” writes Katina Michael for The Conversation. “Those who don’t comply could face prosecution and NZ$5,000 in fines. Border agents have similar powers in Australia and elsewhere.”
A “stored value instrument” includes a smartphone, tablet, or laptop. No word on whether cameras are included.
“As in many countries, customs officers in New Zealand were already able to seize mobile phones and other digital devices for forensic examination if they believed they contained evidence of criminality,” writes Bernard Lagin in the Sydney Times. “But the law did not previously compel travellers to open their devices for inspection, either by entering a password or using biometric data such as thumbprints or facial scans.” He also wrote that New Zealand is believed to be the first country to impose a fine for noncompliance.
The new policy immediately caused an outcry.
“The practice of searching electronic devices at borders could be compared to police having the right to intercept private communications,” Michael writes. “But in such cases in Australia, police require a warrant to conduct the intercept. That means there is oversight, and a mechanism in place to guard against abuse. And the suspected crime must be proportionate to the action taken by law enforcement.”
Customs officials quoted by Lagin said that they needed a reasonable cause for suspicion, and that phones were examined in airplane mode, so they didn’t look at data in the cloud. The new policy was implemented in an attempt to fight organized crime, he writes. New Zealand Customs said the number of electronic devices examined is “very low,” 537 out of 14 million travelers in 2017.
The U.S. has had a policy for some time that border agents can demand access to a smartphone within 100 miles of the border – which covers much more U.S. territory than you’d think. According to the American Civil Liberties Union (ACLU), as of 2006, more than two-thirds of the U.S. population lived within 100 miles of the border. Altogether, that means anyone in that area with a laptop could have it seized without a warrant, at any time, taken to a lab anywhere in the U.S., have its data copied, and searched for as long as Customs deemed necessary. And despite the ACLU’s objections, the policy has largely been upheld.
New Zealand doesn’t have an American Civil Liberties Union, obviously, but it does have a New Zealand one. “We note that the requirements and procedures in this new law are very lightweight, have no oversight, and compare badly to the procedures that must be followed by our Police and intelligence services,” the organization writes. “Customs originally demanded to be able to perform these searches without restrictions. The law now says they have to have reasonable cause, but they do not have to prove this before confiscating your device, nor is there a way to meaningfully protest or appeal at the time of confiscation.” The policy will also affect people traveling with devices or files from other people that they can’t unlock, the organization adds. (And yes, New Zealand has a Bill of Rights, too.)
To add insult to injury, “Microsoft, Apple and Google all indicate that handing over a password to one of their apps or devices is in breach of their services agreement, privacy management, and safety practices,” Michael writes. “That doesn’t mean it’s wise to refuse to comply with border force officials, but it does raise questions about the position governments are putting travellers in when they ask for this kind of information.”
In the meantime, if you’re going to New Zealand (which is a lovely place, incidentally), be willing to hand over the password, or get a burner phone.
In this we-have-always-been-at-war-with-Eurasia era, when websites, audio recordings, photographs, and video can be altered or fabricated outright, it’s good to know that courts have ruled that stored images of websites from the Wayback Machine, part of the Internet Archive, can now be introduced as evidence.
It’s not that people haven’t tried using Wayback Machine images before. What’s new is that now they’re succeeding.
The distinction? In the case where it succeeded, prosecutors actually called staff at the Internet Archive to testify on how the Wayback Machine worked, and authenticated the images by demonstrating how the pictures submitted into evidence were the same as what the Wayback Machine was showing at that time.
This was all part of the case U.S. vs. Gasperini. The U.S. District Court for the Eastern District of New York ruled on the case in 2017; prosecutors attempted to prove that Fabio Gasperini created and controlled an army of 150,000 computers around the world to run an auto-click scheme that defrauded online advertisers, according to a description written by his attorney, Simone Bertollini.
“The District Court sentenced Gasperini to 12 months in prison, a $100,000 fine, and 12 months of supervised release,” Bertollini wrote, adding that “experts confirmed that no one before had been given such an extreme sentence on a misdemeanor computer intrusion charge.” Bertollini called the sentence “unconscionable” and indicated that an appeal to the Second Circuit had already been filed.
Gasperini appealed the original decision partly due to the inclusion of the Wayback Machine images. His attorney pointed out that previous attempts to use Wayback Machine images had been turned down. “In support of his argument, the defendant relied on a 2009 case where the Second Circuit ruled only that the district court did not abuse its discretion by excluding screenshots for lack of authentication,” writes attorney Richard Newman in the blog Pacedm. “Interestingly, the Third Circuit considered the admissibility of Internet Archive records on a similar record in United States v. Bansal (3d Cir. 2011).”
But the Second Circuit Court, in its opinion affirming the original decision, noted the use of the authentication, which is what made the use of the images acceptable.
This decision is important because businesses increasingly need to rely on information posted on a website, writes Stephen Kramarsky in the New York Law Journal. “To get a more accurate picture requires a time machine capable of re-creating the web as it was on a given date,” he writes. “Luckily, at least for many web sites, such a machine exists. A recent U.S. Court of Appeals for the Second Circuit decision describes how to use it, and how to properly introduce records from it so that they can be accepted as evidence in court. Attempting simply to introduce screenshots from a third-party archive may not meet with approval. Instead, that evidence should be supplemented with witness testimony describing the archive, how it works, and how the records to be introduced into evidence were produced and stored in the ordinary course of the archive’s business. This should address hearsay and authenticity issues, and go a long way towards ensuring that the evidence will be admitted.”
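Kramarsky’s point is that archive records need provenance, not just screenshots; for simply locating a capture in the first place, though, the Internet Archive exposes a public “availability” API. A minimal sketch in Python (the commented-out fetch and the response handling are illustrative; check the Archive’s own documentation for the exact JSON shape):

```python
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_query(url, timestamp=None):
    """Build a query against the Wayback Machine's availability API.

    `timestamp` is YYYYMMDDhhmmss; the API returns the closest snapshot.
    """
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

# Actually fetching the result requires network access, e.g.:
# import json, urllib.request
# with urllib.request.urlopen(availability_query("example.com", "20170101")) as r:
#     closest = json.load(r).get("archived_snapshots", {}).get("closest")
#     print(closest["url"], closest["timestamp"])
```

None of this substitutes for the testimony the court required, of course; it only shows how the snapshot being authenticated would be retrieved.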
The appeal also referenced two other issues that have come up here at one time or another.
First, Gasperini allegedly sent someone to his office to remove or destroy his hard disk drives. Apparently whoever it was did a good job, because there hasn’t been any indication that the hard disk drives were found or that any data on them had been recovered. “After his arrest in the Netherlands, Gasperini deleted the contents of his Google account, deactivated his Facebook account, and instructed someone to discard the hard drives in his home and erase others,” notes the decision.
Second, one of the grounds by which Gasperini appealed his case was the original Microsoft decision. “A large part of the evidence introduced at trial consisted of emails sent and received by Gasperini,” Bertollini wrote. “Before trial, Bertollini had sought to suppress the emails, arguing that they were seized through an extraterritorial application of the Stored Communications Act. Last year, the U.S. Court of Appeals for the Second Circuit decided—in the famous Microsoft case—that the SCA does not apply outside the United States.”
But as with the Wayback Machine attempt, Gasperini’s attempt to use the Microsoft decision didn’t work, either. “Even assuming, arguendo, that the legal analysis in Microsoft was still correct, and that some of the data collected through the SCA warrants was located abroad, the Court nevertheless rejected Gasperini’s argument that such evidence should have been suppressed,” write Jason Vitullo and Harry Sandick in Lexology. “Rather, the Court explained, Gasperini’s challenges were statutory in nature, not constitutional, and the SCA explicitly limits the relief available for any statutory violation to various civil action remedies such as damages and associated legal costs. Accordingly, even if foreign data was collected in violation of the SCA, such a violation did not warrant suppressing it in Gasperini’s criminal trial. The Court explained in a footnote that five other Circuit courts have ruled likewise with respect to the unavailability of suppression as a remedy for a nonconstitutional violation of the SCA.”
Oh, this should be fun. Microsoft is warning users that the next Windows 10 update might kill their systems.
Pass the popcorn.
This all came out in a warning issued earlier this month from Microsoft. “On Microsoft Windows 10 systems that have limited storage space (such as thin clients or embedded systems), when you run Windows Update, the update initialization may fail.”
Of course, Microsoft doesn’t define “limited” or say how much storage space the update initialization actually takes, exactly how the update initialization may fail, what the repercussions of that are, or how to recover from it. If you can.
“How much storage space do you need? Microsoft isn’t saying,” writes Kevin Murnane in Forbes, adding that last spring’s update needed 16GB of empty space for 32-bit systems and 20GB for 64-bit.
The company is, however, very clear on what causes it: “Windows Update does not check systems for adequate space requirements before it initializes.”
The note then launches into its Resolution section with seven separate steps detailing how users can delete files from their systems to increase the amount of empty storage space.
How about this Resolution: “We will hold off on this update until we instate the system space check, and in the meantime find out which bonehead authorized a system update without one.”
Murnane savages Microsoft for this move. “Microsoft’s decision to push out a major upgrade without warning the user if they don’t have enough free space to safely install it is unconscionable and outrageous,” he writes. “You would think the company had learned its lesson about arrogant disregard for the needs and desires of its customers after the epic fail of the Xbox One launch. Apparently not. Microsoft’s left you hanging in the wind so check to see how much storage space you have available and make space if you need it.”
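Murnane’s advice (check your free space before the update does anything) is easy to script. A minimal sketch using Python’s standard library; the 16 GB and 20 GB figures are the thresholds reported for the spring update, not an official requirement Microsoft has published for this one:

```python
import shutil

# Thresholds reported for the spring 2018 update; the requirement for any
# given update is an assumption, since Microsoft doesn't publish it.
REQUIRED_32BIT = 16 * 1024**3  # 16 GB
REQUIRED_64BIT = 20 * 1024**3  # 20 GB

def enough_space(path, required=REQUIRED_64BIT):
    """Return (ok, free_bytes) -- the check Windows Update itself skips."""
    free = shutil.disk_usage(path).free
    return free >= required, free

ok, free = enough_space("/")  # use "C:\\" on Windows
print(f"{free / 1024**3:.1f} GB free; update {'should fit' if ok else 'may fail'}")
```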
The other interesting aspect is that Microsoft is replacing its venerable Disk Cleanup utility with something called Storage Sense, a more automated version that can move some of your files into the cloud using Microsoft’s OneDrive. And while that’s a useful function (assuming people know this, can find their files later, and security is taken care of), it’s going to be sad not using Disk Cleanup anymore.
Admittedly, I’ve been using Crap Cleaner (now CCleaner) for years and usually only run Disk Cleanup afterwards, just in case. But it was something I was used to. Of course, I’m old enough that I still remember running the defrag utility and being mesmerized by the little animation that showed exactly which block was being defragged, watching all the little squares change colors. I miss that, too, even though it would probably take hours to run with the hard disk drive sizes we have these days.
Meanwhile, it seems clear that a number of users won’t find out about the problem in time, won’t take sufficient steps to deal with it, and will end up crashing their systems – at which point, we’ll at least find out what that actually means.
Better get more popcorn.
We’ve talked before about security issues involved with USB drives, but here’s a new one: A vendor alerting us to malware on a USB drive that it’s shipping with its product.
Schneider Electric recently notified users of its Conext Combox and Conext Battery Monitor that USB removable media shipped with the products may have been exposed to malware during manufacturing at a third-party supplier’s facility.
The Conext Combox and the Conext Battery Monitor are both used to monitor the harvest and yield of solar power systems, according to the company, which is based in France. That’s somewhat concerning in the context of power grid security.
It also isn’t known where the third-party supplier’s facility is located, which might help determine whether this was state-sponsored activity. China? South Korea? Japan?
“Schneider Electric has determined that some USB removable media shipped with the Conext Combox and Conext Battery Monitor products were contaminated with malware during manufacturing by one of our suppliers,” the company said in its alert. “Schneider Electric has confirmed that the malware should be detected and blocked by all major anti-malware programs. Out of caution, Schneider Electric recommends that these USB removable media are not used. These USB removable media contain user documentation and non-essential software utilities. They do not contain any operational software and are not required for the installation, commissioning, or operation of the products mentioned above. This issue has no impact on the operation or security of the Conext Combox or Conext Battery Monitor products.”
Instead of using the documentation on the USB drives, Schneider recommends that people download the documentation from the company website.
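One generic way to regain confidence in removable media is to verify file hashes against a known-good manifest from the vendor. Schneider’s alert doesn’t mention publishing checksums for these drives, so this is a sketch of the general technique, not a procedure the company recommends:

```python
import hashlib
from pathlib import Path

def sha256_file(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_tree(root, manifest):
    """Compare every file under `root` to a dict of relative path -> sha256.

    Returns the relative paths that are missing from the manifest or whose
    hashes don't match -- i.e., files you shouldn't trust.
    """
    mismatches = []
    for p in Path(root).rglob("*"):
        if p.is_file():
            rel = str(p.relative_to(root))
            if manifest.get(rel) != sha256_file(p):
                mismatches.append(rel)
    return mismatches
```

An empty return value means everything on the drive matches the published manifest; anything else is grounds to follow Schneider’s advice and download fresh copies instead.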
This isn’t the first time something like this has happened. A year ago, IBM reportedly shipped some USB flash drives, containing the initialization tool for its Storwize storage system, that also carried a file infected with malicious code. IBM was similarly tight-lipped about how the malware came to be there.
In fact, there’s a security website (called “Rationally Paranoid”) that tracks such incidents, and it goes as far back as 2000. It doesn’t yet include the Schneider incident, nor any other incident from 2018.
With the Schneider incident, there are still a number of outstanding questions:
- What kind of malware is it?
- Who is the third-party manufacturer and where are they located?
- What was the USB drives’ intended use? Did they get plugged into the solar device itself, or into a PC?
- Were these particular USB drives belonging to Schneider Electric targeted, or was it just run-of-the-mill malware? In other words, was someone trying to hack into the power grid this way?
- Who else uses USB drives from that manufacturer? Are their USB drives infected too?
Companies are understandably reticent about such incidents, because they don’t want to give people ideas, nor set themselves up for liability. On the other hand, if we’re going to protect ourselves from such incidents in the future, it’s important to know all we can about them. “Security through obscurity” never works.
Every few years, tape manufacturers get together to remind us that tape is not dead.
And it’s not. You still get the most bandwidth for your buck using a station wagon full of tapes hurtling down a highway. Tape is the Internet’s attic, or basement – a pain in the ass to get to, but it’s nice to not have to trip over the Christmas decorations the rest of the year.
The result is that tape drive manufacturers shipped 108,457 petabytes (PB) of total tape capacity (compressed) in 2017, an increase of 12.9 percent over the previous year. Admittedly, since they’re counting compressed capacity, that certainly reflects improvements in compression technology as much as anything, but it’s still a lotta tape. Even so, the vendors had to admit that it came with fewer unit shipments.
While hard disk drive manufacturers have to resort to increasingly convoluted measures to keep adding capacity to their drives, tape drive manufacturers keep diligently releasing new versions of the Linear Tape Open (LTO) specification every few years, each of which typically doubles capacity. They’re now up to LTO-8, and have a roadmap for versions up to LTO-12, which, if they keep to their schedule, should be announced around 2029.
“A modern tape cartridge can hold 15 terabytes,” writes Mark Lantz in IEEE Spectrum. “And a single robotic tape library can contain up to 278 petabytes of data. Storing that much data on compact discs would require more than 397 million of them, which if stacked would form a tower more than 476 kilometers high.”
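Lantz’s compact-disc comparison is easy to sanity-check. Assuming 700 MB per disc and the standard 1.2 mm disc thickness:

```python
# Sanity-check the arithmetic: a 278 PB tape library expressed as a stack of CDs.
CD_BYTES = 700 * 10**6          # 700 MB per disc (assumed standard capacity)
CD_THICKNESS_MM = 1.2           # standard disc thickness
LIBRARY_BYTES = 278 * 10**15    # 278 PB, per the quote above

discs = LIBRARY_BYTES / CD_BYTES
height_km = discs * CD_THICKNESS_MM / 1e6   # mm -> km

print(f"{discs / 1e6:.0f} million discs, about {height_km:.0f} km tall")
# Consistent with "more than 397 million" discs and "more than 476 kilometers"
```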
Of course, part of the reason that tape still has room to expand its density is because people weren’t using it as much once hard disk drives came along, Lantz admits. “Early on, the areal densities of tapes and hard drives were similar,” he writes. “But the much greater market size and revenue from the sale of hard drives provided funding for a much larger R&D effort, which enabled their makers to scale up more aggressively. As a result, the current areal density of high-capacity hard drives is about 100 times that of the most recent tape drives.”
That also means that every few years, everyone still using tape needs to upgrade all their equipment and write all their data to the new format, because each new LTO version can read back only two generations. You can call that “planned obsolescence” or you can call it helping to ensure that the data survives. Either way, it helps keep the industry going.
(PS, tape organizations: If you want to convince people there’s a future for tape, you might want to redesign your websites and logos so they look like they came from this century.)
Tape manufacturers point out, rightly, that their products can be more secure against intrusion than hard disk drives because they can be “air gapped,” or not on the Internet unless they’re actually in use. “If a cartridge isn’t mounted in a drive, the data cannot be accessed or modified,” Lantz writes. “This ‘air gap’ is particularly attractive in light of the growing rate of data theft through cyberattacks.”
And, using a more recent consideration, they also don’t use energy when not in use, making them more “green.” “Tape is the greenest storage technology available for large amounts of inactive data,” writes the Information Storage Industry Consortium in its report, 2015-2025 International Magnetic Tape Storage Roadmap. “Its removable media consumes no power while not in use. Automated digital libraries consume very little power yet provide access to vast amounts of data. Tape’s footprint is also reduced, minimizing the square footage required.”
Those benefits do come with a cost, though. Yes, a tape not in use isn’t as vulnerable and isn’t using energy. But if you do need something on that tape, the tape needs to be located, inserted into a reader (perhaps with a robot, as in the Rogue One Star Wars movie – and we saw how that turned out — but still), and then spun until the data shows up. That takes time. That’s why tape is dandy as a long-term cold storage medium, but not necessarily for data that you’re using right now.
Nobody, not even tape drive manufacturers, is trying to say that tape should be used for all storage solutions. But it can be handy to have. Just remember that when you’re getting the Christmas lights from the attic.
Earlier this summer, we talked about machines that are intended specifically to destroy hard disk drives. But Google does it one better.
It has robots.
That’s according to Joe Kava, Google’s vice president of data centers. “Google first detailed its process for this back in 2011,” writes Yevgeniy Sverdlik. “A company-produced video showed wiped drives get punctured with a steel piston and then thrown into an industrial shredder. The tiny pieces of plastic and metal then got boxed and recycled. What happens to each drive being replaced in the company’s data centers today is still the same. What’s different is who’s doing it. It’s now done by robots in what Google calls a ‘fully-automated disk-erase environment,’ Kava said.”
(Sadly, videos showing this robotic process don’t seem to be available, though a photograph is.)
The advantage of having a robot do the destruction is it reduces the number of people who have to handle a hard disk drive, Sverdlik writes, therefore also reducing the amount of tracking that has to be done for each hard disk drive.
The hard disk drive destruction robots come in particularly handy when Google is doing a forklift upgrade of its hard disk drives, Kava said. This would seem to indicate that other companies with very large quantities of hard disk drives, such as Facebook or Backblaze, might use hard disk drive destruction robots, too.
That said, apparently humans still need to perform the actual disconnection of the hard disk drive from the system, Kava added.
Videos of Google’s data center seem to crop up every couple of years, and destroying the obsolete hard disk drives is always a major part of it.
Actually, an interesting nuance in Kava’s video was his explanation that the only hard disk drives destroyed are the ones that can’t be verified as 100 percent wiped. He didn’t explain how Google verifies this, or what would keep a particular hard disk drive from being wiped. Reportedly, the drives that can be verified as wiped are sold to other companies, Sverdlik writes.
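Google hasn’t said how it verifies a wipe, but one common generic approach is to read the device back end to end and confirm every block matches the overwrite pattern. A sketch of that idea (explicitly not Google’s method), run against a device node or disk image:

```python
def verify_wiped(dev_path, pattern=b"\x00", chunk=1 << 20):
    """Read a device or image end to end and confirm every byte matches
    the overwrite pattern.

    Returns (True, total_bytes) on success, or (False, offset) pointing at
    the first chunk that still holds residual data.
    """
    expected = pattern * chunk
    with open(dev_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(chunk)
            if not block:
                return True, offset            # reached the end: fully wiped
            if block != expected[:len(block)]:
                return False, offset           # residual data found here
            offset += len(block)
```

A drive that fails this check for any reason (bad sectors, firmware refusing reads) can’t be certified, which may be exactly the class of drives Google sends to the shredding robots.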
Developing a 100 percent reliable way of wiping and verifying hard disk drives would also make it easier to recycle the material from which they are made, writes Tom Coughlin in Forbes. There is, in fact, an entire effort — the Value Recovery From Used Electronics Project, organized by iNEMI (the International Electronics Manufacturing Initiative) — intended to help develop a more circular economy for hard disk drives, he writes.
“There are three major reasons why HDDs are a good candidate for a circular economy: (1) the demand for data storage is increasing rapidly; (2) data storage demand is increasing significantly faster than increases in HDD storage density; and (3) industry output of HDDs (manufacturing capacity) is not expected to increase significantly, according to industry projections,” Coughlin writes. “This leads to a potential gap between estimated data storage needs and the estimated ability of HDD and SSD manufacturers to keep up with demand. There are a number of ways to fill this gap: continued investment in fabs and technologies to increase HDD and SSD storage, increased HDD reliability, and increased reuse of used HDDs so that they are available to meet some of our global data storage needs.”
But practices such as Google’s make that difficult, Coughlin writes (though he notes that Google is participating in the project). “Some existing practices severely impede the overall value recovered from the products across the reverse chain of commerce,” he writes. “Data destruction demands by the last user, which are not always essential to meet justified data security needs, lead to wholesale HDD shredding, which precludes reuse and reduces material recovery options.” And while shredding does allow for recycling of the raw material, it “precludes reuse and can reduce recovery of trace, but highly valuable, materials (e.g. rare earth metals),” he adds.
In the meantime, shredding robots it is.
When someone breaks into your system, is it fair to go break into theirs?
Sometimes it’s an “eye for an eye” situation. More often, people want to use hacking techniques to help figure out who hacked them. Either way, it’s called “hacking back” and has been illegal, with sentences of up to 20 years. “Any form of hacking is a federal crime,” writes Nicholas Schmidle in the New Yorker. “In 1986, Congress enacted the Computer Fraud and Abuse Act, which prohibits anyone from ‘knowingly’ accessing a computer ‘without authorization.’” The law was inspired by the 1983 movie WarGames, he adds.
No one has ever been charged under the law for hacking back, Schmidle writes, reportedly because it wouldn’t look good to charge victims with attacking their hackers. In fact, Shawn Carpenter, a former security analyst for Sandia National Laboratories, was not charged with hacking back, but he was fired for it; he sued and won $4.7 million for wrongful termination.
That’s not to say that people don’t do it. “Many cybersecurity firms offer what is called ‘active defense,’” Schmidle writes. “It is an intentionally ill-defined term. Some companies use it to indicate a willingness to chase intruders while they remain inside a client’s network; for others, it is coy shorthand for hacking back. As a rule, firms do not openly advertise themselves as engaging in hacking back.”
“Hacking back” can cover a number of techniques. For example, “honey pots” are sets of enticing-looking files intended to encourage a hacker to download them. Once downloaded, they can be traced. They can include “beacons,” which send messages back to help track the hacker, or “dye packets” – code embedded in a file and activated if the file is stolen, rendering all the data unusable, Schmidle writes.
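A “beacon” in this defensive sense is essentially a honeytoken: a decoy file that requests a unique URL when opened, so a hit in your collector’s access logs flags the theft. A minimal sketch of generating one; `canary.example.com` and the decoy contents are hypothetical placeholders for infrastructure you would control yourself:

```python
import uuid
from pathlib import Path

BEACON_HOST = "https://canary.example.com"  # hypothetical collector you control

def make_decoy(path, label):
    """Write an HTML decoy that fetches a unique beacon URL when rendered.

    Any request for the token in the collector's access log means the file
    was opened -- and by whom, from where, per the log entry.
    """
    token = uuid.uuid4().hex
    html = (
        "<html><body><h1>Q3 payroll (draft)</h1>"
        f'<img src="{BEACON_HOST}/t/{token}?doc={label}" width="1" height="1">'
        "</body></html>"
    )
    Path(path).write_text(html)
    return token  # record this so log hits can be mapped back to the decoy

# Example: token = make_decoy("payroll_draft.html", "finance-share")
```

Note that this passive variant stays on your own infrastructure; it is the more aggressive techniques (dye packets, surveillance inside the attacker’s systems) that run into the CFAA.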
But Rep. Tom Graves (R-GA-14) wants to change that law. He introduced a bill in 2016, the Active Cyber Defense Certainty Act, to allow for hacking back, and has updated it a couple of times since then in response to comments, primarily to require reporting to law enforcement if you’re going to do it, as well as to add a sunset clause.
“Private firms would be permitted to operate beyond their network’s perimeter in order to determine the source of an attack or to disrupt ongoing attacks,” Schmidle writes. “They could deploy beacons and dye packets, and conduct surveillance on hackers who have previously infiltrated the system. The bill, if passed, would even allow companies to track people who are thought to have done hacking in the past or who, according to a tip or some other intelligence, are planning an attack.”
Experts caution against hacking back, because it’s not always as simple as it sounds. For example, hackers often use “hop points,” or go from site to site – as many as 30 of them — to try to hide their tracks. Hacking back could nail an innocent bystander who just happens to be on that path.
People like Carpenter’s bosses also worry that hacking back might invite additional attacks or draw attention to the original breach. “[I]f companies weren’t able to defend themselves in the first place, it’s unlikely they’re going to come off best in a digital firefight,” warns Martin Giles in MIT Technology Review. (A number of the arguments resemble those against civilians carrying firearms in public.)
More ominously, hacking back could be outright dangerous when the hackers are state-sponsored rather than “script kiddies.” In one case, a company that was trying to fight back against hackers found pictures of executives’ children in email from the hackers, Schmidle writes.
Ultimately, the majority of people appear to be against hacking back, writes Josephine Wolff in the Atlantic. “Its critics range from law enforcement officials who worry it will lead to confusion in investigating cyberattacks, to lawyers who caution that such activity might well violate foreign laws even if permitted by the U.S., to security advocates who fear it will merely serve as a vehicle for more attacks and greater chaos, particularly if victims incorrectly identify who is attacking them, or even invent or stage fake attacks from adversaries as an excuse for hacking back,” she writes. (The paper The Ethics of Hacking Back looks at, and dismisses, a number of the reasons why not to hack back.)
Another alternative is to have a list of firms authorized to hack back, which companies could hire. “Department stores hire private investigators to catch shoplifters, rather than relying only on the police,” write Jeremy and Ariel Rabkin in Lawfare about their paper, Hacking Back Without Cracking Up. “So too private companies should be able to hire their own security services. There should be a list of approved hack-back vendors from which victims are free to choose. These vendors would primarily be in the business of identifying attackers and imposing deterrent costs on attackers by providing the threat of retaliation.”
In any event, thus far, Graves’ bill hasn’t gone anywhere. Yet.
Most of us are pretty aware that we need to wipe the memory of a hard drive or other storage device when we get rid of a computer or a cell phone. Some of us are even aware that you may need to do so with a printer, copier, or fax machine.
It turns out that now you have to do it with your car, too.
The Federal Trade Commission (FTC) has put forth a program, with the somewhat unwieldy name of “Be discreet when you delete your fleet,” for making sure people delete personal data from cars when they sell them. That data can include:
- Phone contacts and an address book
- Mobile apps’ log-in information or other data
- Digital content like music
- Location data like addresses or the routes you take to home, work, and favorite places
- Garage door codes
The same can be true when you buy a car – the previous owner, for example, may still have information about how to find or start the car on their phone, the FTC warns.
It’s not just your own car, but also rentals and car shares. In fact, it may be even more urgent with them, because you know someone else is going to be using the car after you do. “With cars increasingly asking to download your phonebook, and providing facilities for you to make and receive calls, message, browse the internet, and stream media, the trove of data on infotainment systems will only increase,” notes the December 2017 report Connected Cars: What Happens To Our Data On Rental Cars? from Privacy International. “If you use the GPS in a rental car to get home, for instance, a robber could find your address,” writes Rebekah Sanders in USA Today. “Or a stranger could reveal your identity by matching your device name to profiles on social media such as Facebook, Instagram or Twitter.”
“People called their phones all sorts of things, including what looks like their actual names,” writes Adam Racusin of ABC 10 News. “When I first saw this I audibly exclaimed that I’m looking at someone’s first and last name and what type of phone they have, whether it’s an iPhone or an Android,” he quotes Ted Harrington, executive partner at Independent Security Evaluators, as saying. “That’s a lot of information that’s just free for me to access. No one’s hacking that — the car is giving that information out right now.”
The complication is, how do you do it? The FTC is advising people, when they’re about to sell their car, or turn in a rental, to look for a factory reset option to wipe personal data. Privacy International is calling for a single button for car renters to press before turning the car back in.
At the same time, even a factory reset might not wipe out personal information such as subscriptions to satellite radio or Spotify, so you might have to do that manually, the FTC warns. In addition, even charging your device from the car’s USB port might transfer data automatically to the car. Use the cigarette lighter instead, with an adapter if need be, the FTC advises.
This issue is going to become even more critical as automated vehicles come into play, especially with “mobility as a service” and other shared transportation alternatives. So it’s probably a good idea to get into the habit now.
In response to incidents such as the Federal Bureau of Investigation (FBI) using material in a genetic database to track down a murder suspect, the major genetic testing firms are pledging that they will follow certain best practices before doing so in the future. But don’t cheer just yet.
“Under the new guidelines, the companies said they would obtain consumers’ ‘separate express consent’ before turning over their individual genetic information to businesses and other third parties, including insurers,” write Tony Romm and Drew Harwell in the Washington Post. “They also said they would disclose the number of law-enforcement requests they receive each year.”
Well, that’s nice, except for a few things.
- The agreement doesn’t cover GEDMatch, the open source database used by law enforcement to track down the alleged “Golden State Killer.”
- How long is it going to take before insurers offer either carrots – “We’ll give you this sort of price break to give us access!” – or sticks – “We won’t insure you unless you give us access”?
- What happens when law enforcement puts gag orders on these firms forbidding them to release information about law enforcement requests or releases of information? In other words, how long will it be before we see a “warrant canary” on genetic database sites?
- At this point, it’s something the companies are doing only out of the goodness of their hearts—and their concern that people will stop using their services if they are afraid the information could get out. “Adherence to the rules is voluntary,” Romm and Harwell write. “While the policy offers users of participating sites new protections at a time of great ‘uncertainty,’ it doesn’t have the force of law, said Justin Brookman, the director of consumer privacy and technology policy at Consumers Union.”
- Having once submitted your data, it’s not at all clear that you can delete it from the databases. “Customers of these DNA testing services would gain some limited rights to have their biological data deleted, but they may not be able to withdraw data that was already in use by researchers,” note Romm and Harwell.
This is all happening at the same time that the genetic database companies are finding new ways to monetize the data. 23andMe recently announced it had struck a $300 million research deal with GlaxoSmithKline, Romm and Harwell write. “As part of that pact, GlaxoSmithKline can access ‘de-identified’ genetic data about 23andMe users — provided they’ve previously given their consent — so that the firm can ‘gather insights and discover novel drug targets driving disease progression,’ the company said.”
That’s fine – noble, even – except that studies have demonstrated that so-called “de-identified” data can actually be “re-identified” pretty easily. And under the guidelines, the genetic testing companies don’t need to inform their users about these efforts, Romm and Harwell write. (And other genetic databases used for research may also be subject to police search and not covered by these guidelines, writes Natalie Ram in Slate.)
Another nuance – the genetic databases suffer from a “lack of diversity,” and concern about privacy, particularly from law enforcement, could keep ethnically diverse individuals from submitting their material to the databases, writes Eric Rosenbaum for CNBC. 23andMe has noted that the genetic testing industry remains challenged by a lack of diversity, and to the extent that poverty is intertwined with the criminal justice system, a focus on using these databases to identify criminals will create unease or distrust, especially among historically targeted populations, he writes. In addition, when companies are sold or go out of business, as with Sports Authority or Radio Shack, the new owner may not hold to the same provisions, he notes.
As many as 12 million Americans – 1 in 25 – have had their genetics tested by one of the companies as of 2017, according to MIT Technology Review.
The guidelines themselves are a pretty interesting read, with some fascinating circumlocutions. For example, genetic information is important because, in the document’s words, “It may contain unexpected information or information of which the full impact may not be understood at the time of collection.” In other words, you may unexpectedly find out that your daddy isn’t your daddy or that you were adopted. Not to mention, “It may have cultural significance for groups or individuals,” and that could have any number of meanings.
There’s another offhand sentence in the Washington Post story that’s pretty ominous: “Companies, meanwhile, would have to ensure the person submitting DNA data is the actual owner of that data.” Uh, yeah. You mean they don’t do that now? There’s all sorts of interesting possibilities around that. You think Facebook stalking is bad? How about someone sending off some hair or spit from a prospective partner or job applicant? Or let’s get into science fiction and imagine bounty hunters on the prowl for people with – or without – certain genetic conditions. Remember those “I woke up without a kidney” urban legends?
Social media companies have been reporting the number of law enforcement requests they get, on a semiannual basis, for several years. Genetic testing database companies are also planning to do this, with Ancestry saying it had received 34 requests, 31 of which it had fulfilled, and 23andMe saying it had received five requests, none of which it had fulfilled. If the social media companies are any indication, these numbers should zoom up over time.
Here we go again. The European Union is calling for an end to the so-called Privacy Shield agreement by September 1 if the U.S. doesn’t follow through on its commitments, which could make it really difficult for U.S. computer companies to acquire data from European customers.
As you may recall, this all dates back to about two years ago, when the EU and the U.S. finally reached an agreement to replace the Safe Harbor framework, which is what had enabled American companies to gain access to data about European citizens. After the Snowden revelations and other breaches, EU countries said they didn’t feel that data about their citizens was safe in the U.S., and the U.S. had to improve security both in government agencies and in the companies themselves.
Soon after President Donald Trump’s inauguration in January 2017, EU members expressed concern about an executive order he signed that could have been interpreted as saying that people who weren’t citizens of the U.S. weren’t protected by the U.S. Privacy Act.
Since then, it’s been pretty quiet, and the European Union has been busy paying attention to its own General Data Protection Regulation privacy standard. But now that that’s finished, EU member states are starting to turn their focus to the U.S. again. And they’re wondering why it’s taking the U.S. so long to do certain things required under the pact, such as hiring an ombudsman to deal with complaints from EU citizens and appointing other officials responsible for overseeing the program.
In particular, EU representatives are concerned about the Facebook data scandal where the personal information from up to 87 million US voters was passed on to Cambridge Analytica, a company employed by Trump’s presidential campaign team, writes Mehreen Khan in the Financial Times.
Vera Jourova, the EU’s commissioner for justice, has written to Wilbur Ross, US commerce secretary, complaining that the White House is stalling. “Now that the new state secretary is in office and we are almost two years into the term of this administration, the European stakeholders find little reason for the delay in the nomination of a political appointee for this position,” she wrote.
“The Privacy Shield is due for its second review from the European Commission in October,” Khan writes. “Brussels has the power to unilaterally revoke the agreement if Washington is not meeting its commitment to ensure the rights of EU citizens are adequately protected in the US.”
That would be bad. If the EU does end the Privacy Shield, each company would need to negotiate individually with each country over how it could obtain data about that country’s citizens. That could take a long time and be really complicated. Without the agreement, the more than 4,000 European and U.S. companies that rely on it couldn’t exchange data about each other’s citizens as easily, which could make commerce more difficult. That commerce is currently worth up to $260 billion, writes Mark Scott in the New York Times.
About a month to go.