When someone breaks into your system, is it fair to go break into theirs?
Sometimes it’s an “eye for an eye” situation. More often, people want to use hacking techniques to help figure out who hacked them. Either way, it’s called “hacking back,” and it’s illegal, with sentences of up to 20 years. “Any form of hacking is a federal crime,” writes Nicholas Schmidle in the New Yorker. “In 1986, Congress enacted the Computer Fraud and Abuse Act, which prohibits anyone from ‘knowingly’ accessing a computer ‘without authorization.’” The law was inspired by the 1983 movie WarGames, he adds.
No one has ever been charged under the law for hacking back, Schmidle writes, reportedly because it wouldn’t look good to charge people with attacking hackers. Shawn Carpenter, a former security analyst for Sandia National Laboratories, for example, was not charged with hacking back, but he was fired for it, sued, and won $4.7 million for wrongful termination.
That’s not to say that people don’t do it. “Many cybersecurity firms offer what is called ‘active defense,’” Schmidle writes. “It is an intentionally ill-defined term. Some companies use it to indicate a willingness to chase intruders while they remain inside a client’s network; for others, it is coy shorthand for hacking back. As a rule, firms do not openly advertise themselves as engaging in hacking back.”
“Hacking back” can cover a number of techniques. For example, “honey pots” are sets of enticing-looking files intended to encourage a hacker to download them. Once downloaded, they can be traced. They can include “beacons,” which send messages back to help track the hacker, or “dye packets” — code embedded in a file that activates if the file is stolen, rendering all the data unusable, Schmidle writes.
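The “beacon” idea is conceptually simple: a planted file phones home when it is opened on the thief’s machine. Here is a minimal sketch in Python of the phone-home half; the callback URL, payload fields, and function names are all assumptions for illustration, and commercial products embed the same idea inside documents rather than standalone scripts:

```python
# Minimal sketch of a "beacon": code planted in a honeypot file that,
# when run by whoever stole the file, reports back where it landed.
# The callback URL is hypothetical.
import getpass
import json
import socket
from datetime import datetime, timezone

CALLBACK_URL = "https://tracker.example.com/beacon"  # hypothetical endpoint


def build_beacon_payload() -> dict:
    """Collect the minimum needed to locate the machine that opened the file."""
    return {
        "hostname": socket.gethostname(),
        "user": getpass.getuser(),
        "opened_at": datetime.now(timezone.utc).isoformat(),
    }


def send_beacon(payload: dict) -> None:
    """Phone home. The actual HTTP call is commented out so the sketch runs offline."""
    # import urllib.request
    # req = urllib.request.Request(
    #     CALLBACK_URL,
    #     data=json.dumps(payload).encode(),
    #     headers={"Content-Type": "application/json"},
    # )
    # urllib.request.urlopen(req, timeout=5)
    print("would send:", json.dumps(payload, sort_keys=True))


if __name__ == "__main__":
    send_beacon(build_beacon_payload())
```

Note how little the beacon needs to collect to be useful for attribution — a hostname and username alone can identify an internal machine.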
But Rep. Tom Graves (R-GA-14) wants to change that law. He submitted a bill in 2016, the Active Cyber Defense Security Act, to allow for hacking back, and has updated it a couple of times since then in response to comments, primarily to require reporting to law enforcement if you’re going to do it, and to add a sunset clause.
“Private firms would be permitted to operate beyond their network’s perimeter in order to determine the source of an attack or to disrupt ongoing attacks,” Schmidle writes. “They could deploy beacons and dye packets, and conduct surveillance on hackers who have previously infiltrated the system. The bill, if passed, would even allow companies to track people who are thought to have done hacking in the past or who, according to a tip or some other intelligence, are planning an attack.”
Experts caution against hacking back, because it’s not always as simple as it sounds. For example, hackers often use “hop points,” or go from site to site – as many as 30 of them — to try to hide their tracks. Hacking back could nail an innocent bystander who just happens to be on that path.
People like Carpenter’s bosses also worry that hacking back might invite additional attacks or draw attention to the original breach. “If companies weren’t able to defend themselves in the first place, it’s unlikely they’re going to come off best in a digital firefight,” warns Martin Giles in MIT Technology Review. (A number of the arguments resemble those against civilians carrying firearms in public.)
More ominously, this could be far more dangerous in the case of hackers sponsored by states, as opposed to “script kiddies.” In one case, a company that was trying to fight back against hackers found pictures of executives’ children in email from the hackers, Schmidle writes.
Ultimately, the majority of people appear to be against hacking back, writes Josephine Wolff in the Atlantic. “Its critics range from law enforcement officials who worry it will lead to confusion in investigating cyberattacks, to lawyers who caution that such activity might well violate foreign laws even if permitted by the U.S., to security advocates who fear it will merely serve as a vehicle for more attacks and greater chaos, particularly if victims incorrectly identify who is attacking them, or even invent or stage fake attacks from adversaries as an excuse for hacking back,” she writes. (The paper The Ethics of Hacking Back examines, and dismisses, a number of the arguments against hacking back.)
Another alternative is to have a list of firms authorized to hack back, which companies could hire. “Department stores hire private investigators to catch shoplifters, rather than relying only on the police,” write Jeremy and Ariel Rabkin in Lawfare about their paper, Hacking Back Without Cracking Up. “So too private companies should be able to hire their own security services. There should be a list of approved hack-back vendors from which victims are free to choose. These vendors would primarily be in the business of identifying attackers and imposing deterrent costs on attackers by providing the threat of retaliation.”
In any event, thus far, Graves’ bill hasn’t gone anywhere. Yet.
Most of us are pretty aware that we need to wipe the memory of a hard drive or other storage device when we get rid of a computer or a cell phone. Some of us are even aware that you may need to do so with a printer, copier, or fax machine.
It turns out that now you have to do it with your car, too.
The Federal Trade Commission (FTC) has put forth a program, with the somewhat unwieldy name of “Be discreet when you delete your fleet,” for making sure people delete personal data from cars when they sell them. That data can include:
- Phone contacts and an address book
- Mobile apps’ log-in information or data
- Digital content like music
- Location data like addresses or the routes you take to home, work, and favorite places
- Garage door codes
The same can be true when you buy a car – the previous owner, for example, may still have information about how to find or start the car on their phone, the FTC warns.
It’s not just your own car, but also rentals and car shares. In fact, it may be even more urgent with them, because you know someone else is going to be using the car after you do. “With cars increasingly asking to download your phonebook, and offering facilities for you to make and receive calls, message, browse the internet, and stream media, the trove of data on infotainment systems will only increase,” notes the December 2017 report, Connected Cars: What Happens To Our Data On Rental Cars? from Privacy International. “If you use the GPS in a rental car to get home, for instance, a robber could find your address,” writes Rebekah Sanders in USA Today. “Or a stranger could reveal your identity by matching your device name to profiles on social-media such as Facebook, Instagram or Twitter.”
“People called their phone all sorts of things, including what looks like their actual names,” writes Adam Racusin of ABC 10 News. “When I first saw this I audibly exclaimed that I’m looking at someone’s first and last name and what type of phone they have, whether it’s an iPhone or an Android,” he quotes Ted Harrington, executive partner at Independent Security Evaluators, as saying. “That’s a lot of information that’s just free for me to access. No one’s hacking that — the car is giving that information out right now.”
The complication is, how do you do it? The FTC is advising people, when they’re about to sell their car, or turn in a rental, to look for a factory reset option to wipe personal data. Privacy International is calling for a single button for car renters to press before turning the car back in.
At the same time, even a factory reset might not wipe out personal information such as subscriptions to satellite radio or Spotify, so you might have to remove those manually, the FTC warns. In addition, even charging your device from the car’s USB port might transfer data to the car automatically. Use the cigarette lighter with an adapter instead, if there is one, the FTC advises.
This is going to become even more critical an issue as automated vehicles come into play, especially with “mobility as a service” and other shared transportation alternatives. So it’s probably a good idea to get into the habit now.
In response to incidents such as the Federal Bureau of Investigation (FBI) using material in a genetic database to track down a murder suspect, the major genetic testing firms are pledging that they will follow certain best practices before doing so in the future. But don’t cheer just yet.
“Under the new guidelines, the companies said they would obtain consumers’ ‘separate express consent’ before turning over their individual genetic information to businesses and other third parties, including insurers,” write Tony Romm and Drew Harwell in the Washington Post. “They also said they would disclose the number of law-enforcement requests they receive each year.”
Well, that’s nice, except for a few things.
- The agreement doesn’t cover GEDmatch, the open genealogy database used by law enforcement to track down the alleged “Golden State Killer.”
- How long is it going to take before insurers offer either carrots – “We’ll give you this sort of price break to give us access!” – or sticks – “We won’t insure you unless you give us access”?
- What happens when law enforcement puts gag orders on these firms forbidding them to release information about law enforcement requests or releases of information? In other words, how long will it be before we see a “warrant canary” on genetic database sites?
- At this point, it’s something the companies are doing only out of the goodness of their hearts—and their concern that people will stop using their services if they are afraid the information could get out. “Adherence to the rules is voluntary,” Romm and Harwell write. “While the policy offers users of participating sites added new protections at a time of great ‘uncertainty,’ it doesn’t have the force of law, said Justin Brookman, the director of consumer privacy and technology policy at Consumers Union.”
- Having once submitted your data, it’s not at all clear that you can delete it from the databases. “Customers of these DNA testing services would gain some limited rights to have their biological data deleted, but they may not be able to withdraw data that was already in use by researchers,” note Romm and Harwell.
This is all happening at the same time that the genetic database companies are finding new ways to monetize the data. 23andMe recently announced it had struck a research deal with GlaxoSmithKline for $300 million, Romm and Harwell write. “As part of that pact, GlaxoSmithKline can access ‘de-identified’ genetic data about 23andMe users — provided they’ve previously given their consent — so that the firm can ‘gather insights and discover novel drug targets driving disease progression,’ the company said.”
That’s fine – noble, even – except that studies have demonstrated that the so-called “de-identified” data can actually be “re-identified” pretty easily. And under the guidelines, the genetic database testing companies don’t need to inform their users about these efforts, Romm and Harwell write. (And other genetic databases for research may also be subject to police search and not subject to these guidelines, writes Natalie Ram in Slate.)
Another nuance: the genetic databases suffer from a “lack of diversity,” and concern about privacy, particularly from law enforcement, could keep ethnically diverse individuals from submitting their material to the databases, writes Eric Rosenbaum for CNBC. 23andMe has noted that the genetic testing industry remains challenged by a lack of diversity, and to the extent that poverty is intertwined with the criminal justice system, a focus on using these databases to identify criminals will create unease or distrust, especially among historically targeted populations, he writes. In addition, when companies are sold or go out of business, as with Sports Authority or Radio Shack, the new owner may not hold to the same provisions, he notes.
As many as 12 million Americans – 1 in 25 – have had their genetics tested by one of the companies as of 2017, according to MIT Technology Review.
The guidelines themselves are a pretty interesting read, with some fascinating circumlocutions. For example, genetic information is important because, in the document’s words, “It may contain unexpected information or information of which the full impact may not be understood at the time of collection.” In other words, you may unexpectedly find out that your daddy isn’t your daddy or that you were adopted. Not to mention, “It may have cultural significance for groups or individuals,” and that could have any number of meanings.
There’s another offhand sentence in the Washington Post story that’s pretty ominous: “Companies, meanwhile, would have to ensure the person submitting DNA data is the actual owner of that data.” Uh, yeah. You mean they don’t do that now? There’s all sorts of interesting possibilities around that. You think Facebook stalking is bad? How about someone sending off some hair or spit from a prospective partner or job applicant? Or let’s get into science fiction and imagine bounty hunters on the prowl for people with – or without – certain genetic conditions. Remember those “I woke up without a kidney” urban legends?
Social media companies have been reporting the number of law enforcement requests they get, on a semiannual basis, for several years. Genetic testing database companies are also planning to do this, with Ancestry saying it had received 34 requests, 31 of which it had fulfilled, and 23andMe saying it had received five requests, none of which it had fulfilled. If the social media companies are any indication, these numbers should zoom up over time.
Here we go again. The European Union is calling for an end to the so-called Privacy Shield agreement by September 1 if the U.S. doesn’t follow through on its commitments, which could make it really difficult for U.S. computer companies to acquire data from European customers.
As you may recall, this all dates back about two years, to when the EU and the U.S. finally reached an agreement to replace the Safe Harbor framework, which is what enabled American companies to gain access to data about foreign citizens. After the Snowden revelations and other breaches, EU countries said they didn’t feel that data about their citizens was safe in the U.S., and the U.S. agreed to improve protections, both in government and in the companies themselves.
Soon after President Donald Trump’s inauguration in January 2017, EU members expressed concern about an executive order he signed that could be interpreted as saying that people who weren’t U.S. citizens weren’t protected by the U.S. Privacy Act.
Since then, it’s been pretty quiet, and the European Union has been busy paying attention to its own General Data Protection Regulation privacy standard. But now that that’s finished, EU member states are starting to turn their focus to the U.S. again. And they’re wondering why it’s taking the U.S. so long to do certain things required under the pact, such as hiring an ombudsman to deal with complaints from EU citizens, as well as appoint other officials responsible for overseeing the program.
In particular, EU representatives are concerned about the Facebook data scandal where the personal information from up to 87 million US voters was passed on to Cambridge Analytica, a company employed by Trump’s presidential campaign team, writes Mehreen Khan in the Financial Times.
Vera Jourova, the EU’s commissioner for justice, has written to Wilbur Ross, the US commerce secretary, complaining that the White House is stalling. “Now that the new state secretary is in office and we are almost two years into the term of this administration, the European stakeholders find little reason for the delay in the nomination of a political appointee for this position,” she wrote.
“The Privacy Shield is due for its second review from the European Commission in October,” Khan writes. “Brussels has the power to unilaterally revoke the agreement if Washington is not meeting its commitment to ensure the rights of EU citizens are adequately protected in the US.”
That would be bad. If the EU does end the Privacy Shield, each company would need to negotiate individually with each country over how it could obtain data about that country’s citizens, which could take a long time and be really complicated. Without the agreement, the more than 4,000 European and U.S. companies that rely on it couldn’t exchange data about each other’s citizens as easily, making commerce more difficult. That commerce is currently worth up to $260 billion, writes Mark Scott in the New York Times.
About a month to go.
Security experts watched in horror as journalists attending the US – North Korea summit in Singapore in June were handed gift bags containing adorable little USB fans. Plug them into your laptop and they would help keep you cool.
USB fans in and of themselves are nothing new; there are pages and pages of the things available online. But eek! These were from North Korea! They could be booby-trapped!
“Aiieee! Journalists – Do. Not. Plug. This. In.” warned one Tweet.
It would be only poetic justice if North Korea had. After all, people have been “attacking” North Korea with USB drives carried across the border by little balloons — drives loaded with shows such as Desperate Housewives, The Mentalist, and soap operas; films like Bad Boys, The Interview, and other action movies; and a Korean-language Wikipedia — all intended to foment dissent using popular culture.
None of the tested fans have revealed any malware thus far. “This particular sample of USB fan does not have any computer functionality on USB interface,” noted one UK analysis, which dissected one of the little fans. “It can only be used for driving the motor from USB power.”
“No data transmission of any sort was observed,” noted another report by the Celsus Advisory Group, a security consulting firm. “The resistance of the device went up some over time, but this appeared to be connected to the rising temperature of the device rather than something nefarious. The device seemed to be free of implants.”
But officials are still worried. Perhaps some of the USB fans are indeed booby-trapped, and the innocent ones are decoys intended to get us to let our guard down. Perhaps the malware is simply too well hidden. Celsus went on to describe some of the possibilities: “There is a motor…which if built to ‘custom spec’ might oscillate at a specific frequency providing a specific electronic signature when operating. This could be used to profile the target, or perhaps something even more interesting.”
And there’s more. “Imagine a factory installed battery that is designed to track keystrokes and user activity (phone, email, chat) by baselining and tracking power flow to the unit,” Celsus writes. “Each character and action creates a power spike! There’s even an easy way to exfil the data since most mobiles are internet connected 24/7. Pwn the phone without touching the OS/Baseband. Cool right? It’s been done.”
And more. “Envision a 3d printed WiFi connected plastic object, and metamaterial printed antenna utilizing local WiFi RF backscatter to provide power and to connect to the internet,” Celsus writes. “Imagine creating a heatmap of a room or a vehicle using the reflected WiFi signals and exfiltrating the WiFi hologram outbound, all without a battery.”
(Incidentally, if you like this sort of thing, Black Hat is going on in Las Vegas next week; the even more nerdy Defcon occurs a couple of weeks later.)
Singapore’s Ministry of Communications and Information, which the BBC reported had put together the gift bags, was affronted that anyone would think anything nefarious of them. “The USB fans were part of Sentosa Development Corporation’s (SDC) ready stock of collaterals, originally meant for Sentosa Islander members,” the organization told the BBC. “SDC had assessed USB fans to be a handy and thoughtful gift for the media, who would be working in Singapore’s tropical climate. MCI and SDC have confirmed that the USB fans are simple devices with no storage or processing capabilities.”
The manufacturer also weighed in, saying it didn’t have the ability to make such a device.
Of course, that’s what they’d want you to think.
All in all, the fact that no sabotage had been found in the examined USB fans was itself no reassurance, researchers write. “This does not eliminate the possibility of malicious or Trojan components wired to USB connector in other fans, lamps and other end-user USB devices,” the UK report continued. “Hence, their evaluation will be essential before any sensitive usage.”
“Maybe the person who received the package wasn’t a targeted POI,” Celsus noted. “Maybe the system in question requires being tickled in a specific way to elicit an illicit behavior. Or perhaps none of the fans were dual purpose in nature; eg fan AND surveillance implant. This is a difficult problem to address without reviewing ALL the potentially poisoned pills.”
“Malicious actors could have narrowly targeted one reporter who was of special interest out of 100, meaning that most fans may have appeared harmless even as some might have been used to target specific journalists,” warned Hamza Shaban in the Washington Post. (Which, no doubt, was put out that it wasn’t considered important enough to target in this way.)
In other words, you would need to dissect each fan individually to make sure it was safe – at which point, of course, it would no longer work. Some 2500 journalists were accredited to cover the conference, so we’re talking about a lotta fans.
Apparently it is necessary to destroy the fan in order to save it.
What almost became a new law in Illinois presages a series of similar laws in other states that could make it a whole lot easier to identify and arrest people.
Both the Illinois House and Senate passed bills that would have allowed law enforcement to use drones to scan groups of people. The House version required crowds of at least 1,500 – unlike an earlier version of the bill that would have allowed “crowds” of just 100 – and banned the use of facial recognition software with the drones. The Senate had passed a similar bill earlier in the month. Even without facial recognition software, the use of drones could intimidate people into not exercising their right of free speech, writes the American Civil Liberties Union.
“Fortunately, advocates of free speech and privacy defeated the 2018 proposal,” writes the Electronic Frontier Foundation. “While the Illinois House and Senate each approved a version of this bill, the state legislative session expired on May 31 without reconciling their conflicting versions.”
This facial recognition technology is already being used. For example, it was reportedly used to help identify the suspect in the Capital Gazette shooting when he was “uncooperative.” “Anne Arundel County police ran Jarrod Warren Ramos’ photo through a database of millions of images from driver’s licenses, mug shots and other records to help identify him as the suspect in Thursday’s Capital Gazette shooting,” writes Yvonne Wenger in the Gazette. “Police Chief Timothy Altomare said Friday that officials used the Maryland Image Repository System to determine who Ramos was. The 38-year-old Laurel man was not cooperating, and police were facing a lag in getting results from a fingerprint search, so the chief said they turned to technology to move as quickly as possible.”
And in Seattle, international visitors can have their face scanned rather than show their passport when they come into the airport. Similar systems are used at 17 airports, including 13 in the U.S., writes Colin Wood in StateScoop. Because, you know, it’s so much faster and more convenient than showing a passport.
Facebook has reportedly also started using facial recognition, ostensibly to help protect people from other people hacking into their accounts. Amazon has also developed facial recognition software, which it is selling to law enforcement organizations. In fact, the American Civil Liberties Union and about two dozen other organizations have asked Amazon to stop selling its Rekognition software to law enforcement. Madison Square Garden has reportedly also used the technology – all in the name of safety and security, of course.
The thing is, there’s not much in the way of laws yet regarding facial recognition, so there was nothing to stop law enforcement from using the new technology. And as we’ve seen with technology such as phone encryption, it’s seen as more okay to violate people’s rights when they’re really bad people like child pornographers and terrorists.
Maryland also used its facial recognition database – considered superior to that of other states because it includes 10 million motor vehicle database photos — to monitor protesters during the rioting in Baltimore in 2015 after Freddie Gray’s death, Wenger writes. “As of 2016, as many as 6,000 or 7,000 law enforcement officials had access to the database,” she writes. “Officials said the system at times was accessed more than 175 times in a single week.” Given that law enforcement personnel have been known to look up people that interested them in driver’s license databases, how much longer before it’s learned that they also look up people in the facial database as well?
Altogether, as many as 130 million people – just regular people, not necessarily criminals – may have their faces stored in databases, writes Nick Wingfield in the New York Times. The FBI facial database was reported to be more than 400 million people as of 2016.
There’s also the question of accuracy. In 2016, the FBI had said that as many as 20 percent of its identifications were incorrect. This is particularly true for women and minorities. “One study by the Massachusetts Institute of Technology showed that the gender of darker-skinned women was misidentified up to 35 percent of the time by facial recognition software,” Wingfield writes. In comparison, white men are identified accurately 99 percent of the time, writes Steve Lohr in the New York Times. “In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as ‘gorillas,’” he writes.
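Scale is what makes those error rates bite. A back-of-the-envelope calculation (the false-match rates below are illustrative assumptions, not measured figures for any real system) shows why searching a database of 130 million faces produces floods of wrong hits even when the software sounds accurate:

```python
# Back-of-the-envelope: why even "highly accurate" face matching
# produces mostly false hits when searched against a huge database.
# The rates below are illustrative assumptions, not vendor benchmarks.

database_size = 130_000_000      # faces enrolled (mostly ordinary people)
false_match_rate = 0.01          # assume a 1% chance any non-match "matches"
true_suspects_in_db = 1          # assume the actual suspect is enrolled

# Every non-suspect in the database gets an independent shot at matching:
expected_false_matches = (database_size - true_suspects_in_db) * false_match_rate
print(f"Expected false matches per search: {expected_false_matches:,.0f}")

# Even a 1-in-a-million false-match rate still buries the one true hit:
print(f"At a 1e-6 rate: {database_size * 1e-6:,.0f} false matches")
```

In other words, the one genuine suspect is swamped by spurious matches unless the false-match rate is extraordinarily low, which is exactly where the documented disparities for women and darker-skinned people do the most damage.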
The California Supreme Court has ruled that online services such as Yelp!, which depend on user-generated content (UGC), can’t be forced to take down that content by a legal proceeding against the user who generated it if the legal proceeding didn’t mention the online service in the first place.
Like the Supreme Court’s recent Carpenter ruling, this is another one of those cases where the American Civil Liberties Union and the Libertarians were working together, on a case that was appealed to the California Supreme Court.
“Personal-injury lawyer Dawn Hassell of the Hassell Law Group accused former client Ava Bird of defaming her law firm on Yelp,” writes Zuri Davis in Reason. “Hassell sued Bird in 2013, but Bird did not appear—it is believed that Bird was never served with court papers. The San Francisco County Superior Court ruled in Hassell’s favor by default and awarded her $557,918. The court ordered Bird to remove the reviews and Yelp to ‘remove all reviews posted by AVA BIRD under user names “Birdzeye B.” and “J.D.,”'” despite not having definitively confirmed that Bird used the alias ‘J.D.’”
So let’s unpack this a little bit:
- Somebody had a bad experience with a company.
- The person posted a negative review on Yelp! like many of us do.
- The company sued them (apparently, according to NBC News, because of the person “falsely claiming that her firm failed to communicate with the client”).
- The person who posted the review might not have heard about the lawsuit.
- Some of the reviews for which they were sued might not have been written by them.
- They now owe more than a half-million dollars and are supposed to take the reviews down.
- Yelp!, which wasn’t even a party to the case, is told it has to take down the postings, since the person they’re attributed to – who, again, might not have written them and might not have been notified – isn’t taking them down. And that’s the part that got all the lawyers excited.
“WTF???” wrote attorney Eric Goldman in 2016 about the original decision, where the California appellate court upheld the lower court’s ruling. “As a non-party to the lawsuit, the court says Yelp doesn’t face liability from the suit itself, and the court thinks contempt sanctions–including the possibility of monetary damages–against a non-party don’t count as ‘liability’ because it’s ‘a different type of liability’? And a judicial compulsion to remove content that Yelp chooses to publish doesn’t treat Yelp ‘as a publisher or distributor’? Wow.”
(One has to admire a legal professional who can write an analysis saying, not once, not twice, but three times, “WTF?” And this particular analysis was cited lots of times about this case, WTF and all.)
“Neither court” – the lower one or the appellate one – “seemed to understand that the First Amendment protects not only authors and speakers, but also those who publish or distribute their words,” wrote the Electronic Frontier Foundation in April, when it also submitted an amicus. “Both courts completely precluded Yelp, a publisher of online content, from challenging whether the speech it was being ordered to take down was defamatory—i.e., whether the injunction to take down the speech could be justified.”
Now, however, the California Supreme Court has ruled that both the lower and appellate courts were wrong, and Yelp! doesn’t have to remove the reviews in question (assuming they’re still there; it seems like Yelp! is pretty busy right now removing postings from the company’s page).
To be honest, the aspect of this case that freaks me out the most is one that nobody is even mentioning: someone can be held liable for more than half a million dollars for posting a negative review on Yelp!? When she might not even have been properly served in the first place? Yikes! I hope she’s fighting this. I’m surprised Yelp! isn’t helping her; getting its users sued for a half-million dollars over the posts on which its service is built can’t be good for the online commenting business.
Meanwhile, the Hassell Law Group (interesting name) is reportedly considering appealing the case to the Supreme Court. Hopefully, nobody tries to give them a bad Yelp! review over it.
A fairly common theme here has been “Don’t poke strange USB sticks in things,” because it’s a common vector for transmitting malware (and reprogramming your keyboard, and setting your PC on fire). Here’s a new take on that. It’s pretty esoteric but now that the technique is out there, it may become more common.
First, you have to understand the concept of an “air gap.” An air gap is actually a plumbing term and refers to the use of air in the system to keep water from going to places it shouldn’t. The term has been applied in computer security to computers that aren’t hooked up to networks, to keep them more secure. “Air-gapped systems are common practice in many countries for government, military, and defense contractors, as well as other industry verticals,” according to the Palo Alto Networks researchers who wrote about this attack.
Second, there is apparently a South Korean defense company that makes “secure USBs.” Exactly what these are and what makes them secure, I haven’t been able to find out. But they are a thing. At least some secure USBs encrypt the data on them. That may or may not be what this particular South Korean secure USB does.
So apparently the deal is this: Some researchers found evidence that hackers have found a way to put malware on these secure USBs, with the intention of targeting these airgapped, otherwise unreachable PCs.
It gets better. The malware only works if the PCs in question are running Microsoft Windows XP or Windows Server 2003.
The organization likely involved with this malware has a history of spearphishing attacks, or email attacks aimed at particular people. In fact, past versions of the organization’s malware took the form of a Happy New Year greeting, and recipients were asked to change the file’s extension to .exe so that it would “play.”
Which raises the question: if an organization is paranoid enough to air-gap its PCs, wouldn’t you think it would be smart enough to keep up on its security patches? Unless it’s a system just too old to update, like the nuclear missiles controlled by 8-inch floppy disks. And that’s what the researchers suggest. “Outdated versions of Operating Systems are often used in those environments because of no easy-update solutions without internet connectivity,” they write.
Wouldn’t its employees be smart enough not to open a Happy New Year card that’s obviously a program, even if it appeared to come from someone they know?
Researchers suspect that this malware is very specifically targeted at one particular installation where all of these factors come into play. “This would seem to indicate an intentional targeting of older, out-of-support versions of Microsoft Windows installed on systems with no internet connectivity,” they write. The basic mechanism: the attackers put malware on the older machines that looks for the secure USB drives, and when one gets plugged in, it checks the drive for a second piece of malware and loads it onto the air-gapped system.
Exactly what the malware would do once it got there, researchers don’t know. They also don’t know exactly what PCs or even what organization is being targeted. But now that the technique is out there, we may see it in places other than Korea and Japan.
So, the usual warnings still apply:
- Don’t poke strange USB sticks in things. Even if they’re supposedly secure.
- Keep your software updated, including your OS.
- Don’t open strange files in your email, even if they seem to come from someone you know, particularly if they are obviously programs.
- And if for some reason you have to look at a strange USB stick, or open a strange file in your email, at least do it away from the super-secure air-gapped system, recommends Development Standards Technologies, a software development and consulting company.
- Development Standards also recommends, like a number of security organizations, that you not just depend on keeping people out, but detecting them should they make it in. “Prevention aside, critical systems should have threat detection controls that can alert where an infected drive has been plugged into an endpoint and take remedial steps beyond raising an alarm, such as isolating an infected machine from the rest of the network,” they write.
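To make that last recommendation concrete, here is a minimal sketch of “detect, don’t just prevent”: compare two snapshots of visible drives and flag anything that newly appeared. The function names (`snapshot_drives`, `new_drives`) are illustrative, not from any real product; actual endpoint-protection tools subscribe to OS device events (such as WM_DEVICECHANGE on Windows or udev on Linux) rather than polling like this.

```python
import os
import string

def snapshot_drives():
    """Return the set of drive roots currently visible (Windows-style letters)."""
    return {f"{letter}:\\" for letter in string.ascii_uppercase
            if os.path.exists(f"{letter}:\\")}

def new_drives(before, after):
    """Return drives present in `after` but not in `before` -- each one is a
    candidate for alerting, scanning, or isolating the machine."""
    return after - before

# Example: a drive E:\ appearing between two polls would be flagged.
# new_drives({"C:\\"}, {"C:\\", "E:\\"}) -> {"E:\\"}
```

A real control would then do something with the flagged drive, per the quote above: raise an alert, scan it, or cut the machine off from the rest of the network.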
“The American Civil Liberties Union deserves congratulations” is not a sentence one is accustomed to reading from the Cato Institute, a libertarian think tank, but it just goes to show how far-reaching the Carpenter case, which the Supreme Court decided last week, really is.
As you may recall, Carpenter is a case in which two guys in Detroit were accused of robbery, and the Federal Bureau of Investigation (FBI) used their cellphones to show that they were near the scenes of a number of the incidents. To do this, the FBI went to the suspects’ cellphone providers and obtained a large amount of location data – more than 12,000 location points for one of them, and almost 24,000 for the other. The defense attorneys argued that the phones revealed so much personal data about the suspects that a warrant should have been required for the search.
“Prosecutors didn’t seek a warrant for the cell-site data, which would have required a showing of probable cause to believe the records show evidence of a crime,” write Jess Bravin and Brent Kendall in the Wall Street Journal. “Instead, they sought the data under the Stored Communications Act, which requires only ‘reasonable grounds’ to believe the information is relevant to an investigation.”
Last week, the Supreme Court agreed.
“We decline to grant the state unrestricted access to a wireless carrier’s database of physical location information,” writes Chief Justice John Roberts in his decision. “In light of the deeply revealing nature of [cell site location information], its depth, breadth, and comprehensive reach, and the inescapable and automatic nature of its collection, the fact that such information is gathered by a third party [the cell phone companies] does not make it any less deserving of Fourth Amendment protection.”
A big part of this decision was the Riley case from a couple of years back, in which the Court had already ruled that law enforcement officials needed a warrant to search someone’s cellphone. So it wasn’t a big stretch to extend that by saying that law enforcement also needed a warrant to search the places someone’s cellphone had been.
The other precedent was Jones, which we haven’t written about before, at least under that name. It’s the 2012 case in which the Supreme Court ruled that collecting data from a GPS tracker required a warrant. In Carpenter, law enforcement argued that cellphone-tower location data is less precise than data from a GPS tracker, so it didn’t require the same level of protection. The Court didn’t buy it.
There are, however, still a few concerning aspects.
- Gaining cellphone location data for a short period is apparently still okay. How short? Not clear, but apparently less than seven days, writes Adam Liptak in the New York Times. “Chief Justice Roberts left open the question of whether limited government requests for location data required a warrant,” he writes. “But he said that access to seven days of data is enough to raise Fourth Amendment concerns.”
- Gaining cellphone location data from the tower itself is apparently still okay. “The Court rather ominously notes that it does not ‘express a view on matters not before us’ including so-called ‘tower dumps,’ where police request ‘a download of information on all the devices that connected to a particular cell site during a particular interval,’” writes Ian Millhiser in ThinkProgress. In other words, while law enforcement can’t track location data from the phone itself, it can look at all the phones tracked by a single tower. At some point, some law enforcement organization is simply going to get all the data from every tower in town, and then that will eventually end up in the Supremes’ lap as well.
- The ruling was 5-4. In other words, as time goes on and President Donald Trump has the opportunity to replace more Justices, similar cases in the future could go the other way. Notably, although Justice Neil Gorsuch had indicated during arguments that he might favor stronger privacy protections, he wrote a dissent this time. (“He agreed stronger privacy protections were in order, but not in the way the court provided them,” Bravin and Kendall write.) And just to make this potential a little more real, Justice Anthony Kennedy, who also voted against this ruling, has announced that he’s retiring. So next year could be really interesting.
Every couple of years, it seems like everyone has to lose their minds over glass storage.
In 2012, Hitachi demonstrated using glass for storage, saying it could end up being a product by 2015. Or, at least, once they figured out how to make something to read it. Yes, that would be a problem, unless you had a use case for write-only memory.
Then in 2013, researchers demonstrated another type of glass storage. Peter Kazansky and other researchers from the University of Southampton demonstrated “5D” glass etching, which is how the discs achieve much higher density: the encoding uses the size and orientation of the etched nanostructures in addition to their three-dimensional position, the University writes.
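To see why adding size and orientation boosts density, here’s a toy back-of-the-envelope calculation. The level counts below are assumptions for illustration only; the actual encoding parameters aren’t given in the source.

```python
import math

# "5D" storage: each etched nanostructure is located by three spatial
# coordinates, and also varies in two extra parameters: size and orientation.
# The counts below are hypothetical, chosen just to show the arithmetic.
size_levels = 4          # hypothetical number of distinguishable dot sizes
orientation_levels = 8   # hypothetical number of distinguishable orientations

# Each independent combination of size and orientation is one more symbol the
# dot can represent, so the extra information per dot is the log of the count.
extra_bits_per_dot = math.log2(size_levels * orientation_levels)
# With these assumed counts, each dot carries 5 extra bits on top of the
# information conveyed by where it sits in the glass.
```

The point of the sketch: the two extra “dimensions” multiply the number of states per dot, so capacity grows without packing the dots any closer together.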
First, the researchers stored a 300 KB file. Then, in 2016, they stored other documents, including the Universal Declaration of Human Rights (UDHR), Newton’s Opticks, the Magna Carta, and the King James Bible. Reading the data back, however, is tricky – it requires a combination of an optical microscope and a polarizer, similar to those found in Polaroid sunglasses, the University writes. While the University went on to say that the team was looking for industry partners to commercialize the technology, we haven’t heard much about it since.
Until now. Now, everyone’s talking about glass storage again, because of Elon Musk: the payload of a recent SpaceX launch included some of the 2013-era glass storage. (Musk actually got two of the discs; he’s keeping the other one.)
“Stashed inside the midnight-cherry Roadster was a mysterious, small object designed to last for millions (perhaps billions) of years – even in extreme environments like space, or on the distant surfaces of far-flung planetary bodies,” writes Peter Dockrill in Science Alert. “Called an Arch (pronounced ‘Ark’), this tiny storage device is built for long-term data archiving, holding libraries of information encoded on a small disc of quartz crystal, not much larger than a coin.” Each of the discs could hold 360 terabytes, he continues.
Beyond the storage capacity, what excites people about glass is that it reportedly isn’t subject to bit rot, because the data is etched into the glass rather than stored magnetically. So it can supposedly last up to 14 billion years – unless someone drops it, presumably.
But unlike similar efforts, such as the golden records curated by Carl Sagan that went aboard the Voyager probes in 1977 and included sounds of Earth intended to help communicate with any intelligent life out there, these glass discs contain things like Isaac Asimov’s Foundation series. Not that these aren’t a swell batch of books, but how realistic is it that anyone would be able to comprehend them, even if they found a way to read the discs themselves?
Future endeavors are also planned. “Subsequent launches are planned for 2020 and 2030, with the ‘Lunar’ and ‘Mars’ Arch libraries intended to send curated backups of human knowledge to the Moon and Mars – with the latter disc hoped to serve as a useful aid for colonists on the Red Planet, helping them to ‘seed’ a localized internet on Mars,” Dockrill writes.
Who knows. If aliens discover it, maybe they can figure out a way for us to read it.