Oh, this should be fun. Microsoft is warning users that the next Windows 10 update might kill their systems.
Pass the popcorn.
This all came out in a warning Microsoft issued earlier this month: “On Microsoft Windows 10 systems that have limited storage space (such as thin clients or embedded systems), when you run Windows Update, the update initialization may fail.”
Of course, Microsoft doesn’t define “limited” or say how much storage space the update initialization actually takes, exactly how the update initialization may fail, what the repercussions of that are, or how to recover from it. If you can.
“How much storage space do you need? Microsoft isn’t saying,” writes Kevin Murnane in Forbes, adding that last spring’s update needed 16GB of empty space for 32-bit systems and 20GB for 64-bit.
The company is, however, very clear on what causes it: “Windows Update does not check systems for adequate space requirements before it initializes.”
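For what it’s worth, the check Microsoft skipped is trivial to perform yourself before kicking off an update. Here’s a minimal Python sketch, using the free-space figures from last spring’s update that Murnane cites (16GB for 32-bit systems, 20GB for 64-bit) as assumed thresholds:

```python
import shutil

# Assumed thresholds, taken from the Forbes figures for last spring's
# update: 16 GB free for 32-bit systems, 20 GB for 64-bit.
REQUIRED_BYTES = {"32-bit": 16 * 1024**3, "64-bit": 20 * 1024**3}

def has_enough_space(free_bytes: int, arch: str = "64-bit") -> bool:
    """Return True if the given amount of free space meets the threshold."""
    return free_bytes >= REQUIRED_BYTES[arch]

def update_can_proceed(path: str, arch: str = "64-bit") -> bool:
    """Check the volume holding `path` before starting an update."""
    return has_enough_space(shutil.disk_usage(path).free, arch)
```

A call like `update_can_proceed("C:\\")` before launching the update is exactly the kind of guard rail the advisory says Windows Update lacks.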
The note then launches into its Resolution section with seven separate steps detailing how users can delete files from their systems to increase the amount of empty storage space.
How about this Resolution: “We will hold off on this update until we instate the system space check, and in the meantime find out which bonehead authorized a system update without one.”
Murnane savages Microsoft for this move. “Microsoft’s decision to push out a major upgrade without warning the user if they don’t have enough free space to safely install it is unconscionable and outrageous,” he writes. “You would think the company had learned its lesson about arrogant disregard for the needs and desires of its customers after the epic fail of the Xbox One launch. Apparently not. Microsoft’s left you hanging in the wind so check to see how much storage space you have available and make space if you need it.”
The other interesting aspect is that Microsoft is replacing its venerable Disk Cleanup utility with something called Storage Sense, a more automated version that puts some of your files into the cloud using Microsoft’s OneDrive. And while that’s a useful function (assuming people know this, can find their files later, and security is taken care of), it’s going to be sad not using Disk Cleanup anymore.
Admittedly, I’ve been using Crap Cleaner for years and usually only run Disk Cleanup afterwards, just in case. But it was something I was used to. Of course, I’m old enough that I still remember running the defrag utility and being mesmerized by the little animation that showed exactly which block was being defragged, watching all the little squares change colors. I miss that, too, even though it would probably take hours to run with the hard disk drive sizes we have these days.
Meanwhile, it seems clear that a number of users won’t find out about the problem in time, won’t take sufficient steps to deal with it, and will end up crashing their systems – at which point, we’ll at least find out what that actually means.
Better get more popcorn.
We’ve talked before about security issues involved with USB drives, but here’s a new one: A vendor alerting us to malware on a USB drive that it’s shipping with its product.
Schneider Electric recently notified users of its Conext Combox and Conext Battery Monitor that USB removable media shipped with the products may have been exposed to malware during manufacturing at a third-party supplier’s facility.
The Conext Combox and the Conext Battery Monitor are both used to monitor the harvest and yield of solar power systems, according to the company, which is based in France. That’s somewhat concerning in the context of the security of the power grid.
It also isn’t known where the third-party supplier’s facility is located, which would help determine whether this was state-sponsored activity. China? South Korea? Japan?
“Schneider Electric has determined that some USB removable media shipped with the Conext Combox and Conext Battery Monitor products were contaminated with malware during manufacturing by one of our suppliers,” the company said in its alert. “Schneider Electric has confirmed that the malware should be detected and blocked by all major anti-malware programs. Out of caution, Schneider Electric recommends that these USB removable media are not used. These USB removable media contain user documentation and non-essential software utilities. They do not contain any operational software and are not required for the installation, commissioning, or operation of the products mentioned above. This issue has no impact on the operation or security of the Conext Combox or Conext Battery Monitor products.”
Instead of using the documentation on the USB drives, Schneider recommends that people download the documentation from the company website.
This isn’t the first time something like this has happened. A year ago, IBM reportedly shipped some USB flash drives, containing the initialization tool for its Storwize storage system, that had been infected with malicious code. IBM was similarly tight-lipped about how the malware came to be there.
In fact, there’s a security website (called “Rationally Paranoid”) that tracks such incidents, and it goes as far back as 2000. It doesn’t yet include the Schneider incident, nor any other incident from 2018.
With the Schneider incident, there are still a number of outstanding questions:
- What kind of malware is it?
- Who is the third-party manufacturer and where are they located?
- What was the USB drives’ intended use? Did they get plugged into the solar device itself, or into a PC?
- Were these particular USB drives belonging to Schneider Electric targeted, or was it just run-of-the-mill malware? In other words, was someone trying to hack into the power grid this way?
- Who else uses USB drives from that manufacturer? Are those drives infected, too?
Companies are understandably reticent about such incidents, because they don’t want to give people ideas, nor set themselves up for liability. On the other hand, if we’re going to protect ourselves from such incidents in the future, it’s important to know all we can about them. “Security through obscurity” never works.
Every few years, tape manufacturers get together to remind us that tape is not dead.
And it’s not. You still get the most bandwidth for your buck using a station wagon full of tapes hurtling down a highway. Tape is the Internet’s attic, or basement – a pain in the ass to get to, but it’s nice to not have to trip over the Christmas decorations the rest of the year.
The result is that tape drive manufacturers shipped 108,457 petabytes (PB) of total tape capacity (compressed) in 2017, an increase of 12.9 percent over the previous year. Admittedly, since they’re counting compressed capacity, that figure reflects improvements in compression technology as much as anything, but it’s still a lotta tape. Though even the vendors had to admit it came with fewer unit shipments.
While hard disk drive manufacturers are having to resort to increasingly convoluted measures to continue adding capacity to their drives, tape drive manufacturers keep diligently releasing new versions of the Linear Tape Open (LTO) specification every few years, which typically double the capacity. They’re now up to version LTO-8, and have a roadmap for versions up to 12, which if they keep to their schedule should be announced around 2029.
“A modern tape cartridge can hold 15 terabytes,” writes Mark Lantz in IEEE Spectrum. “And a single robotic tape library can contain up to 278 petabytes of data. Storing that much data on compact discs would require more than 397 million of them, which if stacked would form a tower more than 476 kilometers high.”
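Lantz’s numbers check out, assuming a standard 700MB CD and the usual 1.2mm disc thickness; a quick back-of-the-envelope calculation in Python:

```python
# Sanity check of Lantz's comparison, assuming a 700 MB CD
# and the standard 1.2 mm disc thickness.
LIBRARY_BYTES = 278 * 10**15   # 278 PB tape library
CD_BYTES = 700 * 10**6         # 700 MB per disc
CD_THICKNESS_M = 1.2e-3        # 1.2 mm per disc

discs = LIBRARY_BYTES / CD_BYTES          # roughly 397 million discs
tower_km = discs * CD_THICKNESS_M / 1000  # roughly 476 km of stacked plastic
```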
Of course, part of the reason that tape still has room to expand its density is because people weren’t using it as much once hard disk drives came along, Lantz admits. “Early on, the areal densities of tapes and hard drives were similar,” he writes. “But the much greater market size and revenue from the sale of hard drives provided funding for a much larger R&D effort, which enabled their makers to scale up more aggressively. As a result, the current areal density of high-capacity hard drives is about 100 times that of the most recent tape drives.”
That also means that every few years, everyone still using tape needs to upgrade all their equipment and write all their data to the new format, because each new LTO version can read back only two generations. You can call that “planned obsolescence” or you can call it helping to ensure that the data survives. Either way, it helps keep the industry going.
(PS, tape organizations: If you want to convince people there’s a future for tape, you might want to redesign your websites and logos so they look like they came from this century.)
Tape manufacturers point out, rightly, that their products can be more secure against intrusion than hard disk drives because they can be “air gapped,” or not on the Internet unless they’re actually in use. “If a cartridge isn’t mounted in a drive, the data cannot be accessed or modified,” Lantz writes. “This ‘air gap’ is particularly attractive in light of the growing rate of data theft through cyberattacks.”
And, a more recent consideration: tapes don’t use energy when not in use, making them more “green.” “Tape is the greenest storage technology available for large amounts of inactive data,” writes the Information Storage Industry Consortium in its report, 2015-2025 International Magnetic Tape Storage Roadmap. “Its removable media consumes no power while not in use. Automated digital libraries consume very little power yet provide access to vast amounts of data. Tape’s footprint is also reduced, minimizing the square footage required.”
Those benefits do come with a cost, though. Yes, a tape not in use isn’t as vulnerable and isn’t using energy. But if you do need something on that tape, the tape needs to be located, inserted into a reader (perhaps with a robot, as in the Rogue One Star Wars movie – and we saw how that turned out — but still), and then spun until the data shows up. That takes time. That’s why tape is dandy as a long-term cold storage medium, but not necessarily for data that you’re using right now.
Nobody, not even tape drive manufacturers, is trying to say that tape should be used for all storage solutions. But it can be handy to have. Just remember that when you’re getting the Christmas lights from the attic.
Earlier this summer, we talked about machines that are intended specifically to destroy hard disk drives. But Google goes one better.
It has robots.
That’s according to Joe Kava, Google’s vice president of data centers. “Google first detailed its process for this back in 2011,” writes Yevgeniy Sverdlik. “A company-produced video showed wiped drives get punctured with a steel piston and then thrown into an industrial shredder. The tiny pieces of plastic and metal then got boxed and recycled. What happens to each drive being replaced in the company’s data centers today is still the same. What’s different is who’s doing it. It’s now done by robots in what Google calls a ‘fully-automated disk-erase environment,’ Kava said.”
(Sadly, videos showing this robotic process don’t seem to be available, though a photograph is.)
The advantage of having a robot do the destruction is it reduces the number of people who have to handle a hard disk drive, Sverdlik writes, therefore also reducing the amount of tracking that has to be done for each hard disk drive.
The hard disk drive destruction robots come in particularly handy when Google is doing a forklift upgrade of its hard disk drives, Kava said. This would seem to indicate that other companies with very large quantities of hard disk drives, such as Facebook or Backblaze, might use hard disk drive destruction robots, too.
That said, apparently humans still need to perform the actual disconnection of the hard disk drive from the system, Kava added.
Videos of Google’s data center seem to crop up every couple of years, and destroying the obsolete hard disk drives is always a major part of it.
Actually, an interesting nuance in Kava’s remarks was his explanation that the only hard disk drives that are destroyed are the ones that can’t be verified as 100 percent wiped. He didn’t explain how Google verifies this, or what would keep a particular hard disk drive from being wiped. Reportedly, the hard disk drives that can be verified are sold to other companies, Sverdlik writes.
Developing a way to wipe hard disk drives and verify the wipe with 100 percent confidence would also make it easier to recycle the material from which hard disk drives are made, writes Tom Coughlin in Forbes. There is, in fact, an entire organization — the Value Recovery From Used Electronics Project, organized by iNEMI (the International Electronics Manufacturing Initiative) — that is intended to help develop a more circular economy for hard disk drives, he writes.
“There are three major reasons why HDDs are a good candidate for a circular economy: (1) the demand for data storage is increasing rapidly; (2) data storage demand is increasing significantly faster than increases in HDD storage density, and (3) industry output of HDDs (manufacturing capacity) is not expected to increase significantly, according to industry projections,” Coughlin writes. “This leads to a potential gap between estimated data storage needs and the estimated ability of HDD and SSD manufacturers to keep up with demand. There are a number of ways to fill this gap: continued investment in fabs and technologies to increase HDD and SSD storage, increase HDD reliability, and increase the reuse of used HDDs so that they are available to meet some of our global data storage needs.”
But practices such as Google’s make that difficult, Coughlin writes (though he notes that Google is participating in the project). “Some existing practices severely impede the overall value recovered from the products across the reverse chain of commerce,” he writes. “Data destruction demands by the last user, which are not always essential to meet justified data security needs, lead to wholesale HDD shredding, which precludes reuse and reduces material recovery options.” And while shredding does allow for recycling of the raw material, it “precludes reuse and can reduce recovery of trace, but highly valuable, materials (e.g. rare earth metals),” he adds.
In the meantime, shredding robots it is.
When someone breaks into your system, is it fair to go break into theirs?
Sometimes it’s an “eye for an eye” situation. More often, people want to use hacking techniques to help figure out who hacked them. Either way, it’s called “hacking back,” and it has been illegal, with sentences of up to 20 years. “Any form of hacking is a federal crime,” writes Nicholas Schmidle in the New Yorker. “In 1986, Congress enacted the Computer Fraud and Abuse Act, which prohibits anyone from ‘knowingly’ accessing a computer ‘without authorization.’” The law was inspired by the 1983 movie WarGames, he adds.
No one has ever been charged under the law for hacking back, Schmidle writes, reportedly because it wouldn’t look good to charge people with attacking hackers. In fact, Shawn Carpenter, a former security analyst for Sandia National Laboratories, was never charged with hacking back, but he was fired for it, sued, and won $4.7 million for wrongful termination.
That’s not to say that people don’t do it. “Many cybersecurity firms offer what is called ‘active defense,’” Schmidle writes. “It is an intentionally ill-defined term. Some companies use it to indicate a willingness to chase intruders while they remain inside a client’s network; for others, it is coy shorthand for hacking back. As a rule, firms do not openly advertise themselves as engaging in hacking back.”
“Hacking back” can cover a number of techniques. For example, “honey pots” are sets of enticing-looking files intended to encourage a hacker to download them. Once downloaded, they can be traced. They can include “beacons,” which send messages back to help track the hacker, or “dye packets,” code embedded in a file and activated if the file is stolen, rendering all the data unusable, Schmidle writes.
But Rep. Tom Graves (R-GA-14) wants to change that law. He submitted a bill in 2016, the Active Cyber Defense Certainty Act, to allow for hacking back, and has updated it a couple of times since then in response to comments, primarily to require reporting to law enforcement if you’re going to do it, as well as to add a sunset clause.
“Private firms would be permitted to operate beyond their network’s perimeter in order to determine the source of an attack or to disrupt ongoing attacks,” Schmidle writes. “They could deploy beacons and dye packets, and conduct surveillance on hackers who have previously infiltrated the system. The bill, if passed, would even allow companies to track people who are thought to have done hacking in the past or who, according to a tip or some other intelligence, are planning an attack.”
Experts caution against hacking back, because it’s not always as simple as it sounds. For example, hackers often use “hop points,” or go from site to site – as many as 30 of them — to try to hide their tracks. Hacking back could nail an innocent bystander who just happens to be on that path.
People, like Carpenter’s bosses, also worry that hacking back might invite additional attacks or draw attention to the original breach. “If companies weren’t able to defend themselves in the first place, it’s unlikely they’re going to come off best in a digital firefight,” warns Martin Giles in MIT Technology Review. (A number of the arguments resemble those against civilians carrying firearms in public.)
More ominously, when the hackers are sponsored by states rather than mere “script kiddies,” hacking back can be downright dangerous. In one case, a company that was trying to fight back against hackers found pictures of executives’ children in email from the hackers, Schmidle writes.
Ultimately, the majority of people appear to be against hacking back, writes Josephine Wolff in the Atlantic. “Its critics range from law enforcement officials who worry it will lead to confusion in investigating cyberattacks, to lawyers who caution that such activity might well violate foreign laws even if permitted by the U.S., to security advocates who fear it will merely serve as a vehicle for more attacks and greater chaos, particularly if victims incorrectly identify who is attacking them, or even invent or stage fake attacks from adversaries as an excuse for hacking back,” she writes. (The paper The Ethics of Hacking Back looks at, and dismisses, a number of the reasons why not to hack back.)
Another alternative is to have a list of firms authorized to hack back, which companies could hire. “Department stores hire private investigators to catch shoplifters, rather than relying only on the police,” write Jeremy and Ariel Rabkin in Lawfare about their paper, Hacking Back Without Cracking Up. “So too private companies should be able to hire their own security services. There should be a list of approved hack-back vendors from which victims are free to choose. These vendors would primarily be in the business of identifying attackers and imposing deterrent costs on attackers by providing the threat of retaliation.”
In any event, thus far, Graves’ bill hasn’t gone anywhere. Yet.
Most of us are pretty aware that we need to wipe the memory of a hard drive or other storage device when we get rid of a computer or a cell phone. Some of us are even aware that you may need to do so with a printer, copier, or fax machine.
It turns out that now you have to do it with your car, too.
The Federal Trade Commission (FTC) has put forth a program, with the somewhat unwieldy name of “Be discreet when you delete your fleet,” for making sure people delete personal data from cars when they sell them. That data can include:
- Phone contacts and an address book
- Mobile apps’ log-in information, or data
- Digital content like music
- Location data like addresses or the routes you take to home, work, and favorite places
- Garage door codes
The same can be true when you buy a car – the previous owner, for example, may still have information about how to find or start the car on their phone, the FTC warns.
It’s not just your own car, but also rentals and car shares. In fact, it may be even more urgent with those, because you know someone else is going to be using the car after you do. “With cars increasingly asking to download your phonebook, offering facilities for you to make and receive calls, message, browse the internet, and stream media, the trove of data on infotainment systems will only increase,” notes the December 2017 report, Connected Cars: What Happens To Our Data On Rental Cars?, from Privacy International. “If you use the GPS in a rental car to get home, for instance, a robber could find your address,” writes Rebekah Sanders in USA Today. “Or a stranger could reveal your identity by matching your device name to profiles on social media such as Facebook, Instagram or Twitter.”
“People called their phones all sorts of things, including what looks like their actual names,” writes Adam Racusin of ABC 10 News. “When I first saw this I audibly exclaimed that I’m looking at someone’s first and last name and what type of phone they have, whether it’s an iPhone or an Android,” he quotes Ted Harrington, executive partner at Independent Security Evaluators, as saying. “That’s a lot of information that’s just free for me to access. No one’s hacking that — the car is giving that information out right now.”
The complication is, how do you do it? The FTC is advising people, when they’re about to sell their car, or turn in a rental, to look for a factory reset option to wipe personal data. Privacy International is calling for a single button for car renters to press before turning the car back in.
At the same time, even a factory reset might not wipe out personal information such as subscriptions to satellite radio or Spotify, so you might have to delete those manually, the FTC warns. In addition, even charging your device from the car’s USB port might transfer data automatically to the car. Use the cigarette lighter instead, with an adapter if you need one, the FTC advises.
This is going to become an even more critical issue as automated vehicles come into play, especially with “mobility as a service” and other shared transportation alternatives. So it’s probably a good idea to get into the habit now.
In response to incidents such as the Federal Bureau of Investigation (FBI) using material in a genetic database to track down a murder subject, the major genetic testing firms are pledging that they will follow certain best practices before doing so in the future. But don’t cheer just yet.
“Under the new guidelines, the companies said they would obtain consumers’ ‘separate express consent’ before turning over their individual genetic information to businesses and other third parties, including insurers,” write Tony Romm and Drew Harwell in the Washington Post. “They also said they would disclose the number of law-enforcement requests they receive each year.”
Well, that’s nice, except for a few things.
- The agreement doesn’t cover GEDMatch, the open, public database used by law enforcement to track down the alleged “Golden State Killer.”
- How long is it going to take before insurers offer either carrots – “We’ll give you this sort of price break to give us access!” – or sticks – “We won’t insure you unless you give us access”?
- What happens when law enforcement puts gag orders on these firms forbidding them to release information about law enforcement requests or releases of information? In other words, how long will it be before we see a “warrant canary” on genetic database sites?
- At this point, it’s something the companies are doing only out of the goodness of their hearts—and their concern that people will stop using their services if they are afraid the information could get out. “Adherence to the rules is voluntary,” Romm and Harwell write. “While the policy offers users of participating sites added new protections at a time of great ‘uncertainty,’ it doesn’t have the force of law, said Justin Brookman, the director of consumer privacy and technology policy at Consumers Union.”
- Having once submitted your data, it’s not at all clear that you can delete it from the databases. “Customers of these DNA testing services would gain some limited rights to have their biological data deleted, but they may not be able to withdraw data that was already in use by researchers,” note Romm and Harwell.
This is all happening at the same time that the genetic database companies are finding new ways to monetize the data. 23andMe recently announced it had struck a research deal with GlaxoSmithKline for $300 million, Romm and Harwell write. “As part of that pact, GlaxoSmithKline can access ‘de-identified’ genetic data about 23andMe users — provided they’ve previously given their consent — so that the firm can ‘gather insights and discover novel drug targets driving disease progression,’ the company said.”
That’s fine – noble, even – except that studies have demonstrated that the so-called “de-identified” data can actually be “re-identified” pretty easily. And under the guidelines, the genetic database testing companies don’t need to inform their users about these efforts, Romm and Harwell write. (And other genetic databases for research may also be subject to police search and not subject to these guidelines, writes Natalie Ram in Slate.)
Another nuance – the genetic databases suffer from a “lack of diversity,” and concern about privacy, particularly from law enforcement, could keep ethnically diverse individuals from submitting their material to the databases, writes Eric Rosenbaum for CNBC. 23andMe has noted that the genetic testing industry remains challenged by a lack of diversity, and to the extent that poverty is intertwined with the criminal justice system, a focus on using these databases to identify criminals will create unease or distrust, especially among historically targeted populations, he writes. In addition, when companies are sold or go out of business, as happened with Sports Authority and Radio Shack, the new owner may not hold to the same provisions, he notes.
As many as 12 million Americans – 1 in 25 – have had their genetics tested by one of the companies as of 2017, according to MIT Technology Review.
The guidelines themselves are a pretty interesting read, with some fascinating circumlocutions. For example, genetic information is important because, in the document’s words, “It may contain unexpected information or information of which the full impact may not be understood at the time of collection.” In other words, you may unexpectedly find out that your daddy isn’t your daddy or that you were adopted. Not to mention, “It may have cultural significance for groups or individuals,” and that could have any number of meanings.
There’s another offhand sentence in the Washington Post story that’s pretty ominous: “Companies, meanwhile, would have to ensure the person submitting DNA data is the actual owner of that data.” Uh, yeah. You mean they don’t do that now? There’s all sorts of interesting possibilities around that. You think Facebook stalking is bad? How about someone sending off some hair or spit from a prospective partner or job applicant? Or let’s get into science fiction and imagine bounty hunters on the prowl for people with – or without – certain genetic conditions. Remember those “I woke up without a kidney” urban legends?
Social media companies have been reporting the number of law enforcement requests they get, on a semiannual basis, for several years. Genetic testing database companies are also planning to do this, with Ancestry saying it had received 34 requests, 31 of which it had fulfilled, and 23andMe saying it had received five requests, none of which it had fulfilled. If the social media companies are any indication, these numbers should zoom up over time.
Here we go again. The European Union is calling for an end to the so-called Privacy Shield agreement by September 1 if the U.S. doesn’t follow through on its commitments, which could make it really difficult for U.S. computer companies to acquire data from European customers.
As you may recall, this all dates back about two years, when the EU and the U.S. finally reached an agreement to replace Safe Harbor, which is what had enabled American companies to gain access to data about foreign citizens. After the Snowden revelations and other breaches, EU countries said they didn’t feel that data about their citizens was safe in the U.S., and the U.S. had to improve security, both in its government agencies and in the companies themselves.
Soon after President Donald Trump’s inauguration in January, EU members expressed concern about an executive order he signed that could have been interpreted as saying that people who weren’t citizens of the U.S. weren’t protected by the U.S. Privacy Act.
Since then, it’s been pretty quiet, and the European Union has been busy paying attention to its own General Data Protection Regulation privacy standard. But now that that’s finished, EU member states are starting to turn their focus to the U.S. again. And they’re wondering why it’s taking the U.S. so long to do certain things required under the pact, such as hiring an ombudsman to deal with complaints from EU citizens and appointing other officials responsible for overseeing the program.
In particular, EU representatives are concerned about the Facebook data scandal in which the personal information of up to 87 million users was passed on to Cambridge Analytica, a company employed by Trump’s presidential campaign team, writes Mehreen Khan in the Financial Times.
Vera Jourova, the EU’s commissioner for justice, has written to Wilbur Ross, US commerce secretary, complaining that the White House is stalling. “Now that the new state secretary is in office and we are almost two years into the term of this administration, the European stakeholders find little reason for the delay in the nomination of a political appointee for this position,” she wrote.
“The Privacy Shield is due for its second review from the European Commission in October,” Khan writes. “Brussels has the power to unilaterally revoke the agreement if Washington is not meeting its commitment to ensure the rights of EU citizens are adequately protected in the US.”
That would be bad. If the EU does end the Privacy Shield, each company would need to negotiate individually with each country over how it could obtain data about that country’s citizens. That could take a long time and be really complicated. Without the agreement, more than 4,000 European and U.S. companies wouldn’t be able to exchange data about each other’s citizens as easily, which could hamper commerce currently worth up to $260 billion, writes Mark Scott in the New York Times.
About a month to go.
Security experts watched in horror as journalists attending the US – North Korea summit in Singapore in June were handed gift bags containing adorable little USB fans. Plug them into your laptop and they would help keep you cool.
USB fans in and of themselves are nothing new; there are pages and pages of the things available online. But eek! These were from North Korea! They could be booby-trapped!
“Aiieee! Journalists – Do. Not. Plug. This. In.” warned one tweet.
It would be only poetic justice if North Korea had, since people have been “attacking” North Korea with USB drives of their own, floated across the border on little balloons and loaded with shows such as Desperate Housewives and The Mentalist, soap operas, films like Bad Boys and The Interview, and a Korean-language Wikipedia, all intended to foment dissent through popular culture.
None of the tested fans have revealed any malware thus far. “This particular sample of USB fan does not have any computer functionality on USB interface,” noted one UK analysis, which dissected one of the little fans. “It can only be used for driving the motor from USB power.”
“No data transmission of any sort was observed,” noted another report by the Celsus Advisory Group, a security consulting firm. “The resistance of the device went up some over time, but this appeared to be connected to the rising temperature of the device rather than something nefarious. The device seemed to be free of implants.”
But officials are still worried. Perhaps some of the USB fans are indeed booby-trapped, and the innocent ones are decoys intended to lull us into letting our guard down. Perhaps the malware is simply too well hidden. Celsus went on to describe some of the possibilities. “There is a motor…which if built to ‘custom spec’ might oscillate at a specific frequency providing a specific electronic signature when operating. This could be used to profile the target, or perhaps something even more interesting.”
And there’s more. “Imagine a factory installed battery that is designed to track keystrokes and user activity (phone, email, chat) by baselining and tracking power flow to the unit,” Celsus writes. “Each character and action creates a power spike! There’s even an easy way to exfil the data since most mobiles are internet connected 24/7. Pwn the phone without touching the OS/Baseband. Cool right? It’s been done.”
And more. “Envision a 3d printed WiFi connected plastic object, and metamaterial printed antenna utilizing local WiFi RF backscatter to provide power and to connect to the internet,” Celsus writes. “Imagine creating a heatmap of a room or a vehicle using the reflected WiFi signals and exfiltrating the WiFi hologram outbound, all without a battery.”
(Incidentally, if you like this sort of thing, Black Hat is going on in Las Vegas next week; the even more nerdy Defcon occurs a couple of weeks later.)
Singapore’s Ministry of Communications and Information, which the BBC reported had put together the gift bags, was affronted that anyone would think anything nefarious of them. “The USB fans were part of Sentosa Development Corporation’s (SDC) ready stock of collaterals, originally meant for Sentosa Islander members,” the organization told the BBC. “SDC had assessed USB fans to be a handy and thoughtful gift for the media, who would be working in Singapore’s tropical climate. MCI and SDC have confirmed that the USB fans are simple devices with no storage or processing capabilities.”
The manufacturer also weighed in, saying it didn’t have the ability to make such a device.
Of course, that’s what they’d want you to think.
All in all, researchers write, the fact that the examined USB fans showed no signs of sabotage was itself an ominous development. “This does not eliminate the possibility of malicious or Trojan components wired to USB connector in other fans, lamps and other end-user USB devices,” the UK report continued. “Hence, their evaluation will be essential before any sensitive usage.”
“Maybe the person who received the package wasn’t a targeted POI,” Celsus noted. “Maybe the system in question requires being tickled in a specific way to elicit an illicit behavior. Or perhaps none of the fans were dual purpose in nature; eg fan AND surveillance implant. This is a difficult problem to address without reviewing ALL the potentially poisoned pills.”
“Malicious actors could have narrowly targeted one reporter who was of special interest out of 100, meaning that most fans may have appeared harmless even as some might have been used to target specific journalists,” warned Hamza Shaban in the Washington Post. (Which, no doubt, was put out that it wasn’t considered important enough to target in this way.)
In other words, you would need to dissect each fan individually to make sure it was safe – at which point, of course, it would no longer work. Some 2,500 journalists were accredited to cover the summit, so we’re talking about a lotta fans.
Apparently it is necessary to destroy the fan in order to save it.
What almost became a new law in Illinois presages a series of similar laws in other states that could make it a whole lot easier to identify and arrest people.
Both the Illinois House and Senate passed bills that would have allowed law enforcement to use drones to scan groups of people. The House version required crowds of at least 1,500 – unlike an earlier version of the bill that would have allowed “crowds” of just 100 – and banned the use of facial recognition software with the drones. The Senate had passed a similar bill earlier in the month. Even without facial recognition software, the use of drones could intimidate people into not exercising their right of free speech, writes the American Civil Liberties Union.
“Fortunately, advocates of free speech and privacy defeated the 2018 proposal,” writes the Electronic Frontier Foundation. “While the Illinois House and Senate each approved a version of this bill, the state legislative session expired on May 31 without reconciling their conflicting versions.”
This facial recognition technology is already being used. For example, it was reportedly used to help identify the suspect in the Capital Gazette shooting when he was “uncooperative.” “Anne Arundel County police ran Jarrod Warren Ramos’ photo through a database of millions of images from driver’s licenses, mug shots and other records to help identify him as the suspect in Thursday’s Capital Gazette shooting,” writes Yvonne Wenger in the Gazette. “Police Chief Timothy Altomare said Friday that officials used the Maryland Image Repository System to determine who Ramos was. The 38-year-old Laurel man was not cooperating, and police were facing a lag in getting results from a fingerprint search, so the chief said they turned to technology to move as quickly as possible.”
And in Seattle, international visitors can have their face scanned rather than show their passport when they come into the airport. Similar systems are used at 17 airports, including 13 in the U.S., writes Colin Wood in StateScoop. Because, you know, it’s so much faster and more convenient than showing a passport.
Facebook has reportedly also started using facial recognition, ostensibly to help protect people from other people hacking into their accounts. Amazon has also developed facial recognition software, which it is selling to law enforcement organizations. In fact, the American Civil Liberties Union and about two dozen other organizations have asked Amazon to stop selling its Rekognition software to law enforcement. Madison Square Garden has reportedly also used the technology – all in the name of safety and security, of course.
The thing is, there’s not much in the way of laws yet regarding facial recognition, so there was nothing to stop law enforcement from using the new technology. And as we’ve seen with technology such as phone encryption, it’s seen as more okay to violate people’s rights when they’re really bad people like child pornographers and terrorists.
Maryland also used its facial recognition database – considered superior to those of other states because it includes 10 million motor vehicle database photos – to monitor protesters during the rioting in Baltimore in 2015 after Freddie Gray’s death, Wenger writes. “As of 2016, as many as 6,000 or 7,000 law enforcement officials had access to the database,” she writes. “Officials said the system at times was accessed more than 175 times in a single week.” Given that law enforcement personnel have been known to look up people who interested them in driver’s license databases, how much longer before it emerges that they look up people in the facial database as well?
Altogether, as many as 130 million people – just regular people, not necessarily criminals – may have their faces stored in databases, writes Nick Wingfield in the New York Times. The FBI’s facial database was reported to cover more than 400 million people as of 2016.
There’s also the question of accuracy. In 2016, the FBI said that as many as 20 percent of its identifications were incorrect. Errors are particularly common for women and minorities. “One study by the Massachusetts Institute of Technology showed that the gender of darker-skinned women was misidentified up to 35 percent of the time by facial recognition software,” Wingfield writes. In comparison, white men are identified accurately 99 percent of the time, writes Steve Lohr in the New York Times. “In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as ‘gorillas,’” he writes.