What almost became a new law in Illinois presages a series of similar laws in other states that could make it a whole lot easier to identify and arrest people.
Both the Illinois House and Senate passed bills that would have allowed law enforcement to use drones to scan groups of people. The House version required crowds of at least 1,500 – unlike an earlier version of the bill that would have allowed “crowds” of just 100 – and banned the use of facial recognition software with the drones. The Senate had passed a similar bill earlier in the month. Even without facial recognition software, the use of drones could intimidate people into not exercising their right of free speech, writes the American Civil Liberties Union.
“Fortunately, advocates of free speech and privacy defeated the 2018 proposal,” writes the Electronic Frontier Foundation. “While the Illinois House and Senate each approved a version of this bill, the state legislative session expired on May 31 without reconciling their conflicting versions.”
This facial recognition technology is already being used. For example, it was reportedly used to help identify the suspect in the Capital Gazette shooting when he was “uncooperative.” “Anne Arundel County police ran Jarrod Warren Ramos’ photo through a database of millions of images from driver’s licenses, mug shots and other records to help identify him as the suspect in Thursday’s Capital Gazette shooting,” writes Yvonne Wenger in the Gazette. “Police Chief Timothy Altomare said Friday that officials used the Maryland Image Repository System to determine who Ramos was. The 38-year-old Laurel man was not cooperating, and police were facing a lag in getting results from a fingerprint search, so the chief said they turned to technology to move as quickly as possible.”
And in Seattle, international visitors can have their face scanned rather than show their passport when they come into the airport. Similar systems are used at 17 airports, including 13 in the U.S., writes Colin Wood in StateScoop. Because, you know, it’s so much faster and more convenient than showing a passport.
Facebook has reportedly also started using facial recognition, ostensibly to help protect people from other people hacking into their accounts. Amazon has also developed facial recognition software, which it is selling to law enforcement organizations. In fact, the American Civil Liberties Union and about two dozen other organizations have asked Amazon to stop selling its Rekognition software to law enforcement. Madison Square Garden has reportedly also used the technology – all in the name of safety and security, of course.
The thing is, there’s not much in the way of laws yet regarding facial recognition, so there’s nothing to stop law enforcement from using the new technology. And as we’ve seen with technology such as phone encryption, violating people’s rights is seen as more acceptable when the targets are really bad people, like child pornographers and terrorists.
Maryland also used its facial recognition database – considered superior to that of other states because it includes 10 million motor vehicle database photos – to monitor protesters during the rioting in Baltimore in 2015 after Freddie Gray’s death, Wenger writes. “As of 2016, as many as 6,000 or 7,000 law enforcement officials had access to the database,” she writes. “Officials said the system at times was accessed more than 175 times in a single week.” Given that law enforcement personnel have been known to look up people who interested them in driver’s license databases, how much longer before we learn that they do the same thing in the facial database?
Altogether, as many as 130 million people – just regular people, not necessarily criminals – may have their faces stored in databases, writes Nick Wingfield in the New York Times. The FBI’s facial recognition database was reported to cover more than 400 million images as of 2016.
There’s also the question of accuracy. In 2016, the FBI had said that as many as 20 percent of its identifications were incorrect. This is particularly true for women and minorities. “One study by the Massachusetts Institute of Technology showed that the gender of darker-skinned women was misidentified up to 35 percent of the time by facial recognition software,” Wingfield writes. In comparison, white men are identified accurately 99 percent of the time, writes Steve Lohr in the New York Times. “In 2015, for example, Google had to apologize after its image-recognition photo app initially labeled African Americans as ‘gorillas,’” he writes.
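A quick back-of-the-envelope calculation shows why those error rates matter at scale: even the “good” 1 percent figure produces a lot of wrong answers once searches number in the hundreds of thousands. Only the error rates below come from the reporting; the search volumes are made up for illustration.

```python
# Back-of-the-envelope math on the error rates the article cites.
def expected_misidentifications(searches: int, error_rate: float) -> int:
    """Expected number of wrong identifications for a given number of
    searches at a given error rate (a simple expectation, ignoring
    base rates and match thresholds)."""
    return round(searches * error_rate)
```

At the 1 percent error rate reported for white men, 100,000 searches still mean about 1,000 misidentifications; at the 35 percent rate reported for darker-skinned women, it’s 35,000.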
The California Supreme Court has ruled that online services such as Yelp!, which depend on user-generated content (UGC), can’t be forced to take down that content by a legal proceeding against the user who generated it if the legal proceeding didn’t mention the online service in the first place.
Like the Supreme Court’s recent Carpenter ruling, this is another one of those cases where the American Civil Liberties Union and libertarian groups were working together, this time on a case appealed to the California Supreme Court.
“Personal-injury lawyer Dawn Hassell of the Hassell Law Group accused former client Ava Bird of defaming her law firm on Yelp,” writes Zuri Davis in Reason. “Hassell sued Bird in 2013, but Bird did not appear—it is believed that Bird was never served with court papers. The San Francisco County Superior Court ruled in Hassell’s favor by default and awarded her $557,918. The court ordered Bird to remove the reviews and Yelp to ‘remove all reviews posted by AVA BIRD under user names “Birdzeye B.” and “J.D.,”’ despite not having definitively confirmed that Bird used the alias J.D.”
So let’s unpack this a little bit:
- Somebody had a bad experience with a company.
- The person posted a negative review on Yelp! like many of us do.
- The company sued them (apparently, according to NBC News, because of the person “falsely claiming that her firm failed to communicate with the client”).
- The person who posted the review might not have heard about the lawsuit.
- Some of the reviews for which they were sued might not have been written by them.
- They now owe more than a half-million dollars and are supposed to take the reviews down.
- Yelp!, which wasn’t even a party to the case, is told it has to take down the postings, since the person they’re attributed to – who, again, might not have written them and might not have been notified – isn’t taking them down. And that’s the part that got all the lawyers excited.
“WTF???” wrote attorney Eric Goldman in 2016 about the original decision, where the California appellate court upheld the lower court’s ruling. “As a non-party to the lawsuit, the court says Yelp doesn’t face liability from the suit itself, and the court thinks contempt sanctions–including the possibility of monetary damages–against a non-party don’t count as ‘liability’ because it’s ‘a different type of liability’? And a judicial compulsion to remove content that Yelp chooses to publish doesn’t treat Yelp ‘as a publisher or distributor’? Wow.”
(One has to admire a legal professional who can write an opinion saying, not once, not twice, but three times, “WTF?” And this particular opinion was cited lots of times about this case, WTF and all.)
“Neither court” – the lower one or the appellate one – “seemed to understand that the First Amendment protects not only authors and speakers, but also those who publish or distribute their words,” wrote the Electronic Frontier Foundation in April, when it also submitted an amicus brief. “Both courts completely precluded Yelp, a publisher of online content, from challenging whether the speech it was being ordered to take down was defamatory—i.e., whether the injunction to take down the speech could be justified.”
Now, however, the California Supreme Court has ruled that both the lower and appellate courts were wrong, and Yelp! doesn’t have to remove the reviews in question (assuming they’re still there; it seems like Yelp! is pretty busy right now removing postings from the company’s page).
To be honest, the aspect of this case that freaks me out the most is one that nobody is even mentioning: Someone can be held liable for more than a half-million dollars for posting a negative review on Yelp!? When she might not even have been properly served in the first place? Yikes! I hope she’s fighting this. I’m surprised Yelp! isn’t helping her with this; getting its users sued for a half-million dollars over the posts on which its service is built can’t be good for the online commenting business.
Meanwhile, the Hassell Law Group (interesting name) is reportedly considering appealing the case to the Supreme Court. Hopefully, nobody tries to give them a bad Yelp! review over it.
A fairly common theme here has been “Don’t poke strange USB sticks in things,” because it’s a common vector for transmitting malware (and reprogramming your keyboard, and setting your PC on fire). Here’s a new take on that. It’s pretty esoteric but now that the technique is out there, it may become more common.
First, you have to understand the concept of an “air gap.” An air gap is actually a plumbing term and refers to the use of air in the system to keep water from going to places it shouldn’t. The term has been applied in computer security to computers that aren’t hooked up to networks, to keep them more secure. “Air-gapped systems are common practice in many countries for government, military, and defense contractors, as well as other industry verticals,” according to Palo Alto Networks researchers who are writing about this.
Second, there is apparently a South Korean defense company that makes “secure USBs.” Exactly what these are and what makes them secure, I haven’t been able to find out. But they are a thing. At least some secure USBs encrypt the data on them. That may or may not be what this particular South Korean secure USB does.
So apparently the deal is this: Some researchers found evidence that hackers have found a way to put malware on these secure USBs, with the intention of targeting these airgapped, otherwise unreachable PCs.
It gets better. The malware only works if the PCs in question are running Microsoft Windows XP or Windows Server 2003.
The organization likely behind this malware has a history of spearphishing attacks – email attacks aimed at particular people. In fact, past versions of the organization’s malware included a Happy New Year program; recipients were asked to change the file’s extension to .exe so that it would play.
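The Happy New Year lure depends on users renaming a file to .exe, or on executables hiding behind innocuous-looking names. Here’s a minimal sketch of the kind of attachment check a mail filter might run; the extension list is an illustrative subset, not an exhaustive one.

```python
import os

# Illustrative subset of extensions that mean "this will execute."
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs"}

def looks_suspicious(filename: str) -> bool:
    """Flag attachments that are executables outright, or that use a
    deceptive double extension like 'card.exe.jpg'."""
    base, ext = os.path.splitext(filename.lower())
    if ext in EXECUTABLE_EXTENSIONS:
        return True
    # Catch an executable extension hiding one level down.
    return os.path.splitext(base)[1] in EXECUTABLE_EXTENSIONS
```

A real filter would also inspect the file’s magic bytes rather than trust the name, since an attachment can claim to be anything.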
Which raises the question – if an organization is paranoid enough to airgap its PCs, wouldn’t you think it’d be smart enough to keep up on its security patches? Unless it’s a system just too old to update, like the nuclear missiles controlled by 8-inch disks. And that’s what researchers suggest. “Outdated versions of Operating Systems are often used in those environments because of no easy-update solutions without internet connectivity,” they write.
Wouldn’t its employees be smart enough not to open a Happy New Year card that’s obviously a program, even if it appeared to come from someone they know?
Researchers feel that this malware might be very specifically targeted to one particular installation where all of these factors would come into play. “This would seem to indicate an intentional targeting of older, out-of-support versions of Microsoft Windows installed on systems with no internet connectivity,” they write. But basically, the attackers put malware on the old machines that looks for the secure USB drives, and if one gets plugged in, it looks for the other malware on the drive and loads it onto the airgapped system.
Exactly what the malware would do once it got there, researchers don’t know. They also don’t know exactly what PCs or even what organization is being targeted. But now that the technique is out there, we may see it in places other than Korea and Japan.
So, the usual warnings still apply:
- Don’t poke strange USB sticks in things. Even if they’re supposedly secure.
- Keep your software updated, including your OS.
- Don’t open strange files in your email, even if they seem to come from someone you know, particularly if they are obviously programs.
- And if for some reason you have to look at a strange USB stick, or open a strange file in your email, at least use it away from the supersecure airgapped system, recommends Development Standards Technologies, a software development and consulting company.
- Development Standards also recommends, like a number of security organizations, that you not just depend on keeping people out, but detecting them should they make it in. “Prevention aside, critical systems should have threat detection controls that can alert where an infected drive has been plugged into an endpoint and take remedial steps beyond raising an alarm, such as isolating an infected machine from the rest of the network,” they write.
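The detection control described in that last bullet can be sketched in a few lines: watch endpoint events for USB insertions on machines that should never see one, and flag those machines for isolation. Everything here – the log format, the hostnames – is invented for illustration.

```python
# Invented log format and hostnames, just to show the shape of the
# control: alert (and isolate) when a USB device appears on a machine
# that is supposed to be air-gapped.
CRITICAL_HOSTS = {"scada-01", "archive-02"}

def hosts_to_isolate(event_log):
    """Return the critical hosts that saw a USB insertion."""
    flagged = set()
    for line in event_log:
        # Parse "key=value" fields out of each log line.
        fields = dict(f.split("=", 1) for f in line.split() if "=" in f)
        if fields.get("event") == "usb_insert" and fields.get("host") in CRITICAL_HOSTS:
            flagged.add(fields["host"])
    return flagged
```

In practice this would hook into whatever endpoint agent or SIEM the organization runs, and the "isolate" step would cut the machine off from the rest of the network, as the quote recommends.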
“The American Civil Liberties Union deserves congratulations” is not a sentence one is accustomed to reading from the Cato Institute, a libertarian think tank, but it just goes to show how far-reaching the Carpenter case, which the Supreme Court decided last week, turned out to be.
As you may recall, Carpenter is a case in which two guys in Detroit were accused of robbery, and the Federal Bureau of Investigation (FBI) used their cellphones to show that they were near a number of the incidents. To do this, the FBI went to the suspects’ cellphone providers and obtained a lot of data about the suspects’ locations – more than 12,000 location points for one guy, and almost 24,000 for the other. The defense attorneys argued that the phones revealed so much personal data about the guys that a warrant should have been required for the search.
“Prosecutors didn’t seek a warrant for the cell-site data, which would have required a showing of probable cause to believe the records show evidence of a crime,” write Jess Bravin and Brent Kendall in the Wall Street Journal. “Instead, they sought the data under the Stored Communications Act, which requires only ‘reasonable grounds’ to believe the information is relevant to an investigation.”
Last week, the Supreme Court agreed.
“We decline to grant the state unrestricted access to a wireless carrier’s database of physical location information,” writes Chief Justice John Roberts in his decision. “In light of the deeply revealing nature of [cell site location information], its depth, breadth, and comprehensive reach, and the inescapable and automatic nature of its collection, the fact that such information is gathered by a third party [the cell phone companies] does not make it any less deserving of Fourth Amendment protection.”
A big part of this case was the Riley case from a couple of years back. Riley had already ruled that law enforcement officials needed a warrant to search someone’s cell phone. So it wasn’t a big stretch to add to it by saying that law enforcement also needed a warrant to search the places someone’s cell phone had been.
The other was Jones, which we haven’t written about before, at least under that name. It’s the 2012 case where the Supreme Court ruled that collection of data from a GPS tracker required a warrant. In Carpenter, law enforcement argued that the cellphone tower location data was less specific than the data from a GPS tracker, so it didn’t require the same level of protection. But the Court didn’t buy it.
There are, however, still a few concerning aspects.
- Gaining cellphone location data for a short period is apparently still okay. How short? Not clear, but apparently less than seven days, writes Adam Liptak in the New York Times. “Chief Justice Roberts left open the question of whether limited government requests for location data required a warrant,” he writes. “But he said that access to seven days of data is enough to raise Fourth Amendment concerns.”
- Gaining cellphone location data from the tower itself is apparently still okay. “The Court rather ominously notes that it does not ‘express a view on matters not before us’ including so-called ‘tower dumps,’ where police request ‘a download of information on all the devices that connected to a particular cell site during a particular interval,’” writes Ian Millhiser in ThinkProgress. In other words, while law enforcement can’t track location data from the phone itself, it can look at all the phones tracked by a single tower. At some point, some law enforcement organization is simply going to get all the data from every tower in town, and then that will eventually end up in the Supremes’ lap as well.
- The ruling was 5-4. In other words, as time goes on and President Donald Trump has the opportunity to replace more Justices, this and similar cases could go the other way in the future. Notably, although Justice Neil Gorsuch had indicated during arguments that he might not uphold the lower court, he ended up writing a dissent. (“He agreed stronger privacy protections were in order, but not in the way the court provided them,” Bravin and Kendall write.) And just to make this potential a little more real, Justice Anthony Kennedy – though he voted against this ruling – has announced that he’s retiring. So next year could be really interesting.
Every couple of years, it seems like everyone has to lose their minds over glass storage.
In 2012, Hitachi demonstrated using glass for storage, saying it could end up being a product by 2015. Or, at least, once they figured out how to make something to read it. Yes, that would be a problem, unless you had a use case for write-only memory.
Then in 2013, researchers demonstrated another type of glass storage. Peter Kazansky and other researchers from the University of Southampton demonstrated “5D” glass etching, which is how the discs achieved much higher density. The information encoding uses size and orientation in addition to the three-dimensional position of these nanostructures, the University writes.
First, researchers stored a 300 kb file. Then, in 2016, they stored other documents, including the Universal Declaration of Human Rights (UDHR), Newton’s Opticks, Magna Carta and the King James Bible. Reading it back again, however, is tricky – it requires a combination of an optical microscope and a polarizer, similar to that found in Polaroid sunglasses, the University writes. While the University went on to say that the team was looking for industry partners to commercialize the technology, we haven’t heard much about it since.
Until now. Now, everyone’s talking about glass storage again, because of Elon Musk. A recent SpaceX payload he sent up included some of the 2013-era glass storage. (Musk actually got two of them; he’s keeping the other one.)
“Stashed inside the midnight-cherry Roadster was a mysterious, small object designed to last for millions (perhaps billions) of years – even in extreme environments like space, or on the distant surfaces of far-flung planetary bodies,” writes Peter Dockrill in Science Alert. “Called an Arch (pronounced ‘Ark’), this tiny storage device is built for long-term data archiving, holding libraries of information encoded on a small disc of quartz crystal, not much larger than a coin.” Each of the discs could hold 360 terabytes, he continues.
Beyond the storage capacity, what excites people about glass is that it reportedly isn’t subject to bit rot, because it’s etched into the glass rather than stored magnetically. So it can supposedly last up to 14 million years – unless someone drops it, presumably.
But unlike similar efforts, such as the golden records designed by Carl Sagan’s team that went aboard the Voyager probes in 1977 and included sounds of Earth intended to help communicate with any intelligent life out there, these glass discs contain things like Isaac Asimov’s Foundation series. Not that these aren’t a swell batch of books, but how realistic is it that anyone would be able to comprehend them, even if they found a way to read the discs themselves?
Future endeavors are also planned. “Subsequent launches are planned for 2020 and 2030, with the ‘Lunar’ and ‘Mars’ Arch libraries intended to send curated backups of human knowledge to the Moon and Mars – with the latter disc hoped to serve as a useful aid for colonists on the Red Planet, helping them to ‘seed’ a localized internet on Mars,” Dockrill writes.
Who knows. If aliens discover it, maybe they can figure out a way for us to read it.
A DNA testing database was apparently hacked sometime last fall, but it wasn’t nearly as interesting as it could have been.
It wasn’t one of the better-known sites like 23andme or ancestry.com. It also wasn’t the GEDmatch site that law enforcement used to track down the so-called “Golden State Killer” a few months back. It was a site called MyHeritage.com with about 92 million users, though it isn’t clear how many of them had actually submitted DNA.
Moreover, none of the DNA information appeared to have been stolen. In fact, it was a pretty run-of-the-mill incident, for these days: a file called MyHeritage was reportedly found on a third-party server containing a list of 92 million email addresses, each followed by a hashed – that is, one-way scrambled – password. Since the passwords were hashed, nobody can simply reverse them back into the actual passwords, though weak ones can still be cracked by guessing. And the company said that no financial information was involved, either.
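Hashing is a one-way transformation: the site stores only the digest, so the original password can’t be read back from the leaked file, though an attacker can still hash common guesses and compare. Here’s a rough sketch of salted password hashing; the PBKDF2 parameters are illustrative, and MyHeritage hasn’t said exactly which scheme it used.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). A fresh random salt per user means two
    users with the same password still get different digests."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest from the candidate password and compare
    in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt is what stops an attacker from hashing “password” once and matching it against all 92 million entries at a stroke – each guess has to be re-hashed per user.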
“Credit card information is not stored on MyHeritage to begin with, but only on trusted third-party billing providers (e.g. BlueSnap, PayPal) utilized by MyHeritage,” the company wrote in a blog post about the incident. “Other types of sensitive data such as family trees and DNA data are stored by MyHeritage on segregated systems, separate from those that store the email addresses, and they include added layers of security. We have no reason to believe those systems have been compromised.”
So what we actually have stolen here is a list of 92 million email addresses – basically, all the system’s users up until October 26, 2017. And sure, someone could have fun with that, looking to see how many of them have “password” as their password, or how many of them reused a password from a different system that hackers might know, and so on.
But generally, as these things go, this was pretty ho-hum, because the company did what it was supposed to. It encrypted its passwords. It stored its financial information separately. It stored its genetic information separately. As a number of security experts have been saying, it’s less an issue of hardening your system so that you *never* get broken into, because sooner or later it’s likely to happen anyway. Instead, it’s an issue of how to limit the damage once someone breaks in.
Because there’s lots of interesting things that could have been done with stolen DNA:
- Plant it somewhere to incriminate someone, ranging from a crime to blackmail, or to protect a criminal by having multiple people’s DNA at a crime scene
- Use it to get medical treatment
- Use it to reveal someone’s dirty laundry, such as being illegitimate or unable to get health insurance due to a genetic condition
- Use it to protect against being called out for a genetic condition, much the way people will buy clean urine to pass a drug test
- Heck, they could have started cloning people, or breeding people
That said, it may just be a matter of time before one of these DNA storage places is hacked – with actual DNA. As you may recall, researchers are looking into storing data on DNA. Apparently there is also research going on into how you could store malware on DNA, and then submit that DNA to a DNA storage service, where it would “come to life” and start stealing data. “The researchers were even able to encode a strand of synthetic DNA to contain malware, allowing them to take remote control of a computer being used to sequence and process genetic data,” writes Usha Lee McFarling in STAT.
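Storing data on DNA starts with a simple idea: two bits per base. The mapping below is an illustrative toy – real DNA storage schemes (and the malware encoding in that research) add error correction and avoid sequences that are hard to synthesize or sequence.

```python
# The textbook two-bits-per-base mapping; real schemes add error
# correction and avoid long runs of the same base.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four bases."""
    bits = "".join(format(byte, "08b") for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(strand: str) -> bytes:
    """Decode every four bases back into a byte."""
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

The attack worked because the sequencer’s software read a strand like this back into bytes and then mishandled those bytes – the DNA itself was just a delivery medium.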
Meanwhile, the company is doing all the appropriate things. It’s not only recommending that people change their passwords; it’s forcing everyone to do so (though it isn’t clear whether users could just put in the same password they’d used before). It has also set up a round-the-clock security team to answer user questions. It is working on setting up two-factor authentication (and yes, it should have done that in the first place), though it sounds like it will only “recommend” it rather than require it. And it is looking into how the data got stolen in the first place, and why it didn’t detect the theft at the time.
But it could have been so much more interesting.
Configuration is hard.
At least, that’s the conclusion to draw from a recent storage security issue where the Los Angeles County hotline number, 211, was storing many of the records regarding its hotline calls in the cloud. Except instead of keeping it secured, as would be required by law as a medical record, the organization had a number of its files configured to be publicly available.
While not all the files themselves were publicly available, a number of them were, which meant that anyone who happened to have the URL of that Amazon AWS resource could download the information stored there. That included “access credentials for those operating the 211 system, email addresses for contacts and registered resources of LA County 211, and most troubling, detailed call notes,” according to the organization discovering the error. “These notes describe the reason for the calls, including personally identifying information for people reporting the problem, persons in need, and, where applicable, their reported abusers. Included in the more than 3 million rows of call logs are 200,000 rows of detailed notes, including graphic descriptions of elder abuse, child abuse, and suicidal distress, raising serious, large-scale privacy concerns. In many of these cases, full names, phone numbers, addresses, and even 33,000 instances of full Social Security numbers are revealed among the data.”
Los Angeles County 211 blamed the issue on a configuration problem, according to the report. Fortunately, the problem has now been fixed, and the Amazon AWS files have been properly configured.
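Misconfigurations like this usually come down to a bucket policy or ACL granting access to everyone. Here’s a simplified sketch of checking an S3 bucket policy document for public read access; real AWS access evaluation also involves ACLs, Block Public Access settings, and policy conditions, so treat this as an illustration of the pattern, not a complete audit.

```python
import json

def allows_public_read(policy_json: str) -> bool:
    """True if any statement grants s3:GetObject (or broader) to
    everyone. A deliberately simplified check."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        # A Principal of "*" (bare or under "AWS") means "everyone."
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # Action can be a single string or a list.
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        grants_read = any(a in ("s3:GetObject", "s3:*", "*") for a in actions)
        if stmt.get("Effect") == "Allow" and is_public and grants_read:
            return True
    return False
```

This is essentially the check researchers like UpGuard automate at scale: look for that `"Principal": "*"` pattern across the world’s buckets and see what falls out.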
How this was discovered is actually the more interesting part. It turns out that there’s this company called UpGuard, and they do the same sort of thing that hackers do – roam the world looking for open back doors and ports and unpatched systems and so on. But when they find them, instead of breaking in, they contact the company and let it know so that the problem can be corrected.
Then Upguard alerts the media so people in other companies can also be aware of these problems.
So, that’s what happened here. Somebody was playing around with Amazon AWS links and realized they could get in, so they wandered around for a while, taking screenshots of the available data and eventually letting the organization – and, later, the media – know.
The group of people who do this are called the UpGuard Cyber Risk Research Team, and they have it down to a science. “The UpGuard Cyber Risk Research team follows the processes and procedures detailed in the internal governance document ‘UpGuard Breach Research Process’ for breach research, notification, and disclosure,” the company writes in its Cyber Risk Research Guidelines.
Needless to say, UpGuard considers itself a “white hat” or ethical hacker, as opposed to the “black hats” who do the same thing but steal the information or sell access to it. “The UpGuard Cyber Risk Research team finds publicly exposed data, helps the owners secure it, and shares information on how these exposures can be avoided,” the company explains. “Reducing data exposures is a public good, and the vast majority of individuals whose data is leaked lack the capacity to identify and remove those exposures themselves. Publicizing these findings raises awareness of the problem of data breaches, both in its scale and the severity of the data exposed.”
Ethical hacking! How chaotic good can you get?
Not that the company is doing this for purely altruistic reasons. “While we believe this activity provides a benefit to the public, and indeed to ourselves as private citizens, it also benefits UpGuard in that UpGuard provides solutions for preventing data breaches and a mature market for cyber risk mitigation would logically benefit UpGuard,” the company goes on to explain.
That said, it doesn’t try to shake down its subjects. “UpGuard never uses the discovery of a data breach to approach any affected entity in a sales capacity for UpGuard’s separate enterprise services,” the company writes. It also appears that the company is essentially doing a passive search, looking for security holes, as opposed to, say, using social engineering to try to create vulnerabilities.
It isn’t entirely clear to what extent the entities that are exposing their data have any say-so in whether the vulnerability gets published. (Once it’s secured, of course.) “The UpGuard Cyber Risk Team can also work to help secure a data exposure without publishing a report,” the company writes. “The guiding decision in a decision to publicize a breach is whether the public interest is best served by a public report. UpGuard has no obligation to report exposed data. As an institution, we feel compelled to promote visibility and address as many leaked data sets as we feel appropriate. The research team evaluates the projected impact of each data breach, and other relevant factors, in order to prioritize breach notifications.”
Though the company does go on to add, “The manner in which the breached entity responds to the data breach notification may impact the manner in which media are made aware of the situation and when the information is presented.” Heh. It would be fun to be a fly on the wall in some of those instances.
In any event, for some people, that has got to be the funnest job in the world.
There is such a thing as a device specifically intended to destroy a hard disk drive.
In fact, there’s actually a number of such devices. This isn’t just degaussers and so on that are intended to wipe the data from the hard disk drive itself through magnetism (or de-magnetism, as the case may be). No, this is purely physical destruction. Some of them even brag that they don’t use electricity (which would actually be handy, if you were, say, about to be overrun by someone and they had cut your power or something).
Some of them punch a hole through the disk drive, while others shred the platters. Others just crunch it completely.
“Other hard drive destroyers just fold or punch a couple of holes in a hard drive,” notes one vendor, saying that its product “obliterates hard drives with a potent combination of 20 tons of force and corrugated crushing plates. The result is a drive that’s rendered hopelessly inoperable, with every inch of the media totally ruined.”
This may seem like overkill. Not to mention, wasteful. You can’t donate the hard disk drive to Computers for Kids or something so they can be used by someone else? You can’t even recycle the components? But as we’ve mentioned before, if you absolutely, positively can’t let the data get to anybody else, the only sure way is to physically destroy the drive.
And while shooting it with a .45, taking it apart with a hammer, and so on, are all great ways to do it (or, if you’re Terry Pratchett, running a steamroller over it), aside from letting out your frustration, what does a company do when it needs to destroy hundreds of disks on a regular basis? Nobody is that frustrated.
Enter the hard disk drive destruction device, some of which claim that they can destroy multiple drives at once, or can destroy a drive in five seconds. If you have hundreds of drives to destroy on a regular basis, that’s the way to go. Or, in particular, if you have a company, perhaps, that’s in the business of destroying hard disk drives for people.
The problem, of course, is that if you’re using such a company, you as the hard disk drive owner have to make sure it really is destroying the drives, and not just putting them on eBay with all the data still present, which happens periodically. Security experts buy used hard disk drives on Craigslist and similar sites from time to time, not just to see what goodies they can pick up, but to see whether any companies can be shamed for doing this.
Not surprisingly, these machines can be expensive. Even the one that doesn’t use electricity, which is essentially a fancy vise, costs more than a thousand dollars, and that one is pretty onesy-twosy as far as destroying hard disk drives goes. A device that shreds two hard disk drives per minute can cost more than $30,000, while the “20 tons of force” machine costs more than $16,000. (Though right now it’s on sale for $11,000. Is there some reason May is the bargain month for these devices? Some sort of post-Tax Day sale, perhaps?)
Needless to say, as with so many things today, you can see video of some of these devices in action on YouTube. Search for machines destroying hard disk drives for hours of destructive fun.
OMG. Can our long international nightmare be over? Federal prosecutors won a criminal case stemming from HP’s purchase of Autonomy, the e-discovery vendor it bought in 2011 for more than $10 billion, of which it later had to write off $8.8 billion after claiming that the U.K. company had inflated its value.
Sushovan Hussain, “the former chief financial officer of Autonomy Corp. was found guilty of orchestrating an accounting fraud to arrive at the $10.3 billion price Hewlett-Packard Co. paid for the U.K. software maker more than six years ago,” writes Joel Rosenblatt for Bloomberg. “A jury voted to convict Sushovan Hussain Monday on all 16 counts of wire and securities fraud after three days of deliberations in San Francisco federal court.”
The trial lasted three months, according to the Telegraph. Hussain was first charged by prosecutors in 2016. He was convicted of one count of conspiracy, fourteen counts of wire fraud, and one count of securities fraud. Assuming the conviction stands, he faces a maximum sentence of 20 years in prison, plus a $250,000 fine and restitution, for the conspiracy count and each of the wire fraud counts, as well as a maximum of 25 years in prison, plus a $250,000 fine and restitution, for the securities fraud count. He was supposed to have been sentenced May 8, but that appears to have been changed to August. In the meantime, he has had to surrender his passport, wear a GPS bracelet, and stay away from airports and bus stations, according to the Times UK (which also has even more detail about Autonomy’s accounting problems).
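Those statutory maximums stack up quickly on paper. As a back-of-the-envelope illustration (in practice, sentences for related counts typically run concurrently and come in far lower):

```python
# Theoretical stacked statutory maximum for the counts described above:
# 1 conspiracy count plus 14 wire fraud counts at 20 years each, and
# 1 securities fraud count at 25 years, each carrying a $250,000 fine.
def stacked_maximums(counts_20yr=15, counts_25yr=1, fine_per_count=250_000):
    years = counts_20yr * 20 + counts_25yr * 25
    fines = (counts_20yr + counts_25yr) * fine_per_count
    return years, fines

years, fines = stacked_maximums()
print(years)  # 325 years, on paper
print(fines)  # 4000000 dollars in fines, before restitution
```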
So what did he do?
“Specifically, Hussain used backdated contracts, roundtrips, channel stuffing, and other forms of accounting fraud to inflate Autonomy’s publicly-reported revenues by as much as 14.6% in 2009, 17.9% in 2010, 21.5% in the first quarter of 2011, and 12.4% in the second quarter of 2011,” according to a Department of Justice press release. “In addition, Hussain, and his co-conspirators, fraudulently concealed from investors and market analysts the scale of Autonomy’s hardware sales, which were used to boost the company’s reported top-line revenue. Autonomy’s total revenues included re-sold hardware of approximately $53.3 million in 2009, $99.08 million in 2010, $20.09 million in the first quarter of 2011, and $20.85 million in the second quarter of 2011.”
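To see what those inflation percentages imply, you can back out what the pre-inflation revenue would have been from a reported figure. A minimal sketch; the $870 million input below is a hypothetical number of our own, since the DOJ release gives only the percentages:

```python
# Back out the implied pre-inflation revenue from a reported figure
# that was inflated by a given rate (e.g. 0.146 for the 14.6% the
# DOJ alleges for 2009). The $870M reported figure is hypothetical.
def implied_true_revenue(reported, inflation_rate):
    """If reported = true * (1 + rate), recover the true figure."""
    return reported / (1 + inflation_rate)

print(round(implied_true_revenue(870.0, 0.146), 1))  # 759.2 (millions)
```

In other words, at a 14.6% inflation rate, roughly one dollar in nine of the reported top line would not have been real.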
On the other hand, HP didn’t come off so great in the case, either. For one thing, it didn’t help HP’s case that several of its other purchases also had to be partially written off.
“Hussain’s lawyer argued that HP bought, and then hobbled, an increasingly profitable software company,” Rosenblatt writes. “It was one of a string of failed acquisitions requiring write-offs, a list that includes Palm, Compaq, and Electronic Data Systems, he said.”
“Even if his conviction is upheld, HP’s acquisition of Autonomy should be remembered as one of the most poorly thought out and incompetently executed deals of all time,” notes the Financial Times in an unsigned editorial. “It does not suggest that HP was anything less than catastrophically careless.”
But hey! We’re not done!
“It also gives the company momentum as it heads toward a trial next year in London in a $5 billion civil suit against Hussain and Autonomy co-founder and former Chief Executive Officer Mike Lynch,” Rosenblatt writes.
Plus, Hussain plans to appeal, the FT writes.
Here we go again.
Yet another court has ruled that U.S. Customs and Border Protection agents have to have some sort of individualized suspicion to search people’s electronics. The Fourth Circuit has now agreed.
The Department of Homeland Security has said in the past that it is entitled to broad powers of search within 100 miles of the U.S. border. Given that the U.S. is 3,000 miles across, that may not sound like much, but given how bumpy our border is, it covers a lot of territory. More to the point, it covers a lot of territory where people live.
CityLab actually has a really cool map of just how much territory we’re talking about. “The border zone is home to 65.3 percent of the entire U.S. population, and around 75 percent of the U.S. Hispanic population,” writes Tanvi Misra. “This zone, which hugs the entire edge of the United States and runs 100 air miles inside, includes some of the densest cities—New York, Philadelphia, and Chicago. It also includes all of Michigan and Florida, and half of Ohio and Pennsylvania.”
And those broad powers of search are…pretty broad. “In the ‘border zone,’ different legal standards apply,” Misra writes. “Agents can enter private property, set up highway checkpoints, have wide discretion to stop, question, and detain individuals they suspect to have committed immigration violations—and can even use race and ethnicity as factors to do so.”
Consequently, over the past few years, there have been a number of incidents of people having their portable electronics, from laptops to cellphones to cameras, searched at the border. That includes the electronics of journalists and attorneys, who are supposed to have some degree of protection against such searches.
And we’re not just talking about a Border Patrol agent taking a cellphone and scanning it to see what apps it has. This involves actually taking the person’s electronics, shipping them hundreds of miles to a lab, and performing a full forensic examination, a process that can take as long as seven months. It also appears that this sort of search has been ramping up under President Donald Trump.
This has been drawing the ire of civil liberties organizations such as the American Civil Liberties Union and the Electronic Frontier Foundation for some time.
Fortunately, over the past couple of years, courts have started to agree, especially after the Supreme Court’s Riley decision, which said law enforcement officials had to have a warrant to search someone’s cellphone. In March, the Eleventh Circuit, while it did uphold a border search, at least produced a strong dissent. Also in March, the Fifth Circuit issued a similar ruling, although it stopped short of actually saying agents couldn’t search devices.
Most recently, the Fourth Circuit made a similar ruling, in a case called Kolsuz. “After Riley, we think it is clear that a forensic search of a digital phone must be treated as a nonroutine border search, requiring some form of individualized suspicion,” the court writes. Indeed, the court suggested that it might have gone further had the appeal asked for it. “Because Kolsuz does not challenge the initial manual search of his phone at Dulles, we have no occasion here to consider whether Riley calls into question the permissibility of suspicionless manual searches of digital devices at the border.”
In response to criticism and rulings, Customs and Border Protection has been backing off some. For example, in January it clarified that agents could only search the physical devices themselves, not whatever storage they might have access to in the cloud. A number of people are also taking steps such as not taking their own phones and laptops across the border, or wiping them as they approach the border.
Where this goes from here isn’t clear. So far, the lower courts are mostly agreeing. In addition, the civil liberties organizations have been pushing for a test case that would extend the Riley decision to laptops at the border. This may yet end up at the Supreme Court, though it isn’t clear how the current court would rule in this political climate.