We have always been at war with Eastasia.
In an era when people have to resort to smartphones and the Internet just to look up phone numbers, experts are warning that the next wave of hacking might not take information, but add or change it instead.
“Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial of service operations and data deletion attacks undermine availability,” wrote Director of National Intelligence James Clapper, in testimony presented to the House Subcommittee on Intelligence earlier this month. “In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e., accuracy and reliability) instead of deleting it or disrupting access to it. Decision making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”
In particular, hackers or terrorists could wreak havoc by changing data about infrastructure, postulates Patrick Tucker, technology editor for Defense One. Remember that as far back as Die Hard 2, the bad guys were crashing planes by feeding them incorrect data on their actual altitude.
Clapper isn’t the only one to suggest this. For example, when the Office of Personnel Management revealed earlier this year that it had been hacked, some speculated that more could be involved than simply taking information. “For those of us who wear tinfoil hats – what if records were not only taken, but some were added as well?” writes Steve Ragan in CSO Online. “Would the OPM be able to tell?”
As it turns out, Clapper has actually been saying this for some time; articles quoting him talking about hackers who could “change or manipulate” information have been published since at least February, when he testified to the Senate Armed Services Committee. “[Clapper] described future attacks which will change or manipulate [there’s that phrase again] electronic information in order to compromise its integrity,” Business Korea wrote in March. “In the future, hackers may launch more clandestine cyber espionage programs that manipulate data so victims lose credibility.”
What might such an attack have done, for example, if at some point someone had added data to government records to make it appear that President Obama actually had been born in Kenya?
People have always added fake people to rosters to get additional paychecks and other benefits – remember M*A*S*H’s “Captain Tuttle”? – but doing it through the computer can make it a lot easier. “A doctor pulls up your electronic medical records to discover that they have been changed and you have been receiving the wrong dosage of a lifesaving medicine,” writes Rep. John Ratcliffe (R-Texas), who chairs the Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection and Security Technologies, and sits on the Judiciary Committee, in The Hill. “Now imagine this happening at every hospital in the United States.”
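If silent tampering like that is the threat, the textbook mitigation is to make records tamper-evident: store a keyed digest alongside each record and verify it on every read. A minimal sketch in Python using the standard library (the key and the record format here are made up for illustration):

```python
import hashlib
import hmac

# Hypothetical server-side key, kept out of the database itself.
KEY = b"server-side-secret"

def sign(record: bytes) -> str:
    """Return a keyed digest (HMAC-SHA256) for a record."""
    return hmac.new(KEY, record, hashlib.sha256).hexdigest()

record = b"patient=1234;drug=warfarin;dose=5mg"
tag = sign(record)

# A legitimate read verifies the stored digest:
assert hmac.compare_digest(sign(record), tag)

# A tampered record no longer matches its digest:
tampered = b"patient=1234;drug=warfarin;dose=50mg"
assert not hmac.compare_digest(sign(tampered), tag)
```

This only helps, of course, if the attacker can't also recompute the digests, which is why the key has to live somewhere the database intruder can't reach.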
Think the whole notion of messing things up by changing information is farfetched? How many “Han shot first” arguments have you seen? And that’s with film that millions of people have seen – not to mention someone who actually admits that they changed something.
Of course, if you really want to drive yourself crazy, you can remind yourself that this is the age of Edward Snowden. Maybe Clapper is warning us to beware of hackers changing data because he wants us to be suspicious of our data. Maybe he’s going to be changing the data – and is laying the groundwork now to blame it on hackers.
If you need me, I’ll be hiding under the bed.
“Joe, I know how we can make a mint. We just get a database full of personal information for a bunch of really gullible guys with something to hide, and then we can sell it!”
“That’s great, Ron, but how do we get the database of personal information in the first place?”
Which is what gave me the idea for a database that will actually be the most elaborately designed honeypot in history.
Okay, work with me here.
First of all, we find a bunch of guys who admit to strangers that they’re looking to cheat on their wives. They even provide their contact information.
Heck, they’re even willing to pay for the privilege!
Meanwhile, we create a bunch of fake female profiles. And not only do these guys not realize the women are fake, but they have conversations with them! We just send out messages periodically from these fake women so the guys think there really are women in this database interested in cheating with them.
Who knows. Maybe we can even convince the guys to pay more to talk to these fake women.
Oh, sure, some guys are going to figure it out, or feel guilty, and drop out. But the ones who keep on – we know we’ve got ‘em.
And the ones who do drop out, and feel guilty, and want us to delete all their information so their wives never find out? We tell them they have to pay more to delete their information! And most of them pay it!
And then we don’t delete it all!
After all, data is valuable.
Then, when the database gets big enough, we tell them it’s been hacked! And all their information has been stolen! And to make that more plausible, we’ll use a really simple encryption technique that would make it easy for someone to hack it.
(Not that we need to worry much about that. They’ll end up picking really common, easy passwords.)
That’s when we can sell the data. We can market it as “Gullible guys with lots of disposable income who won’t want to go to the police.”
We can even sell the database to blackmailers. Sure, none of these guys actually cheated – but how would they convince their wives of that? Just the fact that they signed up to be in a database of cheaters is damning enough, isn’t it?
And yes, some of the guys might be kind of upset. There might be some collateral damage. We have to be prepared for that.
But think of the money we’d make! Plus, we can do it again! All the publicity will probably cause even more gullible guys to join!
Still. Maybe we don’t want to do it for real. Maybe we should just write a movie script about it.
Naah. Nobody’d believe it.
First, lightning struck the utility grid used by one of Google’s three data centers in St. Ghislain, Belgium, a small town about 50 miles southwest of Brussels, knocking the center offline and causing some data loss.
You know how they say lightning never strikes twice? Well, it hit the grid that Google uses four times.
Naturally, there were backup systems, but they failed too, writes Yevgeniy Sverdlik in Data Center Knowledge. “Besides failover systems that switch to auxiliary power when primary power source goes offline, servers in Google data centers have on-board batteries for extra backup,” he writes. “But some of the servers failed anyway because of ‘extended or repeated battery drain,’ according to the incident report.”
The storage in question was part of the Google Compute Engine (GCE) disks, which allow customers to run cloud-based virtual machines, according to Mike Brown in the International Business Times. “It’s not the first time GCE has had issues,” Brown writes. “In February, GCE experienced a global outage that lasted for nearly two hours affecting businesses that depend on GCE for their day-to-day operations. GCE is seen as a competitor to Amazon AWS and Microsoft Azure for dominance of the cloud, but instances like these will shake consumer confidence in the GCE brand as they look for the most stable cloud services possible.”
Brown noted that “To be sure, AWS and Azure have also had their share of outages,” such as Virginia thunderstorms in 2012 that took out major Internet services such as Netflix, Pinterest, and Instagram.
Altogether, Google servers had problems for about five days, with a resultant loss of 0.000001 percent of data, Sverdlik writes. (How many bytes that is, Google didn’t say.) It’s not known which clients were affected, or what type of data was lost, according to the BBC.
“Having worked in data recovery, that’s a remarkable achievement and a definite feather in Google’s bow,” commented one reader.
Google staff apparently also had to do some manual work on the servers to retrieve data, the company wrote in its incident report. “In almost all cases the data was successfully committed to stable storage, although manual intervention was required in order to restore the systems to their normal serving state.” The company also pointed out that users needed to make additional copies of data in case of such incidents. “GCE instances and persistent disks within a zone exist in a single Google data center and are therefore unavoidably vulnerable to data center-scale disasters,” Google wrote, recommending GCE snapshots and Google Cloud Storage.
Next, a failed chilled water pipe caused the air conditioning system to fail in a CenturyLink data center in Weehawken, N.J. This data center provides facilities for a number of companies, including education company Pearson, Thomson Reuters, and trading companies BATS Global Markets and Investment Technology Group.
As a precaution, CenturyLink reportedly shut down its systems, meaning that the companies went offline as well. Incidentally, this was happening at the same time the stock market was tanking last week.
And this is just in August.
By the way, the hurricane season typically enters its heaviest phase on September 1. We’re already up through Fred.
“OMG, did you hear? You can now mail a hard drive in to Google to store it on the cloud! Squee!”
Not exactly. Hold your horses, people.
Yes, it’s true that being able to mail a hard disk device for import into a cloud storage service helps a lot when you have a lot of data. Nobody’s arguing with that. As Google points out, trying to upload just a terabyte can take more than 100 days. And as we all know from the station-wagon-full-of-backup-tapes calculation, sometimes that’s just the fastest way.
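The arithmetic behind that 100-day claim is easy to check; it implies an effective uplink somewhere around, or below, 1 Mbps. A quick sketch (the link speed is an assumption chosen to roughly match Google’s figure):

```python
def upload_days(size_tb, mbps):
    """Days needed to push `size_tb` (decimal) terabytes over a `mbps` link."""
    bits = size_tb * 1e12 * 8          # terabytes -> bits
    seconds = bits / (mbps * 1e6)      # megabits/sec -> bits/sec
    return seconds / 86400             # seconds -> days

# One terabyte at a steady 1 Mbps:
print(round(upload_days(1, 1)))   # ≈ 93 days; any slower and it tops 100
```

At those rates, a courier carrying a disk really is the higher-bandwidth channel.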
But this isn’t really a new thing. Sorry.
So, what actually is new here? Well, there’s the name: Offline Media Import/Export.
- Instead of mailing in your device to Google directly, you now do it through third-party service providers. So far, that list is exactly one vendor long: Iron Mountain, which performs the service in North America. Providers for EMEA and APAC are reportedly coming. You can, however, also do it through any other provider you like, Google writes. Meanwhile, it looks like the direct-to-Google service, which was only experimental anyway, might be kaput.
- Previously, you could send in data only on a hard drive, which Google would upload for you for $80 apiece. Now you can also send in data on tapes or thumb drives. Google doesn’t specify what kind of devices or formats it supports; presumably that’s up to the third-party vendor. (Can you send in that box of 8” floppy disks you found in the closet? Who knows?) Incidentally, Amazon has also supported storage devices other than hard drives; Microsoft supports only hard drives, and only up to 6TB.
What happens to the storage device afterwards? That’s up to you, writes Ben Chong, product manager, in a blog post describing the service. “Once data upload is complete, Iron Mountain can send the hard drive back to you, store it within their vault or destroy it.”
How much will it cost? Google didn’t say, because you now pay the third-party provider, not Google. “Neither Google nor Iron Mountain call out pricing for the service, but it’s likely competitive with Amazon’s rates of $80 per storage device and then $2.49 per hour it actually takes to upload to the cloud,” writes Matt Weinberger in Business Insider. Google’s previous service cost an $80 flat fee with no per-hour charge, plus Google supported drives of up to 400TB; Microsoft charges $80 per disk drive.
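For comparison shopping, the Amazon-style pricing Weinberger cites is simple to model: a flat fee per device plus an hourly rate for however long the ingest actually takes. A sketch with hypothetical example numbers:

```python
def import_cost(devices, upload_hours, per_device=80.0, per_hour=2.49):
    """Flat per-device fee plus hourly ingest charge, per the cited Amazon rates."""
    return devices * per_device + upload_hours * per_hour

# e.g. two drives that take a combined 10 hours to ingest:
print(import_cost(2, 10))   # 160 + 24.90 = 184.90
```

The upshot: the per-hour term stays small unless the ingest drags on, so the per-device fee dominates for most shipments.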
Google’s previous service not only supported encryption, it required it. The new service definitely supports encryption, but it isn’t clear whether it’s required. The service’s product page doesn’t even mention encryption, though Chong’s blog post does. “Save and encrypt your data to the media of your choice (hard drives, tapes, etc.) and ship them to the third party service provider through your preferred courier service,” he writes. “The encrypted data will be uploaded to Google Cloud Storage using high speed infrastructure.”
So there you go. Always nice to have a new option, but it’s not really the Great! New! Thing! that some — including Google, apparently — would have you believe.
Now in its 73rd official year, science fiction fandom is grappling with a very present-day problem: How to archive its history in a way that future generations can reference.
“Archiving for the Future,” a panel session held at this week’s World Science Fiction Convention (Worldcon) in Spokane, Wash., included several science fiction historians as well as archiving professionals who discussed the aging of the fandom population, the lack of a clear repository for the history, and the fact that there’s so much material that no single site could actually serve as such a repository.
In the same way that some comic books became scarce because everyone’s moms threw them out, some irreplaceable science fiction fandom material, such as fanzines, has been lost because it was considered ephemera and discarded, panelists lamented. The problem is, many well-known science fiction authors started out as fans and their early work was included in those fanzines. “You have to save everything because you don’t know who that person will become,” one noted. “Some of those people became Harlan Ellison.”
Moreover, material on paper is vulnerable to a variety of ills ranging from moisture to fire. When you get access to material, share it, panelists advised: You never know when your house is going to burn down or be hit by a hurricane.
Even now, in an era where material is born digital, some is considered ephemeral but is actually significant historically, noted participant Leslie Johnston, whose day job is Director of Digital Preservation at the National Archives.
For example, when the Library of Congress announced in 2010 that it would archive Twitter – a project still under some criticism — some people didn’t understand why they’d want to bother saving details of what people had for breakfast, she said. But Twitter has become the first place a number of historical events and reactions to them were documented, such as the death of Osama bin Laden. “Twitter is today’s diaries,” she said.
And when such material does manage to make it to a collection rather than being thrown out, it’s often missing much of the context that gives it value, panelists said, citing cases of getting hundreds of photographs “from Worldcon” but with none of the participants, or even which Worldcon it was, identified. “It always comes down to the metadata,” said panelist Pierre Pettinger Jr., whose particular specialty is costuming. People have had to resort to such techniques as identifying venues based on the woodwork and carpeting shown in the pictures, panelists reported.
It was suggested, though, that crowdsourcing could help with some of that identification. Crowdsourcing has been used by a variety of libraries, from the New York Public Library to the British Library, to help identify and verify material ranging from maps to menus.
While some people may think that the preservation problem is solved once material is scanned or otherwise digitized, that’s no panacea, either, Johnston said. “Digitization is not preservation,” she said. “It’s creating a whole new set of materials that need to be preserved.”
What’s the issue? First of all, some of the digitized formats themselves are vulnerable. “CDs make me crazy,” Johnston said, because of their fragility, and thumb drives aren’t much better, relating the case of one that went through the wash.
Second, as time goes on, the hardware and software required to read material in particular formats can become hard to find, no matter how popular it once was, Johnston said. For example, the industry stopped manufacturing slide projectors three years ago, which will make it more difficult to look at slides going forward. She praised organizations such as the National Audio-Visual Conservation Center in Culpeper, Va., which holds a large archive of such hardware and software.
This loss of data isn’t just with old files, Johnston cautioned, noting that even some more recent material, which used early versions of cutting-edge storage formats, is now inaccessible.
Another issue, particularly with photographs, is that of the rights, panelists reported. Pettinger noted that he often posts images online that are of a lower quality than others he has because of concerns that people will appropriate them.
Similarly, panelists discussed the conflicting rights among the people who owned a picture vs. the people who might appear in it. Few of those subjects ever signed model releases, said fan history specialist Joe Siclari, who added that he always takes down images on request from the people in them. Fandom needs a better education in rights and how those rights can be transferred and archived, panelists said.
What’s needed now is for members of fandom to take responsibility for identifying and organizing the material they have, while they’re still around to do it, panelists said. In addition, fandom should set up a collaborative collection, where it’s accepted that a repository for one kind of material, such as costuming, will be located at one institution, with other institutions acting as repositories for other kinds of material.
Finally, that information also needs to be made available to fandom by creating repository directories, because there’s so much material that no one institution can take it all, Johnston said. That way, aging fans, and their descendants, know the value of the material and the process to follow for donating it.
In addition, there needs to be a canonical list of the types of hardware and software available, and where, that are available to read the different file formats. That way, archivists will be able to find out how to retrieve past material, panelists said.
Ideally, fans of the future would be able to see material in the same way that the writer originally did, Johnston said, citing the example of professor Salman Rushdie’s archive at Emory University.
Meanwhile, people are on their own. “How do we find the place that wants the stuff we have before we croak?” summarized one session attendee.
As you may recall, a year or so ago, activist investor Elliott Management Corp. took a large position in EMC stock, with the goal of “releasing shareholder value” – in other words, selling or reorganizing some of the pieces of EMC and VMware to make more money for stockholders. EMC CEO Joe Tucci has largely been resisting that effort, but a deadline is coming up that may mean something will happen – ranging from EMC buying VMware to VMware buying EMC.
How large is large? Reportedly it was more than $1 billion, which would amount to about 2 percent of its value, and also make it EMC’s fifth-largest shareholder.
So if this is something that’s been going on for a year, why the pressure now? It’s because in January, Elliott and EMC made a “standstill agreement,” which basically means that Elliott would not publicly pressure the company into divesting its holdings in VMware, in return for getting two people on the board of directors, writes Martin Blanc in Bidness Etc. However, that agreement is set to expire in September, writes Anne Shields in Market Realist.
Moreover, Tucci’s on-again, off-again retirement is on again, Shields writes. “EMC’s CEO, Joe Tucci, is also under tremendous pressure to get EMC on the right track before he retires,” she notes. “David Goulden, CEO of EMC’s information infrastructure unit, as well as Patrick Gelsinger, VMware’s present CEO, are seen as equal contenders for EMC’s future CEO position.”
It might sound weird for the subsidiary VMware to buy out the parent EMC, but it makes sense because VMware stock is worth more than EMC stock, writes Blanc. “The move would likely be backed by Elliot Management as it will unlock more value for investors,” he writes. “Secondly, VMware already makes up for 73% of EMC’s entire market capitalization, so it makes more financial sense.”
Also, in some ways, VMware is the stronger company, with EMC facing pressure from flash drive manufacturers, commodity storage manufacturers, and other sources. “EMC would emerge weaker than before,” writes Arik Hesseldahl in Re/Code, which started this whole speculation. “An EMC-minus-VMware scenario leaves the parent with a value of about $11 a share, or less than half what it’s trading for now.”
A VMware acquisition would work like this, according to Hesseldahl: “VMware would issue somewhere between $50 billion and $55 billion worth of new shares,” he writes. “A portion of those shares — about $30 billion — would be used to cancel EMC’s 80 percent stake in VMware, which currently has a market value of $38.5 billion. The remaining new VMware shares would be issued to current EMC shareholders, who will also get some cash generated from the issuance of about $10 billion in new debt.”
Putting VMware in charge would also make the merged company more forward-looking. “Inverting the company to make VMware the pinnacle would send a message that says storage hardware is not the future and virtualization/cloud (whatever that means) is where the world is headed,” agrees analyst Chris Evans. It would probably also play better with the companies’ various partners, he adds.
Ultimately, some sort of acquisition between the two companies wouldn’t have much long-run effect on how they operate, writes Chris Mellor in The Register UK. “Not much would have changed fundamentally, on the good ship EMC, apart from the deck chair arrangement and signage,” he notes.
One big change? Integrating the two companies could reduce their operational expenses by almost $1 billion, writes Shields. And indeed, the most recent EMC earnings call hinted at such a possibility, with the company promising $850 million in savings by the end of 2016, though it didn’t say how.
That said, the stock market wasn’t necessarily thrilled about the potential merger news, particularly from the VMware side, writes Shields. “EMC shares rose more than 3%, whereas VMware shares fell more than 5% on August 5, 2015,” she notes.
Companies that collect large amounts of user data, such as Facebook, Google, and Twitter, may have a tougher time fighting government requests for that information after a recent court case.
New York prosecutors had filed 381 warrants in 2013 to get photos and private information from Facebook on hundreds of public employees suspected of Social Security fraud. A Manhattan-based state appeals court unanimously ruled that the only way to challenge the warrants was for defendants in criminal cases to move to suppress the evidence they produced, according to Reuters.
This is the third time Facebook has lost on this ruling, and it had already provided the requested data to prosecutors.
Reportedly, “Facebook pages showed public employees who claimed to be disabled riding jet skis, playing golf and participating in martial arts events,” Reuters writes. By collecting the Facebook data, the government has collected nearly $25 million from those people.
It’s not the first time that people have been fired or lost insurance due to pictures on Facebook. What was new in this case was the government using warrants to gather information about the people from Facebook, some of whom were September 11 first responders. It also used private messages, not just information available publicly.
This appeal arose from the largest set of search warrants that Facebook had ever received, according to the brief on the case. It noted that of the 381 warrants, only 62 of the targeted Facebook users were charged with any crime. (Eventually, 134 users had charges filed.)
“The warrants also contained broad gag provisions barring Facebook from informing its users what the Government was forcing it to do,” the brief continues. “The Government’s bulk warrants, which demand ‘all’ communications and information in 24 broad categories from the 381 targeted accounts, are the digital equivalent of seizing everything in someone’s home. Except here, it is not a single home but an entire neighborhood of nearly 400 homes. The vast scope of the Government’s search and seizure here would be unthinkable in the physical world.”
Facebook’s objections were primarily to the fishing expedition aspect of the warrants, noting that only a fraction of the information requested had anything to do with proving Social Security fraud, and that there was no provision for the government to return the data to the users.
Throwing a sop, the court agreed that Facebook had a point. “Our holding today does not mean that we do not appreciate Facebook’s concerns about the scope of the bulk warrants issued here or about the district attorney’s alleged right to indefinitely retain the seized accounts of the uncharged Facebook users,” the five-judge panel wrote, according to NBC.
Facebook also pointed out that as the holder of the data, it had to do all the work to collect it for the police, compared with a typical search warrant where the police are doing the searching.
Ultimately, though, that wasn’t enough. “If the cops show up at your door with a warrant to search your house, you have to let them search,” writes Orin Kerr in the Volokh Conspiracy legal blog. “You can’t stop them if you have legal concerns about the warrant. And if a target who is handed a warrant can’t bring a pre-enforcement challenge, then why should Facebook have greater rights to bring such a challenge on behalf of the targets, at least absent legislation giving them that right?”
While this particular action happened to target Facebook, there were amici curiae briefs from companies including Google, Microsoft, Pinterest, Twitter, and Yelp (as well as the New York Civil Liberties Union), because it could have just as easily been them. (Similarly, Microsoft is carrying the water for a case concerning the right of the U.S. government to seize data stored offshore, with Apple, AT&T, Cisco, and Verizon backing it up.) Tumblr, Foursquare, Kickstarter, and Meetup also filed a brief, arguing that “the lower court’s decision was especially troubling for startup online platforms like themselves” because smaller companies often lacked the financial resources to challenge warrants.
Part of the problem, the companies acknowledged, is that their business models are predicated on people being willing to share information about themselves online, which is sort of hard to do when you feel like the government could come in and snap up anything you post and the company can’t even warn you about it. Or, in lawyer talk, “Here that freeze also threatens the willingness of users to participate in online platforms — fora for speech of all kinds — that small and mid-size companies offer, for fear that their private information will be obtained improperly and without their knowledge,” the brief said.
Part of the problem, too, is that at least some of these people actually did appear to be committing fraud. In the same way that fighting for the right of people to encrypt their data and not reveal the key to the government means you end up supporting child pornographers, it can be more challenging to support legal principles if in the process crooks go free.
Facebook is reportedly considering whether to appeal the decision.
Legislation that had allowed law enforcement and intelligence agencies in the U.K. to force communications providers to store records of their customers’ activities has been shot down by the country’s High Court, but the government has nine months – until March 2016 – to rewrite the law to make it more palatable.
Plus, the UK has already put forth another bill that could be even worse.
The Data Retention and Investigatory Powers Act (DRIPA) had been challenged by Members of Parliament David Davis and Tom Watson on the grounds that it lacked sufficient privacy and data-protection safeguards, Politico writes. “This is the first time a British national court has struck down primary legislation in the country, and the first time that a member of parliament has brought a successful judicial review against the government,” the site adds.
What was wrong with the law? “The MPs complained that use of communications data was not limited to cases involving serious crime, that individual notices of data retention were kept secret, and that no provision was made for those under obligation of professional confidentiality, in particular lawyers and journalists,” writes the Guardian. “Nor, they argued, were there adequate safeguards against communications data leaving the EU.”
Critics also said it had been rushed through Parliament, which is what led to the unusual judicial challenge, the BBC writes. “Normally it would be scrutinized in Parliament, but the two MPs say that because the Data Retention and Investigatory Powers Act was rushed through in days, there was no time for proper parliamentary scrutiny and that this judicial review was their only option.” Legislation in the UK usually takes months to pass, but the government claimed it needed the bill right away to protect British citizens against terrorism.
The law governed gathering information about whom suspects contact by telephone or email, according to the BBC, and allowed the data to be stored for up to a year. “This does not include content but does include the fact that calls and emails are made, by whom, to whom and how often,” the BBC writes. “Some half a million requests are made each year for this data.”
As with similar laws in the U.S., DRIPA supporters said the law was important to save lives in cases such as kidnapping and potential suicides.
The UK bill followed a similar one for the European Union as a whole, which was invalidated by the Court of Justice of the European Union in April 2014. “The court struck down the directive largely because of poor access controls, although it was also concerned that citizens were not being informed about who was holding their data, and that some of the data might unlawfully leave the EU,” Politico explains. The MPs also drew on a number of EU laws in their arguments against the law.
DRIPA wasn’t just an issue for residents of the UK. The law also had a clause making it clear that foreign firms holding data on U.K. citizens could also be served with a warrant to hand over information. Anyone providing a “communication service” to customers in the UK, regardless of where that service is provided from, needed to comply, writes Lexology. “This was previously considered to be a grey area, and this clarification has significant ramifications for those providing communication services in the U.K. from overseas,” Lexology adds.
Exactly how the law could be rewritten is now being discussed. It could include more time to allow proper scrutiny of the proposed measures, writes the Media Policy Project blog of the London School of Economics.
The UK government has already said it plans to appeal the ruling. “I do think there is a risk here of giving succour to the paranoid liberal bourgeoisie whose peculiar fears are placed ahead of the interests of the people,” Security Minister John Hayes reportedly told BBC Radio 4’s The World at One.
But Parliament is already slated to see next month another bill that could be even worse: the Investigatory Powers Bill, writes the Huffington Post. “Revealed during the Queen’s Speech as a replacement for the emergency bill, the Investigatory Powers Bill has potentially far greater reach than even DRIPA with some of the preliminary wording suggesting that if fully approved it would allow the Government powers to ban encrypted communications services such as WhatsApp, iMessage and Facebook Messenger,” the Post writes.
Many of us who were around in 1982 remember a shocking incident. National Geographic ran a picture on its cover of the Great Pyramids of Giza, but it later developed (no pun intended) that the magazine had moved two of the pyramids closer together so they’d both fit into the picture. The world was horrified. This was National Geographic! Could we ever trust a published photograph again?
“No one might have noticed if the photographer, Gordon Gahan, hadn’t complained,” notes the website hoaxes.org. “It then became a source of major controversy. Sheila Reaves, a journalism professor at the University of Wisconsin has speculated that, ‘The enormity of moving such a large object brought home to people that you can move a shoulder or a smile.’”
The backlash was fierce. “The magazine was harshly criticized for this unethical act, and later, when the director of photography was replaced, the magazine issued a formal statement of apology and promised to never alter their images again,” writes Gettysburg College, which uses the incident as an ethics example in a journalism course.
This was hardly the first case of photo doctoring (though it was one of the first well-known cases of digital photo doctoring). But now a company claims it can spot such tampering by examining the artifacts that editing and re-saving leave behind in the file itself.
The company and product are both called Verifeyed. The product works by using machine learning to figure out whether photos have been through editing software and can establish which camera or phone was used to take them, writes Lucy England in Business Insider.
“Traditional digital cameras have several components: an optical system, then a photo sensor, and finally a storage system,” England explains. “If an image has been tampered with, it is decompressed, loaded onto photo-editing software, manipulated, and recompressed.”
But every time you compress a JPEG image, some information is lost to make a smaller file. When a JPEG image is compressed, it is split into adjacent blocks of pixels. Those blocks are compressed separately but still have to relate to one another in the same way they would in the original image. If someone has made changes to parts of the image, the changes will not relate to one another in the same way, and the Verifeyed algorithm can spot these differences, England writes.
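The block-level principle England describes can be sketched in a few lines of Python. To be clear, this is not Verifeyed’s actual (proprietary) algorithm; it’s a toy illustration of the underlying idea, under the simplifying assumption that lossy compression snaps pixel values onto fixed quantization levels. A region edited after compression drifts off those levels, while untouched regions stay on them:

```python
# Toy illustration of JPEG-style tamper detection (NOT Verifeyed's
# actual algorithm): lossy compression quantizes pixel values block
# by block, so a region edited after the fact carries quantization
# residue its untouched neighbors lack.

def quantize_block(block, step=16):
    """Simulate lossy compression: snap each pixel to a quantization level."""
    return [step * round(v / step) for v in block]

def residue(block, step=16):
    """Total distance of a block's pixels from the nearest quantization levels."""
    return sum(abs(v - step * round(v / step)) for v in block)

# An "original" image: two blocks that have already been compressed once,
# so every pixel sits exactly on a quantization level.
block_a = quantize_block([23, 110, 201, 57])
block_b = quantize_block([88, 140, 34, 250])

# Tamper with block_b after compression: editing shifts values off the levels.
tampered_b = [v + 7 for v in block_b]

assert residue(block_a) == 0      # untouched block: no residue
assert residue(tampered_b) > 0    # edited block: telltale residue
```

A real detector works on 8×8 DCT coefficient blocks rather than raw pixels, and looks for blocks whose quantization statistics are inconsistent with the rest of the image, but the logic is the same: recompression leaves fingerprints, and edits disturb them unevenly.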
The company is signing up clients from organizations such as banks, media companies, and insurance firms, which use the software on pictures that clients submit with claims. As many as 1 out of every 750 photos shows signs of digital tampering, Verifeyed notes. One insurance company found that 1 out of every 1,000 pictures was fraudulent, which the company says saved it $1 million.
(Doctoring insurance photos has a long history. In the book Denial of Disaster, San Francisco librarian Gladys Hansen discovered that many 1906 earthquake pictures had been modified through techniques such as painting flames and clouds of smoke on earthquake-damaged buildings, because insurance companies would pay in the case of fire but not in the case of earthquake damage.)
Verifeyed claims that only 0.01% of the digital images the software examines are false positives. Moreover, it can analyze an image in less than a second, England writes.
So the product has a lot of interesting possibilities. What if someone decides to run Verifeyed against all the photos on the Internet? All the photos in the news? The company has already released a white paper showing how a number of photos have been manipulated, as well as publishing blog posts about them.
It could make the upcoming campaign season very interesting.
In case you thought the November 2014 revelation that backup tapes containing Lois Lerner’s missing IRS email messages had been located meant the end of the story – LOL. You haven’t spent much time in politics, have you?
As you may recall, the whole thing started last June with former director of exempt organizations for the IRS Lois Lerner, and how something like two years’ worth of email messages — conveniently covering a period of time under Congressional investigation — were unavailable because employees could only store 500 MB of email, backup tapes were only saved for six months, and her computer had crashed, wiping out her hard disk drive. Last November, the IRS actually found the backup tapes – ironically, right where they were supposed to be – but it wasn’t clear whether they had any new messages on them, and the IRS said examining them could be difficult and expensive.
As with previous reports, conservative media has been leading the charge on much of the most recent news, which can sometimes make it challenging to figure out what’s really going on. That said, here goes.
A watchdog organization called Judicial Watch suggested last August that Lerner’s email messages might be on what turned out to be a total of 1,268 backup tapes. The Treasury Inspector General for Tax Administration (TIGTA) took possession of the tapes and was able to retrieve about 32,000 Lerner email messages from 744 backup tapes, as of November.
However, Judicial Watch wanted to know what was going on with 424 tapes (which still leaves 100 tapes unaccounted for, and what’s up with that?), and filed a Freedom of Information Act request to find out. “The conservative watchdog group wants to know their contents, whether they are now in the hands of the inspector general and whether the IRS must release the emails under the Freedom of Information Act,” writes McClatchy Newspapers’ Washington, D.C., bureau.
So on June 4, U.S. District Court Judge Emmet Sullivan ordered the IRS to answer questions by Friday, June 12, on the status of the lost email. On June 12, the IRS responded that TIGTA had given it 6,400 additional messages, found in April, but that it needed to remove any duplicates before providing them to Judicial Watch, a process it said could take until mid-September. Judicial Watch didn’t take kindly to this.
Meanwhile, TIGTA has reportedly put together a 1,600-page report examining the agency’s handling of Lerner’s missing email messages and computer crash, according to Fox News, which said it had seen a copy of the report.
In contrast, the AP story says the report was only 22 pages long, but adds that 118 witnesses were interviewed for it, while GovExec said the report is scheduled to be released during 4th of July week (in the fine tradition of taking out the trash).
In any event, J. Russell George, the Treasury inspector general for tax administration, testified before Congress on the contents of the report in late June. And the upshot, writes the Associated Press, is that as many as 24,000 Lerner email messages may have been lost because the 422 backup tapes were erased. (Some reports, including the Judicial Watch press release, say 424 tapes, and the arithmetic for the total number of tapes does work out better for that number.)
This is despite the fact that IRS Chief Technology Officer Terry Mulholland had issued a directive in May 2013 telling the department to preserve the records. (That said, the reason the IRS had said a year ago that it didn’t have the records in the first place was that backup tapes were routinely wiped so they could be used over again.)
“George and deputy Tim Camus said that two ‘lower level’ employees at a Martinsburg, W.Va., IRS facility erased the tapes as part of their normal housekeeping procedures,” writes GovExec. “’The investigation uncovered testimony and email traffic between IRS employees that indicate that the involved employees did not know about, comprehend or follow the chief technology officer’s May 22, 2013, email directive to halt the destruction of email backup media due to ‘the current environment’ and ongoing investigations,’ George said. ‘It appears they had a misunderstanding of the memo–they thought it was for hard-drives and personal computers, not backup tapes,’ Camus said.”
“’When interviewed, those employees said, ‘Our job is to put these pieces of plastic into that machine and magnetically obliterate them. We had no idea that there was any type of preservation (order) from the chief technology officer,'” Camus told the committee,” writes the Associated Press. “Rep. Thomas Massie, R-Ky., asked Camus if incompetence was to blame for the tapes being erased. ‘One could come to that conclusion,’ Camus said.”
Whose incompetence it was – the low-level employees, whoever it was who sent them the tapes for wiping, or Mulholland for not making a big enough deal of his directive — nobody said.
In addition, other testimony that day indicated that the original failure of the hard drive in Lerner’s laptop, the reason she no longer had copies of the email messages in the first place, was reportedly caused by an “impact” rather than, say, a heat problem. Testimony didn’t indicate whether it was a “fell off the bed” impact or a “took a hammer to it” impact, but the laptop itself was reportedly undamaged, though the hard drive reportedly showed some “scoring.”
Mulholland was reportedly “blown away” upon learning that tapes had been degaussed, according to Fox News. It would mean that “evidence was destroyed 10 months after a preservation order for the emails; seven months after a subpoena; and one month after IRS officials realized there were potential problems locating certain emails,” Fox reports in another story.
But even Fox News had to admit that, according to the report, it all seemed like a case that should be attributed to stupidity rather than to malice. “The report says investigators found ‘no evidence that the IRS and its employees purposely erased the tapes in order to conceal’ some of the emails in question,” Fox writes. “However, the report demonstrates the IRS did a sloppy job retaining documents despite a House Ways and Means Committee directive to do so.”
In another one of life’s little ironies, Catherine Duvall, the person who had been in charge of producing the IRS email messages, is now in charge of producing former Secretary of State Hillary Clinton’s email messages.