Would you be willing to pay $16 a year essentially to store 1.56 megabytes of data? Some people don’t think it’s worth it, and they’re burning up the Interwebs talking about it.
Of course, there’s more to the $16 than just storing the 1.56 MB. Here’s the deal: Nintendo (actually The Pokemon Company, which Nintendo co-owns with Game Freak and Creatures) is offering storage capacity for all the various versions of Pokemon games, which would let users store their monsters and transfer them between games.
Geez, everyone’s using the cloud these days.
(Think this is trivial? The Washington Post is covering it, for goodness’ sake.)
“Pokémon HOME is a cloud service for Nintendo Switch and compatible mobile devices designed as a place where all Pokémon can gather,” the company explains. “By linking the same Nintendo Account to both the Nintendo Switch version and mobile version of Pokémon HOME, you’ll be able to access the same Pokémon Boxes on both versions! With Pokémon HOME, you can move Pokémon between compatible games, trade Pokémon on the go, and more!”
In case you didn’t have children, or weren’t a child yourself in recent memory, Nintendo has brought out a version of Pokemon on just about every game console there is, from the Game Boy to, most recently, the Nintendo Switch. The cloud service, to be called Pokemon Home and to be made available sometime in February, lets people who went through their Pokemon phase in middle school migrate their collection to their current console without having to do all the work to recreate them. And, to encourage this, it’s making the previous service, Pokemon Bank, free for the first month.
For those of us who only started playing Pokemon with Pokemon Go on our phones (and yes, I count myself among them, and I’m a level 40, thank you very much), this doesn’t help us much yet, because the phone-based Pokemon Go isn’t yet supported, though it’s expected to be in the future.
And, Game Freak gets to make money on the process, by charging monthly ($2.99), quarterly ($4.99) or annual ($15.99) subscriptions. It’s sweetening the pot by adding other features, such as the ability to trade monsters with other people around the world. It even includes a sort of Craigslist where you can list the monsters you have to trade and what you’re willing to trade them for. I admit it, I’m looking forward to that part, if only so I can finally get that Carnivine I missed out on getting when I was in the Caribbean.
There is a free version, but it’s pretty minimal. It supports just 30 monsters, compared with 6,000 in the paid version, for example.
Some people are pretty excited about this development, which Nintendo and Game Freak have been talking about since last June.
Other people are pretty exercised, calling it a ripoff. And going on about it. At length.
And to a certain extent, I can understand both sides. $16 a year isn’t that big a deal, and you’d be surprised how many grownups play, grownups who presumably have that much disposable income. On the other hand, I also know a lot of families where it’s Mom, Dad and six kids who play, and that’s going to add up.
Some Pokemon fans are also just upset about how the company is nickel-and-diming them on everything.
“After all, a Nintendo Switch Online subscription [$19.99 a year] is required for online play in Pokemon Sword and Shield,” writes William White in CCN. “Add in the $4.99 per year fee to access Pokemon Bank, and the price of admission is quickly getting out of hand. This means that players must shell out more than $40 per year just to take full advantage of Nintendo’s Pokemon games.”
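White’s back-of-the-envelope total is easy to sanity-check. A quick sketch, using the subscription prices cited in this piece (which I’m taking at face value, not as official figures):

```python
# Adding up the annual subscription prices cited above (assumed
# accurate as cited, not official figures).
nintendo_switch_online = 19.99
pokemon_bank = 4.99
pokemon_home_premium = 15.99  # the annual Pokemon Home plan

total = nintendo_switch_online + pokemon_bank + pokemon_home_premium
print(round(total, 2))  # 40.97 -- "more than $40 per year" checks out
```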
In fact, some of them have gone so far as to calculate how much space an individual monster’s data takes up, and then extrapolate how much space the 6,000 critters the Premium level allows would take up. Hence the 1.56 MB figure.
“Do you realize how much space an individual Pokemon’s file actually takes up? 260 BYTES of data. 6000 Mons is the equivalent of 1.56 MB of data. Keep in mind, very few people ACTUALLY have 6000 Pokemon,” writes one commenter to White’s piece. “The math has already been done. Running servers with backup and redundancy across multiple countries for every single purchased copy of Sword and Shield with a full Home of 6000 Pokemon (which will never happen, mind you) would cost about $30,000 a year to maintain (this includes maintenance and tech upgrades, and was determined very conservatively; it would likely cost half of that realistically). On the other hand, you have Google Drive, which lets you have up to 15 GB of free online cloud storage. For perspective, that’s around 57 MILLION Pokemon that you can store PER PERSON, if storage on Google Drive were a thing.”
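For what it’s worth, the commenter’s arithmetic holds up. Here’s a quick reproduction, using the commenter’s 260-byte figure (which I haven’t verified independently):

```python
# The commenter's figures: 260 bytes per monster, 6,000-monster capacity.
bytes_per_pokemon = 260
premium_capacity = 6_000

total_mb = bytes_per_pokemon * premium_capacity / 1_000_000  # decimal megabytes
print(total_mb)  # 1.56

# The Google Drive comparison: a free 15 GB tier, carved into 260-byte monsters.
drive_free_bytes = 15 * 1_000_000_000
monsters_per_drive = drive_free_bytes // bytes_per_pokemon
print(f"{monsters_per_drive:,}")  # 57,692,307 -- "around 57 MILLION"
```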
On the other hand, it’s teaching us great storage hygiene: Take your important data stored on old hardware and in old software, and migrate it up to more modern hardware and software to be able to keep using it.
Maybe that’s worth $16 a year all by itself.
The FBI is getting on Apple’s case about iPhone encryption again, after a mass shooter in Pensacola, Fla., reportedly left behind two encrypted iPhones to which the FBI wants access.
As you may recall, this has been going on ever since Apple and Google made encryption the default on their phones in 2014. The FBI direly predicted that we would all be overrun by terrorists and pedophiles – those being the types of criminals who apparently deserve less Fourth Amendment protection than the rest of us – and the issue crops up every couple of years when there’s a high-profile case where someone had an encrypted iPhone.
For example, in 2016 the FBI asked Apple to write a new version of its operating system to make it easier to break into the iPhone of the San Bernardino shooters. Apple’s refusal to do so led Donald Trump, then a presidential candidate, to call for a boycott against the company.
On the whole, law enforcement has dealt. In the San Bernardino case where the FBI asked Apple for help, it ended up figuring it out on its own. And the New York Police Department has created a $10 million lab to help it break into encrypted iPhones, writes Christopher Carbone for Fox News.
Most recently, we have the case of a shooter who killed three U.S. sailors and injured eight other people at Pensacola Naval Air Station in December. No less than Trump criticized Apple for not unlocking iPhones that the shooter used, according to James Rogers of Fox News. Attorney General William Barr also criticized the company, which responded by saying it had provided iCloud backups, account information and other data about transactions, and that the FBI didn’t approach it until a month after the shootings. (In fact, the shooter actually shot the iPhones in question, writes Kevin Williamson in the National Review.)
Apple and security experts have pointed out that there’s no way to create a “back door” into encryption for the use of law enforcement only. A back door is a back door, and it could be used by anyone who could break into it, including criminals. And there’s the suspicion by civil libertarians that, as in similar past cases, law enforcement is simply looking for an appealing case to use as a test case to require such a back door once and for all, Williamson quotes the ACLU as saying. There are, in fact, security flaws in both of the Pensacola shooter’s iPhones that could be used to break into them, writes Jack Nicas in the New York Times.
It turns out, too, that Apple had been planning to offer end-to-end encryption on iCloud backups two years ago, but backed down after the FBI complained, writes Joseph Menn in Reuters.
“Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order,” Menn notes. “The company did not want to risk being attacked by public officials for protecting criminals, sued for moving previously accessible data out of reach of government agencies or used as an excuse for new legislation against encryption.”
Not only does law enforcement have easier access to iCloud backups, they can be searched in secret, Menn writes. “In the first half of last year, the period covered by Apple’s most recent semiannual transparency report on requests for data it receives from government agencies, U.S. authorities armed with regular court papers asked for and obtained full device backups or other iCloud content in 1,568 cases, covering about 6,000 accounts,” he notes. “The company said it turned over at least some data for 90% of the requests it received. It turns over data more often in response to secret U.S. intelligence court directives, which sought content from more than 18,000 accounts in the first half of 2019, the most recently reported six-month period.”
On the other hand, iCloud encryption would also have meant that people who forgot their passwords would be SOL, writes Sam Shead for the BBC.
Incidentally, Google does offer end-to-end encryption for Android backups, writes Courtney Linder in Popular Mechanics. For some reason, this hasn’t roused the FBI’s ire.
Part of the issue is trying to figure out to what extent law enforcement even knows what it’s talking about. Manhattan District Attorney Cyrus Vance was quoted by Fox as saying that he believes Apple actually does have the ability to unlock encrypted phones, because it can send his phone software updates.
“If the FBI succeeds in such an effort, it would set a dangerous precedent,” warns Will Baird on the blog of the American Enterprise Institute, a think tank. “Phones are not the only thing law enforcement would want to decrypt: With forced cellphone decryption in its toolkit, decryption of personal messages and internet traffic could soon follow. Over time, the number of law enforcement requests for decryption would almost certainly lead tech companies to establish back doors, with all of the fallout they entail.”
So what’s going to happen with the ediscovery marketplace in 2020?
It seems likely that, just like every year since 2011, acquisition and consolidation is going to be a thing. The big companies have already merged. Now there are myriad startups in the field, but they’re too small individually to keep growing past a certain point, so they will have to merge with each other to gain the economies of scale required to survive.
Legaltech News did a pretty thorough survey of legal professionals on the subject, and the consensus seems to be that technology will continue to play two huge roles in ediscovery.
First, technology keeps giving us new ways to transmit and store information, and ediscovery needs to be prepared to follow along so that those types of information can continue to be used in a courtroom.
“E-discovery will continue to be bombarded by atypical data sources as ephemeral messaging, IoT device data, collaboration tools and app-based information become increasingly relevant to investigations and litigation,” said Cat Casey, chief innovation officer, DISCO. Other data sources, such as collaboration software like Slack and Teams, will also become more important to ediscovery, others predicted, as well as, potentially, even genomes.
And in an intriguing suggestion, ediscovery will need to learn to be better at picking out fake news, John Davis, co-chair of the E-Discovery & Information Management practice, Crowell & Moring, told Legaltech News. “AI-driven deepfakes and fabricated evidence will come to the fore,” he said. “With the rise of ‘alternative facts’ and distrust of evidence, there will be an increasing emphasis on validation and authentication. Expect the use of blockchain and defensive AI in the courtroom, boardroom and public arena to establish or raise doubt as to data integrity and authenticity.”
Plus, the vast majority of ediscovery data will be on the cloud, writes Doug Austin on cloudnine.com.
The second role of technology in ediscovery is in the analysis. To the extent that case law permits it – and that’s also expected to expand over time – ediscovery vendors will increasingly be using analytics, artificial intelligence (AI), and other technologies to go through ediscovery data to minimize the amount of time that actual expensive humans have to spend doing it.
“Savvy providers must capitalize on advanced analytics and all the tools in their toolboxes to manage this explosion in terms of variety and volume of data,” Casey said. “Advanced analytics will become increasingly the norm as linear approaches simply become untenable.”
At the same time, privacy regulations like the European Union’s General Data Protection Regulation (GDPR) and California’s new California Consumer Privacy Act will also make ediscovery more challenging. When people can request that data about them be deleted, what does that do to the ability of legal professionals to be able to use that data as evidence in a future legal case?
“AI’s ever-growing powers will not only continue to butt up against the GDPR and other international data laws but also come face to face with a litany of new U.S. states’ privacy laws, including the CCPA and the regulations thereunder,” Robert Brownstone, chair of EIM group, Fenwick & West, told Legaltech News. “The more that AI tools can seek, sift and gather, the greater the difficulty in responding to data subjects’ requests for access, correction and/or erasure. And the more complicated will become key intersecting e-discovery issues such as collection and historically typical long-term non-anonymized preservation.”
In other news, be sure to remember E-Discovery Day on December 4.
And speaking of 2011 acquisitions, could 2020 be the year that the Autonomy case concludes? The most recent trial wrapped up on January 15, but it will likely be months before we hear a judgment. And beyond that? Could it be appealed? Meanwhile, former Autonomy CFO Sushovan Hussain is appealing his fraud conviction, for which he was sentenced to five years, and it isn’t clear when that appeal will be decided. Believe it or not, the Autonomy legal cases may well drag on into 2021.
The problem with storage becoming a commodity is that people stop thinking about how important it is. If you look at the various predictions for 2020 – and people love making them, because “2020” is such a cool number – hardly any explicitly mention storage, and the ones that do typically limit their predictions to the cloud.
Yet, many of the other predictions they make are predicated on having easy access to reliable, secure, inexpensive and, most of all, plentiful storage.
Gartner, for example – which, to show how forward-thinking it is, makes its 2020 predictions in October during its Symposium conference at Walt Disney World – did include one storage-based prediction in its Gartner Top 10 Strategic Technology Trends for 2020, “distributed cloud.” “Distributed cloud refers to the distribution of public cloud services to locations outside the cloud provider’s physical data centers, but which are still controlled by the provider,” the company writes. “In distributed cloud, the cloud provider is responsible for all aspects of cloud service architecture, delivery, operations, governance and updates. The evolution from centralized public cloud to distributed public cloud ushers in a new era of cloud computing. Distributed cloud allows data centers to be located anywhere. This solves both technical issues like latency and also regulatory challenges like data sovereignty. It also offers the benefits of a public cloud service alongside the benefits of a private, local cloud.”
Yay! Storage, sort of! Even if it does take until Trend #7 to get to it. A related trend is #6, the “empowered edge.” “Edge computing is a topology where information processing and content collection and delivery are placed closer to the sources of the information, with the idea that keeping traffic local and distributed will reduce latency,” Gartner writes. “This includes all the technology on the Internet of Things (IoT). Empowered edge looks at how these devices are increasing and forming the foundations for smart spaces and moves key applications and services closer to the people and devices that use them.” One aspect of IoT devices is that they typically generate a horrendous amount of data, which has to be stored.
But a number of the other trends also touch on storage. Take Trend #1, “hyperautomation.” “Automation uses technology to automate tasks that once required humans,” the company writes. “Hyperautomation deals with the application of advanced technologies, including artificial intelligence (AI) and machine learning (ML), to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation (i.e., discover, analyze, design, automate, measure, monitor, reassess.)”
Okay. How are you going to do that without storage?
Similarly, there’s Trend #3, “democratization.” “Democratization of technology means providing people with easy access to technical or business expertise without extensive (and costly) training,” Gartner writes. “It focuses on four key areas — application development, data and analytics, design and knowledge — and is often referred to as ‘citizen access,’ which has led to the rise of citizen data scientists, citizen programmers and more. For example, democratization would enable developers to generate data models without having the skills of a data scientist. They would instead rely on AI-driven development to generate code and automate testing.”
Anytime you see “data scientists,” that means big data – and a place to put it. And artificial intelligence typically requires a large amount of data the computer can learn from.
And there’s also keeping track of the data once you get it, as in Trend #5, “transparency and traceability”: “The evolution of technology is creating a trust crisis. As consumers become more aware of how their data is being collected and used, organizations are also recognizing the increasing liability of storing and gathering the data,” Gartner writes. “Legislation, like the European Union’s General Data Protection Regulation (GDPR), is being enacted around the world, driving evolution and laying the ground rules for organizations.” With this trend, the question is no longer just how you store the data, but how you deal with it.
Autonomous things, blockchain, and AI security, three other Gartner trends, all also require storage to work.
Gartner’s not alone. Forrester made similar predictions – again, where a number of them are predicated on having large amounts of storage but without ever using storage itself as a trend, such as “Advanced firms will double their data strategy budget,” “Data and AI will get weaponized,” and “Regulation will make and break markets.”
This all just goes to show how important storage is to our lives and how much we’re taking it for granted. You couldn’t do most of these predictions without storage, yet few of them mention it explicitly, just like none of them say, “Hey, you know, we’ll need electricity and telecommunications to do this stuff, too.”
It turns out that, for 2019, there were really only two E-discovery stories: The continuing consolidation of the E-discovery marketplace, and the ongoing train wreck that is the HP-Autonomy merger and the lawsuits that have followed in its wake.
Yes, that was a mixed metaphor.
Well, okay, there was one more. The Supreme Court ruled that winning litigants couldn’t necessarily count on being reimbursed for E-discovery costs. Considering that E-discovery can be a major cost component of court cases these days, it will be interesting to see how often that gets used as a precedent in coming years.
The ongoing consolidation of the E-discovery marketplace has been a thing ever since 2011, when Gartner published its first Magic Quadrant for the E-discovery marketplace, giving larger vendors a handy shopping list for acquisition. So well did that shopping list work that, out of 22 vendors in the original one, only a handful still remain. In fact, Gartner stopped publishing the E-discovery Magic Quadrant altogether after 2015, having apparently concluded there is no longer an E-discovery marketplace per se.
And yet, vendors keep finding other vendors to acquire. Law firms have figured out that the best way to make more money is to automate their current processes, so people keep starting new legal software companies, which then acquire one another or, in a relatively new development, attract investment.
At least it keeps the M&A team busy.
Speaking of M&A, take HP-Autonomy.
Considered by some to be the sixth-worst acquisition of all time, HP and Autonomy have been beating each other up in civil and criminal court for going on five years. So far, the only winners are the lawyers.
In case you’ve forgotten, HP chairman and CEO Leo Apotheker (who was fired later that year) paid $11.1 billion in 2011 to acquire Autonomy, a British e-discovery company. By the following year, HP claimed that Autonomy had cooked its books to overvalue itself, wrote down the purchase as a nearly $9 billion loss, and sold off the company’s remaining assets in 2016.
Then came the lawsuits, starting with a shareholder suit, which HP settled in 2015 for $100 million. Former Autonomy CFO Sushovan Hussain was found guilty on 16 counts of wire and securities fraud, and has appealed. Former Autonomy CEO Mike Lynch countersued HP for $160 million, and criminal fraud charges – since expanded – were filed against him as well.
In March, a $5 billion civil lawsuit against former Autonomy CEO Mike Lynch got underway. And that’s what we’ve been watching this year.
The basic story is the same: HP says Autonomy pumped up its value, and Autonomy says that HP doesn’t understand British accounting and is trying to overcome its own incompetence at not successfully integrating the company. It’s the details that make this a train wreck, like just how many times did Autonomy CEO Mike Lynch use the f-word in email messages to his subordinates?
The high point this year was when former HP CEO Meg Whitman threw former HP board chair Leo Apotheker under the bus.
We know this, because she wrote an email message saying “Happy to throw Leo under the bus.”
Remember, kids, in an E-discovery case, your email messages can come back to haunt you.
HP, in fact, has so much post-traumatic stress disorder around the whole thing that it’s been bleeding over into the potential HP-Xerox acquisition. In case you missed that, Xerox suggested that it merge with HP. But HP, after being mocked on the world stage for having done insufficient due diligence on the Autonomy acquisition, is refusing to reveal any information about itself to Xerox, while at the same time demanding that Xerox provide all sorts of due diligence before HP will even look at it. Whether the acquisition itself actually makes any sense is immaterial; what’s important is that nobody will ever be able to say that HP failed to do enough due diligence again.
And the various Autonomy cases aren’t over. After all, we need E-discovery news for 2020.
The funny thing about writing a year in review for storage in 2019 is that it’s almost exactly the same as the year in review for storage in 2018. Only the links change.
Hard disks and other forms of data storage still get bigger, denser, and cheaper. Researchers look at new technologies for storage in the future, such as glass storage and storage in DNA. We still use magnetic tape. We still lose data and poke USB sticks in things. (Sometimes these two things are related.) Our data is still being added to government databases and various aspects of this keep going to court, such as whether the police can gain access to genetic databases without a warrant. Sometimes we even win, such as when courts finally decided – for good, I hope – that border agents couldn’t search laptops and other storage devices willy-nilly.
Really, the biggest trend in storage this past year was getting rid of it. Thank you, Marie Kondo.
To some people, this year also marks the end of a decade. (Those people would be wrong, but still.) What’s interesting about that is finding out that my doing storage trend pieces in December is actually a fairly recent development, when I thought I’d been doing it all along. It’s funny how easily we can convince ourselves that something has always been true, in a we’ve-always-been-at-war-with-Eastasia kind of way. Who needs 1984? We can do it ourselves.
Incidentally, 1984 was 36 years ago.
That’s actually what demonstrates the value of storage. Human memory is not only limited, but fallible. Humans have so many logical fallacies in the way that we remember and present information.
You know, once I learned about confirmation bias, I started seeing it everywhere.
Another one, not included in that list, is recentism, also known as “the curse of memory” or the “availability heuristic”: If we can remember it, it must be important. Conversely, if we can’t remember it, it must not be important.
The point of storage is to help us override that bias – to ensure that we don’t attribute too much importance to something recent, just because we remember it better, and to remind us of important things that happened in the past. Remember “history doesn’t repeat itself, but it rhymes”? Business and history travel in cycles, and it’s important for us to be able to go back and look at the last time we were in a similar place in the cycle, because that could help us get through it more easily the next time. Or, better still, prevent it from happening.
That’s actually one of the scarier trends in storage. It’s bad enough when a database remembers or tracks something that we’d just as soon it had forgotten, or, conversely, when data we had counted on being there suddenly no longer is, whether that’s by accident, such as when a system loses data, or on purpose, such as when politicians delete government data that is no longer convenient to have around. (Though, to be fair, President Donald Trump’s administration is doing this a whole lot less than people were afraid of at first.)
The bigger concern is whether we can trust that the data we have is actually accurate. Whether it’s “deep fakes,” or using technology to create believable audio and video of things that did not occur, or actually changing data, such as the concern about voting machines not accurately recording voting data while making it look as though they did, it seems to me that one of the biggest and scariest trends of the year, and perhaps even the decade, is not just the security and robustness of the data we have, but also its reliability. How can we trust that the data we have is actually what we believe it is? How do we know whether data was changed in the process, or made up out of whole cloth?
Perhaps what vendors should be working on now is not how to make data storage bigger, denser, or faster, but to help us find ways to ensure that the data we do have is actually accurate and unchanged.
If you’ve forgotten, or weren’t alive at the time, Infocom games were text-based, because this was back in the day before graphics were particularly available in games.
“It was 1977, and home computers were big, expensive, heavy, and were almost entirely lacking in computing power by today’s standards,” writes Krypton Radio. “Yet, in this primitive environment, the first computer adventure games were born. Zork was the first commercial offering.” It was based on the very first text adventure game, Colossal Cave Adventure — or, as some people called it, just Adventure — and originally written for the Digital Equipment Corp. DEC-10 mainframe. “That’s right, it took a mainframe to run it!” Krypton Radio notes.
Now, the source code to some of the Infocom games has been posted to GitHub, so that people who have been longing for the days of text-based games can indulge themselves to their heart’s content.
However, that was also back in the day when there were multiple types of computers, and a game would otherwise have needed to be rewritten for each one. Instead, the games were all written in a proprietary language called ZIL, which compiled to bytecode that a small interpreter on each platform could run.
“ZIL, or Zork Implementation Language, is the unique programming language used to make the Zork games and was based on another old coding language called MIT Design Language (MDL),” writes Matt Kim in US Gamer. “[ZIL] is written to create adventure games in an environment people haven’t used commercially in over 25 years. And even then, it was about 15 people. ZIL then is a pretty niche coding language with a niche group of followers. There are actual online communities that teach and carry on ZIL, but it’s not a modern coding language like C++.”
As with other games, looking at the source code reveals all sorts of things about the game. For example, when the source code had first become available a couple of years earlier, people learned that some aspects of Zork were completely random. “While Zork checks the player’s item count to determine if they’re carrying too much, it also uses a random roll just to mess with the player,” writes Logan Booker in Kotaku. “The roll used a number between 0 and 100, forcing players to keep trying to pick things up until it finally worked. I was skeptical at first — surely a system as important as inventory wouldn’t be so cavalier with capacity? My skepticism grew when searches of Zork‘s MDL code from MIT and the public domain source from Infocom came up empty. But, after checking various sources of decompiled code from Zork, it does indeed appear the game would fire out an overburdened message based solely on randomness.”
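As a sketch of the mechanic Booker describes – an illustrative reconstruction in Python, not the actual ZIL/MDL logic, with the function name and failure threshold invented – the pickup check might look something like this:

```python
import random

def try_to_take(current_load, item_weight, capacity=100):
    """Illustrative reconstruction, not Infocom's actual code."""
    if current_load + item_weight > capacity:
        return False  # genuinely overburdened: the expected check
    # The surprise: roll 0-100 and occasionally refuse the pickup anyway,
    # "just to mess with the player."
    return random.randint(0, 100) > 7

# An overloaded pack always fails; a light pack only *usually* succeeds.
print(try_to_take(current_load=100, item_weight=10))  # False
```

The point of the design, such as it is: players who retry a failed pickup will eventually succeed, which reads as flaky inventory rather than a deliberate dice roll.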
Source code for other games was posted as well, and not just Infocom’s. “Leisure Suit Larry, the complete source code, have been uploaded to GitHub – alas, not the assets too, so you can’t build Leisure Suit Larry from this, but you can certainly get a glimpse as to how the game was created and how the asset system worked with the game script itself,” Krypton Radio adds, as well as The Hitchhiker’s Guide to the Galaxy and others. “There are about two pages of listings, mostly Infocom, but there are some hidden gems there too, like an open source version of the engine Croteam created for Serious Sam, Peter Spronck’s Space Trader, as well as the complete source for Hexen and Heretic, both from Raven Software.”
If Fortnite has lost its appeal for you, it might be worth checking out.
As you may recall, every couple of years someone does a new experiment with glass storage, and everyone falls all over themselves talking about how it’s the wave of the future and never wears out and is completely indestructible.
Right. Tell that to the casserole dish that I took from the fridge to the oven too fast.
Remember, we were all supposed to be using glass storage by 2015, at least according to Hitachi, which announced it in 2012.
Anyway, Microsoft, which has been on the cutting edge – see what I did there? – of storage research for a while, recently announced a new breakthrough: It had stored an entire movie on glass. And which movie did it pick?
Superman: The Movie, the 1978 original. Of all the movies that could be preserved forever, they pick that one? Admittedly, they could do worse. The dorky “Can you read my mind?” scene aside, it’s not a bad movie, may Christopher Reeve and Margot Kidder rest in peace.
It turns out there’s a reason they picked that movie. Warner Brothers, which is partnering with Microsoft on this research to find better, more economical, safer ways to store its backlog (remember the 2008 Universal fire?) apparently had discovered some recordings of the 1940s-era Superman radio show on glass discs, and they took that as a Sign.
You know, that’s the story I really want to hear. How did those recordings get made? How did Warner Brothers find them? How did they figure out a way to play them? Sadly, I can’t find any information on that other than the offhand references in the Microsoft pieces.
Warner Brothers isn’t alone. GitHub is also partnering with Microsoft to store its archives on glass, among many other storage media, in a program built around the LOCKSS principle, or Lots Of Copies Keep Stuff Safe.
With the Project Silica technology, Microsoft has reportedly succeeded in storing Superman, all 75.6 gigabytes of it, on a piece of glass the size of a “drink coaster,” 75 x 75 x 2 millimeters, the company writes. I guess comparing it to the size of a CD or DVD didn’t occur to them.
So, it’s good to know that glass storage has now comfortably surpassed the DVD in data density. Earlier versions of glass storage could store only 40 megabytes per square inch, which was about the same level as a CD, but not as good as a hard disk.
“A laser encodes data in glass by creating layers of three-dimensional nanoscale gratings and deformations at various depths and angles,” writes Jennifer Langston on the project website. “Machine learning algorithms read the data back by decoding images and patterns that are created as polarized light shines through the glass.”
In other words, this is not technology you’re going to be picking up on a thumb drive anytime soon. And it’s not intended to be. “It represents an investment by Microsoft Azure to develop storage technologies built specifically for cloud computing patterns, rather than relying on storage media designed to work in computers or other scenarios,” Langston writes. “We are not trying to build things that you put in your house or play movies from. We are building storage that operates at the cloud scale.”
And we get the usual song and dance about how indestructible it is. “The hard silica glass can withstand being boiled in hot water, baked in an oven, microwaved, flooded, scoured, demagnetized and other environmental threats that can destroy priceless historic archives or cultural treasures if things go wrong,” Langston writes.
Notice how she doesn’t mention “dropped.” “Sure, it is breakable if you try hard enough,” a Microsoft researcher told Janko Roettgers in Variety. “‘If you take a hammer to it, you can smash glass.’ But absent of such brute force, the medium promises to be very, very safe, he argued: ‘I feel very confident in it.’”
And there’s still the formatting issue. “Long-term storage costs are driven up by the need to repeatedly transfer data onto newer media before the information is lost,” Langston writes. “Hard disk drives can wear out after three to five years. Magnetic tape may only last five to seven. File formats become obsolete, and upgrades are expensive. In its own digital archives, for instance, Warner Bros. proactively migrates content every three years to stay ahead of degradation issues. Glass storage has the potential to become a lower-cost option because you only write the data onto the glass once. Femtosecond lasers — ones that emit ultrashort optical pulses and that are commonly used in LASIK surgery — permanently change the structure of the glass, so the data can be preserved for centuries.”
Well, okay. But as Langston mentions, file formats become obsolete, and glass doesn’t solve that problem. All that gives you is a bunch of indestructible data that nobody can read because nobody has the readers or the software for it.
Though you could always use them for coasters.
The CEO of a Chinese Bitcoin exchange, International Data Access Exchange (IDAX), has vanished with the keys, leaving all its balances inaccessible — to anyone but himself, presumably.
“Following the official announcement ‘Announcement of IDAX withdrawal channel congestion’ on November 24, We announce Urgent notice about current situation of IDAX Global,” noted the company’s website. “Since we have announced the announcement on November 24, IDAX Global CEO have gone missing with unknown cause and IDAX Global staffs were out of touch with IDAX Global CEO. For this reason, access to Cold wallet which is stored almost all cryptocurrency balances on IDAX has been restricted so in effect, deposit/withdrawal service cannot be provided.”
The action may be linked to crackdowns by the Chinese government in the cryptocurrency market, reported BeInCrypto. “The news from IDAX comes just days after the exchange suddenly announced its withdrawal from the Chinese market entirely,” writes Rick D. “Citing ‘policy reasons,’ a statement on November 25 explained that the company would no longer provide its services to China. Although not explicit, the sudden announcement seems almost certainly linked with recent news of a further clampdown on digital currency trading venues by the Chinese government.” However, as of yet, no bitcoin were reported missing, he added.
If this sounds familiar, it’s because in December 2018, Gerald Cotten, CEO of crypto exchange QuadrigaCX, reportedly died in India on his honeymoon without leaving the keys to anyone, including his new wife, Jennifer Robertson. Whether he’s actually dead has never been conclusively established, but since then, a report by Ernst & Young has stated that much of the money was taken out of the exchange and used privately.
“In the course of its investigation, the Monitor identified significant transfers of Fiat from Quadriga to Mr. Cotten and his wife,” the report noted. “The Monitor understands that in the last few years, Mr. Cotten and his wife, either personally or through corporations controlled by them acquired significant assets including real and personal property. The Monitor also understands that they frequently travelled to multiple vacation destinations often making use of private jet services. The Monitor has been advised that neither Mr. Cotten nor his wife had any material source of income other than funds received from Quadriga.”
That real and personal property includes land in Canada, airplanes, and cars, amounting to about $12 million Canadian, or $9 million US, which the report said would be sold to help repay creditors.
The report noted a number of other accounting and financial problems with the company, adding, “In addition, the Monitor understands passwords were held by a single individual, Mr. Cotten and it appears that Quadriga failed to ensure adequate safeguard procedures were in place to transfer passwords and other critical operating data to other Quadriga representatives should a critical event materialize (such as the death of key management personnel).”
In fact, Cotten might not even be dead. “The RCMP and the FBI have refused to comment, but some of their interview subjects have gotten the impression that they believe Cotten might not be dead,” writes Nathaniel Rich in Vanity Fair. “’They asked me about 20 times if he was alive,’ says one witness who has intimate knowledge of Quadriga’s workings and has been questioned by both agencies. ‘They always end our conversations with that question.’ QCXINT, the creditor and blockchain expert, said that the FBI’s Vander Veer told him that with hundreds of millions of dollars missing and no body, ‘it’s an open question.’ The only way to verify that the body Robertson brought home from India was Cotten is to exhume it. The RCMP, which has jurisdiction over the case, has thus far not done so.”
People creating a new system sometimes underestimate how long it’ll be around. That was the core of the “Y2K Problem”: People were concerned that computer programs around the world would fail because their designers had stored years as two digits, never considering dates after 1999.
Boy, that feels like a long time ago.
Most of the Y2K bugs got worked out before everything went poof at midnight on December 31, 1999, but it’s not unusual to see similar bugs related to data fields that fill up. In addition, hackers have learned to exploit such bugs by putting a system into a vulnerable state, as with buffer overflows – or the Heartbleed bug, a buffer over-read, from about five years ago.
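For the curious, the two-digit-year flavor of this bug looks something like the following Python sketch. This is a toy illustration of the general idea, not code from any real system:

```python
# Toy illustration of the Y2K class of bug: a date field that is too
# small for the values it will eventually need to hold.

def two_digit_year(year: int) -> int:
    """Store only the last two digits of the year, as many old systems did."""
    return year % 100

def years_elapsed(start: int, end: int) -> int:
    """Naive subtraction on two-digit years -- breaks across the century."""
    return two_digit_year(end) - two_digit_year(start)

# Works fine within a century...
print(years_elapsed(1990, 1999))   # 9

# ...but at the rollover, 2000 is stored as 00, and the math goes negative.
print(years_elapsed(1999, 2000))   # -99
```

Any program that used a result like that to compute an age, an interest period, or an expiration date would suddenly be working with nonsense.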
But more recently, there’s a doozy.
“Bulletin: HPE SAS Solid State Drives – Critical Firmware Upgrade Required for Certain HPE SAS Solid State Drive Models to Prevent Drive Failure at 32,768 Hours of Operation,” reported the Hewlett Packard Enterprise Support Center earlier this month.
If that seems like an odd number, it’s not – literally, that is. It’s 2 to the 15th power.
So let’s take a guess – some field associated with the solid state drive is 15 bits long, and when the hour count gets beyond that (which is about 1,365 days, or 3 ¾ years), the field fills up and the system is froached.
“The power-on counter in the affected drives uses a 16-bit Two’s Complement value (which can range from −32,768 to 32,767). Once the counter exceeds the maximum value, it fails hard,” writes Marco Chiappetta in Forbes.
And it gets really froached.
“After the SSD failure occurs, neither the SSD nor the data can be recovered,” HPE notes. “In addition, SSDs which were put into service at the same time will likely fail nearly simultaneously.”
Chiappetta goes into more detail about that aspect. “This issue can be particularly catastrophic because the affected enterprise-class drives were likely installed as part of a many-drive JBOD (Just A Bunch Of Disks) or RAID (Redundant Array of Independent Disks), so the potential for ALL of the drives to fail nearly simultaneously (assuming they were all powered on for the first time together) is very likely.”
HPE said that one of its vendors had discovered the problem: “HPE was notified by a Solid State Drive (SSD) manufacturer of a firmware defect affecting certain SAS SSD models (reference the table below) used in a number of HPE server and storage products (i.e., HPE ProLiant, Synergy, Apollo, JBOD D3xxx, D6xxx, D8xxx, MSA, StoreVirtual 4335 and StoreVirtual 3200 are affected).”
One wonders how this bug presented itself. Did someone happen to run across it just in time? How long had HPE drives been crashing and burning before this bug was tracked down and repaired?
And which vendor was this? HPE doesn’t say, but one would guess that HPE might not be using that vendor again in the future.
“This HPD8 firmware is considered a critical fix and is required to address the issue detailed below. HPE strongly recommends immediate application of this critical fix.”
You don’t say.