Chrysler killed off Plymouth. GM did it to Oldsmobile and Pontiac. Ford did it to Mercury. Microsoft even did it to Windows XP. Yet today, years after the demise of these products, they all continue to run. For the vehicles, parts remain available and dealers are happy to perform maintenance and take your money. Even for XP, Microsoft still issues the occasional security patch. Elsewhere, ink cartridges continue to be available for Epson inkjet printers discontinued long ago.
So, what’s the problem with the Internet of Things?
Consider Nest’s decision to kill off its $299 Revolv home-automation hub. Revolv (the company) was acquired by Nest in October 2014, which immediately stopped selling Revolv (the product). The service stayed up, however. Not for much longer. “As of May 15, 2016, your Revolv hub and app will no longer work,” states a message on the Revolv website from its founders. “The Revolv app won’t open and the hub won’t work.” Tough luck, buddy. And not exactly a PR-friendly warm and fuzzy writing style, either. (Nest itself had been snapped up by Google in January 2014.)
It’s reasonable to think that it was Revolv’s intellectual assets, especially its talented IoT developers, that Nest coveted. If you’re an IoT developer, or plan to become one, you should view the move as a signal that this is a great career path.
Initially, Nest (or Google or Alphabet) had no plans to compensate Revolv owners, but that now appears to be changing. In a statement provided to CNBC, a Nest spokesman said the company will consider providing customers with compensation on a case-by-case basis. What form that compensation takes was not detailed. Either way, these customers are still left with nothing but a fancy paperweight. It’s the Abandonment of Things. Curiously, Google is refunding the full purchase price of its acquired Nik photo-editing software suite now that the company is making it free — and not shutting it down.
Do you think that’s fair? Shutting down the cloud component of your IoT product and turning faithful customers into orphans is a shameful business strategy. Can you imagine a maker of implantable cardiac pacemakers attempting a similar tactic? (Then again, no one jumped up to compensate me when it became impossible to buy blank VHS tapes for my VCR.)
I have an Acer laptop that’s nearly 20 years old. It runs Windows 95, still functions perfectly, and through an RS-232 serial-port powerline interface still runs an ancient version of the superb HomeSeer application, communicating with switch modules to control lighting in my home via the long-defunct X10 protocol. The laptop sits in a closet, out of sight, out of mind, plugged into a small UPS unit for power protection, and perfectly secure because it has no network connection. It’s all hilariously obsolete, but no one ever tried to shut any of these products down.
As developers, we understand that even the simplest of IoT products represents a significant investment. They contain embedded software to make the thing work, server-side applications to process messages or send out alerts, databases for maintaining user accounts, iOS and Android mobile apps for controlling devices from your reclining chair, and more. There are license fees for software libraries, too.
I can understand the underlying economic reason for leaving the past behind, but in this connected age, before you arbitrarily put a bullet through your products and applications, you’d best provide a soft landing for the people who paid for the privilege of using them.
What do you think? Have you written code for products your company killed off, leaving customers with no escape route? Share your thoughts; we’d like to hear from you.
Last night, the FBI announced that it was dropping its litigation against Apple, because it had found an alternative way into the iPhone that had belonged to one of the San Bernardino terrorists. It proves, yet again, that nothing is ever completely, totally secure. Suing Apple simply became moot.
To the best of my knowledge, Apple never said that it couldn’t gain access to the phone, only that it wouldn’t. I’m not here to prosecute or defend Apple, nor to debate the social or legal issues raised by the case. What I will do is opine about what we believe to be secure.
I’ve always likened device or application security to the idea of the scientific hypothesis. While it’s possible to absolutely prove a hypothesis to be false — all it takes is a single test case — you can never prove a hypothesis to be true. Every time you run a test that doesn’t destroy your hypothesis, all you’ve done is bolster support for it. But, you haven’t proved it true. Stop after a million tests that all support your hypothesis, and it still could be test 1,000,001, the one your imagination never conjured up, that smashes it to bits. You get the idea.
Testing for security is the same. In the digital realm, no matter how many times your security tests stand up to scrutiny, it just might be the very next one that lets the bad guys in. Who, after all, is the party that came forward to teach the FBI how to gain access to the infamous iPhone? Fortunately, this party was not a malicious hacker, but it proved there was a way in, regardless of how secure Apple wanted us to believe the device was. (The data recovered from the phone still needs to be decrypted, but that’s separate from getting to the data.) Apple now vows to tighten security further.
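That asymmetry is easy to sketch in code. Here is a toy Python example, with a hypothetical validator and invented attack strings: no number of passing tests proves the "this is secure" hypothesis, while a single missed case falsifies it.

```python
import random
import string

def sanitize(cmd: str) -> bool:
    """Toy validator (hypothetical): deems `cmd` safe if it contains
    no semicolon, intending to block shell-command chaining."""
    return ";" not in cmd

# Hypothesis: sanitize() never lets a command-chaining string through.
# 100,000 semicolon-based attacks are all caught, bolstering support...
random.seed(42)
for _ in range(100_000):
    attack = "ls;" + "".join(random.choices(string.ascii_letters, k=8))
    assert not sanitize(attack)

# ...but the one case the tests never conjured up smashes the hypothesis:
# a newline chains shell commands too, and sanitize() waves it through.
assert sanitize("ls\nrm -rf /")  # judged "safe" despite being an attack
```

A hundred thousand supportive tests, and the hypothesis still fell to the one input no one imagined.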
One of the traditional arguments against cloud computing continues to be that some CIOs feel uncomfortable about security. Where are these cloud servers? Who is managing them? And wouldn’t 10,000 corporations virtually situated in the same gigantic physical datacenter be much more of a sitting duck target than a handful of servers buried deep in the bowels of 10,000 different corporate headquarters facilities? They’re all legitimate questions.
Cloud providers have the latest security technology and spend a whole lot more on security than any IT department ever could. They can hire experts that businesses can’t afford. They can hire experts that businesses can’t even find. That likely makes clouds much better at security than any business could be on its own. But absolutely, positively secure? In a word, no. After all, if security heavyweight RSA could itself be the victim of a huge breach in 2011, what does that mean for the rest of us?
Enough about the Apple case. Are you hypothesizing about the security of your systems, services, applications, and data? Sleeping well at night? Share your opinion — or your hypothesis; we’d like to hear from you.
As legacy applications from the on-premises datacenter are retired and replaced by cloud-based software-as-a-service subscriptions, those services need to be integrated. That’s the job of developers, armed with APIs and working under the aegis of the CIO. It’s all very tidy. But, what about integrating SaaS implementations that secretly came in through the back door without IT ever being aware? And what about when you find yourself in the crazy scenario of connecting a dozen instances of the very same SaaS?
It happens more often than you’d think, according to Liz Herbert, a vice president and principal analyst at Forrester Research. We’re all aware that different SaaS implementations, such as CRM, order fulfillment, payroll, inventory management, and others, need to be integrated. That’s neither news nor fodder for an opinion column. Much more interesting is what we never think about: the need to integrate multiple copies of the same SaaS.
“In a large enterprise, it is not uncommon to have 12 instances of Salesforce that are unaware of each other, usually because each one was brought in separately under the radar,” Herbert observed. It’s yet another aspect of that phenomenon we’ve come to know as Shadow IT or Citizen IT.
It happens because line-of-business departments want to do their own thing and not wait for IT to work its way down a list of pending projects. It can happen because IT doesn’t have the budget. Or because there’s a roadblock with a legacy CIO who thinks in legacy terms. It can happen because a department manager gets wind of what a different department is doing, likes the idea, but goes with his or her personal twist.
Typically, these SaaS instances become known to IT months after they were initiated, when the department manager asks IT to take over management and administration. Whether they conform to the corporate policies on security or governance was never considered — until now.
To be fair, this is not always due to illicit SaaS sign-ups. A more above-board scenario is when businesses go on acquisition sprees and have to meld all the pieces into a unified whole. Though each might use the same payroll processing SaaS, it’s a good bet that no two implementations are identical. That’s more likely to become an official IT project even though the same-but-different integration issues persist.
Have you found yourself in the midst of integrating multiple copies of the same SaaS? If so, we’d like to hear about your adventures in the discussion thread. And remember, it’s not just your organization. When it comes to cloud computing, “I thought it was just us” simply doesn’t apply.
You’re a developer and darned proud of the code you write. You follow the specs and build what the stakeholders and designers want. You’ve tested it and all the test scenarios work as expected. You deploy and the app goes into the wild. But, what happens when there’s a problem that no one anticipated? Not you, not the app owner, not QA, not ops, not anyone.
After having a problem with my mobile phone, I visited the local store of the phone manufacturer, a giant company named for a fruit. None of the store’s so-called geniuses could figure out why the docs and data on my phone kept ballooning up to fill — and even attempt to exceed — the 128 GB device’s available storage. It turns out that this giant hardware company (or is it a software company?) had no diagnostic software that could peek into the device and see what was suddenly occupying so much space, more than 23 GB. Manually adding up the data use reported by all the installed apps, plus recently deleted photos still on the device, totaled less than 1 GB.
Multiple reboots did not help. The only alternative, they said, was a reset and restore from my last backup. After doing that, I attempted to re-add a credit card to the phone’s wallet. That resulted in a “card is already in wallet” error though it clearly was not. Multiple calls with said fruit company’s tech support experts (on my land line) could not solve the problem. Again, there was no way to peer into the phone to get a snapshot.
Four calls later, one young man in the fruit company’s Jacksonville, Fla. call center suggested the purely undocumented move of logging out of my cloud account associated with my phone then logging back in. Voila! I could now add my credit card. Why do this? He couldn’t say.
Why did the phone operating software and its wallet app behave in this manner? No one knows. Why did logging out and back in solve the problem? No one knows. Was that the cloud equivalent of a three-fingered Ctrl-Alt-Del forced-reboot salute? No one knows. With tens of millions of phones in use that support wallets and credit cards, why hadn’t this been seen before? Or if it had been, why wasn’t it documented? No one knows. Web searches did yield some highly convoluted suggestions, though.
The innocent bystanders in this are the developers who build software based on a set of specs. I’m not saying that the developers were surrounded by fools to the left and jokers to the right, but it’s clear that neither the specs nor the test scripts anticipated this situation. Perhaps this couldn’t have been imagined. It’s not like the famous spec failure that says how to process a payment if it’s received less than 29 or more than 29 days after the due date, but fails to specify what to do if the payment is received exactly 29 days after the due date. That’s just bad design.
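That 29-day gap is worth seeing in code. Here is a minimal Python sketch of the flawed spec; the rule names and actions are my own invention:

```python
def payment_action(days_after_due: int) -> str:
    """Follows the spec as written: one rule for payments received
    less than 29 days after the due date, another for more than 29.
    The spec is silent about exactly 29."""
    if days_after_due < 29:
        return "assess standard late fee"
    if days_after_due > 29:
        return "refer to collections"
    # The case the spec never mentioned:
    raise ValueError("spec gap: no rule for exactly 29 days after due")
```

Every test drawn from the spec’s two stated cases passes; the failure waits for the first real payment that arrives exactly 29 days late.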
Burke Holland, director of developer relations at Progress Software, has reminded me more than once that great software isn’t finished until it’s fully tested. But, just as you can never prove that an app is completely secure, only that it is not secure, you can’t test for scenarios beyond your wildest imagination.
What really would have helped is powerful diagnostic software to do a storage autopsy. It didn’t exist. And that makes me wonder what we should be doing in terms of creating diagnostics for the apps we build. Do you do that? Share your thoughts; we’d like to hear from you.
It’s a good question to ask. You think your API calls are secure. But how do you know? It’s likely you don’t.
The woeful case of the all-electric Nissan Leaf has been beaten to a pulp recently by the tech news industry. It turns out the car can be accessed via an insecure API. Battery-charging functions and driving history can all be gotten to once you know the vehicle’s VIN, or vehicle identification number. And that, as you know, is embossed on an aluminum plate that’s always on display. All you need do is look through the windshield.
If this is really the best that a major automotive manufacturer can do, you’ve got to wonder about other companies whose pockets for developing and testing applications are not quite so deep. Nissan reacted by suspending its electric car app.
You can never prove that an application is secure. You can prove only that it is not secure.
This isn’t the first case of remote automotive hacking. You’ll remember last July that Chrysler issued an emergency software patch for 1.4 million vehicles once it became public that on-board software in Jeep, Ram, Durango, Chrysler 200 and 300, Challenger, and Viper models from 2013 to 2015 could be easily hacked.
Security researcher Troy Hunt, in a lengthy blog post, describes the entire Nissan Leaf scenario, complete with an embedded video and many code fragments. If you’re a developer, you should give the piece a thorough read. The issue isn’t that security was implemented incorrectly, but rather that it doesn’t seem to have been implemented at all.
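To make the point concrete, here’s a Python sketch of the class of request Hunt described. The host and parameter names below are hypothetical placeholders, not Nissan’s actual endpoints; what matters is that the only “credential” the server checks is the VIN itself:

```python
import urllib.parse

def leaf_status_url(vin: str) -> str:
    """Build a vehicle-status request whose sole identifying token is
    the VIN. No username, no password, no API key, no session."""
    query = urllib.parse.urlencode({"VIN": vin})
    return "https://telematics.example.invalid/BatteryStatusCheck?" + query

# Anyone who can read a VIN through a windshield can construct the call:
print(leaf_status_url("1N4AZ0CP0DC000000"))
```

An identifier printed on a public plate is not a secret, and a request built from public information alone is not authenticated.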
I don’t know what’s worse — bad security or no security.
Roberto Medrano, executive vice president at Akana, a provider of tools for creating, securing, and managing APIs, told me that applications, and the integrations among them, are becoming increasingly API-driven, making connections simple and straightforward. But it’s security that must always be top of mind.
Here’s what I’ve been saying about application security for years: You can never prove that an application is secure; you can prove only that it is not secure. How can that be? Think of it this way: If you run a million different attack scenarios on your app and none succeeds, you’ve proven only that those million don’t work. But it may be scenario 1,000,001, the one you hadn’t thought of, that breaks in. Thus, you can prove unequivocally only that something is not secure.
This is like a scientific hypothesis. You can never prove a hypothesis to be true, but you can prove it to be false.
What are you doing to test API security? Share your thoughts; we’d like to hear from you.
I’ve been writing opinion columns for various technology publications for more than a quarter century. Rarely have I seen anything touch a nerve to the degree of Facebook’s coming shutdown of the Parse mobile back-end as a service.
One thing that’s great to know is that developers are watching out for each other, offering up ideas for alternative services. As my own service, here’s a digest of some of what has crossed my inbox during the last week. Which are good, bad, or ugly products and services? That’s up to you to decide for your own mobile development projects.
Nimble Parse is a Parse-compatible API service from Nimble Stack that starts at $10 a month, including a half gig of memory and unlimited data. It offers three service levels up to 2 GB of memory.
Appcelerator, another mobile app development platform, feels your pain. To help, director of product architecture Rick Blalock is hosting a webcast on Feb. 17 to walk through a comparison of Parse and Appcelerator Arrow, showing how to migrate platforms and answering questions.
Syncano is a start-up that touts itself as a platform for creating serverless apps. In her blog post, Sara Cowie shares your sadness and confusion. The company is ready to provide its entire portfolio of features free for six months, including a dedicated support team to guide developers through the migration process.
GameSparks, another back-end service that seems to target developers of gaming apps, wants to provide you with an alternative integrated toolset for building, tuning, and managing server-side components.
I was contacted by apiOmat.com, yet another MBaaS, that appears to have more of an enterprise slant to its offerings. You can check it out and get started for free, though the pricing chart was in euros.
Other alternatives exist. These are the first few that reached out to me. No doubt we’ll all be looking into this a lot more in the next few months. Parse shuts down on Jan. 28, 2017. Don’t wait. Start now. And share with us your woes, your outrage, and your plans.
Were you caught off guard by Facebook’s abrupt Jan. 28 announcement that its Parse mobile backend as a service (MBaaS) was going to be shut down? You’re not alone. And we’d like to hear from you.
Outrage on the Twitter #parseshutdown page didn’t take long to get revved up. Developers who entrusted code or data to Parse bemoaned its impending demise, wondering what they would do next. Consultants and competing platform providers began to tweet advice for migrating data and applications or for offering replacement mobile development platforms.
Burke Holland, director of developer relations at Progress Software, told me about the plight of one developer, victimized by the sudden and unexpected announcement. “I saw a message on Reddit from a developer who said he had deployed his app on Parse literally two hours before the announcement,” he said. He went on to say that there are small developers who may have built their entire business on Parse and who don’t have the latitude to take a hit like this. He’s right.
For Facebook, it may be that Parse simply wasn’t a profitable business, Richard Mendis, chief product officer of MBaaS provider AnyPresence, told me.
What we may have lost sight of is how Facebook views the people who use its various services.
In my opinion, whether we’re posting photos of the new grandkid, linking to videos of cats playing piano, or using the company’s APIs to develop compatible applications, remember this: We are not Facebook’s customers, we are Facebook’s product.
Facebook is in the business of generating revenue, mostly through the sales of advertising that reaches the likes of you and me. If you are not paying money to Facebook, you are not a Facebook customer. What the company is doing is delivering eyeballs (that’s us users) to its paying advertisers.
Al Hilwa, program director of software development research at IDC, believes the company had hundreds of developers working on Parse, at a cost approaching $50 million that could better be spent elsewhere. Apparently so.
Facebook is taking a year to wind down Parse. It released a database migration tool to ease the transition to any MongoDB database. It also published a migration guide and its open source Parse Server, which provides developers with the ability to run much of the Parse API from any Node.js server of their choosing. Final shutdown will occur on Jan. 28, 2017, exactly one year after the closure was announced.
If you were developing apps on the Parse platform, what’s the impact and what are you going to do about it? Where do you intend to move your code and data? Join the discussion, we’d like to hear from you.
When you think cloud computing platforms, do you think Walmart? I certainly didn’t. That’s changing.
In a WalmartLabs blog post, Jeremy King, CTO of Walmart Global eCommerce, and Tim Kimmet, VP of Platform and Systems, announced that following more than two years of development and testing, the company is making its OneOps cloud management and application lifecycle management platform available to the open source community. Talk about pricing rollbacks. If you’re inclined to dive in, you can download the source code from GitHub.
Whatever you think of Walmart as a company, the company has long been a leader in advancing and leveraging technology in the retail industry. As a former director of point-of-sale systems at two national retailers, I saw the technology move from Kimball punch tickets to OCR-A to barcode, RFID, and beyond. Behind that are warehousing systems, sales analysis for merchandise replenishment, and much more. Today, with smartphones and omnichannel marketing strategies, shopping is a vastly different experience than it was even just a decade ago. OneOps is used to manage the Walmart and Sam’s Club e-commerce sites.
But, what WalmartLabs is doing is not just about retail, it’s about any organization that relies on the cloud for its IT needs. Add to that the idea of eliminating cloud provider vendor lock-in, and we might be in for quite a shake-up.
It appears a key benefit of OneOps is the ability for cloud users to avoid vendor lock-in to any one cloud provider. In the words of King and Kimmet, developers can use OneOps to “test and switch between different cloud providers to take advantage of better pricing, technology and scalability.” Impressive. One promise of the cloud has been ease of portability, but, in practice, that’s often not the case.
Four clouds are currently supported: OpenStack, Rackspace, Microsoft Azure, and Amazon Web Services. CenturyLink support is said to be on the way. Nearly three dozen development products are currently supported, including Tomcat, Node.js, Docker, Ruby, Java, and JBoss, to name a few.
Other features include auto-scaling, metrics instrumentation, auto-healing with automatic instance replacement, and perhaps most important, the idea of out-of-the box readiness for multiple public and private cloud infrastructure providers.
What do you think about this? Would you try it out and perhaps place your business’s existence in the hands of software from Walmart? It’s going to be a fascinating ride. Share your thoughts about this, we’d like to hear from you.
If there’s one thing we can say with certainty about “IT,” it’s that both the information and the technology are constantly changing.
You know about the information: volume, velocity, and variety are exploding, forcing us to be ever more vigilant about the fourth “v,” veracity. What has my interest piqued is the other side, the technology of handling all that data. More interesting still is the way Apache Spark is pushing MapReduce aside for clustered processing of large data volumes.
There is no doubt that Spark’s swift growth is coming at the expense of the MapReduce component of the Apache Hadoop software framework. Consider this: In its December 2015 survey of 3,100 IT professionals (59% of whom are developers), Typesafe, a San Francisco maker of development tools, found that 22% of respondents are actively working with Spark.
So what’s the allure of Spark? I asked Anand Iyer, senior product manager at Cloudera, the first company to commercialize Apache Hadoop. “Compared with MapReduce, Spark is almost an order of magnitude faster, it has a significantly more approachable and extensible API, and it is highly scalable,” he said. “Spark is a fantastic, flexible engine that will eventually replace MapReduce.” Cloudera isn’t wasting time: In September 2015, the company revved up its efforts to position Spark as the successor to Hadoop’s MapReduce framework.
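To give a flavor of the “more approachable API” claim, here’s a pure-Python analogy (not actual Hadoop or Spark code): the MapReduce style forces even a word count into separate map and reduce phases, while Spark expresses the same job as one chained pipeline.

```python
from functools import reduce

lines = ["spark pushes mapreduce aside", "spark is fast", "mapreduce is batch"]

# MapReduce style: explicit, separate map and reduce phases.
mapped = [(word, 1) for line in lines for word in line.split()]

def reducer(acc, pair):
    word, count = pair
    acc[word] = acc.get(word, 0) + count
    return acc

counts = reduce(reducer, mapped, {})
print(counts["spark"], counts["mapreduce"])  # 2 2

# Spark style: the same word count reads as one chained pipeline, roughly:
#   sc.textFile(path).flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)
```

The work is identical; what changes is how directly the code states the developer’s intent.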
Even IBM is on board. In June 2015, IBM called Spark “potentially the most significant open source project of the next decade.” IBM is already working to embed Spark in its analytics and commerce platforms, and it is assigning 3,500 researchers to Spark-related projects. Yes, 3,500 is correct.
Research firm Gartner, reacting to the IBM initiative, said information and analytics leaders must begin working to ensure that they have the needed knowledge and skills.
And that’s the key. As a developer, you must work with an ever-changing toolbox. New skills must be acquired and mastered, and sometimes old ones left behind. Though MapReduce is not going to disappear anytime soon, the shift from MapReduce to Spark appears to be happening with astonishing speed. Are you ready?
What are your organization’s plans with respect to MapReduce and Spark? Are you planning to switch? Or maybe you’ve never even gone so far as to implement MapReduce. Share your opinions on this important topic. We’d like to hear from you.
Today is the day that Microsoft discontinues all support for the loathed Windows 8 operating system, and stops issuing security patches for Internet Explorer versions 8, 9, and 10. Windows 7 and 8.1, along with IE 11 are safe — for now.
The message is clear: upgrade or risk the possibility of bad people doing bad things. If you think this is moot and that most people have already upgraded, well, think again. According to Statista’s figures on global desktop operating-system market share, Windows XP still had an 8.44% share as of December 2015, even though support was killed off in April 2014. Even the despised Windows Vista still maintained a 1.78% share. It’s not as strange as you might think. Up until a couple of years ago, I knew of a small business that was still using Windows 2000 on roughly a dozen laptops.
There’s one small twist. Since systems running Vista cannot run IE 10 or 11, they still get support for IE 9. That ends in April 2017 when all support for Vista terminates.
It’s clear that Microsoft wants users to be on Win 10 and the Edge browser. Those monumentally annoying pop-ups admonishing me to “upgrade now” aren’t going away until I do. And I will. Soon. Really.
The rules for deep-pocketed corporations and government agencies are a bit different. Those that are willing to pay for continued support of these older products can get it, for now. No doubt you read in June 2015 that your United States Navy is shelling out $9 million a year to have Microsoft continue support for XP.
If your organization or your clients have applications that require these older versions of Windows and IE, the proverbial window is closing. Here are Microsoft’s official end-of-life dates that you need to keep in mind: Vista, Apr. 11, 2017; Win 7, Jan. 14, 2020; Win 8.1, Jan. 10, 2023; and Win 10, Oct. 14, 2025.
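Those dates are simple enough to encode as a lookup. A minimal Python sketch (the product labels and function name are my own):

```python
from datetime import date

# Microsoft's official end-of-support dates listed above.
EOL = {
    "Windows Vista": date(2017, 4, 11),
    "Windows 7": date(2020, 1, 14),
    "Windows 8.1": date(2023, 1, 10),
    "Windows 10": date(2025, 10, 14),
}

def still_patched(product: str, today: date) -> bool:
    """True if the product still receives security patches on `today`."""
    return today <= EOL[product]

print(still_patched("Windows 7", date(2016, 1, 12)))  # True
```

A check like this, run against your application inventory, turns the proverbial closing window into a concrete deadline.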
It’s time to start the process for movin’ on up to Windows 10 and Edge.
Are you running into issues with applications that require versions of Windows or IE that are no longer supported? What is your CIO doing about it? Tell us, we’d like to hear from you.