Head in the Clouds: SaaS, PaaS, and Cloud Strategy


April 15, 2016  12:19 PM

Developers need to be aware of avoidable costs

Joel Shore
Container virtualization, cost reduction, SQL Server

What do developers worry about when creating an application? Performance. Data validation. Correct logic and processing. Memory use. Concise code. What they tend not to concern themselves with is cost. Perhaps that needs to change.

In doing research for a story on containers as a service (CaaS), both users I interviewed railed about license costs, at least as they pertain to Microsoft SQL Server. Both make a good case for paying close attention to how many instances of SQL Server are running, the number of servers on which they run, and, especially, the proliferation of home-grown or purchased applications that expect their own personal instance of SQL Server.

The first hint of this came from Don Boxley, CEO of DH2i, a Fort Collins, Colorado company that makes SQL Server containerization software, primarily to endow databases with portability for easy movement from a development environment to production or from one cloud provider to another. Being able to stack containers on physical or virtual machines contributes to cost savings, he explained.

Well, that’s fine when a vendor pitching a product floats the idea, but how does this work out in the real world, in real businesses, with real applications? Turns out it’s a big deal.

Michael York, a systems engineer at Asante, a major regional healthcare system in Oregon, lives with the realities every day. “We have nearly 100 applications that have a SQL Server back end,” he told me. Most were purchased apps that stipulated a dedicated instance of SQL Server as a requirement. Add an app here and another there, and pretty soon you’re suffering from what York characterized as database sprawl. “It’s easy to stand up another instance,” he said. Getting the job done was the developer’s primary concern, not licensing costs. Running one instance per server sped development. Developers, he said, were often not even aware that instances could be stacked.

Through containerization, stacking instances, and de-provisioning of instances that were unnecessary, Asante saved more than $200,000 in 2015.

Tammy Lawson, a database administrator at Sonoco, the South Carolina global product-packaging giant (containers of a different sort), laid it out in very precise terms: “If each of my 61 container SQL instances were on its own server, the SQL Standard license for each server would be around $16K (using 2×8 AMD processors in my calculation since that is what my [DH2i] DxEnterprise physical servers are). That is a total of $976,000 just for Standard licenses. Buying SQL for my 4 Dx cluster nodes ($65K) + the DxEnterprise software came nowhere close to this number. Big savings in the licensing department.” Miss Lawson knows her stuff.
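To put her arithmetic in one place, here is a minimal back-of-envelope sketch in Python that uses only the figures Lawson cites. The per-server license price, instance count, and node count are her numbers; the DxEnterprise cost is not disclosed, so it is left as a plug-in, and real SQL Server licensing (per-core pricing, Software Assurance, virtualization rights) will vary.

    # Back-of-envelope comparison using only the figures quoted above.
    # Assumptions: $16K per-server SQL Standard license, 61 instances,
    # 4 cluster nodes licensed for $65K total. DxEnterprise pricing is
    # not disclosed in the quote, so plug in your own number.

    instances = 61
    license_per_server = 16_000                 # SQL Standard on a 2x8-core server
    one_per_server_cost = instances * license_per_server   # $976,000

    cluster_nodes_sql_cost = 65_000             # SQL licenses for the 4 Dx cluster nodes
    dx_enterprise_cost = 0                      # placeholder; substitute your quote

    stacked_cost = cluster_nodes_sql_cost + dx_enterprise_cost
    savings = one_per_server_cost - stacked_cost

    print(f"One instance per server: ${one_per_server_cost:,}")
    print(f"Stacked on 4 nodes:      ${stacked_cost:,} (plus DxEnterprise)")
    print(f"Licensing saved:         ${savings:,} before the DxEnterprise cost")

Even with a generous allowance for the containerization software itself, the gap between $976,000 and $65,000 is the whole argument.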

While it’s eminently clear that performance matters most, how much you spend year after year is important, too, especially if much of that spending could be avoided. If you properly stack instances on servers to mix and match database demand for the greatest efficiency, containers can save huge amounts of money. Developers need to know.

What strategies does your organization use to minimize licensing and maintenance fees? Share your ideas; we’d like to hear from you.

April 6, 2016  5:00 PM

Could the Internet of Things morph into the Abandonment of Things?

Joel Shore

Chrysler killed off Plymouth. GM did it to Oldsmobile and Pontiac. Ford did it to Mercury. Microsoft even did it to Windows XP. Yet today, years after the demise of these products, they all continue to run. For the vehicles, parts remain available and dealers are happy to perform maintenance and take your money. Even for XP, Microsoft still issues the occasional security patch. Elsewhere, ink cartridges continue to be available for Epson inkjet printers discontinued long ago.

So, what’s the problem with the Internet of Things?

Consider Nest’s decision to kill off its $299 Revolv home-automation hub. Revolv (the company) was acquired by Nest in October 2014; Nest immediately stopped selling Revolv (the product). The service stayed up, however. Not for much longer. “As of May 15, 2016, your Revolv hub and app will no longer work,” states a message on the Revolv website from its founders. “The Revolv app won’t open and the hub won’t work.” Tough luck, buddy. And not exactly a PR-friendly warm and fuzzy writing style, either. (Nest itself had been snapped up by Google in January 2014.)

It’s reasonable to think that it was Revolv’s intellectual assets, especially its talented IoT developers, that Nest coveted. If you’re an IoT developer, or plan to become one, you should view the move as a signal that this is a great career path.

Initially, Nest (or Google or Alphabet) had no plans to compensate Revolv owners, but that now appears to be changing. In a statement provided to CNBC, a Nest spokesman said it will consider providing customers with compensation on a case-by-case basis. What form that compensation takes was not detailed. Either way, these customers are still left with nothing but a fancy paperweight. It’s the Abandonment of Things. Curiously, Google is refunding the full purchase price of its acquired Nik photo-editing software suite now that the company is making it free — and not shutting it down.

Do you think that’s fair? Shutting down the cloud component of your IoT product, turning faithful customers into orphans, is a shameful business strategy. Can you imagine a maker of implantable cardiac pacemakers attempting a similar tactic? (Then again, no one jumped up to compensate me when it became impossible to buy blank VHS tapes for my VCR.)

I have an Acer laptop that’s nearly 20 years old. It runs Windows 95, still functions perfectly, and through an RS-232 serial-port powerline interface still runs an ancient version of the superb HomeSeer application, communicating with switch modules to control lighting in my home via the long-defunct X10 protocol. The laptop sits in a closet, out of sight, out of mind, plugged into a small UPS unit for power protection, and perfectly secure because it has no network connection. It’s all hilariously obsolete, but no one ever tried to shut any of these products down.

As developers, we understand that even the simplest of IoT products represents a significant investment: embedded software to make the thing work, server-side applications to process messages or send out alerts, databases for maintaining user accounts, iOS and Android mobile apps for controlling devices from your reclining chair, and more. There are license fees for software libraries, too.

I can understand the underlying economic reason for leaving the past behind, but in this connected age, before you arbitrarily put a bullet through your products and applications, you’d best provide a soft landing for the people who paid for the privilege of using them.

What do you think? Have you written code for products your company killed off, leaving customers with no escape route? Share your thoughts; we’d like to hear from you.


March 29, 2016  12:49 PM

Nothing is 100% secure, not even an iPhone

Joel Shore
iPhone, Security

Last night, the FBI announced that it was dropping its litigation against Apple, because it had found an alternative way into the iPhone that had belonged to one of the San Bernardino terrorists. It proves, yet again, that nothing is ever completely, totally secure. Suing Apple simply became moot.

To the best of my knowledge, Apple never said that it couldn’t gain access to the phone, only that it wouldn’t. I’m not here to prosecute or defend Apple, nor to debate the social or legal issues raised by the case. What I will do is opine about what we believe to be secure.

I’ve always likened device or application security to the idea of the scientific hypothesis. While it’s possible to absolutely prove a hypothesis to be false — all it takes is a single test case — you can never prove a hypothesis to be true. Every time you run a test that doesn’t destroy your hypothesis, all you’ve done is bolster support for it. But, you haven’t proved it true. Stop after a million tests that all support your hypothesis, and it still could be test 1,000,001, the one your imagination never conjured up, that smashes it to bits. You get the idea.
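If it helps to see that logic rather than read it, here is a minimal illustrative sketch in Python. The check function and the test-case generator are invented stand-ins, not any real test suite: a million passing cases only fail to falsify the hypothesis, while a single failing case would settle the matter instantly.

    def hypothesis_holds(case) -> bool:
        """Stand-in for whatever property you believe is always true.
        Replace with a real check against your device or application."""
        return case != 1_000_001       # the one input imagination never conjured up

    def generate_case(i: int):
        return i                       # stand-in for a test-case generator

    falsified = False
    for i in range(1_000_000):         # a million supporting tests...
        if not hypothesis_holds(generate_case(i)):
            falsified = True           # one counterexample is proof of falsity
            break

    # falsified == False does NOT prove the hypothesis true; it only means
    # none of these particular cases disproved it.
    print("falsified" if falsified else "still unproven, merely supported")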

Testing for security is the same. In the digital realm, no matter how many times your security tests stand up to scrutiny, it just might be the very next one that lets the bad guys in. Who, after all, is the party that came forward to teach the FBI how to gain access to the infamous iPhone? Fortunately, it was not a malicious hacker, but whoever it was proved there was a way in, regardless of how secure Apple wanted us to believe the device was. (The data recovered from the phone still needs to be decrypted, but that’s separate from getting to the data.) Apple now vows to tighten security further.

One of the traditional arguments against cloud computing continues to be that some CIOs feel uncomfortable about security. Where are these cloud servers? Who is managing them? And wouldn’t 10,000 corporations virtually situated in the same gigantic physical datacenter be much more of a sitting duck target than a handful of servers buried deep in the bowels of 10,000 different corporate headquarters facilities? They’re all legitimate questions.

Cloud providers have the latest security technology and spend a whole lot more on security than any IT department ever could. They can hire experts that businesses can’t afford. They can hire experts that businesses can’t even find. It likely makes clouds much better at security than any business could do on its own, but absolutely, positively secure? In a word, no. After all, if security heavyweight RSA could itself be the victim of a huge breach in 2011, what does that mean for the rest of us?

Enough about the Apple case. Are you hypothesizing about the security of your systems, services, applications, and data? Sleeping well at night? Share your opinion — or your hypothesis; we’d like to hear from you.


March 24, 2016  10:08 AM

When multi-SaaS integration is a repeat offender

Joel Shore
Application integration, SaaS applications, Shadow IT

As legacy applications from the on-premises datacenter are retired and replaced by cloud-based software-as-a-service subscriptions, those services need to be integrated. That’s the job of developers, armed with APIs and working under the aegis of the CIO. It’s all very tidy. But, what about integrating SaaS implementations that secretly came in through the back door without IT ever being aware? And what about when you find yourself in the crazy scenario of connecting a dozen instances of the very same SaaS?

It happens more often than you’d think, according to Liz Herbert, a vice president and principal analyst at Forrester Research. We’re all aware that different SaaS implementations, such as CRM, order fulfillment, payroll, inventory management, and others, need to be integrated. That’s neither news nor fodder for an opinion column. Much more interesting is what we rarely think about: the need to integrate multiple copies of the same SaaS.

“In a large enterprise, it is not uncommon to have 12 instances of Salesforce that are unaware of each other, usually because each one was brought in separately under the radar,” Herbert observed. It’s yet another aspect of that phenomenon we’ve come to know as Shadow IT or Citizen IT.
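As a rough illustration of what “unaware of each other” means in practice, here is a hedged Python sketch that pulls Account names from two hypothetical Salesforce orgs over the standard REST query endpoint and flags the overlap. The org URLs, tokens, and API version are placeholders, pagination is ignored, and a real reconciliation would need proper OAuth and much fuzzier matching than exact names.

    import requests

    # Hypothetical credentials for two separately provisioned Salesforce orgs.
    ORGS = {
        "sales_na":   {"url": "https://na-example.my.salesforce.com",   "token": "TOKEN_A"},
        "sales_emea": {"url": "https://emea-example.my.salesforce.com", "token": "TOKEN_B"},
    }

    def account_names(org):
        """Query Account names from one org via the REST query endpoint."""
        resp = requests.get(
            f"{org['url']}/services/data/v36.0/query",
            params={"q": "SELECT Name FROM Account"},
            headers={"Authorization": f"Bearer {org['token']}"},
        )
        resp.raise_for_status()
        return {rec["Name"] for rec in resp.json()["records"]}

    names = {label: account_names(org) for label, org in ORGS.items()}
    overlap = names["sales_na"] & names["sales_emea"]
    print(f"{len(overlap)} accounts appear in both orgs and need reconciling")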

It happens because line-of-business departments want to do their own thing and not wait for IT to work its way down a list of pending projects. It can happen because IT doesn’t have the budget. Or because there’s a roadblock with a legacy CIO who thinks in legacy terms. It can happen because a department manager gets wind of what a different department is doing, likes the idea, but goes with his or her personal twist.

Typically, these SaaS instances become known to IT months after they were initiated, when the department manager asks IT to take over management and administration. Whether they conform to the corporate policies on security or governance was never considered — until now.

To be fair, this is not always due to illicit SaaS sign-ups. A more above-board scenario is when businesses go on acquisition sprees and have to meld all the pieces into a unified whole. Though each might use the same payroll processing SaaS, it’s a good bet that no two implementations are identical. That’s more likely to become an official IT project even though the same-but-different integration issues persist.

Have you found yourself in the midst of integrating multiple copies of the same SaaS? If so, we’d like to hear about your adventures in the discussion thread. And remember, it’s not just your organization. When it comes to cloud computing, “I thought it was just us” simply doesn’t apply.


March 15, 2016  2:07 PM

When no one knows why an app does what it does

Joel Shore
Application testing, Mobile Application Development, QA testing

You’re a developer and darned proud of the code you write. You follow the specs and build what the stakeholders and designers want. You’ve tested it and all the test scenarios work as expected. You deploy and the app goes into the wild. But, what happens when there’s a problem that no one anticipated? Not you, not the app owner, not QA, not ops, not anyone.

After having a problem with my mobile phone, I visited the local store of the phone manufacturer, a giant company named for a fruit. None of the store’s so-called geniuses could figure out why the docs and data on my phone kept ballooning up to fill — and even attempt to exceed — the 128 GB device’s available storage. It turns out that this giant hardware company (or is it a software company?) had no diagnostic software that could peek into the device and see what was suddenly occupying so much space, more than 23 GB. Manually adding up the data use reported by all the installed apps, plus recently deleted photos still on the device, totaled less than 1 GB.

Multiple reboots did not help. The only alternative, they said, was a reset and restore from my last backup. After doing that, I attempted to re-add a credit card to the phone’s wallet. That resulted in a “card is already in wallet” error though it clearly was not. Multiple calls with said fruit company’s tech support experts (on my land line) could not solve the problem. Again, there was no way to peer into the phone to get a snapshot.

Four calls later, one young man in the fruit company’s Jacksonville, Fla. call center suggested the purely undocumented move of logging out of my cloud account associated with my phone then logging back in. Voila! I could now add my credit card. Why do this? He couldn’t say.

Why did the phone operating software and its wallet app behave in this manner? No one knows. Why did logging out and back in solve the problem? No one knows. Was that the cloud equivalent of a three-fingered Ctrl-Alt-Del forced-reboot salute? No one knows. With tens of millions of phones in use that support wallets and credit cards, why hadn’t this been seen before? Or if it had been, why wasn’t it documented? No one knows. Web searches did yield some highly convoluted suggestions, though.

The innocent bystanders in this are the developers who build software based on a set of specs. I’m not saying that the developers were surrounded by fools to the left and jokers to the right, but it’s clear that neither the specs nor the test scripts anticipated this situation. Perhaps this couldn’t have been imagined. It’s not like the famous spec failure that says how to process a payment if it’s received less than 29 or more than 29 days after the due date, but fails to specify what to do if the payment is received exactly 29 days after the due date. That’s just bad design.

Burke Holland, director of developer relations at Progress Software, has reminded me more than once that great software isn’t finished until it’s fully tested. But, just as you can never prove that an app is completely secure, only that it is not secure, you can’t test for scenarios beyond your wildest imagination.

What really would have helped is powerful diagnostic software to do a storage autopsy. It didn’t exist. And that makes me wonder what we should be doing in terms of creating diagnostics for the apps we build. Do you do that? Share your thoughts; we’d like to hear from you.


February 29, 2016  9:23 AM

Are your cloud apps’ APIs secure?

Joel Shore
API, API Testing, Application security

It’s a good question to ask. You think your API calls are secure. But, how do you know? It’s likely you don’t.

The woeful case of the all-electric Nissan Leaf has been beaten to a pulp recently by the tech news industry. It turns out the car can be accessed via an insecure API. Battery-charging functions and driving history can all be accessed once you know the vehicle’s VIN, or vehicle identification number. And that, as you know, is embossed on an aluminum plate that’s always on display. All you need do is look through the windshield.

If this is really the best that a major automotive manufacturer can do, you’ve got to wonder about other companies whose pockets for developing and testing applications are not quite so deep. Nissan reacted by suspending its electric car app.

You can never prove that an application is secure. You can prove only that it is not secure.

This isn’t the first case of remote automotive hacking. You’ll remember last July that Chrysler issued an emergency software patch for 1.4 million vehicles once it became public that on-board software in Jeep, Ram, Durango, Chrysler 200 and 300, Challenger, and Viper models from 2013 to 2015 could be easily hacked.

Security researcher Troy Hunt, in a lengthy blog post, describes the entire Nissan Leaf scenario, complete with an embedded video and many code fragments. If you’re a developer, you should give the piece a thorough read. The issue isn’t that security was implemented incorrectly, but rather that it doesn’t seem to have been implemented at all.
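To make the contrast concrete, here is a minimal illustrative sketch in Python using Flask. The route names, lookup function, and token check are invented for the example; this is not Nissan’s actual API. It shows the difference between an endpoint that trusts a publicly visible identifier and one that also demands a credential tied to the owner.

    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)

    # What the Leaf API effectively did: the VIN, visible through the
    # windshield, is the only thing the request needs.
    @app.route("/insecure/battery/<vin>")
    def battery_insecure(vin):
        return jsonify(lookup_battery_status(vin))

    # What it should have required: a token issued to the authenticated
    # owner, checked on every call; the VIN alone proves nothing.
    @app.route("/secure/battery/<vin>")
    def battery_secure(vin):
        token = request.headers.get("Authorization", "").removeprefix("Bearer ")
        if not token_belongs_to_owner_of(token, vin):    # hypothetical check
            abort(401)
        return jsonify(lookup_battery_status(vin))

    def lookup_battery_status(vin):
        return {"vin": vin, "charge": "unknown"}         # stand-in for real data

    def token_belongs_to_owner_of(token, vin):
        return False                                     # stand-in; wire to your auth system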

I don’t know what’s worse — bad security or no security.

Roberto Medrano, executive vice president at Akana, a provider of tools for creating, securing, and managing APIs, told me that applications, and the integrations among them, are becoming increasingly API-driven, making connections simple and straightforward. But it’s security that must always be top of mind.

Here’s what I’ve been saying about application security for years: You can never prove that an application is secure; you can prove only that it is not secure. How can that be? Think of it this way: If you run a million different attack scenarios on your app and none succeeds, you’ve proven only that those particular attacks don’t work. But maybe it’s scenario 1,000,001, the one you hadn’t thought of, that will break in. A single successful attack, on the other hand, proves unequivocally that something is not secure.

This is like a scientific hypothesis. You can never prove a hypothesis to be true, but you can prove it to be false.

What are you doing to test API security? Share your thoughts; we’d like to hear from you.


February 10, 2016  12:59 PM

Parse shutdown: Alternatives are asking for your business

Joel Shore
Application migration, MBaaS, Mobile Application Development

I’ve been writing opinion columns for various technology publications for more than a quarter century. Rarely have I seen anything touch a nerve to the degree of Facebook’s coming shutdown of the Parse mobile back-end as a service.

One thing that’s great to know is that developers are watching out for each other, offering up ideas for alternative services. As a service of my own, here’s a digest of some of what has crossed my inbox during the last week. Which are good, bad, or ugly products and services? That’s up to you to decide for your own mobile development projects.

Nimble Parse is a Parse-compatible API service from Nimble Stack that starts at $10 a month, including a half gig of memory and unlimited data. It offers three service levels with up to 2 GB of memory.

William Hoang, a mobile developer advocate at Couchbase, has written a blog piece with code fragments that demonstrates migration of a Parse app to Couchbase Mobile, backed by Digital Ocean.

Appcelerator, another mobile app development platform, feels your pain. To help, director of product architecture Rick Blalock is hosting a webcast on Feb. 17 to walk through a comparison of Parse and Appcelerator Arrow, show how to migrate platforms, and answer questions.

Syncano is a start-up that touts itself as a platform for creating serverless apps. In her blog post, Sara Cowie shares your sadness and confusion. The company is ready to provide its entire portfolio of features free for six months, including a dedicated support team to guide developers through the migration process.

GameSparks, another back-end service that seems to target developers of gaming apps, wants to provide you with an alternative integrated toolset for building, tuning, and managing server-side components.

I was contacted by apiOmat.com, yet another MBaaS that appears to have more of an enterprise slant to its offerings. You can check it out and get started for free, though the pricing chart was in euros.

Other alternatives exist. These are the first few that reached out to me. No doubt we’ll all be looking into this a lot more in the next few months. Parse shuts down on Jan. 28, 2017. Don’t wait. Start now. And share with us your woes, your outrage, and your plans.


February 3, 2016  8:35 AM

Parse shutdown: What now?

Joel Shore
Data migration, Mobile Application Development, MongoDB

Were you caught off guard by Facebook’s abrupt Jan. 28 announcement that its Parse mobile backend as a service (MBaaS) was going to be shut down? You’re not alone. And we’d like to hear from you.

Outrage on the Twitter #parseshutdown page didn’t take long to get revved up. Developers who entrusted code or data to Parse bemoaned its impending demise, wondering what they would do next. Consultants and competing platform providers began to tweet advice for migrating data and applications or to pitch replacement mobile development platforms.

Burke Holland, director of developer relations at Progress Software, told me about the plight of one developer, victimized by the sudden and unexpected announcement. “I saw a message on Reddit from a developer who said he had deployed his app on Parse literally two hours before the announcement,” he said. He went on to say that there are small developers who may have built their entire business on Parse and who don’t have the latitude to take a hit like this. He’s right.

For Facebook, it may be that Parse simply wasn’t a profitable business, Richard Mendis, chief product officer of MBaaS provider AnyPresence, told me.

What we may have lost sight of is how Facebook views the people who use its various services.

In my opinion, whether we’re posting photos of the new grandkid, linking to videos of cats playing piano, or using the company’s APIs to develop compatible applications, remember this: We are not Facebook’s customers, we are Facebook’s product.

Facebook is in the business of generating revenue, mostly through the sales of advertising that reaches the likes of you and me. If you are not paying money to Facebook, you are not a Facebook customer. What the company is doing is delivering eyeballs (that’s us users) to its paying advertisers.

Al Hilwa, program director of software development research at IDC, believes the company had hundreds of developers working on Parse, at a cost approaching $50 million that could be better spent elsewhere. Apparently so.

Facebook is taking a year to wind down Parse. It released a database migration tool to ease the transition to any MongoDB database. It also published a migration guide and its open source Parse Server, which provides developers with the ability to run much of the Parse API from any Node.js server of their choosing. Final shutdown will occur on Jan. 28, 2017, exactly one year after the closure was announced.
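If you are planning a move, a sensible first step is simply getting your data out. Here is a hedged Python sketch that pages through one Parse class over the hosted REST API and writes the objects to a local JSON file. The application ID, master key, and class name are placeholders, and the official migration tool or a Parse Server import remains the supported path; this is only an illustration of what an export loop looks like.

    import json
    import requests

    # Placeholders: substitute your own application ID, master key, and class.
    APP_ID = "YOUR_APP_ID"
    MASTER_KEY = "YOUR_MASTER_KEY"
    CLASS_NAME = "GameScore"

    HEADERS = {"X-Parse-Application-Id": APP_ID, "X-Parse-Master-Key": MASTER_KEY}

    def export_class(class_name, batch=100):
        """Page through a Parse class and return all of its objects."""
        objects, skip = [], 0
        while True:
            resp = requests.get(
                f"https://api.parse.com/1/classes/{class_name}",
                headers=HEADERS,
                params={"limit": batch, "skip": skip},
            )
            resp.raise_for_status()
            results = resp.json()["results"]
            if not results:
                return objects
            objects.extend(results)
            skip += batch

    with open(f"{CLASS_NAME}.json", "w") as fh:
        json.dump(export_class(CLASS_NAME), fh, indent=2)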

If you were developing apps on the Parse platform, what’s the impact and what are you going to do about it? Where do you intend to move your code and data? Join the discussion, we’d like to hear from you.


January 28, 2016  6:43 PM

Walmart shakes up application lifecycle management

Joel Shore
Application Lifecycle Management, Cloud management, DevOps

When you think cloud computing platforms, do you think Walmart? I certainly didn’t. That’s changing.

In a WalmartLabs blog post, Jeremy King, CTO of Walmart Global eCommerce, and Tim Kimmet, VP of Platform and Systems, announced that following more than two years of development and testing, the company is making its OneOps cloud management and application lifecycle management platform available to the open source community. Talk about pricing rollbacks. If you’re inclined to dive in, you can download the source code from GitHub.

Whatever you think of Walmart as a company, it has long been a leader in advancing and leveraging technology in the retail industry. As a former director of point-of-sale systems at two national retailers, I saw the technology move from Kimball punch tickets to OCR-A to barcode, RFID, and beyond. Behind that are warehousing systems, sales analysis for merchandise replenishment, and much more. Today, with smartphones and omnichannel marketing strategies, shopping is a vastly different experience than it was even just a decade ago. OneOps is used to manage the Walmart and Sam’s Club e-commerce sites.

But, what WalmartLabs is doing is not just about retail; it’s about any organization that relies on the cloud for its IT needs. Add to that the idea of eliminating cloud provider vendor lock-in, and we might be in for quite a shake-up.

It appears a key benefit of OneOps is the ability for cloud users to avoid vendor lock-in to any one cloud provider. In the words of King and Kimmet, developers can use OneOps to “test and switch between different cloud providers to take advantage of better pricing, technology and scalability.” Impressive. One promise of the cloud has been ease of portability, but, in practice, that’s often not the case.

Four clouds are currently supported: OpenStack, Rackspace, Microsoft Azure, and Amazon Web Services. CenturyLink support is said to be on the way. Nearly three dozen development products are also supported, including Tomcat, Node.js, Docker, Ruby, Java, and JBoss.

Other features include auto-scaling, metrics instrumentation, auto-healing with automatic instance replacement, and, perhaps most important, out-of-the-box readiness for multiple public and private cloud infrastructure providers.

What do you think about this? Would you try it out and perhaps place your business’s existence in the hands of software from Walmart? It’s going to be a fascinating ride. Share your thoughts about this, we’d like to hear from you.


January 19, 2016  11:58 AM

Spark is overtaking MapReduce. Are you ready?

Joel Shore
Apache Spark, Application development, Hadoop, MapReduce

If there’s one thing we can say with certainty about “IT,” it’s that both the information and the technology are constantly changing.

You know about the information; volume, velocity, and variety are exploding, forcing us to be ever more vigilant about the fourth “v,” veracity. What has my interest piqued is the other side, the technology of handling all that data. More interesting still is the way Apache Spark is pushing MapReduce aside for clustered processing of large data volumes.

There is no doubt that Spark’s swift growth is coming at the expense of the MapReduce component of the Apache Hadoop software framework. Consider this: In its December 2015 survey of 3,100 IT professionals (59% of whom are developers), Typesafe, a San Francisco maker of development tools, found that 22% of respondents are actively working with Spark.

So what’s the allure of Spark? I asked Anand Iyer, senior product manager at Cloudera, the first company to commercialize Apache Hadoop. “Compared with MapReduce, Spark is almost an order of magnitude faster, it has a significantly more approachable and extensible API, and it is highly scalable,” he said. “Spark is a fantastic, flexible engine that will eventually replace MapReduce.” Cloudera isn’t wasting time: In September 2015, the company revved up its efforts to position Spark as the successor to Hadoop’s MapReduce framework.
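For a feel of why developers call the Spark API approachable, here is the canonical word-count job in a few lines of PySpark. The input and output paths are placeholders and a local or cluster Spark installation is assumed; the equivalent hand-written MapReduce job in Java typically runs to a full page of mapper, reducer, and driver boilerplate.

    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount")            # assumes a Spark installation

    counts = (
        sc.textFile("hdfs:///data/articles/*.txt")    # placeholder input path
          .flatMap(lambda line: line.split())         # one record per word
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)            # the "reduce" step, in one line
    )

    counts.saveAsTextFile("hdfs:///data/wordcounts")  # placeholder output path
    sc.stop()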

Even IBM is on board. In June 2015, IBM called Spark “potentially the most significant open source project of the next decade.” IBM is already working to embed Spark in its analytics and commerce platforms, and it is assigning 3,500 researchers to Spark-related projects. Yes, 3,500 is correct.

Research firm Gartner, reacting to the IBM initiative, said information and analytics leaders must begin working to ensure that they have the needed knowledge and skills.

And that’s the key. As a developer, you must work with an ever-changing toolbox. New skills must be acquired and mastered, and sometimes old ones left behind. Though MapReduce is not going to disappear anytime soon, the shift from MapReduce to Spark appears to be happening with astonishing speed. Are you ready?

What are your organization’s plans with respect to MapReduce and Spark? Are you planning to switch? Or maybe you’ve never even gone so far as to implement MapReduce. Share your opinions on this important topic. We’d like to hear from you.

