Yesterday (Sept. 24), I attended a cloud summit seminar sponsored by the Object Management Group’s Cloud Standards Customer Council. This was not about scrutinizing lines of code, but rather, an examination of real-world situations as seen through the eyes of cloud consultants, architects, and select major vendors.
What I learned could have come from the mind of Forrest Gump, who famously said, “Stupid is as stupid does.”
David Linthicum, senior vice president at Cloud Technology Partners and a TechTarget contributor, was the first to say it: “If you take a crappy application that exists on a platform internally and put it in the cloud, it’s just a crappy application in the cloud.”
In her presentation, Pamela Wise-Martinez, senior enterprise architect at the Pension Benefit Guaranty Corporation, hammered home the same point. “You’ve got to see how cloud-ready your applications are. If it’s a crappy application and you put it in the cloud, it’s still crappy. You’ve got to make intelligent decisions about what goes to the cloud.”
In other words, crappy is as crappy does.
Sure, we all want to move everything, or nearly everything, into the cloud. These speakers, though, are making a couple of important points. First, some applications, no matter how well built, simply don’t belong in the cloud. That might be factory floor process control or retail back-room administrative functions. Second, applications that would benefit from being in the cloud might be built in a way that makes them cloud-unfriendly. If you want the functionality of those apps in the cloud, the existing code base may need to be thrown out and replaced with something else.
What’s your plan for moving on-premises, legacy applications into the cloud? Have you figured out which apps are good candidates and which are not? Or did you learn the hard way, only after attempting a migration? Share your thoughts; we’d like to hear from you.
Sometimes the best ideas start with a rant. At least that’s what happened to Mallika Iyer, a Cloud Foundry specialist at Pivotal who helps clients either start or expand their cloud efforts. After a particularly frustrating day, Iyer was complaining to her partner about the mistakes that clients seemed to make over and over again. His advice: make it a speech.
So last week attendees at Boston’s 2015 DevOps Days got to hear Iyer explain “Cloud Anti-Patterns,” a collection of five areas where cloud implementations can go wrong. It’s good practical advice for anyone in the cloud space.
1. Ignoring the sweet spot: The sweet spot is a platform as a service, or PaaS, offering that can make all of the many moving parts of cloud adoption so much easier, Iyer explained. “Companies start on their journey and they think you need one tool to manage microservices, one to manage containers and one (or more) for continuous integration (CI) and continuous delivery (CD) tool sets,” she said. “Usually what ends up happening is they’re already creating silos. Now they’ve got three separate areas to manage instead of just one PaaS.” Her bottom line: “A PaaS allows you to align strategy with Dev and Ops teams and unifies your cloud strategy. Otherwise you’re going to exist in a siloed world.” So think this through ahead of time and you’ll save time.
2. The logic leak: Although it’s tempting to have a whole lot of different but closely related microservices around when developing a product, problems arise when they’re too closely related (sharing a core object library or common domain objects) and suddenly “you’re going to end up with an inconsistent object at the end of the day,” Iyer warned. That results in the charmingly named “code smell” and is an easy trap to fall into. Instead, remember DRY, for “don’t repeat yourself,” she said. And really, really don’t repeat similar logic.
3. Monolithic microservices: It’s tempting to put different microservices in the same container, right? Why not save money, and they’re all kind of related, and it’s a great idea, right? Wrong, said Iyer, though she acknowledged this practice is way more common than most people are willing to admit, and more often than not the driving force is cost. But this strategy will cause your code to “decompose prematurely,” she warned, because microservices sharing the same container develop dependencies, and suddenly making code changes – or scaling – becomes very difficult to do. “They grow arms and legs in there, so you just can’t put more than one in the same container,” Iyer said.
4. Ignoring the 3 Musketeers: Iyer referred to APIs/API interfaces, service registries and fault tolerance as the 3 Musketeers. APIs need to talk to each other but are often at cross purposes language-wise. Her advice is to use the language-independent Swagger to bridge all the gaps. “You need to honor the contract between your microservices,” she said. Service discovery is important because it can be easy to lose track of what is where with all the communication between APIs and the addition of new microservices. Her advice: Use Eureka from Netflix or Spring Cloud services. And finally, that third Musketeer, fault tolerance, cannot be ignored, because problems at the API layer can spread and potentially cause a disruption of service. Iyer suggested using Netflix’s Hystrix as “an elegant way to solve this problem.”
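Hystrix is a Java library, but the circuit-breaker idea behind it is language-neutral. Here’s a minimal sketch of the pattern in Python (class and parameter names are my own illustration, not Hystrix’s API): after a run of consecutive failures the breaker “opens” and short-circuits calls to a fallback, giving the troubled downstream service time to recover.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: trip after N consecutive failures,
    short-circuit to a fallback while open, retry after a cool-down."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, fallback=None):
        # While open, skip the call entirely until the cool-down elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # success resets the failure count
        return result
```

The key design point is that a failing dependency degrades gracefully (the caller gets a cached or default answer) instead of tying up threads on calls that are doomed to time out.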
5. Siloed search: Search can get tricky – and go stale – when all of an application’s microservices are operating on multiple sets of data and potentially altering them. The solution, Iyer said, is a cloud-native federated search model: an enterprise-wide data bus using RabbitMQ or Kafka. “Instead of going from a formerly stale data push model now you have more of a proactive real time data model pulling from the message bus and updating, thus giving users the most up to date information,” Iyer said.
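The mechanics can be sketched with a toy in-process stand-in for the broker (all class and topic names are illustrative; a real deployment would use a RabbitMQ or Kafka client instead): each microservice publishes data-change events to the bus, and the search index subscribes so it is updated the moment data changes rather than on a periodic crawl.

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process stand-in for a broker such as RabbitMQ or Kafka."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class SearchIndex:
    """Federated search index kept fresh by bus events, not periodic crawls."""
    def __init__(self, bus, topics):
        self.documents = {}
        for topic in topics:  # one subscription per microservice's topic
            bus.subscribe(topic, self.on_update)

    def on_update(self, event):
        self.documents[event["id"]] = event["text"]

    def search(self, term):
        return [doc_id for doc_id, text in self.documents.items()
                if term in text]
```

Because every service's changes flow through the same bus, the index never silos or staleness-drifts away from any one service's data.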
Has your organization been guilty of any of these “anti-patterns”?
We’re all so busy reading (you) or writing (me) about developing apps for cloud and mobile platforms that it’s getting pretty tough to find much in the way of Windows coverage. If you’ve forgotten about Windows on the desktop, you may be making a mistake.
Certainly, we all know the PC is dead, right? Well, not so fast. Consider this market research nugget published just six months ago by IDC. According to its own March 2015 Worldwide Quarterly PC Tracker, “total 2015 volume is projected at 293.1 million PCs, slipping a little further to 291.4 million in 2019.” (That includes Macs.)
That’s more than a quarter-billion desktop and laptop PCs (a lot more) globally every year from now through 2019. And the number is not going to suddenly plummet to zero in 2020. There’s no question shipments are in an overall declining trend, but a quarter-billion PCs every year is not to be ignored. In June 2015, the last full month before Win 10 started shipping, the various versions of Windows together accounted for 76.9 percent of all those machines, according to Statista. (Mac OS was 10.03 percent; Linux 1.77 percent.)
Granted, the PC numbers pale in comparison with IDC’s Aug. 2015 Worldwide Quarterly Mobile Phone Tracker that says worldwide smartphone shipments in 2015 will top an astounding 1.43 billion units. In comparison, tablet shipments in 2015 will be a mere 233 million units worldwide, according to a Jan. 2015 report from Gartner. That’s a bit of a recovery after sales plummeted in 2014, a year that Gartner described as “troubling” for tablets. Note that the predicted tablet shipments lag behind PCs.
What spurred me to think about this is the torrent of press releases I see weekly announcing new development tools. To the best of my recollection, only one company, Embarcadero Technologies (which has its roots in the old Borland of Turbo Pascal and Delphi fame), is actively focusing on Windows and bringing out new generations of its tools.
Do you still develop for Windows? Or have you moved to cloud and mobile exclusively? Share your opinions; we’d like to hear from you.
We’re all aware that as cloud applications grow in importance, businesses are clamoring to hire more software engineers. They might work in mobile application development, database design and administration, security, communications, analytics, or somewhere else.
While most businesses acknowledge that simply finding software engineers can be a challenge, IBM, of all companies, admits something much more revealing: IBM doesn’t even know what a software engineer is.
During a presentation on big data analytics at a recent conference in Boston, Mike O’Rourke, vice president of product development for IBM’s cloud data services team, made a statement that I found rather startling:
“There are 400 different ways that people in IBM have described themselves as software engineers.”
That’s just inside one company (admittedly a very large one). Look online and you’ll find wildly varying definitions.
How do you define software engineer? Join the discussion and post your definition. Let’s see how well they line up or vary. Perhaps this is a new area crying out for standards.
And just like that, all the mobile cloud apps you worked so hard to build and deploy for iOS 8 throughout the past year are now woefully, utterly obsolete.
Sure they are. If your mobile apps don’t support iOS 9’s snazzy new 3D Touch technology, you are, well, out of touch. The technology was announced today (Wed., 9/9/2015) along with new iPhones, a new iPad, and a $99 stylus (the Apple Pencil), one thing Steve Jobs vowed never to bring to market.
3D Touch allows the newly announced iPhone 6s and 6s Plus models to sense how much pressure a fingertip applies to the display. That’s the basis for “Peek and Pop,” iOS 9’s marquee new feature that lets a user preview content within an app and act on it without actually opening it. (Not unlike the Preview Pane we’ve been using for years in Microsoft Outlook.)
To cite an Apple example, “With a light press you can Peek at each email in your inbox. Then when you want to open one, press a little deeper to Pop into it.” The feature can also be used on a link to preview a Web page without actually navigating to the site. Or press on a street address to peek at a map. Let go and the map goes away; press harder (“more deeply,” to use Apple’s vocabulary) and the map can be opened and zoomed for greater detail.
Here’s hoping that users don’t overdo the finger pressure gesture, though I’m sure we’ll soon be reading reports of Peek and Pop suddenly being transformed into Snap, Crackle, and Pop. If last year was “Bend-gate,” this year may descend into “Pop-gate.”
With “Quick Actions,” pressing on an app icon brings up a context-sensitive menu of actions. (Think right-clicking with a mouse.) On the native camera app, these options are Take Selfie, Record Video, Record Slo-Mo, and Take Photo.
For your job as an app developer or architect, here’s why your life is likely to get real busy real fast. Again, to quote Apple, “3D Touch works throughout iOS 9 to make the things you do every day more natural and intuitive. There are so many ways that simply pressing deeper can make whatever you’re doing a better experience.” (Update: Apple announced on Sept. 11 that it is now accepting for review applications developed for iOS 9, OS X El Capitan, and watchOS 2.)
As part of Apple’s launch event, it trotted out several companies that have already incorporated 3D Touch into their apps. If your apps are public facing, they’ll need 3D Touch functionality to stay competitive. After all, your competitors aren’t sitting still. Apps designed for in-house corporate and employee use are not under the same pressure (sorry).
With the public release of iOS 9 just a week away (Sept. 16), what is your organization doing to stay ahead of the feature curve? Maybe simply waiting is the most prudent strategy. Or perhaps the race is on to broaden app capabilities. Tell us about your plans for iOS 9. There’s a lively discussion just around the corner. We’d like you to peek and then pop into it.
When someone using your mobile cloud app begins a transaction and communication is unexpectedly lost, what happens on both the device and server sides when that break is detected? And what actions follow when communication is restored, likely in a completely new session? These are important questions whether your app is operating in the financial, retail, or services industries.
This boils down to well-crafted specifications from a competent business systems analyst, implemented by an experienced developer in a clear, understandable way that holds up to extensive QA scenario testing and subsequent audit scrutiny. No one said it would be easy — that’s why up to 80% of the code in a typical application may be solely for exception handling.
It’s akin to a multiple-choice question with no right or wrong answer: cancel, complete, or suspend.
When a cloud session is interrupted, do you cancel an in-progress transaction? If yes, and that’s happening on the server side, you’ve got to do store-and-forward, eventually getting that cancellation to the mobile app to maintain end-to-end synchronization. Maybe your server side also needs to generate an e-mail or text message so the user knows what happened and what action ensued. Of course, no communications means the message might also be subject to delay. And, your app is logging all of this, right?
Perhaps, instead, the transaction gets processed and posted. If that’s the case, it might be happening without immediate confirmation to the user. Maybe that’s OK for a 99-cent music download, but if the transaction is a mortgage application, a medical test booking, an airline ticket purchase, or, to think more industrially, something that impacts a factory-floor production line, there’s much more at stake.
The third alternative is to suspend the transaction and wait for the user to re-appear under a new session ID, whereupon the transaction can presumably resume. This state that lies somewhere between cancel and complete may be the most complicated of all. Was the product deducted from the available inventory database? Was the payment portion transmitted to the third-party credit card processor? Was a pick ticket generated? There’s the issue of databases that may not know the correct record-locking states. And what if the user doesn’t re-appear within a specified time frame? It can get very messy very quickly.
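To make the suspend branch concrete, here is a rough sketch of the state handling involved. Everything in it is an assumption for illustration: the class and method names, the 15-minute re-appearance window, and the policy of cancelling when the user never returns are all hypothetical choices, not a prescribed design.

```python
import time

SUSPEND_WINDOW = 15 * 60  # seconds a suspended transaction may wait (assumed policy)

class Transaction:
    """Sketch of an interrupted-transaction lifecycle:
    in_progress -> suspended -> resumed or cancelled."""

    def __init__(self, txn_id):
        self.txn_id = txn_id
        self.state = "in_progress"
        self.suspended_at = None
        self.log = []  # every state change is journaled for later audit

    def on_connection_lost(self):
        # Suspend rather than cancel; hold state until the user reconnects.
        self.state = "suspended"
        self.suspended_at = time.monotonic()
        self.log.append(("suspended", self.txn_id))

    def on_reconnect(self, new_session_id):
        if self.state != "suspended":
            return self.state
        if time.monotonic() - self.suspended_at > SUSPEND_WINDOW:
            self.state = "cancelled"  # user never came back in time
        else:
            self.state = "in_progress"  # resume under the new session ID
        self.log.append((self.state, new_session_id))
        return self.state
```

Even this toy version shows why suspend is the hardest branch: the code must remember everything about the half-finished transaction, and the time-out path still has to unwind whatever inventory, payment, or fulfillment steps already ran.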
Journaling and logging are essential components of any transaction-based system, but their true value may be more in post-mortem by a team of auditors than in dissecting the complexities and permutations of an interrupted event in real time. That was fine decades ago for nightly batch processing, but in today’s cloud age of expected instant results, users can’t — or won’t — wait.
Finally, there’s perception. Does the user blame the communications carrier or your app for the snafu? It takes only a moment for an angry customer to blast your app and company on social media, even though neither may be at fault.
What’s your philosophy when it comes to transactionus interruptus? What happens on your mobile app and on the server side? And how do you get the two sides back in sync? Tell us; we’d like to hear from you.
You’ve poured your heart and soul into a cloud-based mobile application development project. Thanks to APIs, the Internet of Things, and the newest breed of development tools, it works flawlessly and looks great. But how do you react when the curtain is pulled back and the corporation behind your lovingly crafted app fails miserably in areas beyond your control, such as providing customer support?
When I recently needed to have a printer serviced, the first place I turned to for information was the company’s good-looking, well-regarded, easy-to-navigate mobile app. I’ve used it many times for accessing files residing on my home network’s NAS and printing them on printers connected either by Wi-Fi or an Ethernet cable. Works like a charm. The app can also query this manufacturer’s printers, report remaining levels of consumables, and even initiate an order to buy. Pretty handy and darn clever.
Not only is the app a nicely designed and implemented bit of software, it’s essentially the public face of this vendor. Your app should be no different.
Unfortunately, the app was useless for finding a local repair depot. Even worse, the company’s website was no better. The dropdowns where I selected my printer type (color laser) and model were confusing and obscured the map. And it could not find any place that could repair my product, which is just four years old. Pretty darn pathetic. As a last resort, I called the displayed phone number. After 22 minutes, most of which I was kept on hold, a “nearby” repair depot was found. In Arizona. I live in Boston.
My point is that there’s more to a company than the image of its online presence. You can build gorgeous apps that work beautifully. Your IT department can be behind you all the way. You can have at your disposal the latest tools for development, quality assurance, and deployment. Development might even occur on a robust platform as a service with production on a top-tier cloud service provider. And none of that may be good enough.
What does this mean for your quality of work life? Can you still be proud of your work, or might this be the sort of thing that spurs you to find a new opportunity? Or maybe it doesn’t matter at all — you just grin and deposit the paycheck.
Tell us about the quality of your work life and the company behind the app. We’d like to hear from you.
As the cynics are fond of saying, the nice thing about standards is that there are so many of them. Nevertheless, if you build applications that in any way deal with healthcare, you should consider adding yet another one, HL7 FHIR, to your vocabulary. It’s part of the movement to drive paper out of the healthcare system.
FHIR, short for Fast Healthcare Interoperability Resources and pronounced “fire,” is a standards framework, still in draft form, that describes modular data formats and elements (called “resources”). It also encompasses an application programming interface (API) for exchanging electronic health records. It’s being positioned as suitable for mobile phone apps, cloud communications, and server communications within institutional healthcare providers. Supporters include industry heavyweights Cerner, the Mayo Clinic, Meditech, McKesson, and the Partners HealthCare System (which includes Massachusetts General Hospital).
Resources are the standard’s building blocks, numbering just shy of 50 and organized into 12 groups. The Medications group, for example, is composed of seven resources: Medication, MedicationPrescription, MedicationAdministration, MedicationDispense, MedicationStatement, Immunization, and ImmunizationRecommendation.
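In JSON terms, every FHIR resource identifies its building block with a resourceType element. As a hedged sketch of what a client app might assemble before posting to a FHIR server, here is a minimal Medication-style payload in Python. The general shape (resourceType plus a coded concept) follows the draft standard, but the specific field names, the SNOMED code value, and the drug chosen are illustrative assumptions, not verified against the specification.

```python
import json

# Illustrative only: the exact elements of the draft Medication resource
# may differ; the terminology system URL and code value are assumptions.
medication = {
    "resourceType": "Medication",        # names the FHIR building block
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",  # assumed terminology system
            "code": "372897005",                 # hypothetical code value
            "display": "Lisinopril"
        }]
    },
}

# What a RESTful FHIR client would serialize and POST to a server.
payload = json.dumps(medication)
```

The modularity the standard is after shows up even here: a prescription or dispense record would be a separate resource that references this one, rather than one giant document carrying everything.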
FHIR is managed under the auspices of Health Level Seven International (HL7), a Michigan-based “not-for-profit, ANSI-accredited standards developing organization dedicated to providing a comprehensive framework and related standards for the exchange, integration, sharing, and retrieval of electronic health information that supports clinical practice and the management, delivery and evaluation of health services,” according to its website. To put it more simply, HL7 creates standards for the interchange of healthcare data.
Though still considered a draft standard for trial use that has yet to be finalized, FHIR looks promising. If it eliminates paper and advances the speed and accuracy of clinical healthcare, I’m all for it.
Do you develop applications for the healthcare industry? If so, share your opinions about FHIR and your plans for incorporating it (or not) into your development process. We’d like to hear from you.
Microservices are your future. Actually, they are your present — or should be. If you’re not already well-versed in microservices and containers, you’re running at the back of the pack. It’s time to study fast and furious.
We’re all used to giant monolithic applications that handle every last aspect of a solution, no matter how small. To be simplistic, think Microsoft Word or Adobe Photoshop. Each does hundreds of different things. Photoshop, probably closer to a thousand. In the case of Word, it’s pretty clear that file management, page layout, spell check, mail merge, and printing have nothing whatsoever to do with each other. So, why are they all shoehorned into one giant program?
To answer my own rhetorical question, it’s because that’s how we’ve done it for decades. Sure, Word runs as a launcher .exe and a handful of dynamic link libraries (.dll files), but that’s for memory management as much as anything else. That model isn’t really any different from the days of yore when mainframe computers were lucky to have 64K of magnetic core memory. To allow large, monolithic accounting systems to function, different modules were often written as a series of overlays, swapped into memory as needed. It was a real art form. Microservices are vastly different.
In a microservices architecture, large, complex programs are built from small processes that are each independent, communicating with each other — when necessary — through application programming interfaces (APIs). Think of it as a suite of small, non-coupled modules, each running in its own little world.
The advantages are profound. First, microservices solve the update problem. With a traditional monolithic application, a new version usually means replacing everything. Not so in a microservices architecture, where individual pieces can be updated as necessary. That’s especially useful with cloud and mobile applications where, for example, weekly user interface or functionality updates are fast becoming the accepted way of doing business.
Second is the issue of demand scalability. Should one aspect of a monolithic system become a bottleneck, say, checking inventory levels in a retail application, there’s no simple way to add capacity. But, in a cloud application designed as a series of uncoupled, containerized microservices, it’s a relatively simple matter to spin up additional instances to meet demand and alleviate the bottleneck.
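That scaling decision reduces to simple arithmetic once you have a demand metric. Here’s a sketch assuming a work-queue depth and a known per-instance capacity (both assumptions for illustration; real orchestrators use their own metrics and policies):

```python
def instances_needed(queue_depth, per_instance_capacity,
                     min_instances=1, max_instances=20):
    """Sketch of demand-based scaling for one containerized microservice:
    size the pool to the current backlog, within configured bounds."""
    needed = -(-queue_depth // per_instance_capacity)  # ceiling division
    return max(min_instances, min(needed, max_instances))
```

The monolith offers no equivalent lever: you can’t spin up five more copies of just the inventory-check routine, only five more copies of the entire application.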
We’ll be doing some spinning up of our own, as SearchCloudApplications writes more about microservices architecture in the coming months.
What’s your experience with microservices? We’d like to hear from you.
As Internet of Things sensors appear in more places, it is cloud applications that must handle the data packets they generate. Depending on the use, the data could be anything from a trickle to a raging torrent. Are the applications you’re building capable of handling such volumes? And in real time? Is your cloud infrastructure or provider service agreement robust enough to handle enormous volumes and instantly scale when those volumes spike? Hope so.
Gartner research vice president Kyle Hilgendorf looks at IoT in a unique way. One IoT sensor creates a minuscule amount of data. Think of it as a single raindrop. But put enough raindrops together and you get a downpour. That’s where we’re headed, he says.
Consider one scenario. If a sensor is monitoring the temperature inside some device — refrigerator, toaster oven, NAS unit, industrial furnace, airliner jet engine, or nuclear power plant — and reporting the temperature every 15 seconds, the volume of data your application needs to handle is tiny. But, if your application tracks temperature reports for every jet engine that a major airline carrier has in the air at any point in time, that single raindrop of data has now become part of a massive torrent. Your application needs to handle that, and storage must be capable of keeping up with both the collective amount of data and its incoming speed.
But wait, there’s more. Your application also has to decide which sensor data to keep and which can be ignored. Maybe you should write every temperature reading to a file, or maybe just the ones that are outside of a predefined acceptable range — good old exception reporting. And what about that range? A 10-degree temperature variance in a toaster oven is not a big deal, but for a jet engine that variance could spell disaster. Then there are decisions about sounding the alarms in real time or simply presenting aggregated reports periodically. As always, there is no right or wrong answer.
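One way to express that triage in code: discard in-range readings, store the out-of-range ones, and raise a real-time alarm only for gross excursions. The acceptable bands and the 10% alarm margin below are made-up values for illustration; your own thresholds are exactly the engineering decision the paragraph above describes.

```python
# Assumed acceptable temperature bands in degrees Celsius (illustrative only).
ACCEPTABLE = {"toaster": (150, 260), "jet_engine": (400, 900)}

def triage(sensor_type, reading):
    """Per-reading decision: 'discard' routine values, 'store' out-of-range
    ones for exception reports, 'alarm' on gross excursions (here, more than
    10% of the band width past either edge -- an arbitrary margin)."""
    low, high = ACCEPTABLE[sensor_type]
    if low <= reading <= high:
        return "discard"
    margin = 0.10 * (high - low)
    if reading > high + margin or reading < low - margin:
        return "alarm"
    return "store"
```

Note how the same absolute variance can be routine for one device class and alarming for another, which is why the thresholds have to be per-sensor-type rather than global.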
How are the applications you’re building processing, leveraging, and storing incoming IoT data? Share your strategies with us; we’d like to hear from you.