My grandson is just a tot, but he already understands the concept of multisource integration. Chicken nuggets from here, ketchup from there, a fork from, well, somewhere, and you’ve got yourself a comprehensive system. Call it the day-care version of APIs.
For us grown-ups, APIs make the world go round. My standard example is a culinary recipe app that pulls in data from numerous sources — weather, geographic, day of year, time of day, locally available farm-fresh veggies, supermarket promotions, user’s likes and dislikes — to recommend a recipe of the day to the app user. It’s a data integration example I can explain to anyone. And, of course, APIs make it possible.
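To make the recipe-of-the-day idea concrete, here’s a minimal sketch of that kind of multisource scoring in Python. Every source name, field, and scoring weight below is invented for illustration; a real app would pull this context from live weather, produce, and profile APIs rather than from stubs.

```python
# Merge several independent data sources into one context, then score
# candidate recipes against it. All names and weights are hypothetical.

def fetch_context():
    """Stand-ins for calls to weather, farm-stand, and user-profile APIs."""
    return {
        "weather": {"temp_f": 38, "condition": "rain"},
        "in_season": {"squash", "kale", "apples"},
        "dislikes": {"anchovies"},
    }

def score(recipe, ctx):
    """Higher score means a better fit for today's context."""
    s = 0
    if ctx["weather"]["temp_f"] < 50 and recipe["style"] == "soup":
        s += 2                                        # cold day favors soup
    s += len(recipe["ingredients"] & ctx["in_season"])  # seasonal bonus
    if recipe["ingredients"] & ctx["dislikes"]:
        s -= 10                                       # never suggest dislikes
    return s

RECIPES = [
    {"name": "Squash soup", "style": "soup", "ingredients": {"squash", "cream"}},
    {"name": "Caesar salad", "style": "salad", "ingredients": {"romaine", "anchovies"}},
]

ctx = fetch_context()
pick = max(RECIPES, key=lambda r: score(r, ctx))
print(pick["name"])  # prints "Squash soup" on this cold, rainy day
```

The point isn’t the toy scoring; it’s that each data source stays independent behind its own API, so swapping in a new source never touches the recommendation logic.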
Last week, Red Hat proved the increasing importance of APIs in app development beyond any doubt with its acquisition of API management tools vendor 3scale. In my exclusive joint interview with Mike Piech, Red Hat’s vice president of middleware, and 3scale CEO Steve Willmott, they explained why everything that developers touch is predicated upon APIs.
Willmott said over the last 18 months he had witnessed APIs becoming the backbone of many infrastructures, but that IT often lacked the means to manage, track, and secure their APIs. Piech agreed, saying that as recently as two years ago, API management rarely came up as a top-level requirement. That has changed as disparate infrastructures and services — on-premises, public and private cloud, software as a service, analytics as a service, data storage as a service, etc. all have to interoperate.
If it can be summed up in a single sentence, Willmott gets the gold medal: “APIs are the glue between on-premises and cloud-based components,” he said. I’ve thought of APIs more along the lines of conduits than glue, but Willmott is spot-on with his assessment.
The message here is not that APIs are hot or necessary. We all know that. What we’ve collectively not done superbly is manage the ever-growing collection of public and private APIs that enterprises purchase, subscribe to, or develop internally. Just as you need to know what keys are on that keychain in your pants pocket or handbag, you also need to know that your keychain is secure. API management is no different.
Cloud consultant David Linthicum is also an advocate of creating and instituting an active API management strategy. “API management should be a priority for any organization using the cloud,” he writes. He’s right.
It’s time for a robust discussion about APIs. What have APIs enabled that you could not do before? Who is testing your APIs for accuracy and security? And, to leverage the message of Red Hat’s 3scale acquisition, how are you managing your API portfolio? Share your thoughts; we’d like to hear from you.
Wi-Fi is what we all talk about. It’s what we all write about. It’s built into every wireless device. Your apps depend on it. But, what about Bluetooth? If you haven’t recently thought about leveraging the power of Bluetooth in your apps, the time to think anew is here.
The launch of Bluetooth 5 is just a few days away (June 16). This is a big deal. Bluetooth 5 offers the promise of up to double the range and transfer speeds up to four times faster than the current incarnation of the technology.
According to Mark Powell, executive director of the Bluetooth Special Interest Group (the organization that oversees the Bluetooth standard, its trademarks, and licensing), BT 5 will, “double the range and quadruple the speed of low energy Bluetooth transmissions.” But, that’s not all. BT 5 will also broaden its capabilities for connectionless services, such as location-relevant information and navigation.
That’s great news. A year ago, I wrote about how Florida’s Sarasota Memorial Health Care System was using Bluetooth Low Energy with iBeacon to pinpoint people’s precise current locations within the sprawling multistory facility and display that information on a tablet or smartphone using cloud-resident maps. Think of it as an interactive “you are here” handheld map to help prevent visitors from getting lost. Bluetooth works because it is highly precise. GPS, for example, doesn’t work nearly as well for determining elevation — which floor of a multistory building you’re currently on.
BT 5 also beefs up its so-called “advertising packets” technology. This isn’t about product advertising, but about a device, such as a Bluetooth speaker, being able to more easily say “I’m a speaker and I’m nearby,” in essence advertising its presence. This will help devices that aren’t already paired to more easily find each other.
I don’t yet know if BT 5 is an upgrade to existing hardware, or if new hardware will be required. If it’s the latter, the impact won’t happen until phones and other devices go through a couple of refresh cycles. The capability might be in Apple’s next set of phones, or not until 2017. It’s too early to know. Either way, Bluetooth shouldn’t be overlooked in your next mobile app development project.
Share the ways you’ve leveraged the power of Bluetooth and how extended range and faster transfer speeds could alter your thinking. We’d like to hear from you.
This week’s Cloud Computing Expo in New York isn’t among the IT industry’s largest gatherings, but it’s one of the most important. With nary a CFO or CIO among the attendees, this is a gathering aimed squarely at those of you working down in the trenches of cloud application development, testing, and operations. Walk into almost any educational session and you’ll hear about IoT, Industrial IoT, storage and how to deal with huge amounts of it, testing, microservices, and containers.
One shift we are likely to see over the next few years is that of cognitive computing, where the nature of the data dictates how it should be handled. For decades, we’ve designed and built applications that process data in predetermined ways, based on our expectations of what the data looks like or should look like. What I’ve learned is that we are in a new era, where forces like social media, text messaging, multimedia, and sensor readings are delivering to applications data that is not just unstructured, but wildly variable in its content and format. What cognitive software needs to do is be ready when the data says “here’s what you need to do with me.” That is the essence of cognitive computing, where the data directs how the application should function. In other words, traditional applications are based on the past and cannot anticipate the future.
As cloud consultant Judith Hurwitz put it in her presentation, “cognitive computing is a problem-solving approach that uses hardware or software to approximate the form or function of natural cognitive processes.” What that means is designing systems around the data and letting the data lead the logic. Systems end up being designed to morph as they learn about the ever-changing patterns of data. And here’s a good one: Cognitive systems assume there is not just one correct answer; they are probabilistic in nature, forming hypotheses based on the data itself. At its core, this is machine learning, where a system’s performance can improve based on exposure to patterns in the data rather than on explicit program code.
One thing I found interesting is that several presentations at Cloud Computing Expo touched on this idea of cognitive computing without ever using that particular label. As one presenter said, when you have an influx of text-based data, the processing applications must have the capability of differentiating between “feet” as a unit of length and as those things at the ends of your legs. Same word, different meaning. Cognitive systems can learn from such patterns, usages, and anomalies, and they can morph or evolve as more data is ingested and analyzed.
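The “feet” example can be sketched in a few lines. Real cognitive systems learn these distinctions from exposure to data; the two cue lists below are hypothetical and hard-coded simply to make the mechanics of context-driven disambiguation visible.

```python
# Toy word-sense disambiguation: the same token, "feet", resolved by the
# words surrounding it. The cue sets are invented for illustration; a
# learning system would derive them from ingested text.

MEASURE_CUES = {"tall", "long", "wide", "inches", "height"}
ANATOMY_CUES = {"shoes", "socks", "toes", "sore", "walk"}

def sense_of_feet(sentence):
    words = set(sentence.lower().split())
    m = len(words & MEASURE_CUES)   # evidence for unit-of-length
    a = len(words & ANATOMY_CUES)   # evidence for body-part
    if m > a:
        return "unit-of-length"
    if a > m:
        return "body-part"
    return "unknown"

print(sense_of_feet("the fence is six feet tall"))      # unit-of-length
print(sense_of_feet("my feet are sore from the walk"))  # body-part
```

A system that updates those cue sets as new text arrives, rather than shipping them frozen in code, is the difference between a traditional application and a cognitive one.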
For the applications that you develop, does this thinking make sense? Are you building cognition facets into your work as a means of being able to do something, anything, with data that seemingly has no historical antecedents? Share your thoughts about the rapidly advancing field of cognitive computing. We’d like to hear from you.
Let’s have a little fun today and look at how the cloud is being used in ways that some might consider, well, out of the ordinary. Sure, anyone can conjure up a cloud-based system for purchasing books online, but what about using the ol’ smartphone to manage cows down on the farm?
SCR Dairy began in 1980 making equipment for the dairy-farming industry, stuff like detachers and pulsators, whatever they are. Today, the company offers cloud-based mobile applications for herd management, improving successful pregnancy rates, health tracking, and more. The latest advancement from the Israeli company is a series of mobile apps to — and stop me if you’ve “herd” this before — “improve productivity, reduce costs, and gain more control.” Whatever the industry, this golden troika of goals never seems to change. The company’s tagline is “make every cow count.” Like you, I didn’t know they could.
ETWater, which previously talked to me about the IoT talent shortage, designs cloud-based IoT smart lawn irrigation systems. The Novato, Calif. company builds Wi-Fi controllers that manage the zones of lawn sprinkler systems. It uses an integration analytics engine to calculate when and how much to water, based on dozens of metrics pulled in via APIs from a variety of sources. These include weather, humidity, time of year and day, sun and wind conditions, sensor readings of soil moisture levels, and a whole lot more. If you manage golf courses or office parks, it’s a clever and welcome use of cloud computing and data integration. At home, I can just look out the window.
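A back-of-the-envelope sketch shows the shape of that irrigation calculation. The inputs, thresholds, and formula below are all invented for illustration; ETWater’s actual engine weighs dozens more factors pulled in over its APIs.

```python
# Hypothetical watering calculation: skip the cycle if rain is coming,
# otherwise water in proportion to the soil-moisture deficit, with an
# evaporation adjustment on hot days. All numbers are made up.

def watering_minutes(soil_moisture_pct, rain_forecast_in, temp_f):
    if rain_forecast_in >= 0.25:               # meaningful rain expected
        return 0
    deficit = max(0, 40 - soil_moisture_pct)   # target ~40% soil moisture
    minutes = deficit * 0.5
    if temp_f > 90:                            # hot day: water a bit longer
        minutes *= 1.25
    return round(minutes)

print(watering_minutes(soil_moisture_pct=25, rain_forecast_in=0.0, temp_f=95))  # prints 9
```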
If you’ve ever stood in the produce section and asked yourself, “Gee, are we out of kohlrabi,” Samsung has you covered. The Korean industrial giant’s new line of Family Hub refrigerators is equipped with, count ’em, three cameras that take a photo each time you close the fridge or freezer door. Use your smartphone to pull up an image while you’re at the supermarket or on the way home from work. Isn’t this what the cloud is all about? It certainly lends a whole new meaning to your subject saying “cheese” as the pic is snapped. Perhaps Samsung could link the cameras to SCR Dairy so we can have fresh milk from the farm on auto-replenishment. And, at last, you’ll know for sure if the light really does go out when you shut the door.
Finally, there’s the concept of paying by the foot. Literally. If those $400 running shoes you covet are out of your budget, you might be able to sign up for a pay-as-you-go plan based on how many steps you take or miles you run. The same might be true for those outrageously expensive high-performance tires you’d like to put on the family chariot. We’re not quite there for these two yet, but microbilling based on usage is very real. Without the cloud and Wi-Fi to do periodic data uploads, none of this works.
Well, enough of the fun stuff. The point is that what we can do with technology is limited only by our collective imaginations. Otherwise, we might still be carrying boxes of tape cassettes or three-ring binders filled with CDs on our journeys.
What are the truly unusual cloud apps that you’ve worked on and what was it about the development process that you found irresistible? Share your stories with us; we’d like to hear from you.
SAP and Apple. Apple and SAP. A good partnership offers benefits to each party. And that’s exactly what’s happening here. Let’s not forget the key third party not mentioned — cloud applications developers. For developers everywhere, there’s a lot to like.
SAP gets a lot from this union, starting with an express lane to develop its own enterprise apps for the iOS platform, encompassing iPhone and iPad (and perhaps Macs, should OS X eventually be supplanted by iOS). SAP also gets under-the-hood access to Apple’s Xcode development environment and Swift programming language, a nice pairing with SAP’s own HANA in-memory applications platform and the Fiori UX (user experience) design environment. The real prize, however, is that SAP gets access to the 10 million enthusiastic developers who write for and have expertise in iOS. That’s a lot of developers with a lot of non-traditional ideas about what apps can do.
Sam Yen, SAP’s chief design officer, is excited about the possibilities. “We’re trying to take the enterprise experience to the next level and capitalize on people using apps on their iPhones at home and the user experience they enjoy from that,” he said. “With our Fiori efforts, we want to optimize for mobile scenarios.”
For its part, Apple gets access to SAP’s global enterprise customer base, currently around 310,000 customers, and an army of applications developers 2.5 million strong. Many of those developers are expected to create a new generation of purpose-built, enterprise-class applications that leverage SAP’s database and server-side power. It’s a huge vote of confidence in iPad and iPhone, both of which have seen sales wane in recent times. If the consumer market for tablets is near the saturation point, that’s likely not so in the enterprise. If a large, multinational corporation is going to equip thousands of in-the-field employees with tablets running custom apps, this partnership (and the similar one Apple struck with IBM in 2014) may sway the choice among tablets running iOS, Android, or Windows.
But wait, there’s more — a new SDK and a training academy.
The two companies are creating a new SAP HANA Cloud Platform SDK exclusively for iOS. It should provide developers with the tools to build custom iOS apps for iPhone and iPad on the SAP HANA Cloud Platform, SAP’s open platform as a service. Add to that a new SAP Fiori for iOS design language to create apps that, in Yen’s words, “look beautiful.”
Finally, expect to see a new SAP Academy for iOS by year’s end to help train developers.
Analysts are bullish. The union is seen as benefiting both companies. Jeffrey Hammond, a Forrester Research vice president and principal analyst who serves development professionals, told me, “This is good news for SAP customers who have struggled through many incarnations of SAP’s mobile strategy.”
What do you think? If your business is an SAP customer, we’d sure like to get your opinion. There’s a comment box just below that’s waiting for your feedback.
Ride-hailing service Uber is in an über-snit about how a handful of Harvard Business School students are using its developer API in a price-comparison mobile app. Uber, whose own app rained unsympathetic digital disruption and life-altering upheaval upon the taxi and limousine industry, apparently can’t take a dose of its own medicine.
According to the Boston Globe, an app called urbanhail ingests Uber pricing data via Uber’s API. As urbanhail puts it, “urbanhail gives you all of your ridesharing and taxi options in one click to help you save time and money.” The app currently serves only Boston.
From where I sit, this doesn’t seem any different than Kayak, Expedia, Travelocity, or even Google’s own useful ITA Matrix Airfare Search looking for favorable pricing among airline carriers. There’s also no shortage of apps that compare prices at my local supermarkets or tell me which gas station is the best bargain.
Is Uber right in prohibiting its API (and data accessed therefrom) from being used in a competitive manner? Should developers be allowed to access all of the data but only some of the time?
Clearly, there are legal and social issues involved. Uber certainly owns its own data and has every right to restrict its usage. Yet, this is a public API. Uber is not a federally regulated industry like air travel. And while Expedia, Travelocity, and dozens of other similar services are used for actually booking transportation and purchasing tickets, that’s not the case with urbanhail. The urbanhail app is merely informational.
So, what can’t you do with the Uber API? Let’s hail a ride to the Uber developer portal and find out! You cannot “compete with Uber.” You can’t “aggregate Uber with competitors.” Nor can you “store or aggregate Uber’s data, except as expressly permitted by Uber.” There’s also “You may not use the Uber API, Uber API Materials, or Uber Data in any manner that is competitive to Uber or the Uber Services, including, without limitation, in connection with any application, website or other product or service that also includes, features, endorses, or otherwise supports in any way a third party that provides services competitive to Uber’s products and services, as determined in our sole discretion.”
What’s your experience in using an API whose usage terms seem one-sided? Did you ever choose not to use a particular API because you couldn’t live within its terms? Perhaps your company publishes an API for use by others. Are its usage terms helping to broaden its use, or to restrict it? Share your thoughts; we’d like to hear from you.
Like it or not, no-code and low-code (I dub thee NCLC) application-development tools that allow line-of-business departments to navigate around IT’s army of highly trained, expert analysts and coders are not going away. It’s time to face reality.
According to a brand-new study published today (May 2, 2016) by research house Vanson Bourne, the NCLC movement continues to gain traction. Among its key findings, the study notes that “63% of line-of-business respondents report that their department has either deployed, or considered initiating, application projects without the knowledge of the IT department.”
Now there’s a dose of new reality.
Just last week, Microsoft ended the private beta period for its PowerApps platform, opening up the NCLC tool as an unrestricted beta to anyone who wants to try it. Prominently displayed on the PowerApps website is a potent tagline: “Connect the things you already have, build apps without writing code, publish and use on Web and mobile.” Among the services to which you can connect are Salesforce, Dynamics CRM, Dropbox, Slack, Twitter, and Azure. And for the true evil scientists who really want to stick it to IT, custom APIs also make the cut.
PowerApps joins a long list of NCLC tools already widely available, but slapping the Microsoft name on something so potentially potent provides an enormous vote of validation for the technology. The CEO can finally go to the CIO and ask, “What about this?”
And there’s the rub. NCLC tools can benefit IT, or they can hurt it. OK, Mr. or Ms. CIO, how do you plan to deal with them?
Ignore it. Woe unto you should you choose this strategy. NCLC is here to stay. You just can’t be seen as sticking your head in the sand. You can’t decree that all development must occur under the auspices of IT. You’ll be viewed as obstructionist, recalcitrant, and behind the times. You’ll be seen as an old-school CIO thinking in old-school terms. (Hint: You may already be viewed that way).
Ignore, but remain aware. That’s a half-step better. Departmental app builders are going to build apps. If you choose to consider those apps as a department’s illegitimate spawn, at least you should know what they’re doing and accessing. Stay wary, stay leery.
Acknowledge, but provide no aid. Admit it, denial is futile. Move beyond willful blindness. Sure, you’ll be secretly happy when an NCLC practitioner hangs himself or herself. Try not to rub your hands together too gleefully as you ride in on your white horse to rescue the project.
Embrace and be loved. Choosing to set people and departments up for success will win you friends and respect. When you admit that IT hasn’t the time or budget (or inclination?) to do a particular project, and follow that with support for a particular NCLC tool and maybe a written set of best-practices recommendations, you’ll rightfully be seen as a forward thinker who allows departments to react at the speed of business. Among the users of PowerApps that Microsoft is touting are Bose and Xerox. Clearly they’re on board with the concept of NCLC.
As an old-timer who grew up writing code in an MIS environment that wielded vise-grip total control over every aspect of apps and data, I admit embracing NCLC isn’t easy. But, it’s necessary. “Power to the people,” is not just a cultural touchstone of the 1960s, it is the reality of our software as a service, mobile computing, distributed way of digital life.
What’s your opinion of NCLC development tools empowering line-of-business departments? What do you see as the ramifications? Tell us, we’d like to hear from you.
Just a few days ago, in advance of a Gartner conference slated for Sydney, Australia, in May, Gartner research director Michael Warrilow, who is based in Australia, made a statement about the business of the cloud that is certain to trigger many discussions. Targeting his statements for that geography, Warrilow said that businesses are “too enthusiastic about cloud.” Though his geographical purview is limited, it’s reasonable to extrapolate his opinion to the wider global landscape.
“Some [businesses] are making dangerous assumptions that [the cloud] will always save them money, which it’s not necessarily going to do. What they will get is more agility and a different mix of capex and opex, which the business likes,” he said.
Mr. Warrilow is right, of course. Where I differ somewhat is that I don’t see some as “too enthusiastic,” but, rather, as enthusiastic for the wrong reason.
If a company is looking to cut costs, there are much easier ways to do it than launching the entirety of IT into the cloud. I’d start by dealing with output — how many printers are installed, which can be eliminated and replaced with a single departmental printer, duplex printing to halve paper consumption, outsourcing oversight and maintenance to a managed print services provider, just-in-time automatic toner cartridge replenishment to avoid hoarding, and so on. Follow that with imaging technology to eliminate most printing in the first place, and you’re saving serious money. It’s low-hanging fruit and it’s comparatively easy to do.
I do not subscribe to the idea that cutting costs should be the primary reason for moving to the cloud. Cloud computing is a means to an end, not a business strategy unto itself. You embrace the cloud because you can deliver more services and better information to customers and employees more quickly. That leads to (in theory, anyway) more sales, greater revenue, and enhanced profitability.
If you’re reading this, you probably earn your living thanks to cloud computing. Without a doubt, cloud technology — and the cloud industry — have rocked IT to its very core and changed the way nearly every company on the planet conducts business. The allure of the cloud can’t be argued: More services, better security, increasingly powerful development tools, instant spin-up of resources, and plummeting prices combine to make the cloud, well, simply irresistible.
We’ve matured from years ago when applications that were the easiest to move into the cloud got the call, regardless of their impact upon or importance to the business. Today, the focus is rightfully on the applications that make the most business sense.
Do you agree? Is your organization’s reason for embracing the cloud to save money, or to improve the way in which it operates? Share your opinions; we’d like to hear from you.
What do developers worry about when creating an application? Performance. Data validation. Correct logic and processing. Memory use. Concise code. What they tend not to concern themselves with is cost. Perhaps that needs to change.
In doing research for a story on containers as a service (CaaS), both users I interviewed railed about license costs, at least as they pertain to Microsoft SQL Server. Both make a good case for paying close attention to how many instances of SQL Server are running, the number of servers on which they run, and, especially, the proliferation of home-grown or purchased applications that expect their own personal instance of SQL Server.
The first hint of this came from Don Boxley, CEO of DH2i, a Fort Collins, Colorado company that makes SQL Server containerization software, primarily to endow databases with portability for easy movement from a development environment to production or from one cloud provider to another. Being able to stack containers on physical or virtual machines contributes to cost savings, he explained.
Well, that’s fine when a vendor pitching a product floats the idea, but how does this work out in the real world, in real businesses, with real applications? Turns out it’s a big deal.
Michael York, a systems engineer at Asante, a major regional healthcare system in Oregon, lives with the realities every day. “We have nearly 100 applications that have a SQL Server back end,” he told me. Most were purchased apps that stipulated a dedicated instance of SQL Server as a requirement. Add an app here and another there, and pretty soon you’re suffering from what York characterized as database sprawl. “It’s easy to stand up another instance,” he said. Getting the job done was the developer’s primary concern, not licensing costs. Running one instance per server sped development. Developers, he said, were often not even aware that instances could be stacked.
Through containerization, stacking instances, and de-provisioning of instances that were unnecessary, Asante saved more than $200,000 in 2015.
Tammy Lawson, a database administrator at Sonoco, the South Carolina global product-packaging giant (containers of a different sort), laid it out in very precise terms: “If each of my 61 container SQL instances were on its own server, the SQL Standard license for each server would be around $16K (using 2×8 AMD processors in my calculation since that is what my [DH2i] DxEnterprise physical servers are). That is a total of $976,000 just for Standard licenses. Buying SQL for my 4 Dx cluster nodes ($65K) + the DxEnterprise software came nowhere close to this number. Big savings in the licensing department.” Miss Lawson knows her stuff.
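Her arithmetic is easy to verify. This quick calculation reproduces it; the per-server and cluster figures come straight from her quote, and the DxEnterprise software cost is omitted because she didn’t give a number for it.

```python
# Lawson's comparison: 61 standalone SQL Server instances, each on its
# own 2x8-core server at roughly $16K per SQL Standard license, versus
# licensing just the 4 clustered Dx nodes at about $65K total.

instances = 61
per_server_license = 16_000
standalone_total = instances * per_server_license   # 61 servers licensed
clustered_total = 65_000                            # SQL for 4 cluster nodes

print(f"standalone: ${standalone_total:,}")         # standalone: $976,000
print(f"clustered:  ${clustered_total:,}")          # clustered:  $65,000
print(f"license savings: ${standalone_total - clustered_total:,}")
```

Even before accounting for the containerization software itself, stacking those 61 instances onto 4 licensed nodes leaves roughly $911,000 of license spend on the table.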
While it’s eminently clear that performance matters most, how much you spend year after year is important, too, especially if much of that spending could be avoided. If you properly stack instances on servers to mix and match database demand for the greatest efficiency, containers can save huge amounts of money. Developers need to know.
What strategies does your organization use to minimize licensing and maintenance fees? Share your ideas; we’d like to hear from you.
Chrysler killed off Plymouth. GM did it to Oldsmobile and Pontiac. Ford did it to Mercury. Microsoft even did it to Windows XP. Yet today, years after the demise of these products, they all continue to run. For the vehicles, parts remain available and dealers are happy to perform maintenance and take your money. Even for XP, Microsoft still issues the occasional security patch. Elsewhere, ink cartridges continue to be available for Epson inkjet printers discontinued long ago.
So, what’s the problem with the Internet of Things?
Consider Nest’s decision to kill off its $299 Revolv home-automation hub. Revolv (the company) was acquired by Nest in October 2014, which immediately stopped selling Revolv (the product). The service stayed up, however. Not for much longer. “As of May 15, 2016, your Revolv hub and app will no longer work,” states a message on the Revolv website from its founders. “The Revolv app won’t open and the hub won’t work.” Tough luck, buddy. And not exactly a PR-friendly warm and fuzzy writing style, either. (Nest itself had been snapped up by Google in January 2014.)
It’s reasonable to think that it was Revolv’s intellectual assets, especially its talented IoT developers, that Nest coveted. If you’re an IoT developer, or plan to become one, you should view the move as a signal that this is a great career path.
Initially, Nest (or Google or Alphabet) had no plans to compensate Revolv owners, but that now appears to be changing. In a statement provided to CNBC, a Nest spokesman said it will consider providing customers with compensation on a case-by-case basis. What form that compensation takes was not detailed. Either way, these customers are still left with nothing but a fancy paperweight. It’s the Abandonment of Things. Curiously, Google is refunding the full purchase price of its acquired Nik photo-editing software suite now that the company is making it free — and not shutting it down.
Do you think that’s fair? Shutting down the cloud component of your IoT product, turning faithful customers into orphans is a shameful business strategy. Can you imagine a maker of implantable cardiac pacemakers attempting a similar tactic? (Then again, no one jumped up to compensate me when it became impossible to buy blank VHS tapes for my VCR.)
I have an Acer laptop that’s nearly 20 years old. It runs Windows 95, still functions perfectly, and through an RS-232 serial-port powerline interface still runs an ancient version of the superb HomeSeer application, communicating with switch modules to control lighting in my home via the long-defunct X10 protocol. The laptop sits in a closet, out of sight, out of mind, plugged into a small UPS unit for power protection, and perfectly secure because it has no network connection. It’s all hilariously obsolete, but no one ever tried to shut any of these products down.
As developers, we understand that even the simplest of IoT products represents a significant investment. It contains embedded software to make the thing work, server-side applications to process messages or send out alerts, databases for maintaining user accounts, iOS and Android mobile apps for controlling devices from your reclining chair, and more. There are license fees for software libraries, too.
I can understand the underlying economic reason for leaving the past behind, but in this connected age, before you arbitrarily put a bullet through your products and applications, you’d best provide a soft landing for the people who paid for the privilege of using them.
What do you think? Have you written code for products your company killed off, leaving customers with no escape route? Share your thoughts; we’d like to hear from you.