As legacy applications from the on-premises datacenter are retired and replaced by cloud-based software-as-a-service subscriptions, those services need to be integrated. That’s the job of developers, armed with APIs and working under the aegis of the CIO. It’s all very tidy. But, what about integrating SaaS implementations that secretly came in through the back door without IT ever being aware? And what about when you find yourself in the crazy scenario of connecting a dozen instances of the very same SaaS?
It happens more often than you’d think, according to Liz Herbert, a vice president and principal analyst at Forrester Research. We’re all aware that different SaaS implementations, such as CRM, order fulfillment, payroll, inventory management, and others, need to be integrated. That’s neither news nor fodder for an opinion column. Much more interesting is something we rarely think about: the need to integrate multiple copies of the same SaaS.
“In a large enterprise, it is not uncommon to have 12 instances of Salesforce that are unaware of each other, usually because each one was brought in separately under the radar,” Herbert observed. It’s yet another aspect of that phenomenon we’ve come to know as Shadow IT or Citizen IT.
It happens because line-of-business departments want to do their own thing and not wait for IT to work its way down a list of pending projects. It can happen because IT doesn’t have the budget. Or because there’s a roadblock with a legacy CIO who thinks in legacy terms. It can happen because a department manager gets wind of what a different department is doing, likes the idea, but goes with his or her personal twist.
Typically, these SaaS instances become known to IT months after they were initiated, when the department manager asks IT to take over management and administration. Whether they conform to the corporate policies on security or governance was never considered — until now.
To be fair, this is not always due to illicit SaaS sign-ups. A more above-board scenario is when businesses go on acquisition sprees and have to meld all the pieces into a unified whole. Though each might use the same payroll processing SaaS, it’s a good bet that no two implementations are identical. That’s more likely to become an official IT project even though the same-but-different integration issues persist.
Have you found yourself in the midst of integrating multiple copies of the same SaaS? If so, we’d like to hear about your adventures in the discussion thread. And remember, it’s not just your organization. When it comes to cloud computing, “I thought it was just us” simply doesn’t apply.
You’re a developer and darned proud of the code you write. You follow the specs and build what the stakeholders and designers want. You’ve tested it and all the test scenarios work as expected. You deploy and the app goes into the wild. But, what happens when there’s a problem that no one anticipated? Not you, not the app owner, not QA, not ops, not anyone.
After having a problem with my mobile phone, I visited the local store of the phone manufacturer, a giant company named for a fruit. None of the store’s so-called geniuses could figure out why the docs and data on my phone kept ballooning up to fill — and even attempt to exceed — the 128 GB device’s available storage. It turns out that this giant hardware company (or is it a software company?) had no diagnostic software that could peek into the device and see what was suddenly occupying so much space, more than 23 GB. Manually adding up the data use reported by all the installed apps, plus recently deleted photos still on the device, totaled less than 1 GB.
Multiple reboots did not help. The only alternative, they said, was a reset and restore from my last backup. After doing that, I attempted to re-add a credit card to the phone’s wallet. That resulted in a “card is already in wallet” error though it clearly was not. Multiple calls with said fruit company’s tech support experts (on my land line) could not solve the problem. Again, there was no way to peer into the phone to get a snapshot.
Four calls later, one young man in the fruit company’s Jacksonville, Fla. call center suggested the purely undocumented move of logging out of my cloud account associated with my phone then logging back in. Voila! I could now add my credit card. Why do this? He couldn’t say.
Why did the phone operating software and its wallet app behave in this manner? No one knows. Why did logging out and back in solve the problem? No one knows. Was that the cloud equivalent of a three-fingered Ctrl-Alt-Del forced-reboot salute? No one knows. With tens of millions of phones in use that support wallets and credit cards, why hadn’t this been seen before? Or if it had been, why wasn’t it documented? No one knows. Web searches did yield some highly convoluted suggestions, though.
The innocent bystanders in this are the developers who build software based on a set of specs. I’m not saying that the developers were surrounded by fools to the left and jokers to the right, but it’s clear that neither the specs nor the test scripts anticipated this situation. Perhaps this couldn’t have been imagined. It’s not like the famous spec failure that says how to process a payment received less than 29 days or more than 29 days after the due date, but fails to specify what to do if the payment arrives exactly 29 days after the due date. That’s just bad design.
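That boundary gap is easy to reproduce in code. Here’s a minimal sketch; the function names and the fee amount are invented for illustration, not taken from any real billing system:

```python
def late_fee(days_after_due: int) -> float:
    """Compute a late fee per a spec that forgot the boundary case."""
    if days_after_due < 29:
        return 0.0          # on time, per the spec
    if days_after_due > 29:
        return 25.0         # late fee, per the spec
    # days_after_due == 29: the spec says nothing. Fail loudly rather
    # than silently inventing behavior the stakeholders never approved.
    raise ValueError("spec gap: payment received exactly 29 days after due date")


def late_fee_fixed(days_after_due: int) -> float:
    """The same rule with the boundary closed: 29 days counts as late."""
    return 0.0 if days_after_due < 29 else 25.0
```

Note that a test suite that probes only day 10 and day 40 passes both versions; only a test at exactly day 29 exposes the hole.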
Burke Holland, director of developer relations at Progress Software, has reminded me more than once that great software isn’t finished until it’s fully tested. But, just as you can never prove that an app is completely secure, only that it is not secure, you can’t test for scenarios beyond your wildest imagination.
What really would have helped is powerful diagnostic software to do a storage autopsy. It didn’t exist. And that makes me wonder what we should be doing in terms of creating diagnostics for the apps we build. Do you do that? Share your thoughts; we’d like to hear from you.
It’s a good question to ask. You think your API calls are secure. But, how do you know? Chances are you don’t.
The woeful case of the all-electric Nissan Leaf has been beaten to a pulp recently by the tech news industry. It turns out the car can be accessed via an insecure API. Battery charging functions and driving history can all be reached once you know the vehicle’s VIN, or vehicle identification number. And that, as you know, is embossed on an aluminum plate that’s always on display. All you need do is look through the windshield.
If this is really the best that a major automotive manufacturer can do, you’ve got to wonder about other companies whose pockets for developing and testing applications are not quite so deep. Nissan reacted by suspending its electric car app.
You can never prove that an application is secure. You can prove only that it is not secure.
This isn’t the first case of remote automotive hacking. You’ll remember last July that Chrysler issued an emergency software patch for 1.4 million vehicles once it became public that on-board software in Jeep, Ram, Durango, Chrysler 200 and 300, Challenger, and Viper models from 2013 to 2015 could be easily hacked.
Security researcher Troy Hunt, in a lengthy blog post, describes the entire Nissan Leaf scenario, complete with an embedded video and many code fragments. If you’re a developer, you should give the piece a thorough read. The issue isn’t that security was implemented incorrectly, but rather that it doesn’t seem to have been implemented at all.
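Hunt’s post has the real details; as a deliberately simplified sketch of the flaw (every name here is invented, not Nissan’s actual API), compare an endpoint that treats a publicly visible VIN as its only credential with one that also demands a secret per-owner token:

```python
import hmac

# Hypothetical server-side store: VIN -> secret token issued to the
# registered owner when he or she logs in.
TOKENS = {"1N4AZ0CP5DC000000": "s3cr3t-owner-token"}


def get_battery_status_insecure(vin: str) -> dict:
    # The VIN is embossed on a plate visible through the windshield,
    # so anyone walking past the car can make this call.
    return {"vin": vin, "battery": "80%"}


def get_battery_status(vin: str, token: str) -> dict:
    expected = TOKENS.get(vin)
    # hmac.compare_digest gives a constant-time comparison, so the
    # check doesn't leak the token one character at a time via timing.
    if expected is None or not hmac.compare_digest(expected, token):
        raise PermissionError("not authorized for this vehicle")
    return {"vin": vin, "battery": "80%"}
```

The first function is, in spirit, what the researchers found; the second is the minimum you’d expect from any API tied to a physical asset.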
I don’t know what’s worse — bad security or no security.
Roberto Medrano, executive vice president at Akana, a provider of tools for creating, securing, and managing APIs, told me that applications, and the integrations among them, are becoming increasingly API-driven, making connections simple and straightforward. But, it’s security that must always be top of mind.
Here’s what I’ve been saying about application security for years: You can never prove that an application is secure; you can prove only that it is not secure. How can that be? Think of it this way: If you run a million different attack scenarios on your app and none succeeds, you’ve proven only that those million don’t work. But, maybe scenario 1,000,001, the one you hadn’t thought of, is the one that will break in. When an attack does succeed, you’ve proven unequivocally that the app is not secure.
This is like a scientific hypothesis. You can never prove a hypothesis to be true, but you can prove it to be false.
What are you doing to test API security? Share your thoughts; we’d like to hear from you.
I’ve been writing opinion columns for various technology publications for more than a quarter century. Rarely have I seen anything touch a nerve to the degree of Facebook’s coming shutdown of the Parse mobile back-end as a service.
One thing that’s great to know is that developers are watching out for each other, offering up ideas for alternative services. As my own service, here’s a digest of some of what has crossed my inbox during the last week. Which are good, bad, or ugly products and services? That’s up to you to decide for your own mobile development projects.
Nimble Parse is a Parse-compatible API service from Nimble Stack that starts at $10 a month, including a half gig of memory and unlimited data. It offers three service levels with up to 2 GB of memory.
Appcelerator, another mobile app development platform, feels your pain. To help, director of product architecture Rick Blalock is hosting a webcast on Feb. 17 to walk through a comparison of Parse and Appcelerator Arrow, showing how to migrate platforms and answering questions.
Syncano is a start-up that touts itself as a platform for creating serverless apps. In her blog post, Sara Cowie shares your sadness and confusion. The company is ready to provide its entire portfolio of features free for six months, including a dedicated support team to guide developers through the migration process.
GameSparks, another back-end service that seems to target developers of gaming apps, wants to provide you with an alternative integrated toolset for building, tuning, and managing server-side components.
I was contacted by apiOmat.com, yet another MBaaS, that appears to have more of an enterprise slant to its offerings. You can check it out and get started for free, though the pricing chart was in euros.
Other alternatives exist. These are the first few that reached out to me. No doubt we’ll all be looking into this a lot more in the next few months. Parse shuts down on Jan. 28, 2017. Don’t wait. Start now. And share with us your woes, your outrage, and your plans.
Were you caught off guard by Facebook’s abrupt Jan. 28 announcement that its Parse mobile back-end as a service (MBaaS) was going to be shut down? You’re not alone. And we’d like to hear from you.
Outrage on Twitter’s #parseshutdown hashtag didn’t take long to get revved up. Developers who entrusted code or data to Parse bemoaned its impending demise, wondering what they would do next. Consultants and competing platform providers began to tweet advice for migrating data and applications, or to offer replacement mobile development platforms.
Burke Holland, director of developer relations at Progress Software, told me about the plight of one developer, victimized by the sudden and unexpected announcement. “I saw a message on Reddit from a developer who said he had deployed his app on Parse literally two hours before the announcement,” he said. He went on to say that there are small developers who may have built their entire business on Parse and who don’t have the latitude to take a hit like this. He’s right.
For Facebook, it may be that Parse simply wasn’t a profitable business, Richard Mendis, chief product officer of MBaaS provider AnyPresence, told me.
What we may have lost sight of is how Facebook views the people who use its various services.
In my opinion, whether we’re posting photos of the new grandkid, linking to videos of cats playing piano, or using the company’s APIs to develop compatible applications, remember this: We are not Facebook’s customers, we are Facebook’s product.
Facebook is in the business of generating revenue, mostly through the sales of advertising that reaches the likes of you and me. If you are not paying money to Facebook, you are not a Facebook customer. What the company is doing is delivering eyeballs (that’s us users) to its paying advertisers.
Al Hilwa, program director of software development research at IDC, believes the company had hundreds of developers working on Parse, at a cost approaching $50 million that could be better spent elsewhere. Apparently so.
Facebook is taking a year to wind down Parse. It released a database migration tool to ease the transition to any MongoDB database. It also published a migration guide and its open source Parse Server, which provides developers with the ability to run much of the Parse API from any Node.js server of their choosing. Final shutdown will occur on Jan. 28, 2017, exactly one year after the closure was announced.
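If you’re planning your own exit, pulling your objects out through Parse’s REST API is a sensible first step before any MongoDB import. Here’s a hedged sketch: the class name is a placeholder, and the HTTP call is injected as a function so the paging logic (Parse’s REST API pages with `limit` and `skip`) can be tested without a network:

```python
import json
from typing import Callable

PARSE_BASE = "https://api.parse.com/1/classes"  # historical Parse endpoint


def export_class(class_name: str,
                 fetch: Callable[[str], str],
                 page_size: int = 100) -> list:
    """Page through every object in a Parse class using limit/skip."""
    objects, skip = [], 0
    while True:
        url = f"{PARSE_BASE}/{class_name}?limit={page_size}&skip={skip}"
        batch = json.loads(fetch(url)).get("results", [])
        objects.extend(batch)
        if len(batch) < page_size:   # short page means we've hit the end
            return objects
        skip += page_size
```

In production, `fetch` would be an HTTPS GET carrying your `X-Parse-Application-Id` and `X-Parse-REST-API-Key` headers; injecting it also makes it easy to add retry and rate-limit handling later.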
If you were developing apps on the Parse platform, what’s the impact and what are you going to do about it? Where do you intend to move your code and data? Join the discussion, we’d like to hear from you.
When you think cloud computing platforms, do you think Walmart? I certainly didn’t. That’s changing.
In a WalmartLabs blog post, Jeremy King, CTO of Walmart Global eCommerce, and Tim Kimmet, VP of Platform and Systems, announced that following more than two years of development and testing, the company is making its OneOps cloud management and application lifecycle management platform available to the open source community. Talk about pricing rollbacks. If you’re inclined to dive in, you can download the source code from GitHub.
Whatever you think of Walmart as a company, it has long been a leader in advancing and leveraging technology in the retail industry. As a former director of point-of-sale systems at two national retailers, I saw the technology move from Kimball punch tickets to OCR-A to barcode, RFID, and beyond. Behind that are warehousing systems, sales analysis for merchandise replenishment, and much more. Today, with smartphones and omnichannel marketing strategies, shopping is a vastly different experience than it was even just a decade ago. OneOps is used to manage the Walmart and Sam’s Club e-commerce sites.
But, what WalmartLabs is doing is not just about retail, it’s about any organization that relies on the cloud for its IT needs. Add to that the idea of eliminating cloud provider vendor lock-in, and we might be in for quite a shake-up.
It appears a key benefit of OneOps is the ability for cloud users to avoid vendor lock-in to any one cloud provider. In the words of King and Kimmet, developers can use OneOps to “test and switch between different cloud providers to take advantage of better pricing, technology and scalability.” Impressive. One promise of the cloud has been ease of portability, but, in practice, that’s often not the case.
Four clouds are currently supported: OpenStack, Rackspace, Microsoft Azure, and Amazon Web Services. CenturyLink support is said to be on the way. Nearly three dozen development products are supported as well, including Tomcat, Node.js, Docker, Ruby, Java, and JBoss, to name a few.
Other features include auto-scaling, metrics instrumentation, auto-healing with automatic instance replacement, and perhaps most important, the idea of out-of-the box readiness for multiple public and private cloud infrastructure providers.
What do you think about this? Would you try it out and perhaps place your business’s existence in the hands of software from Walmart? It’s going to be a fascinating ride. Share your thoughts about this, we’d like to hear from you.
If there’s one thing we can say with certainty about “IT,” it’s that both the information and the technology are constantly changing.
You know about the information: volume, velocity, and variety are exploding, forcing us to be ever more vigilant about the fourth “v,” veracity. What has piqued my interest is the other side, the technology of handling all that data. In particular, it’s the way Apache Spark is pushing MapReduce aside for clustered processing of large data volumes.
There is no doubt that Spark’s swift growth is coming at the expense of the MapReduce component of the Apache Hadoop software framework. Consider this: In its December 2015 survey of 3,100 IT professionals (59% of whom are developers), Typesafe, a San Francisco maker of development tools, found that 22% of respondents were actively working with Spark.
So what’s the allure of Spark? I asked Anand Iyer, senior product manager at Cloudera, the first company to commercialize Apache Hadoop. “Compared with MapReduce, Spark is almost an order of magnitude faster, it has a significantly more approachable and extensible API, and it is highly scalable,” he said. “Spark is a fantastic, flexible engine that will eventually replace MapReduce.” Cloudera isn’t wasting time: In September 2015, the company revved up its efforts to position Spark as the successor to Hadoop’s MapReduce framework.
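To make the “more approachable API” claim concrete, here’s a word count in the Spark style. `TinyRDD` is a toy, single-machine stand-in I wrote just to show the shape of the API, not Spark itself; the real PySpark chain looks essentially the same, whereas classic MapReduce forces the same logic into separate mapper and reducer classes plus a job driver:

```python
class TinyRDD:
    """A toy, in-memory stand-in for Spark's RDD, for illustration only."""

    def __init__(self, data):
        self.data = list(data)

    def flatMap(self, f):
        return TinyRDD(x for item in self.data for x in f(item))

    def map(self, f):
        return TinyRDD(f(x) for x in self.data)

    def reduceByKey(self, f):
        acc = {}
        for k, v in self.data:
            acc[k] = f(acc[k], v) if k in acc else v
        return TinyRDD(acc.items())

    def collect(self):
        return list(self.data)


DOCS = ["spark eats mapreduce", "spark is fast", "mapreduce is batch"]

# Word count as one readable chain of transformations.
counts = dict(
    TinyRDD(DOCS)
    .flatMap(str.split)
    .map(lambda w: (w, 1))
    .reduceByKey(lambda a, b: a + b)
    .collect()
)
```

Swap `TinyRDD(DOCS)` for `sc.parallelize(DOCS)` and the same chain runs on a real Spark cluster, which is a big part of the allure Iyer describes.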
Even IBM is on board. In June 2015, IBM called Spark “potentially the most significant open source project of the next decade.” IBM is already working to embed Spark in its analytics and commerce platforms, and it is assigning 3,500 researchers to Spark-related projects. Yes, 3,500 is correct.
Research firm Gartner, reacting to the IBM initiative, said that information and analytics leaders must begin working to ensure that they have the needed knowledge and skills.
And that’s the key. As a developer, you must work with an ever-changing toolbox. New skills must be acquired and mastered, and sometimes old ones left behind. Though MapReduce is not going to disappear anytime soon, the shift from MapReduce to Spark appears to be happening with astonishing speed. Are you ready?
What are your organization’s plans with respect to MapReduce and Spark? Are you planning to switch? Or maybe you’ve never even gone so far as to implement MapReduce. Share your opinions on this important topic. We’d like to hear from you.
Today is the day that Microsoft discontinues all support for the loathed Windows 8 operating system, and stops issuing security patches for Internet Explorer versions 8, 9, and 10. Windows 7 and 8.1, along with IE 11, are safe — for now.
The message is clear: upgrade or risk the possibility of bad people doing bad things. If you think this is moot and that most people have already upgraded, well, think again. According to Statista’s figures on global desktop operating system market share, Windows XP still held an 8.44% share as of December 2015, even though support was killed off in April 2014. Even the despised Windows Vista still maintained a 1.78% share. It’s not as strange as you might think. Up until a couple of years ago, I knew of a small business that was still using Windows 2000 on roughly a dozen laptops.
There’s one small twist. Because systems running Vista cannot run IE 10 or 11, they still get support for IE 9. That ends in April 2017, when all support for Vista terminates.
It’s clear that Microsoft wants users to be on Win 10 and the Edge browser. Those monumentally annoying pop-ups admonishing me to “upgrade now” aren’t going away until I do. And I will. Soon. Really.
The rules for deep-pocketed corporations and government agencies are a bit different. Those that are willing to pay for continued support of these older products can get it, for now. No doubt you read in June 2015 that your United States Navy is shelling out $9 million a year to have Microsoft continue support for XP.
If your organization or your clients have applications that require these older versions of Windows and IE, the proverbial window is closing. Here are Microsoft’s official end-of-life dates that you need to keep in mind: Vista, Apr. 11, 2017; Win 7, Jan. 14, 2020; Win 8.1, Jan. 10, 2023; and Win 10, Oct. 14, 2025.
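If you manage a fleet, those dates are worth encoding rather than memorizing. A small sketch, using the dates listed above, that flags products whose support ends within a planning horizon:

```python
from datetime import date

# Microsoft end-of-support dates as listed above.
EOL = {
    "Windows Vista": date(2017, 4, 11),
    "Windows 7":     date(2020, 1, 14),
    "Windows 8.1":   date(2023, 1, 10),
    "Windows 10":    date(2025, 10, 14),
}


def needs_migration_plan(today: date, horizon_days: int = 365) -> list:
    """Products whose support ends within the horizon (or already has)."""
    return sorted(name for name, eol in EOL.items()
                  if (eol - today).days <= horizon_days)
```

Run it once a quarter and the proverbial closing window stops sneaking up on you.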
It’s time to start the process for movin’ on up to Windows 10 and Edge.
Are you running into issues with applications that require versions of Windows or IE that are no longer supported? What is your CIO doing about it? Tell us, we’d like to hear from you.
The move to smart devices can’t be stopped. That means new opportunities for developers, especially in transaction processing and payments. But, don’t lose sight of the legacy applications that drive many of those transactions.
By 2018, consumers in mature markets will expand, rather than consolidate their device portfolio, resulting in the use of more than three personal devices, according to researcher Gartner’s “Predicts 2016” report. In a separate report, Gartner expects that by 2018 a majority of users will, for the first time, turn to a mobile device for all of their online activities. Yes, Gartner says “all.”
Do all of your online activities from a smartphone or tablet, and you’re essentially turning away from desktop technology. For developers, it means rethinking interface design, data integration, transaction processing, and querying home-automation devices via the Internet rather than just the local in-home Wi-Fi setup.
Two key concerns, according to cloud consultant Judith Hurwitz, are scalability and compliance. Apps on the mobile device, and the corresponding back-end server processing and data serving, must be capable of scaling to peaks that might seem unrealistic, she says. Compliance becomes increasingly important. It might be a portal app for managing a patient-physician relationship or filling prescriptions through an online pharmacy. It could be placing equities orders with an online broker. Eventually, it will be interacting with your car’s diagnostics system.
Without a doubt, the rush to these new technologies is on. Developers are learning new skills and new languages. Analytics is an increasingly big part of dealing with big data. Despite this, we should not lose sight of systems that have been running at corporations for years and even decades. “Old software never dies,” Hurwitz says.
It’s an excellent point. Batch processes, such as monthly statement rendering programs at banks, can go untouched for what seems like an eternity. Updated only to reflect the appearance of printed statements (my bank recently added color and changed typefaces), the underlying logic can go for decades without being touched. The original programmers may have long since retired or died, yet these applications (we used to call them programs) remain vital. And, there may be no financial gain for the business in throwing old, fully functional applications on the scrap heap.
Sure, new technologies to support the Internet of Things are vital. Learning Apache Spark for big data streaming, Hadoop for distributed data processing, or Docker for containerization is essential for today’s work (and tomorrow’s). It’s also prudent not to lose sight of where a lot of corporate data remains.
When explaining the concept of cloud computing to friends unfamiliar with it, I usually turn to my imaginary recipe-of-the-day mobile and Web app as an illustrative example. Something that should be seen by users as the very model of simplicity gets very complicated very fast under the hood. It’s enough to make any developer lose his or her appetite. Are your apps doing something similar?
The premise is simple: suggest a daily recipe based on a variety of factors. The genius is blending my ingredient list (multiple data sources) to produce a finished product that’s easy to use, sports a great-looking user interface, and that will entice users to take some revenue-generating action, such as ordering ingredients or cookware online, or subscribing to a magazine.
Data source #1: User info. To get recipes, the user has to sign up, at minimum, with a user name, password, e-mail address, and postal code. The postal code is crucial, because a key function of the app is to suggest recipes based on specific location and weather conditions. That ensures a user in Maine won’t get recipes calling for collard greens, and that users in Phoenix in July won’t get recipes for steaming hot soups. Also, recipes might be sent not only for today, but for several days out, allowing the user to acquire ingredients not on hand.
Data source #2: Weather forecast based on postal code to ensure that a hearty stew, best for a cold, snowy day isn’t sent in the midst of a rare December heatwave. Obtained via API calls from a third-party service, such as Weather Underground.
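In code, data source #2 might look like the sketch below. The endpoint and field names are placeholders (real providers such as Weather Underground each define their own), and the HTTP call is injected so the logic is testable offline:

```python
import json
from typing import Callable


def forecast_for_zip(zip_code: str, fetch: Callable[[str], str]) -> dict:
    """Fetch a one-day forecast keyed by postal code.

    `fetch` performs the HTTP GET; the URL below is a placeholder,
    not any real provider's endpoint.
    """
    url = f"https://api.example-weather.com/v1/forecast?zip={zip_code}"
    raw = json.loads(fetch(url))
    return {
        "high_f": raw["forecast"]["high_f"],
        "low_f": raw["forecast"]["low_f"],
        "conditions": raw["forecast"]["conditions"],
    }


def recipe_weather_bucket(forecast: dict) -> str:
    """Map a forecast onto the app's coarse recipe categories."""
    if forecast["high_f"] <= 45:
        return "hearty"     # stews, steaming soups
    if forecast["high_f"] >= 85:
        return "no-cook"    # salads, cold dishes
    return "anything"
```

The bucket function is where the Maine-versus-Phoenix logic lives: one normalized weather signal that the recipe picker can consume without caring which provider supplied it.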
Data source #3: The app owner’s database of recipes, along with photos and links to discussion threads. Maybe links to how-to videos, too.
Data source #4: Analytics that reveals which recipes are most popular by region and time of year. It’s another ingredient in determining which recipe to suggest.
Data source #5: This could also be fields in the user info database. It includes user preferences — favorite and least-favorite cuisine types, self-rated level of cooking expertise, food allergies, how often to suggest a recipe, ingredients to avoid (George H.W. Bush famously hated broccoli), etc. Capture family birthdates, and the app could suggest birthday cake recipes and gifts 10 days in advance.
Data source #6: Current and future farm-fresh ingredients availability by location. It’s no good to suggest a recipe calling for fresh cranberries if they’re out of season. Another API call to somewhere.
Data source #7: Coupon codes and other promotional enticements for purchasing non-perishable ingredients and cookware through the application.
Data source #8: Pricing comparisons at local supermarkets for meats and veggies, likely extracted via APIs from a service that collects this type of data.
Data source #9: If the user makes a purchase, multiple data sources come into play for credit-card processing, shipping address, shipment tracking, and so on.
Data source #10: History database of user actions, including which recipes were viewed, saved, printed, rated, and commented on. What items did the user purchase? Re-display a favorite recipe a year later? What other recipes did the user seek and display? Analytics could prevent five straight days of soups, even though the weather outside is frightful and might suggest that.
In addition, there might also be integration opportunities via APIs with retail sponsors: if you’re making clam chowder and the weather is snowy, can we suggest the following cold-weather apparel items, or winter sporting goods, or vacation trips to a tropical resort?
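Pull all of those sources together and the daily suggestion reduces to a scoring problem. A deliberately simplified sketch, with weights and field names invented for illustration:

```python
def score_recipe(recipe: dict, user: dict, weather_bucket: str,
                 regional_popularity: dict) -> float:
    """Blend several of the data sources above into one ranking score."""
    # Hard filters first: allergies (source #5) and out-of-season
    # ingredients (source #6) disqualify a recipe outright.
    if set(recipe["ingredients"]) & set(user.get("allergies", [])):
        return float("-inf")
    if not recipe.get("ingredients_in_season", True):
        return float("-inf")

    score = 0.0
    # Source #2: does the recipe fit today's weather?
    if recipe["weather_bucket"] == weather_bucket:
        score += 3.0
    # Source #4: regional popularity analytics.
    score += 2.0 * regional_popularity.get(recipe["id"], 0.0)
    # Source #5: cuisine preferences.
    if recipe["cuisine"] in user.get("favorite_cuisines", []):
        score += 1.5
    # Source #10: don't serve soup five days running.
    if recipe["id"] in user.get("recent_history", []):
        score -= 4.0
    return score


def suggest(recipes, user, weather_bucket, popularity):
    """Today's pick: the highest-scoring surviving recipe."""
    return max(recipes, key=lambda r: score_recipe(r, user, weather_bucket, popularity))
```

In a real app the weights would come from experimentation and the inputs from the APIs described above, but the shape is the point: each data source contributes one term to a single ranking.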
After all this comes the matter of designing and building an application that looks great, presents all the aforementioned data as a completely seamless experience, and performs blazingly fast.
The point here is that nothing is simple. The specs for this app would be complicated. And, it takes a huge amount of talent to build an app that users enjoy and look forward to using repeatedly.
Are you building cloud and mobile apps that integrate data from a large number of sources? What’s your data mashup process, and how do you ensure stellar performance? Teach us how you solved these problems; we’d like to hear from you.