It’s a good question to ask. You think your API calls are secure. But how do you know? You probably don’t.
The woeful case of Nissan’s all-electric Leaf has been beaten to a pulp recently by the tech news industry. It turns out the car can be accessed via an insecure API. Battery-charging functions and driving history can all be reached once you know the vehicle’s VIN, or vehicle identification number. And the VIN, as you know, is embossed on an aluminum plate that’s always on display. All you need to do is look through the windshield.
If this is really the best that a major automotive manufacturer can do, you’ve got to wonder about other companies whose pockets for developing and testing applications are not quite so deep. Nissan reacted by suspending its electric car app.
You can never prove that an application is secure. You can prove only that it is not secure.
This isn’t the first case of remote automotive hacking. You’ll remember last July that Chrysler issued an emergency software patch for 1.4 million vehicles once it became public that on-board software in Jeep, Ram, Durango, Chrysler 200 and 300, Challenger, and Viper models from 2013 to 2015 could be easily hacked.
Security researcher Troy Hunt, in a lengthy blog post, describes the entire Nissan Leaf scenario, complete with an embedded video and many code fragments. If you’re a developer, you should give the piece a thorough read. The issue isn’t that security was implemented incorrectly, but rather that it doesn’t seem to have been implemented at all.
I don’t know what’s worse — bad security or no security.
Roberto Medrano, executive vice president at Akana, a provider of tools for creating, securing, and managing APIs, told me that applications, and the integrations among them, are becoming increasingly API-driven, making connections simple and straightforward. But, it’s security that must always be top of mind.
Here’s what I’ve been saying about application security for years: You can never prove that an application is secure; you can prove only that it is not secure. How can that be? Think of it this way: If you run a million different attack scenarios on your app and none succeeds, you’ve proven only that those million don’t work. But maybe scenario 1,000,001, the one you hadn’t thought of, is the one that will break in. Thus, the only thing you can prove unequivocally is that something is not secure.
This is like a scientific hypothesis. You can never prove a hypothesis to be true, but you can prove it to be false.
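The asymmetry is easy to demonstrate. Here’s a toy sketch (the filter and payloads are invented for illustration, not taken from any real app): a naive input filter survives a hundred thousand random attack strings, yet a single crafted input, the one the tester didn’t think of, sails right through.

```python
import random
import string

def naive_filter(user_input):
    """A deliberately flawed sanitizer: it blocks the exact
    lowercase tag but forgets that HTML tags are case-insensitive."""
    return "<script>" not in user_input

# Scenarios 1 through 100,000: random junk never trips the flaw.
random.seed(42)
alphabet = string.ascii_lowercase + "<>/"
survived = all(
    naive_filter("".join(random.choices(alphabet, k=20)))
    for _ in range(100_000)
)

# Scenario 100,001: the crafted input nobody randomized into existence.
crafted = "<SCRIPT>alert(1)</SCRIPT>"
slipped_through = naive_filter(crafted)  # True: the filter wrongly passes it
```

Passing every test run proves only that those particular attacks fail; the one successful bypass proves, unequivocally, that the filter is not secure.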
What are you doing to test API security? Share your thoughts; we’d like to hear from you.
I’ve been writing opinion columns for various technology publications for more than a quarter century. Rarely have I seen anything touch a nerve to the degree of Facebook’s coming shutdown of the Parse mobile back-end as a service.
One thing that’s great to know is that developers are watching out for each other, offering up ideas for alternative services. As my own service, here’s a digest of some of what has crossed my inbox during the last week. Which are good, bad, or ugly products and services? That’s up to you to decide for your own mobile development projects.
Nimble Parse is a Parse-compatible API service from Nimble Stack that starts at $10 a month, including a half gig of memory and unlimited data. It offers three service levels, up to 2 GB of memory.
Appcelerator, another mobile app development platform, feels your pain. To help, director of product architecture Rick Blalock is hosting a webcast on Feb. 17 to walk through a comparison of Parse and Appcelerator Arrow, showing how to migrate platforms and answering questions.
Syncano is a start-up that touts itself as a platform for creating serverless apps. In her blog post, Sara Cowie shares your sadness and confusion. The company is ready to provide its entire portfolio of features free for six months, including a dedicated support team to guide developers through the migration process.
GameSparks, another back-end service that seems to target developers of gaming apps, wants to provide you with an alternative integrated toolset for building, tuning, and managing server-side components.
I was contacted by apiOmat.com, yet another MBaaS, which appears to have more of an enterprise slant to its offerings. You can check it out and get started for free, though the pricing chart was in euros.
Other alternatives exist. These are just the first few that reached out to me. No doubt we’ll all be looking into this a lot more in the next few months. Parse shuts down on Jan. 28, 2017. Don’t wait. Start now. And share with us your woes, your outrage, and your plans.
Were you caught off guard by Facebook’s abrupt Jan. 28 announcement that its Parse mobile back-end as a service (MBaaS) was going to be shut down? You’re not alone. And we’d like to hear from you.
Outrage on the Twitter #parseshutdown page didn’t take long to get revved up. Developers who entrusted code or data to Parse bemoaned its impending demise, wondering what they would do next. Consultants and competing platform providers began to tweet advice for migrating data and applications or to offer replacement mobile development platforms.
Burke Holland, director of developer relations at Progress Software, told me about the plight of one developer, victimized by the sudden and unexpected announcement. “I saw a message on Reddit from a developer who said he had deployed his app on Parse literally two hours before the announcement,” he said. He went on to say that there are small developers who may have built their entire business on Parse and who don’t have the latitude to take a hit like this. He’s right.
For Facebook, it may be that Parse simply wasn’t a profitable business, Richard Mendis, chief product officer of MBaaS provider AnyPresence, told me.
What we may have lost sight of is how Facebook views the people who use its various services.
In my opinion, whether we’re posting photos of the new grandkid, linking to videos of cats playing piano, or using the company’s APIs to develop compatible applications, remember this: We are not Facebook’s customers, we are Facebook’s product.
Facebook is in the business of generating revenue, mostly through the sales of advertising that reaches the likes of you and me. If you are not paying money to Facebook, you are not a Facebook customer. What the company is doing is delivering eyeballs (that’s us users) to its paying advertisers.
Al Hilwa, program director of software development research at IDC, believes the company had hundreds of developers working on Parse, at a cost approaching $50 million that could be better spent elsewhere. Apparently so.
Facebook is taking a year to wind down Parse. It released a database migration tool to ease the transition to any MongoDB database. It also published a migration guide and its open source Parse Server, which provides developers with the ability to run much of the Parse API from any Node.js server of their choosing. Final shutdown will occur on Jan. 28, 2017, exactly one year after the closure was announced.
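Because the open source Parse Server speaks the same REST dialect as the hosted service, much of an app’s migration comes down to pointing the same requests at your own host instead of Facebook’s. A minimal sketch of that idea (the host name, app ID, and class name here are hypothetical, not from any real deployment):

```python
import json
import urllib.request

def build_parse_request(base_url, app_id, cls, obj):
    """Build a Parse-style REST request to create an object.
    Only base_url and the keys change when moving from the
    hosted service to a self-hosted Parse Server."""
    return urllib.request.Request(
        url=f"{base_url}/classes/{cls}",
        data=json.dumps(obj).encode("utf-8"),
        headers={
            "X-Parse-Application-Id": app_id,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Point at a self-hosted Parse Server (hypothetical host and app ID):
req = build_parse_request(
    "https://parse.example.com/parse", "myAppId",
    "GameScore", {"score": 1337, "playerName": "Sean"},
)
```

The request is built but not sent here; in a real migration you’d also move the backing data into MongoDB with Facebook’s migration tool before cutting over.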
If you were developing apps on the Parse platform, what’s the impact and what are you going to do about it? Where do you intend to move your code and data? Join the discussion, we’d like to hear from you.
When you think cloud computing platforms, do you think Walmart? I certainly didn’t. That’s changing.
In a WalmartLabs blog post, Jeremy King, CTO of Walmart Global eCommerce, and Tim Kimmet, VP of platform and systems, announced that following more than two years of development and testing, the company is making its OneOps cloud management and application lifecycle management platform available to the open source community. Talk about price rollbacks. If you’re inclined to dive in, you can download the source code from GitHub.
Whatever you think of Walmart as a company, the company has long been a leader in advancing and leveraging technology in the retail industry. As a former director of point-of-sale systems at two national retailers, I saw the technology move from Kimball punch tickets to OCR-A to barcode, RFID, and beyond. Behind that are warehousing systems, sales analysis for merchandise replenishment, and much more. Today, with smartphones and omnichannel marketing strategies, shopping is a vastly different experience than it was even just a decade ago. OneOps is used to manage the Walmart and Sam’s Club e-commerce sites.
But, what WalmartLabs is doing is not just about retail; it’s about any organization that relies on the cloud for its IT needs. Add to that the idea of eliminating cloud provider vendor lock-in, and we might be in for quite a shake-up.
It appears a key benefit of OneOps is the ability for cloud users to avoid vendor lock-in to any one cloud provider. In the words of King and Kimmet, developers can use OneOps to “test and switch between different cloud providers to take advantage of better pricing, technology and scalability.” Impressive. One promise of the cloud has been ease of portability, but, in practice, that’s often not the case.
Four clouds are currently supported: OpenStack, Rackspace, Microsoft Azure, and Amazon Web Services. CenturyLink support is said to be on the way. Nearly three dozen development products are supported as well, including Tomcat, Node.js, Docker, Ruby, Java, and JBoss, to name a few.
Other features include auto-scaling, metrics instrumentation, auto-healing with automatic instance replacement, and perhaps most important, the idea of out-of-the box readiness for multiple public and private cloud infrastructure providers.
What do you think about this? Would you try it out and perhaps place your business’s existence in the hands of software from Walmart? It’s going to be a fascinating ride. Share your thoughts about this, we’d like to hear from you.
If there’s one thing we can say with certainty about “IT,” it’s that both the information and the technology are constantly changing.
You know about the information: volume, velocity, and variety are exploding, forcing us to be ever more vigilant about the fourth “v,” veracity. What has my interest piqued is the other side, the technology of handling all that data. More interesting still is the way Apache Spark is pushing MapReduce aside for clustered processing of large data volumes.
There is no doubt that Spark’s swift growth is coming at the expense of the MapReduce component of the Apache Hadoop software framework. Consider this: In its December 2015 survey of 3,100 IT professionals (59% of whom are developers), Typesafe, a San Francisco maker of development tools, found that 22% of respondents are actively working with Spark.
So what’s the allure of Spark? I asked Anand Iyer, senior product manager at Cloudera, the first company to commercialize Apache Hadoop. “Compared with MapReduce, Spark is almost an order of magnitude faster, it has a significantly more approachable and extensible API, and it is highly scalable,” he said. “Spark is a fantastic, flexible engine that will eventually replace MapReduce.” Cloudera isn’t wasting time: In September 2015, the company revved up its efforts to position Spark as the successor to Hadoop’s MapReduce framework.
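For readers who have touched neither engine, the programming model at stake fits in a few lines. This toy word count in plain Python (not Hadoop code, just an illustration of the model) shows the map, shuffle, and reduce phases that Spark replaces with a faster, more approachable API:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["Spark pushes MapReduce aside", "MapReduce counts words"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
```

In real MapReduce each phase runs as a distributed batch job with intermediate results written to disk; much of Spark’s order-of-magnitude speedup comes from chaining such operations in memory instead.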
Even IBM is on board. In June 2015, IBM called Spark “potentially the most significant open source project of the next decade.” IBM is already working to embed Spark in its analytics and commerce platforms, and it is assigning 3,500 researchers to Spark-related projects. Yes, 3,500 is correct.
Research firm Gartner, reacting to the IBM initiative, said information and analytics leaders must begin working to ensure that they have the needed knowledge and skills.
And that’s the key. As a developer, you must work with an ever-changing toolbox. New skills must be acquired and mastered, and sometimes old ones left behind. Though MapReduce is not going to disappear anytime soon, the shift from MapReduce to Spark appears to be happening with astonishing speed. Are you ready?
What are your organization’s plans with respect to MapReduce and Spark? Are you planning to switch? Or maybe you’ve never even gone so far as to implement MapReduce. Share your opinions on this important topic. We’d like to hear from you.
Today is the day that Microsoft discontinues all support for the loathed Windows 8 operating system and stops issuing security patches for Internet Explorer versions 8, 9, and 10. Windows 7 and 8.1, along with IE 11, are safe — for now.
The message is clear: Upgrade or risk the possibility of bad people doing bad things. If you think this is moot and that most people have already upgraded, well, think again. According to Statista’s figures on global desktop operating system market share, Windows XP still held an 8.44% share as of December 2015, even though support was killed off in April 2014. Even the despised Windows Vista still maintained a 1.78% share. It’s not as strange as you might think. Up until a couple of years ago, I knew of a small business that was still running Windows 2000 on roughly a dozen laptops.
There’s one small twist. Since systems running Vista cannot run IE 10 or 11, they still get support for IE 9. That ends in April 2017 when all support for Vista terminates.
It’s clear that Microsoft wants users to be on Win 10 and the Edge browser. Those monumentally annoying pop-ups admonishing me to “upgrade now” aren’t going away until I do. And I will. Soon. Really.
The rules for deep-pocketed corporations and government agencies are a bit different. Those that are willing to pay for continued support of these older products can get it, for now. No doubt you read in June 2015 that your United States Navy is shelling out $9 million a year to have Microsoft continue support for XP.
If your organization or your clients have applications that require these older versions of Windows and IE, the proverbial window is closing. Here are Microsoft’s official end-of-life dates that you need to keep in mind: Vista, Apr. 11, 2017; Win 7, Jan. 14, 2020; Win 8.1, Jan. 10, 2023; and Win 10, Oct. 14, 2025.
It’s time to start the process for movin’ on up to Windows 10 and Edge.
Are you running into issues with applications that require versions of Windows or IE that are no longer supported? What is your CIO doing about it? Tell us, we’d like to hear from you.
The move to smart devices can’t be stopped. That means new opportunities for developers, especially in transaction processing and payments. But, don’t lose sight of the legacy applications that drive many of those transactions.
By 2018, consumers in mature markets will expand, rather than consolidate, their device portfolios, resulting in the use of more than three personal devices, according to researcher Gartner’s “Predicts 2016” report. In a separate report, Gartner expects that by 2018 a majority of users will, for the first time, turn to a mobile device for all of their online activities. Yes, Gartner says “all.”
Do all of your online activities from a smartphone or tablet, and you’re essentially turning away from desktop technology. For developers it means interface design, data integration, transaction processing, and querying home-automation devices via the Internet rather than just the local in-home Wi-Fi setup.
Two key concerns, according to cloud consultant Judith Hurwitz, are scalability and compliance. Apps on the mobile device, and the corresponding back-end server processing and data serving, must be capable of scaling to peaks that might seem unrealistic, she says. Compliance becomes increasingly important, too. It might be a portal app for managing a patient-physician relationship or filling prescriptions through an online pharmacy. It could be placing equities orders with an online broker. Eventually, it will be interacting with your car’s diagnostics system.
Without a doubt, the rush to these new technologies is on. Developers are learning new skills and new languages. Analytics is an increasingly big part of dealing with big data. Despite this, we should not lose sight of systems that have been running at corporations for years and even decades. “Old software never dies,” Hurwitz says.
It’s an excellent point. Batch processes, such as monthly statement rendering programs at banks, can go untouched for what seems like an eternity. Updated only to reflect the appearance of printed statements (my bank recently added color and changed typefaces), the underlying logic can go for decades without being touched. The original programmers may have long since retired or died, yet these applications (we used to call them programs) remain vital. And there may be no financial gain for the business in throwing old, fully functional applications on the scrap heap.
Sure, new technologies to support the Internet of Things are vital. Learning Apache Spark for big data streaming, Hadoop for distributed data processing, or Docker for containerization is essential for today’s work (and tomorrow’s). It’s also prudent not to lose sight of where a lot of corporate data remains.
When explaining the concept of cloud computing to friends unfamiliar with it, I usually turn to my imaginary recipe-of-the-day mobile and Web app as an illustrative example. Something that should be seen by users as the very model of simplicity gets very complicated very fast under the hood. It’s enough to make any developer lose his or her appetite. Are your apps doing something similar?
The premise is simple: suggest a daily recipe based on a variety of factors. The genius is blending my ingredient list (multiple data sources) to produce a finished product that’s easy to use, sports a great-looking user interface, and that will entice users to take some revenue-generating action, such as ordering ingredients or cookware online, or subscribing to a magazine.
Data source #1: User info. To get recipes, the user has to sign up, at minimum, with a user name, password, e-mail address, and postal code. The postal code is crucial, because a key function of the app is to suggest recipes based on specific location and weather conditions. That ensures a user in Maine won’t get recipes calling for collard greens, and that users in Phoenix in July won’t get recipes for steaming hot soups. Also, recipes might be sent not only for today, but for several days out, allowing the user to acquire ingredients not on hand.
Data source #2: A weather forecast based on postal code, to ensure that a hearty stew, best for a cold, snowy day, isn’t sent in the midst of a rare December heat wave. Obtained via API calls to a third-party service, such as Weather Underground.
Data source #3: The app owner’s database of recipes, along with photos and links to discussion threads. Maybe links to how-to videos, too.
Data source #4: Analytics that reveals which recipes are most popular by region and time of year. It’s another ingredient in determining which recipe to suggest.
Data source #5: This could also be fields in the user info database. It includes user preferences — favorite and least-favorite cuisine types, self-rated level of cooking expertise, food allergies, how often to suggest a recipe, ingredients to avoid (George H.W. Bush famously hated broccoli), etc. Capture family birthdates, and the app could suggest birthday cake recipes and gifts 10 days in advance.
Data source #6: Current and future farm-fresh ingredients availability by location. It’s no good to suggest a recipe calling for fresh cranberries if they’re out of season. Another API call to somewhere.
Data source #7: Coupon codes and other promotional enticements for purchasing non-perishable ingredients and cookware through the application.
Data source #8: Pricing comparisons at local supermarkets for meats and veggies, likely extracted via APIs from a service that collects this type of data.
Data source #9: If the user makes a purchase, multiple data sources come into play for credit-card processing, shipping address, shipment tracking, and so on.
Data source #10: History database of user actions, including which recipes were viewed, saved, printed, rated, and commented on. What items did the user purchase? Re-display a favorite recipe a year later? What other recipes did the user seek and display? Analytics could prevent five straight days of soups, even though the weather outside is frightful and might suggest that.
In addition, there might also be integration opportunities via APIs with retail sponsors: if you’re making clam chowder and the weather is snowy, can we suggest the following cold-weather apparel items, or winter sporting goods, or vacation trips to a tropical resort?
After all this comes the matter of designing and building an application that looks great, presents all the aforementioned data as a completely seamless experience, and performs blazingly fast.
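To make the mashup concrete, here’s a toy scoring function that blends just a few of the sources above: weather (source #2), user preferences (source #5), and seasonal availability (source #6). All of the data, weights, and field names are invented for illustration; a real app would tune these against the analytics in sources #4 and #10.

```python
def score_recipe(recipe, weather, prefs, in_season):
    """Toy recipe scorer: higher is better, -1 is a hard veto."""
    score = 0
    # Weather fit: hot dishes on cold days, and vice versa (source #2).
    if recipe["serve"] == "hot" and weather["temp_f"] < 50:
        score += 2
    if recipe["serve"] == "cold" and weather["temp_f"] > 80:
        score += 2
    # User preferences (source #5): reward favorite cuisines.
    if recipe["cuisine"] in prefs["favorites"]:
        score += 1
    # Hard veto for avoided ingredients, e.g. allergies or broccoli.
    if set(recipe["ingredients"]) & set(prefs["avoid"]):
        return -1
    # Seasonal ingredient availability (source #6).
    if all(i in in_season for i in recipe["ingredients"]):
        score += 1
    return score

stew = {"serve": "hot", "cuisine": "french",
        "ingredients": ["beef", "carrot"]}
best = score_recipe(
    stew,
    weather={"temp_f": 38},
    prefs={"favorites": ["french"], "avoid": ["broccoli"]},
    in_season=["beef", "carrot"],
)
```

Even this caricature hints at the real complexity: each input arrives from a different API on a different schedule, and the scoring must still feel instantaneous to the user.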
The point here is that nothing is simple. The specs for this app would be complicated. And, it takes a huge amount of talent to build an app that users enjoy and look forward to using repeatedly.
Are you building cloud and mobile apps that integrate data from a large number of sources? What’s your data mashup process, and how do you ensure stellar performance? Teach us how you solved these problems; we’d like to hear from you.
The Internet of Things, it seems, has been 80% hype and 20% real products and services. That’s going to change in 2016. The technology is mature and reliable. Security is getting better. APIs to access and leverage data from IoT sensors are becoming more commonplace. And, most importantly, IoT, so far largely a consumer novelty, is expanding from the home to the industrial sector.
This is all great except for one thing. There aren’t enough developers with enough technical ability in IoT combined with an understanding of business principles. At least not in the United States.
It is an issue that ETwater has been dealing with for years. The company, based in Novato, Calif., designs cloud-based IoT smart lawn irrigation systems for the consumer, industrial, and commercial sectors. It builds Wi-Fi hardware controllers that manage the multiple zones of a typical lawn sprinkler system. It also bills customers and provides a breadth of reports about water usage and savings. In the middle is an integration and analytics engine that calculates when and how much to water, based on dozens of factors pulled in via APIs from a variety of sources. These include weather forecasts, humidity levels, what type of plantings are in each sprinkler zone, time of year and day, sun and wind conditions, sensor readings of soil moisture levels, and a whole lot more. It’s not the kind of application that comes to mind when I think IoT, but, when you look at all the pieces, it’s an exquisite blend of data that results in specific actions.
But, there’s a problem, according to CEO Lee Williams: He can’t find enough qualified developers with expertise in IoT. Call it an IoT talent shortage or gap. The company develops its hardware, software, and analytics with a distributed technical staff, consisting of a primary engineering team in Ukraine, two development teams in India, and a group of architects and user-experience designers in the San Francisco area.
Williams told me, “It is difficult to find talent in the U.S. that is as sophisticated and capable as what some of the European teams can do in radio and wireless technology in particular.” And he was even more blunt about developers specializing in cloud-based mobile apps. “I would not say good senior mobile developers are widely available in the U.S. where I do feel they are available elsewhere.”
I wouldn’t go so far as to label this an indictment of how we grow our talent on these shores, but, it should serve as something of an alarm. The U.S. is not alone; the talent gap exists in Europe, too.
What is your experience in finding qualified developer talent to work on your company’s IoT, mobile, or cloud-based projects? Are you able to fill your open positions? Are you forced to hire expensive outside contractors for temporary help? Or are you turning to offshore technical expertise to get the job done? Share your opinions and experiences; we’d like to hear from you.
You’ve built all kinds of apps for cloud and mobile — retail, medical, financial, navigational, IoT and more. Most have sign-ons with security and authentication. Almost all integrate data from numerous disparate sources and combine to create something entirely new. You’ve designed user interfaces. You’ve streamed music and video. You’ve built user experiences.
But, have you built a game for the cloud? Way back in 2010, it was a concept big enough to be covered by CNN. Even Forbes magazine said cloud gaming would be a “game changer,” and the Wall Street Journal called gaming the killer app of cloud computing. Would you believe that IBM is “creating a business infrastructure for games”? Serious stuff, this game playing is.
Graphics card maker NVIDIA has a section of its website devoted to GaaS (gaming as a service). Called NVIDIA GRID, it promises the ability to stream video games like any other streaming media. It “renders 3D games in cloud servers, encodes each frame instantly and streams the result to any device with a wired or wireless broadband connection.” The company already lists nine middleware suppliers and four IaaS providers that are playing along. An SDK is available, if you’re ready to get into building cloud games.
GamingAnywhere describes itself as an open-source cloud gaming platform designed to be extensible, portable, and reconfigurable. In this environment, games run on cloud servers while players interact via networked thin clients. The biggest challenge may not even be technical: The site notes that gamers are hard to please. They demand high responsiveness and high video quality, “but do not want to pay too much.” Truer words may never have been spoken.
The giant on the block, Amazon Web Services, is a player, too. To quote from the AWS gaming website, “Amazon offers a comprehensive suite of services and products for game developers in any games industry vertical, for every major platform: Mobile, Social/Online and PC/Console.”
Sure, you’re building great cloud apps for the bank or insurance company you work for. But, inside of many of us lurks a cape-wearing superhero who wants to save the world. If that’s you, share your experiences about building games for the cloud. We’d like to hear from you.