Head in the Clouds: SaaS, PaaS, and Cloud Strategy


May 16, 2017  3:14 PM

Forget the cloud. NASA wants your coding skills for outer space.

Joel Shore
Application modernization, FORTRAN, Legacy applications, Legacy software

NASA. Remember NASA? It’s the once-glorious government agency that put men on the moon, the agency whose Voyager 1 space probe crossed into interstellar space (confirmed in 2013), bound for parts unknown, the agency that, in the immortal words of John F. Kennedy, did things not because they were easy, but because they were hard.

FORTRAN. Remember FORTRAN? Well, of course you don’t. And that’s precisely why, in mid-2017, maintaining ancient programs written in it isn’t easy. It’s hard. Really hard. It’s so hard, in fact, NASA is holding a contest featuring a prize purse of up to $55,000. It’s the sort of app dev challenge that would look good on any résumé — if you know FORTRAN, that is. And computational fluid dynamics, too.

According to NASA, all you need to do is “manipulate the agency’s FUN3D design software so it runs ten to 10,000 times faster on the Pleiades supercomputer without any decrease in accuracy.” It’s called the High Performance Fast Computing Challenge (HPFCC).

If you’re a U.S. citizen at least 18 years old, all you need do, NASA says, is download the FUN3D code, analyze the performance bottlenecks, and identify possible modifications that might lead to reducing overall computational time. “Examples of modifications would be simplifying a single subroutine so that it runs a few milliseconds faster. If this subroutine is called millions of times, this one change could dramatically speed up the entire program’s runtime.”
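
FUN3D itself is Fortran, so take this as a minimal, language-agnostic sketch of the principle NASA is describing, written in Python with made-up numbers and routine names: profile first to find the hot spot, then hoist repeated work out of the routine that gets called millions of times.

```python
# Illustrative only -- FUN3D is Fortran and this is Python, but the
# principle NASA describes is the same: profile, find the subroutine
# called millions of times, and make it a little cheaper per call.
import cProfile
import math

def coeff_slow(i, gamma=1.4):
    # Recomputes an invariant coefficient on every one of millions of calls.
    return math.sqrt(gamma / (gamma - 1.0)) * i * 0.001

_COEFF = math.sqrt(1.4 / (1.4 - 1.0))

def coeff_fast(i):
    # Identical math; the invariant is hoisted out and computed once.
    return _COEFF * i * 0.001

def simulate(fn, n=2_000_000):
    return sum(fn(i) for i in range(n))

if __name__ == "__main__":
    cProfile.run("simulate(coeff_slow)")  # shows where the time goes
    cProfile.run("simulate(coeff_fast)")  # same answer, fewer cycles per call
```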

If you’ve ever asked what you can do for your country, this may be it.

It’s your chance to go far beyond mere cloud computing, your chance to do outer-space computing — perhaps to infinity and beyond.

OK, let’s get serious… FORTRAN has suffered mightily from the same ignominious fate as assembler language and COBOL (the language that paid my bills for many years). No one cares about FORTRAN, no one wants to learn it, few institutions bother to teach it, and many who were expert in it are long since deceased.

Physicist Daniel Elton, in a July 2015 personal blog entry, suggests that FORTRAN remains viable (at least among physicists) because of the enormous amount of legacy code still in production, its superior array-handling capabilities, little need to worry about pointers and memory allocation, and its ability to catch errors at compile time rather than run time. In a March 2015 post in the Intel Developer Zone, Intel’s Steve Lionel (self-anointed “Dr. FORTRAN” and recently retired) said a poll of FORTRAN users conducted at the November 2014 supercomputing conference indicated 100% of respondents would still be using the language five years later.

With good reason, we live in a world dominated by the likes of Java, C, C++, C#, Python, PHP, Ruby, Swift, R, Scala and scads of others. Visual Basic, Pascal, PL/I, Ada and APL, along with COBOL and FORTRAN, have seen their day. The problem is that, to paraphrase Gen. Douglas MacArthur, old programming code never dies — and it doesn’t fade away, either.

How much ancient code from legacy languages do you come across in dealing with enterprise IT? Are you afraid to tinker with it? Does anyone know what those programs actually do? Has the documentation been lost to the ravages of time? Does the source code still exist? Tell us how you deal with it; we’d like to hear from you.

May 11, 2017  2:45 PM

Making error fixes after deploying cloud applications

Jan Stafford

Cloud deployments of software often pose the most ticklish error detection and repair problems. Customers are constantly using a cloud app developer’s products, at all times of day and night and across geographies. Meanwhile, it’s a safe bet that something in those releases will be breaking, and error fixes will be needed, said Brian Rue, CEO of Rollbar, which provides real-time error monitoring services for developers. The trick is detecting errors quickly, rather than waiting for customers to report them.

“You’re releasing improvements, releasing bug fixes, and that constant state of change means that you need to have a constant state of monitoring,” said Rue. “If something is broken, and you don’t find out about it until a customer writes in days later, it could easily be days or weeks before you find a way to repeat the problem. The development team gets caught up in a constant state of firefighting.”

Rue shares some best practices for error handling and making code error fixes in this article. Rue co-founded Rollbar after experiencing the problems of error handling when developing gaming apps, at first on a kitchen table in a garage with three colleagues.

The vicious circle

“Imagine a circle starting from deployment,” explained Rue. From deployment, the next thing that typically happens is an error. Your team needs to discover whether it’s a new error or a repeat error. A new one calls for alerting and prioritization. Once an error is prioritized, the developers can explore the data for the error. They can discover which users the error affects, the values of the variables and other information about the cause of the error.

“Usually, that’s enough data to enable writing and deploying error fixes,” said Rue. Then it’s on to the next problem. “That wheel of release, error monitoring and error fixes is constantly spinning,” he said.
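
Rue doesn’t spell out how a monitor decides new versus repeat, but error-monitoring services typically group occurrences by a fingerprint of the exception type and the code location. A toy sketch of that idea, with hypothetical names:

```python
# Toy sketch of new-vs-repeat detection; real monitors (Rollbar included)
# do far more sophisticated grouping. Names here are hypothetical.
import hashlib
import traceback

seen = set()  # in production this would live in a shared datastore

def fingerprint(exc: BaseException) -> str:
    # Group by exception type plus the code location that raised it,
    # not the message text, which can vary with every occurrence.
    frames = traceback.extract_tb(exc.__traceback__)
    where = f"{frames[-1].filename}:{frames[-1].name}" if frames else "unknown"
    return hashlib.sha1(f"{type(exc).__name__}|{where}".encode()).hexdigest()

def record(exc: BaseException) -> None:
    fp = fingerprint(exc)
    if fp not in seen:
        seen.add(fp)
        print(f"ALERT: new error group {fp[:8]}, needs prioritization")
    else:
        print(f"repeat occurrence of {fp[:8]}, count it and move on")

try:
    1 / 0
except ZeroDivisionError as e:
    record(e)  # first occurrence: alert; later occurrences just increment
```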

Structured data is good data

The better the data structure is, the more the developer can discover about each code error. “Data really should be structured in terms of keys and values, as opposed to just raw strings,” said Rue. So, for example, say there’s an error message that reads: “This user tried to log in and it failed.” That might be something the cloud developer wants to log, but it should be logged as “User login failed,” with the user ID as metadata. That way occurrences are easier to group: there is just one message saying “Login failed.” The cloud developer can see all of those together and is closer to making error fixes.
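
In Python terms, Rue’s advice might look like the following minimal sketch (the field names are hypothetical): a constant message string plus key/value metadata, rather than a raw interpolated string.

```python
# A minimal sketch of structured, key/value logging; field names are hypothetical.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("auth")

def log_event(message: str, **metadata) -> None:
    # Constant message plus key/value metadata: every occurrence groups
    # under one message, and the user ID stays queryable.
    log.info(json.dumps({"message": message, **metadata}))

# Raw-string style (hard to group, hard to query):
#   log.info("User 42 tried to log in and it failed")
# Structured style:
log_event("User login failed", user_id=42, reason="bad_password")
```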

“Once you have that structure, you can easily query data forward to see which logins failed. You can figure out how that correlates against other problems, and so on,” Rue said.

Add instrumentation to apps

The core of error monitoring is tracking the application from the perspective of the application, according to Rue. So, to use it, the cloud app developer needs to add the instrumentation to the application. Typically that’s as simple as installing a Ruby gem, installing a package from npm or installing a piece of Java middleware, steps most development teams have taken before. “But, at a high level, this requires buy-in from the developers to identify what there is, and then make sure that each component is instrumented,” Rue said.
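
Stripped of any particular vendor’s SDK, that instrumentation boils down to wrapping each component’s entry points so errors are reported with context and then re-raised. A vendor-neutral Python sketch, where report() is a hypothetical stand-in for the monitoring agent:

```python
# Vendor-neutral sketch: report() stands in for whatever the monitoring
# service's agent provides (Rollbar and its peers ship this as a gem,
# an npm package, or middleware).
import functools
import traceback

def report(error: BaseException, context: dict) -> None:
    print("reported:", type(error).__name__, context)

def instrumented(component: str):
    # Wrap a component's entry points so errors are reported from the
    # application's own perspective, then re-raised for normal handling.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                report(exc, {"component": component,
                             "function": fn.__name__,
                             "trace": traceback.format_exc()})
                raise
        return wrapper
    return decorator

@instrumented("billing")
def charge(user_id: int, cents: int) -> None:
    raise RuntimeError("card declined")  # demo: gets reported, then re-raised
```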


May 5, 2017  8:30 AM

Red Hat OpenShift.io puts the entirety of app development into your browser

Joel Shore
Application development, containers, Linux, Red Hat Enterprise Linux

The Red Hat Summit in Boston this week drew more than 5,000 developers, according to Paul Cormier, president of Red Hat’s products and technologies business. That’s impressive for a major software company that literally started out as a flea-market operation.

“It’s so much fun to watch this all roll out,” Cormier says. “I’ve been at Red Hat for 16 years and was employee #120.” And how did Red Hat get its start? Cormier says company founder Robert Young began by “downloading Linux off the ‘net, burning it to CDs, and selling it out of the trunk of his car at flea markets.” This is an outfit that’s come a long way, with a pervasiveness that extends into almost every home. There were seemingly dozens of free, open-source Linux distros in the early days, but it’s the one company that created tools, platforms, and enterprise-class support that stands as the premier survivor.

Two key announcements made at the Red Hat Summit were OpenShift.io, a complete development environment accessed through the browser, and the Red Hat Container Health Index, a method for scoring containers for several factors, including version currency and security. Other announcements were a tightening of Red Hat’s relationship with Amazon Web Services and an on-premises containerized API management platform, which I reported on last week.

OpenShift.io is a new, comprehensive, end-to-end development, test, and deploy environment in a browser. There’s nothing to install on developers’ local desktops, on-premises, or in a business’s private cloud. Everything needed to design, build, and deploy is available through the browser.

“I’ve said this until I was blue in the face — a container is Linux, it’s just Linux carved up in a different way.”
— Paul Cormier, president, Red Hat products and technologies

“Now that we’ve finally put Dev and Ops together, we’re making the tooling more intelligent and more intuitive for developers to be even more productive,” Cormier says. “The OpenShift.io stuff uses artificial intelligence from all the things we’ve learned over the last 15 years to guide developers through building their application and recommend what might be a better path to go than the path they’re on.” With nothing to install, Cormier says developers can begin building from day one, avoiding the weeks and months it can sometimes take to procure and spin up development resources and infrastructures.

Another major announcement was the Red Hat Container Health Index, a service that grades the containerization performance and security of Red Hat’s own products and the products of certified ISVs. It’s not a one-time examination of containers, but rather a way to track ongoing container health volatility, letting you know that a container considered fully secure a month ago, earning an “A” rating, is now vulnerable and has dropped to a grade of D or F.

“I’ve said this until I was blue in the face — a container is Linux, it’s just Linux carved up in a different way,” Cormier says. “Container tools help you package just the pieces of the user space OS that you need with the application.” When people were playing with containers but not yet betting their business on them, they pulled containers from everywhere. Now, customers want a commercial-grade system.

“What we’ve done is containerize all of our products into a RHEL (Red Hat Enterprise Linux) container. We can scan the pieces of the OS that are included and tell if there are known security vulnerabilities, bugs, or if there’s a new version available. We’ve built that into our back-end systems that we use to build all our products,” Cormier says.
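
Red Hat hasn’t published the exact grading formula, so the following Python sketch is a toy illustration with an invented rubric. What it tries to capture is the property Cormier describes: the grade of an unchanged image decays as new vulnerabilities are disclosed against its contents.

```python
# Toy illustration only -- Red Hat has not published its grading formula.
from dataclasses import dataclass

@dataclass
class ScanResult:
    critical_cves: int    # unpatched critical advisories found in the image
    important_cves: int   # unpatched important advisories
    stale_packages: int   # packages with a newer errata version available

def grade(scan: ScanResult) -> str:
    # Invented rubric: penalty grows with severity and staleness.
    penalty = (scan.critical_cves * 40
               + scan.important_cves * 10
               + scan.stale_packages * 2)
    for cutoff, letter in [(0, "A"), (10, "B"), (25, "C"), (50, "D")]:
        if penalty <= cutoff:
            return letter
    return "F"

print(grade(ScanResult(0, 0, 0)))  # "A" at release time
print(grade(ScanResult(1, 2, 5)))  # "F" -- same image, a month of CVEs later
```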

Red Hat will now make those tools available to ISV partners to test their own containers. All results will be available through a portal. “If you’re going to be a container provider in the commercial world, this is what you have to do.”

Do you use Red Hat development tools and platforms? What do you think of the company’s announcements this week and how do you plan to leverage these technologies in your upcoming projects? Share your thoughts with us; we’d like to hear from you.


April 11, 2017  10:00 AM

Microsoft snaps up Deis in latest container play

Joel Shore
Application containerization, Application development, Kubernetes, orchestration

One thing we know for sure is that under CEO Satya Nadella, Microsoft — in both action and spirit — looks very little like the Windows Or Else empire from the days of Steve Ballmer. The latest move is Microsoft’s acquisition this week of Deis, a little-known San Francisco developer of open-source software that makes Kubernetes easier to use.

Deis, in its own words, “helps developers and operators build, deploy, manage, and scale their applications on top of Kubernetes.” We all want to do that.

In an April 10, 2017 blog post, Scott Guthrie, executive vice president of Microsoft’s cloud and enterprise group, wrote “we’ve seen explosive growth in both interest and deployment of containerized workloads on Azure, and we’re committed to ensuring Azure is the best place to run them.” The post goes on to say, “Deis gives developers the means to vastly improve application agility, efficiency and reliability through their Kubernetes container management technologies.” Guthrie expects the technology to make it easier for customers to work with existing Microsoft container technologies, including Linux and Windows Server Containers, Hyper-V Containers, and Azure Container Service, “no matter what tools they choose to use.”

Deis CTO Gabriel Monroy perhaps put it best, saying “robust and open container orchestration, paired with new application architectures are giving organizations unprecedented flexibility and choice.” That could be a covert comment on the current Kubernetes vs. Docker Swarm competition.

Monroy goes on to issue something of a mea culpa, noting that the union with Microsoft continues Deis’s mission “to make container technology easier to use.”

And there’s the rub. It’s not always easy to use. We’ve got a zillion cloud services and providers giving us an overabundance of tools, languages, technologies, platforms, and techniques. For all the problems legacy monolithic architecture presented, programmers (we didn’t call them developers back then) and SysOps staffers had few components to manage.

Today, here we are with lots and lots of pieces that need to be assembled, like a mosaic, into something that runs flawlessly, performs perfectly, provides unfettered access, supports instant change, and provides a business advantage. What do you do with all these little shards? You put them into containers and orchestrate their deployment and management so they, like a symphony orchestra, play together and become a whole greater than the sum of its parts. After all, there’s a reason Kubernetes describes itself as “production-grade container orchestration.”

Where do you fall into line when it comes to containers and orchestration? For all the talk, it seems lots of IT operations have yet to dip their collective toes into the containerization waters. How about you? Actively using container technology in production? Working with an early proof-of-concept mini-project? Learning but haven’t taken the plunge yet? Share your experiences — and concerns — with us; we’d like to hear from you.


March 23, 2017  8:04 PM

Have you updated your iOS apps to 64 bits?

Joel Shore
32 or 64 bits, Apple iOS

With Apple’s early June Worldwide Developers Conference a little more than two months away, it’s time to get moving, if you haven’t already, on one of the big changes almost certainly coming to iOS 11 — the dropping of support for apps that are not written for 64-bit processors.

According to a report from metrics provider SensorTower, the number of ripe-for-banishment 32-bit apps in the Apple app store hovered around 170,000 as of mid-March 2017. A big number, indeed, but it represents only about 8% of the approximately 2.4 million apps currently available in the app store. The good news is that the other 92% of listed apps are already 64-bit compatible.

Perhaps not surprisingly, the category with the most non-conforming apps is games at nearly 39,000. That’s about 20.6% of all the problem apps and nearly double the number of apps in the next offending category, education, just shy of 20,000. Other categories with a significant number of non-64-bit apps include entertainment, lifestyle, business, books, utilities, travel, and music. Only two categories have fewer than 1,000 offending apps: weather and shopping.

The high number of problem game apps is, of course, a reflection of the number of gaming apps in the app store in the first place. Gaming apps, many of them positively awful and almost always free, are often the domain of teenagers learning how to write code and do design. It makes sense then that these apps are the ones most likely to be abandoned as their creators mature, their coding skills evolve, and they move on to weightier projects. Some of those apps were likely written to run on the now-defunct Parse platform — a popular choice for game development — but were never migrated to another hosting environment and were left to wither on the vine.

While this purge is all about 64-bitness, it’s not the first time Apple has made an attempt to clean house. In the first nine months of 2016, Apple deleted roughly 14,000 apps per month, according to SensorTower. That changed drastically in September 2016, when Apple started to notify developers that it would remove apps it considered outdated or which did not adhere to various current guidelines. Developers got 30 days to fix their apps or see them removed. The company wasn’t kidding — in October 2016, the number of apps purged soared to more than 47,000.

The message is clear: If your app hasn’t been updated in eons, doesn’t comply with current standards, or is still mired in the 32-bit world, it’s headed for oblivion. Need some help to convert your app to a 64-bit binary? Fear not, Apple has an online guide that includes sample code. Better get busy.

Are your iOS apps up to date as 64-bit binaries? What difficulties did you encounter and how did you solve them? Do you have apps in the Apple app store that you simply chose to abandon? What tools do you use to build apps for iOS? Share your thoughts with us; we’d like to hear from you.


March 7, 2017  2:39 PM

The end is near for cloud computing. Or is it?

Joel Shore
Application development, Cloud Applications, Cloud Computing, Mesh network

Cloud computing is a long way from being fully mature, but its obsolescence may already be upon us. Is the cloud’s future really up in the air?

As Peter Levine, a partner at venture capital firm Andreessen Horowitz, puts it, “Everything that’s popular in technology always gets replaced by something else,” be it Microsoft Windows, minicomputers exemplified by Digital Equipment Corp., specialized workstations typified by Sun Microsystems, or, yes, even cloud computing.

As Levine explains it, cloud computing, which he views as the centralization of IT workloads into a small number of super-mega-huge datacenters, is an unsustainable, unworkable, slow-to-respond method. The need for instantaneous information makes the network latency associated with a device-to-datacenter model and the corresponding datacenter-to-device return trip simply too long and therefore unacceptable.

Computing, Levine suggests, will move to a peer mesh of edge devices, migrating away from the centralized cloud model. Consider smart cars. They need to continually exchange information with each other about immediate, hyperlocal traffic conditions. Smart cars need to know that an accident occurred 10 seconds ago a half-mile up the road, that a pedestrian is entering a crosswalk, or that a traffic light is about to turn red. Making this work requires real-time data collection, processing, and sharing with other vehicles in the immediate area. The round-trip processing in the cloud model isn’t even remotely (pun intended) fast enough.
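
Some back-of-the-envelope arithmetic makes the point vivid. The latency figures below are assumptions for illustration, not Levine’s numbers:

```python
# Assumed numbers, not Levine's: how far a car travels while waiting on a
# round trip to a distant datacenter versus a local car-to-car mesh.
def feet_traveled(speed_mph: float, latency_ms: float) -> float:
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * latency_ms / 1000

CLOUD_RTT_MS = 150  # device -> datacenter -> device, plus processing (assumed)
MESH_MS = 5         # hyperlocal car-to-car exchange (assumed)

print(f"cloud round trip at 65 mph: {feet_traveled(65, CLOUD_RTT_MS):.0f} ft")  # ~14 ft
print(f"edge mesh at 65 mph:        {feet_traveled(65, MESH_MS):.1f} ft")       # ~0.5 ft
```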

Text messaging is similar in that messages exchanged between people sitting just feet apart are still routed through a distant datacenter. It’s inefficient, slow (in compute terms), and unsustainable. The centralization is needed only for logging and journaling.

The answer, Levine postulates, is pushing processing and intelligence out to the edge, using many-to-many relationships among vehicles for information exchange, along with edge-based processing based on super-powerful machine-learning algorithms. No wonder he describes the self-driving car as “a datacenter on wheels.” Similarly, a drone is a datacenter with wings and a robot is a datacenter with arms and legs. They all need to process data in real time. The latency of the network plus the amount of information needing to travel renders the round-trip on the cloud unsuitable, though that’s still plenty fast enough for a Google search, he says.

The cloud still plays a role; data eventually needs to be stored, after all. That makes this model not fully edge and not fully cloud. It’s perhaps closer to what Cisco dubs “fog computing.” It also speaks to the inevitability of how IoT-driven smart cities must operate, a concept explained to me by Esmeralda Swartz, vice president of strategy and marketing at Ericsson.

There’s a profound irony to this. We started the age of IT (MIS as it was then known) with the IBM mainframe as the centralized place where all programs ran, all processing was done, and all data was stored. That was blown apart by decentralization, driven by the client/server model, Ethernet (or Token Ring or ARCnet), network operating systems (NetWare, VINES, LAN Manager, 3+ Open, Windows for Workgroups, Windows NT, OS/2 Warp, etc.) and early network-aware databases, such as Btrieve. Cloud computing swings the pendulum back to the centralized data model of the past, albeit with a dose of edge processing.

It means throwing out everything you know and seeing things from a paradoxically different perspective — just like the young girl presented on Christmas morning with her great-grandmother’s heirloom wristwatch, only to declare, “A watch that doesn’t need batteries? Gee, what will they think of next!”

You can watch Levine’s presentation “Return to the Edge and the End of Cloud Computing” on YouTube.

No doubt you’ve already thought about this. Where do you think cloud computing is headed? Is this a technology that is ultimately doomed to be superseded by something different, better, faster, and cheaper? What does this mean for you as an application developer? Share your thoughts and fears; we’d like to hear from you.


February 27, 2017  12:34 PM

Oscars flub evokes recent cloud-computing snafus

Joel Shore
AWS, Azure, Cloud Disaster Recovery, Cloud outages

Now that La La Land (er, Moonlight) has won the Academy Award for best picture, this is as good a time as any to look back at some screw-ups in the world of cloud computing. May we all learn from our mistakes.

The Force is not with you: Take a trip back to May 9, 2016, less than a year ago. It was on that day the Silicon Valley NA14 instance of Salesforce.com went offline, a condition colloquially known as Total Inability To Support Usual Performance (I’m not going anywhere near the acronym). Customers lost several hours of data and the outage dragged on for nearly 24 hours. CEO Marc Benioff took to his Twitter account to ask for forgiveness. Shortly after, Salesforce moved some of its workloads to Amazon Web Services.

AWS giveth, AWS taketh away: Though transferring workloads to AWS helped Salesforce recover lost customer confidence (though not lost data), the opposite was true for Netflix. On Christmas Eve 2012, at a time when kids might be watching back-to-back-to-back showings of A Christmas Story, problems with AWS’s Elastic Load Balancing service caused Netflix to go down. This Grinch stole Christmas not just from little Cindy Lou Who, but from millions of paying subscribers waiting to see if Ralphie gets his dreamed-about Red Ryder BB rifle. Lessons were learned. Two years later, during a massive AWS EC2 update, Netflix rebooted 218 of its 2,700 production nodes. Alarmingly, 22 failed to reboot, but the Netflix service never went offline. At the opposite end, Dropbox went old school in March 2016, dumping AWS and moving its entire operation onto its own newly built, enormous infrastructure.

Those darn updates’ll getcha every time: Amid verdant woodlands, beneath pure azure skies, protected by mountains, our cloud service lies. That bucolic portrait of the Pacific Northwest (or New Hampshire, perhaps) mattered little to Microsoft on Nov. 18, 2014 when the Azure Storage Service suffered a widespread outage traced back to the tiered rollout of software updates intended to improve performance. “We discovered an issue that resulted in storage blob front ends going into an infinite loop, which had gone undetected…” was the blogged explanation. Another major outage occurred in Dec. 2015.

Eat in, Dyn out: The Oct. 21, 2016 wave of coordinated distributed denial-of-service attacks targeting Domain Name System provider Dyn impacted dozens of high-profile businesses to varying degrees. These included Airbnb, Twitter, Amazon, Ancestry, Netflix, PayPal, and a long list of others. Dyn’s own detailed post-mortem of the attack makes for fascinating reading. If you think it’s impossible for millions of geographically far-flung, seemingly unrelated IoT devices to attack in a coordinated manner, think again.

You’ve heard of Office 360? Sure you have. The name is favored among cynics who joke that Microsoft’s cloud-based productivity software should be called that because it is offline five days out of every year. Office 365’s e-mail service was down for many users for about 12 hours on June 30, 2016. That follows other outages in various geographies on Dec. 3, 2015; Dec. 18, 2015; Jan. 18, 2016; and Feb. 22, 2016.

Got healthcare? We all know the stories about how healthcare.gov kept crashing due to poor design, inadequate compute resources, demand that vastly exceeded expectations, and so on. Enough said.

What’s that one cloud disaster story you’ve been dying to share? Now’s your chance. Tell us all about it; we’d like to hear from you.


February 15, 2017  10:08 AM

Is 30% of your cloud spending wasted? Survey says yes.

Joel Shore
DevOps, Docker, Hybrid cloud, IT spending

One of the things I look forward to each year at this time is the release of the annual State of the Cloud report from cloud services provider RightScale. That may say more about the quality of my social life than anything else; nevertheless, the study always contains great insight into the psyche of cloud computing technology professionals. Let’s dive into some key findings. The survey is being published today (2/15/2017) and is based on research undertaken in January 2017. A compendium of fresher opinions you’ll not find.

Hybrid up, private down: A multi-cloud strategy exists in 85% of surveyed enterprises, up from 82% in Jan. 2016. But, look at the other side: Private cloud adoption fell to 72% from 77%. What that means is momentum is swinging to the public cloud.

The cloud is just as wasteful as traditional IT: Here’s a real shocker: Survey respondents estimate that 30% of cloud spending is wasted. RightScale’s own research pegs waste even higher, between 30% and 45%. How in the world — or in the cloud — did this happen, and happen so quickly? According to RightScale, despite growing scrutiny on cloud cost management, few companies are actively spinning down unused resources or selecting lower-cost clouds or regions.

Survey respondents estimate that 30% of cloud spending is wasted. How in the world — or in the cloud — did this happen, and happen so quickly?
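
What does spinning down unused resources actually look like? Here’s a sketch using AWS’s boto3 SDK; it assumes credentials are already configured, and the seven-day window and 2% CPU threshold are arbitrary illustrative choices, not RightScale’s methodology:

```python
# Sketch of "spin down unused resources": flag running EC2 instances whose
# average CPU has stayed under a threshold. Window and threshold are
# arbitrary illustrative choices.
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def idle_instance_ids(days: int = 7, cpu_threshold: float = 2.0):
    now = datetime.utcnow()
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                datapoints = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId",
                                 "Value": instance["InstanceId"]}],
                    StartTime=now - timedelta(days=days),
                    EndTime=now,
                    Period=86400,            # one datapoint per day
                    Statistics=["Average"],
                )["Datapoints"]
                if datapoints and all(dp["Average"] < cpu_threshold
                                      for dp in datapoints):
                    yield instance["InstanceId"]  # candidate to stop or rightsize

for instance_id in idle_instance_ids():
    print("idle:", instance_id)
```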

IT is taking control: So-called citizen IT and shadow IT aren’t going away, nor are no-code / low-code technologies that empower line-of-business departments to build solutions. Despite this, it is IT that selects public cloud providers (cited by 65% of respondents) and it is IT that decides which apps to migrate into the cloud (63%).

The majority of enterprise workloads are now in the cloud. The study revealed that 41% of workloads run in public clouds and 38% in private clouds. Among larger enterprises the numbers differ slightly, with 32% of workloads in public cloud and 43% in private clouds.

It’s a multi-cloud world, after all: According to the study, public cloud users are already running apps in 1.8 public clouds while private cloud users are leveraging 2.3 private clouds. Those numbers strike me as being surprisingly low.

It’s a DevOps world, too: Whether you believe in DevOps or not, it’s here to stay, now embraced by 84% of large enterprises. The expansion of DevOps into BizDevOps that brings the business side into the mix is also on the rise, now used in 30% of enterprises, compared with 21% just a year ago.

It’s getting better every day: Even though the talent shortage remains the top challenge facing IT, it’s less of a concern than a year ago, falling to 25% from 32%. Security concerns also abated slightly, dropping to 25% from 29%. Mature cloud users cite managing costs as a key concern, while newbies worry more about security.

Docker is moving like a tremendous machine: Docker adoption is surging, leaving Chef and Puppet in the dust. Kubernetes use doubled. Interestingly, survey respondents indicated they’ve gone with the container-as-a-service approach from AWS ECS (35%), Azure Container Service (11%), and Google Container Engine (8%).

Azure cuts into AWS’s lead: Azure adoption soared to 34% from 20% a year ago as AWS stayed flat, but in the lead at 57%.

This annual study isn’t the only one out there, but it does provide a good snapshot of how the cloud changes from year to year. Now it’s time for you to respond. Where are you seeing wasteful cloud spending? Is your organization thoroughly immersed in DevOps and BizDevOps? Are your app development efforts all containers all the time? Finally, what cloud platforms are you using, to what extent, and how has that changed over time? Share your experiences with your counterparts; we’d like to hear from you.


January 30, 2017  5:08 PM

Is IoT turning into the Internet of Thugs?

Joel Shore
Application security, Hacking, iot security

In a story that circulated worldwide last week, it was reported that a hotel in Austria, the Seehotel Jaegerwirt, had been attacked by hackers who disabled the guestroom cardkey system, locking guests in their rooms until the hotel paid a Bitcoin ransom. That’s not exactly accurate, but enough of it is true to merit some serious discussion.

According to published reports correcting the initial misinformation, hackers did take control of the cardkey system, but only to the extent that the encoding of new key cards was disabled for guests in the process of checking in. Doors were never immobilized and guests were never trapped. Nevertheless, a ransom was indeed paid in Bitcoin currency by the hotel to have its systems and data released. And it’s apparently not the first time. The bottom line is that the hotel reportedly is planning a return to good, old-fashioned metal keys.

Think devious, think scheming, think cunning, because the people writing malware are doing exactly that, and they’re doing it better than you.

This situation probably has little to do with any of the three big makers of hospitality cardkey systems, Onity, OpenKey, and Salto Systems. It’s likely more about the bad guys being invited in on a red carpet right through a hotel’s front door. We’ve all heard the stories before — clicking on an innocent-looking link in an email message, inserting into a USB port a flash drive that contains malware, network hardware configured with default passwords, unprotected ports, and so on. One thing is for sure: We’re not far from IoT becoming an acronym for the “Internet of Thugs.”

Of course, there’s a cloud and mobile application development angle to this. We’re well beyond magstripe key cards or ones with embedded RFID tags. Indeed, the newest advancement in room-access technology is the complete elimination of the card. An app on your smartphone that uses proximity Bluetooth to communicate with the door lock is very much a reality and being installed in hotels worldwide. It’s yet another inevitable use of cloud and mobile computing technology.

While the vulnerability in this particular case may lie more in the area of network infrastructure management, it’s no less important for anyone cranking out lines of code to always keep security top of mind. It’s useful to approach any coding project with profound skepticism about its security and potential vulnerabilities. Think devious, think scheming, think cunning, because the people writing malware are doing exactly that, and they’re doing it better than you.

Consider this: According to a Dec. 2016 blog post by Amol Sarwate, director of vulnerability labs at security firm Qualys, Microsoft issued 155 security bulletins for the year, up 15% from 2015. Over the lifetime of Windows 7, it added up to many hundreds of security patches being issued. If a smart company like Microsoft (or Apple, or Adobe, or Android, or Oracle, or any other company) can’t build software that’s secure, how in the world can you?

What glaring vulnerabilities were overlooked in the design of software that you coded? How were these vulnerabilities corrected and users notified? Share your horror stories; we’d like to hear from you.


January 19, 2017  5:05 PM

Why to use APIs, explained in 18 words

Joel Shore

Whether you read my site, SearchCloudApplications, another of the TechTarget family of websites, or any of the seemingly trillions of sites that write about application-development technology, three items stand atop the heap of coverage: containers, microservices, and APIs.

For the moment, let’s talk about APIs.

In a story I wrote this week about Shufflrr, a New York provider of SaaS-based presentation-management services, founder and CEO James Ontra revealed what’s under the hood. I always ask when interviewing a software company, because this is the sort of thing that other developers want to know.

Shufflrr’s business model is to provide businesses with a way to catalog and control their vast collections of PowerPoint and other presentations. Employees can view existing presentations and create new ones by dragging and dropping individual slides from the archive. When potential customers view a presentation online, highly detailed tracking is available: which slides were viewed, for how long, and other metrics. For enterprises with large salesforces, it’s a great idea.

“Every feature, function, and use is transmitted through APIs, which gives us the ability to grow our platform.”

Turns out this SaaS offering is hosted on Amazon Web Services. No surprise there. But, more interesting, the front end was built with Bootstrap, a framework developed at Twitter and one I don’t recall anyone ever speaking about before. Bootstrap is an open-source front-end framework based on HTML and CSS design templates for building web-based and mobile applications that work on, and format properly for, any device. Beyond that, the Shufflrr ecosystem employs the Microsoft stack on .NET using SQL.

Here’s the gem: Ontra explains the entire Shufflrr site is run by APIs and goes on to say, “Every feature, function, and use is transmitted through APIs, which gives us the ability to grow our platform.”

And there you have it in 18 words. Through the pervasive use of APIs, development is simplified. Internal process workflows and connections to external data sources are handled in a consistent manner no matter who the code jockey is. Customers can write their own extensions, if desired. Cheaper. Faster. Better. Consistent. Secure.
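
Shufflrr hasn’t published its API here, and its stack is .NET, so the following Flask sketch is purely illustrative. The point it demonstrates: in an API-first design, the vendor’s own UI and a customer’s extension call the same endpoints, leaving one code path to secure, test, and evolve.

```python
# Purely illustrative -- not Shufflrr's API (theirs runs on .NET and SQL).
# The API-first payoff: the vendor's own UI and a customer's extension hit
# the same endpoints.
from flask import Flask, jsonify, request

app = Flask(__name__)
SLIDES = {1: {"title": "Q1 results", "views": 0}}

@app.get("/api/slides/<int:slide_id>")
def get_slide(slide_id: int):
    slide = SLIDES.get(slide_id)
    if slide is None:
        return jsonify(error="not found"), 404
    slide["views"] += 1  # the per-slide tracking metrics ride the same API
    return jsonify(slide)

@app.post("/api/presentations")
def create_presentation():
    slide_ids = request.get_json().get("slide_ids", [])
    # A drag-and-drop in the web UI lands here -- and so can a customer script.
    return jsonify(created=True, slide_count=len(slide_ids)), 201

if __name__ == "__main__":
    app.run()
```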

This, of course, is much easier when you are, like Shufflrr, a young company with zero legacy data and no legacy applications. The clean-sheet approach does have its advantages.

How pervasive is your company’s use of API technology? Share your thoughts on the good, the bad, and the ugly of designing, implementing, and managing APIs, either your own or those provided by third parties. We’d like to hear from you.

