A key challenge in developing applications for the cloud age is dealing with the continually shrinking interval between updates. Why, then, is automation of release and deployment so rarely used?
In the mainframe days, applications were written to run on one and only one machine, not the billions of smartphones, tablets, and IoT devices we develop for today. Years could pass between updates. Even in the client-server days, apps were written to run on a small number of servers running a network operating system. Application updates to add new functionality were still spaced far apart. Not so much anymore.
Today, it’s common for apps to get updated biweekly for competition-driven feature enhancements and seemingly daily for bug fixes. And we’re writing for billions of devices running a bunch of different operating systems, whose features change radically with each new version, and all sporting a veritable cornucopia of screen sizes and resolutions.
You’d think that a faster time to market would be competitively advantageous, or that rapid updates to fix the bugs that crept into yesterday’s update (due to inadequate testing) would drive any corporate or commercial developer to implement automated release management. But, no.
Theresa Lanowitz, CEO of the research firm voke (yes, with a lowercase “v”), opened my eyes to this in a new study just published by her firm, Market Snapshot Report: Release Management. In a lengthy conversation, she expressed surprise that the use of automated release management isn’t more widespread.
Releasing software faster and with higher quality is a challenge for more than 60% of survey participants, Lanowitz said; just 14% reported no issues at all. So, what are these challenges? Struggling to release faster was cited by two-thirds of respondents, with the struggle to release higher-quality software just behind at 60%.
Separately, more than half of those surveyed admitted that their organizations had to delay one or more software versions due to problems with deployment or release. It makes you wonder why we’re all breaking our necks to develop apps faster if we’re not any good at shipping the code out the door.
For the first time since voke began this recurring study, respondents indicated that quality is more important than release speed. Think about that.
It’s a classic case of a dog chasing its own tail. If apps were of better quality, they would likely not need to be released as often. And if you build something better in the first place, you have a better chance of satisfying the customer. Check your phone — does the near-daily frequency at which some apps release bug fixes lead to a fatigue factor among users? I think it does.
The voke survey also looked at the build and deploy phases of a development project. On the build side, only 29% practice continuous integration, with an automatic build of each check-in. Gated check-in, in which check-ins are accepted only if the changes merge and build successfully, was practiced by just 19%. Further down the list are manual, scheduled, and rolling builds. As for deployment, automation through scripts was reported by 32%, with manual scripts just behind at 31%. The use of containers, such as Docker, CoreOS, LXD, and Kubernetes, lagged far behind at just 9%.
Lanowitz characterized the lack of automation as surprising as well as damaging to the business, given that release management is not new. I’d call those adoption rates shockingly low.
How well does your organization do when it comes to release management? Are you fully or partially automated? Or are you still completely or primarily manual? What impact do these practices have on bringing new versions to market quickly and on ensuring that releases are not being deployed merely to fix bugs in prior releases? Share your experiences with us; we’d like to hear from you.
Decades ago, legend has it, many programmers got paid based on the number of lines of code they wrote. The more you produced, the better you were perceived to be. The inevitable result, not surprisingly, was mountains of bloated, inefficient code. Are we headed back in that direction?
Once organizations wised up to the foolhardy belief that those who produced the most code were the best, the push was on to write tight, concise, efficient code. After all, in the mainframe days, when you typically had only 64 kilobytes of magnetic core memory to work with, throwing more iron at slow-running applications was a very expensive — and usually impossible — proposition. Though tools existed to exercise all the code in a program for logic errors (including exception-processing routines you hoped would never run), analyzing code for inefficiencies — such as poorly designed PERFORM … VARYING … AFTER loops in Cobol — was something of a magic act.
What eventually changed was the plummeting cost of compute resources and memory. Once you were able to throw a shelf full of inexpensive Compaq SystemPro servers and NetWare 2.15 at a problem, it was often easier to solve slow execution with more hardware than it was to hunt down poorly written lines of code. And with megabytes of memory now available for programs to run in, the need for memory management (anyone remember writing overlays?) began to disappear.
I fear the problem of bloated code, slow execution, and software quality is not getting better. We’ve made it easy — and perhaps necessary — to create inefficient code.
Today, we have business cycles that demand huge changes in application functionality almost weekly instead of once every two years. There’s simply not enough time to go back and fix inefficient code that was rushed out the door. Compute resources, including processing power, gigabytes of memory, and petabytes of storage, are so cheap as to be nearly free compared with mainframes. With cloud, it’s easy to scale infrastructure resources by orders of magnitude, almost instantaneously. No-code/low-code tools are generating code for us, but how good is that code? With streaming analytics, we can examine everything, whether or not it’s central to the direct creation of revenue. Developers can easily tap into an enormous number of reusable libraries with a full understanding of what they do but no insight into how well they do it. And even amid the API explosion that is upon us, we scrutinize APIs’ security while rarely calling their performance efficiency into question.
Is the idea of writing phenomenally tight code simply passé? Are you continually under the gun to get your code working, ship it, and move on? Are you proud of the code you write? No doubt you’ve thought about this before. Share those thoughts with us; we’d like to hear from you.
We’ve written many stories over the past year about cognitive computing, machine learning, and artificial intelligence — which are all, for lack of a better term, kissin’ cousins in modern-day computing. All are growing in importance and taking on a larger presence. That means the big boys are going all in.
This week’s entry into artificial intelligence (AI) is Microsoft, which just launched its new Microsoft AI and Research Group, staffed with more than 5,000 people. This follows closely on Microsoft’s late-August acquisition of AI startup Genee and September’s revelation that field-programmable gate array (FPGA) chips are now deployed in its Azure datacenters worldwide. That translates into highly scalable AI.
According to Microsoft, the mission behind this major investment is the “democratization” of AI for individuals and organizations, broadening accessibility, increasing its usefulness, and “ultimately enabling new ways to solve some of society’s toughest challenges.” Keep that word, democratization, close at hand; Microsoft is using it frequently in its corporate communications.
The buzzword is fine, but what does democratization encompass? According to the company’s statements, it comprises four key aspects: agents, applications, services, and infrastructure. That doesn’t seem very different from the garden-variety cloud computing we’re all dealing with today, suggesting natural evolution.
- Agents, such as Microsoft’s digital personal assistant Cortana, are intended to harness AI’s capabilities to change how humans and computers interact.
- Applications, ranging from smartphone photo apps to Skype and Office 365, will be infused with cognitive capabilities — vision and speech — though what that means in practical terms isn’t clear.
- Services, including the aforementioned vision and speech, along with analytics, will be made available to application developers.
- Infrastructure, essentially Azure-based AI supercomputing, will be available to any individual and organization.
What’s really going on here? A likely underlying strategy is to inject new life into the Windows universe. We are living at a time when the importance, influence, and ubiquity of Windows is on the wane. With the failure of the Windows phone platform (several times over), it’s easy for businesses to go all in on iOS and Android for their mobile computing needs. The Microsoft Surface hardware business is a last-gasp effort to keep Windows alive anywhere other than on the desktop.
The path to future career success is coming into clear focus. Even the White House is requesting more information about AI, a clear indication of the technology’s importance. If you are an applications developer, you must add AI, cognitive computing, machine learning, and analytics expertise to your skills portfolio. Microsoft itself is in the midst of a hiring frenzy to transform its AI vision into reality.
What is your comfort level with AI? Are you currently working on projects that involve AI and cognitive computing? What do you expect the future to look like? Share your thoughts and concerns; we’d like to hear from you.
It hasn’t taken long — only about a year — for so-called no-code/low-code (NCLC) application development tools to go from loathed to loved. But, as someone who makes a living as a professional software developer, how do you feel about departmental, line-of-business (LOB) employees using NCLC tools to build applications? Is NCLC a boon or a threat?
The opinions I’ve heard from developers span a wide spectrum.
The current thinking is that if we can shift the building of mundane reports, query apps, or other batch processing to LOB workers who have some understanding of IT, let’s go ahead and do it. The LOB department gets what it wants (or at least what it thinks it wants), and IT’s developers are freed from the mundane to work on projects that are genuinely more interesting or crucial.
Sure, this makes sense, but there will always need to be a degree of oversight from IT, perhaps to make secure connections to the corporation’s databases or implement access control. And there’s the matter of who pays for the compute resources and whether they are provisioned in the most effective and economical way.
I’ve also heard opinions that building applications is the domain of developers and should remain that way. While that was nearly always the case in the ancient days of IT before the cloud, that view, in my opinion, is simply out of step with the times. In an era when apps are sometimes updated nearly daily (compared with perhaps every two years in the old mainframe world), whatever tools and staff get you there with speed and competence are going to win the day.
Python, Java, Swift, C++, R, Scala, and a horde of other languages are not going to meet their demise anytime soon, but it’s clear that NCLC tools, built on a foundation of templates, microservices, and APIs, are insulating LOB departments from the raw plumbing that you and I know makes all of this go.
So, professional applications developer, let’s hear from you. What do you really think of NCLC tools? Do they free you from tasks you don’t like doing so you can work on more-interesting projects? Are they threatening the security of your organization’s data and infrastructure? Are NCLC tools dumbing down the art form of application development to a point where anyone can do it? No sugarcoating — tell us what you really think; we’d like to hear from you.
It’s official. I’ve just returned from watching Apple announce the iPhone 7 and iPhone 7 Plus. And yes, the headphone jack is gone. This is a family-friendly blog, so, I won’t say what I’m really thinking about that change. There is news for developers, however. And that news is good.
While the talk was about the new Apple Watch and the vast improvements to the camera and audio aspects of the iPhone 7 and iPhone 7 Plus, applications developers did get recognized: APIs were mentioned twice. Granted, this was a product launch, not a developer seminar, but any mention of APIs in a mainstream discussion is darn good.
The first major change of note to developers is with the iPhone’s Home button. It has been completely redesigned to make it, in Apple’s words, “customizable and more responsive.” Beginning with the iPhone 7 and 7 Plus, the Home button is force-sensitive and works with a new Taptic Engine to provide feedback. To accomplish that, the button is no longer the push-down/pop-up mechanical affair we’ve dealt with for the past several years. It’s all solid state. Fewer mechanical parts means fewer things to break and fewer places where water can infiltrate, helping Apple achieve an overall design goal of dust and water resistance that complies with the IP67 protection standard.
Developers can leverage the new Home button’s taptic capability through the Apple Taptic Engine API. That means, according to Apple, that third-party developers can “create new feelings and experiences,” whatever those might be. I’m not imaginative enough to conjure up how an app might use this new Home button capability, but, it’s there if you can find a use for it.
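Apple didn’t name the underlying classes on stage. In the iOS 10 SDK, haptic feedback surfaces through the UIFeedbackGenerator family. Here’s a minimal sketch; the view controller and button action are hypothetical:

```swift
import UIKit

// A minimal sketch: firing the Taptic Engine through the UIFeedbackGenerator
// classes introduced in iOS 10. The view controller and action are hypothetical.
class CheckoutViewController: UIViewController {

    // A medium-weight impact; .light and .heavy are also available.
    let impact = UIImpactFeedbackGenerator(style: .medium)

    @IBAction func confirmTapped(_ sender: UIButton) {
        impact.prepare()        // wakes the Taptic Engine to minimize latency
        impact.impactOccurred() // plays the haptic at the moment of the tap
    }
}
```

UINotificationFeedbackGenerator and UISelectionFeedbackGenerator round out the family, covering success/warning/error cues and selection ticks.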
The second mention of APIs came during the presentation of several amazing new camera capabilities. In addition to supporting a wider color gamut than before, the iPhone 7 models (at last) support RAW and DNG (digital negative) formats.
JPEG is no good for serious photographic work because it is an 8-bit file format that uses the small sRGB color space and compresses the image every time it’s saved, throwing out data. RAW, on the other hand, is simply an unprocessed, uncompressed pass-through of what the camera sensor sees, essential for high-quality work. It usually comes in a 16-bit container holding 14 bits of sensor data. And RAW doesn’t compress the vast range of colors the sensor sees into the miserably small sRGB color space. On top of that, DNG is Adobe’s attempt to place a standardized wrapper around the many hundreds of proprietary RAW formats that exist industrywide for various camera models.
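On the developer side, RAW capture in iOS 10 runs through AVFoundation’s AVCapturePhotoOutput. A minimal sketch, with the session setup and capture delegate elided (property spellings are from the iOS 10 SDK and worth verifying against the shipping headers):

```swift
import AVFoundation

// A minimal sketch: requesting a RAW (Bayer) capture with iOS 10's
// AVCapturePhotoOutput. Session wiring and the capture delegate are elided.
func captureRAW(using output: AVCapturePhotoOutput,
                delegate: AVCapturePhotoCaptureDelegate) {
    // The iOS 10 SDK exposes the supported RAW pixel formats as NSNumbers.
    guard let rawFormat = output.availableRawPhotoPixelFormatTypes.first else {
        return // this device or session configuration can't deliver RAW
    }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat.uint32Value)
    output.capturePhoto(with: settings, delegate: delegate)
    // The delegate receives the RAW sample buffer; AVCapturePhotoOutput's
    // dngPhotoDataRepresentation(...) can then wrap it in a DNG container.
}
```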
That Apple is supporting RAW and DNG is thrilling to serious photographers. Even better, if you’re a developer of a photo-editing, post-processing, or special-effects app (and there are many, many hundreds of them), this is a golden opportunity to recast your application as something that’s on the leading edge of smartphone photography. Yes, indeed, Apple is opening up a whole new vista for app developers.
Curiously, not one word was said during Apple’s event about how the new iPhones function as telephones. That function seems to have been forgotten as smartphones evolve into communicators, controllers, music players, and game platforms. Yes, lots of games were demonstrated, including the arrival of Super Mario and Pokemon Go to the iPhone platform for the first time ever. If you are a game developer, the new, powerful graphics engine is right up your alley.
As for elimination of the headphone jack, I don’t like it one bit, even though an audio-jack-to-Lightning adapter will be bundled with each phone to support legacy analog EarPods and headphones. That doesn’t solve my desire to keep the device plugged into a charger and into my stereo system simultaneously. It’s partially a push to sell Apple’s new $159 wireless AirPods. No one has explained to me how you’ll protect against loss if one (or both) pop out while you’re jogging.
What do you see as the app development opportunities in the new iPhone 7 and 7 Plus? Have you been working with the iOS 10 SDK? Share your opinions; we’d like to hear from you.
When technology vendors do dumb things, you can pretty much count on reading a gleeful retelling of the sordid details in this space. After all, if there’s one thing I’m never short of, it’s opinions. Conversely, when those same vendors do something laudable, I feel obligated to play it both ways and say well done. Today, it’s Apple’s turn over a smart move it’s making in the App Store.
Starting Sept. 7, Apple is pulling the plug on apps that appear to have been abandoned, are multiple iOS versions behind in compatibility updates, or simply don’t measure up to Apple’s grand vision. That’s good news all around. In Apple’s own words, “We are implementing an ongoing process of evaluating apps, removing apps that no longer function as intended, don’t follow current review guidelines, or are outdated.” I urge you to visit that link and read the information carefully.
If you’re a lazy developer who hasn’t invested the necessary time to keep your app up to date for changes in screen sizes and resolution, or to leverage features added to iOS over the last several years, you don’t deserve to have your app listed. If that’s you, it’s time for some Swift action lest your app get the boot. Existing users will not lose any app functionality, services will not be interrupted, and in-app purchases will remain enabled. As for new users, well, there won’t be any.
For developers who do invest the time, this is great news. With more than two million apps currently listed and roughly 100,000 new app additions or updates submitted weekly, culling the catalog should help make those that remain a bit more discoverable. And, as you know, the inability for apps to be discovered easily has plagued the App Store for years.
If the long arm of Apple reaches out to you, be warned. You have but 30 days to make it right. After that, the app is zapped. There’s an even more dire scenario, too: If it’s discovered that your app crashes on startup, it gets removed from the App Store immediately, with no grace period.
But wait, there’s more. In the letter that Apple sent to developers (and made public by iOS developer and good samaritan Jake Marsh) the company said app names can no longer exceed 50 characters in length. That means no more stuffing desirable search terms in an app name, regardless of what that app does. It’s a good way to bring some discipline to a Wild West app shopping environment.
Though I don’t know what pushed Apple toward this posture, it’s welcome, especially if you’re a developer hoping and praying you can generate a revenue stream from your beloved app that just can’t seem to get noticed.
Do you develop for the Apple App Store? What do you think of Apple’s desire to clean house? Are your apps up to date? If not, why not? Share your thoughts with us; we’d like to hear from you.
No doubt you’ve read news stories about individual consumers, police departments, and now even hospitals having their computers and data victimized by ransomware, an exploit in which the attacker “kidnaps” and encrypts the victim’s data, demanding payment for the decryption key. As a developer, is there anything you can do about it? The short answer may be no.
So far in 2016, hospitals in California, Kentucky, Maryland, and Kansas have been hit with ransomware attacks. In February, according to NBC News, Hollywood Presbyterian Medical Center had no choice but to fork over $17,000 to the bad guys in order to get its systems back. That’s a big jump from the typical $300 that an individual consumer might be forced to pay. Even a NASCAR team was victimized, forced to pay up to get its data back. It’s enough to drive you in circles, or ovals in NASCAR’s case.
How widespread are ransomware attacks? Consider this June 2016 statement from security software maker Kaspersky. “The number of users attacked with encryption ransomware is soaring, with 718,536 users hit between April 2015 and March 2016: an increase of 5.5 times compared to the same period in 2014-2015.” Yikes. During the same period, users attacked with blockers (ransomware that locks screens) decreased by 13.03%, from 1,836,673 in 2014-2015 to 1,597,395 in 2015-2016, according to Kaspersky.
There’s more. Cisco warns that businesses are unprepared. Security vendor RSA says cloud service providers are becoming a popular target. The U.S. Department of Health and Human Services went so far as to publish information on ransomware, including how to tell if HIPAA has been violated.
Unfortunately, the ubiquitous nature of Internet of Things technology makes it fertile new ground for ransomware attacks. Earlier this month, two security researchers demonstrated how a residential thermostat could be taken over by ransomware, locking it until a ransom is paid in Bitcoin. Keep in mind this was nothing more than a proof-of-concept demonstration. But you know where this is likely headed.
The only way to deal with ransomware is to prevent it in the first place, according to Malwarebytes Labs. That means running security software, resisting the urge to click alluring links, and backing up one’s data frequently and regularly. Unfortunately, as an applications developer — and an honest one at that — there isn’t really any proactive or anticipatory defense you can build into an app.
As an app developer, are you advising your organization about the dangers, causes, and prevention of ransomware? Have you or your company been a victim? Share your thoughts; we’d like to hear from you.
Ask Siri “does this make me look fat,” and she’ll answer, “It seems like humans are preoccupied with this. In my dimension, we are more concerned with grey matter than corporeal matter.”
Sigh. It would be nice if developers of mobile apps had access to such profound insight. And now you do. At last, the forthcoming iOS 10 includes SiriKit and its API. You’ll also be able to create app extensions that let users interact with your app directly within Messages.
As Apple puts it, “SiriKit enables your iOS 10 apps to work with Siri, so users can get things done with your content and services using just their voice.” But wait, there’s more. “In addition to extending Siri’s support for messaging, photo search and phone calls to more apps, SiriKit also adds support for new services, including ride booking and personal payments.”
Through SiriKit, your app builds an extension that communicates with Siri, as Apple explains, even when your app isn’t running. That extension registers for specific domains and intents. As an example, Apple discusses a messaging app that would register to support the Messages domain and the intent to send a message. Siri does the heavy lifting, handling the user interaction, including voice and natural-language recognition.
Apple predefines six domains for third-party Siri interaction: ride booking, photo search, VoIP calling, messaging, payments, and workouts.
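For the messaging domain, that extension’s shape looks roughly like this. A minimal sketch against the iOS 10 SDK’s INSendMessageIntentHandling protocol (the method spellings shifted during the betas, so verify against current headers; the back-end call is a placeholder):

```swift
import Intents

// A minimal sketch: the principal class of an Intents extension that
// registers for the messaging domain (via IntentsSupported in Info.plist).
class IntentHandler: INExtension, INSendMessageIntentHandling {

    override func handler(for intent: INIntent) -> Any? {
        return self // this one class handles every intent we registered for
    }

    // Siri has already done the voice and natural-language work; the app
    // only has to deliver the message.
    func handle(sendMessage intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // A real app would hand intent.recipients and intent.content
        // to its own messaging back end here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```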
To jump into the fray, you’ll need to download the Xcode 8 beta, which includes the iOS 10 SDK. At the bottom of the SiriKit page, you’ll also want to check out links to the SiriKit programming guide, the Intents framework reference, the Intents UI reference, and tutorials.
As for messages, according to Apple, “Users will have easy access to your apps without having to leave Messages. They can conveniently share content, edit photos, play games, send payments, and collaborate with friends within a custom interface that you design.”
I certainly don’t care about the game play aspect or the ability to paste in stickers, but collaboration with work colleagues could provide powerful new capabilities for enterprise-class applications. Editing photos can also be leveraged for applications in, say, the insurance industry, where an adjuster can annotate photos of a damaged vehicle.
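Those custom Messages interfaces are built as iMessage app extensions on the new Messages framework. A minimal sketch of the principal class (the insurance-adjuster scenario above would hang off this skeleton):

```swift
import Messages

// A minimal sketch: the principal class of an iMessage app extension,
// built on the Messages framework that ships with iOS 10.
class MessagesViewController: MSMessagesAppViewController {

    override func didBecomeActive(with conversation: MSConversation) {
        super.didBecomeActive(with: conversation)
        // Build the custom interface here; `conversation` exposes the
        // selected message and participants, e.g. for attaching an
        // annotated photo of that damaged vehicle.
    }
}
```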
There’s a ton of documentation for creating applications with iOS 10, including code samples, framework references, DemoBots for games, and much more. Time to get reading. And coding.
Have you already started building new apps or updating existing apps for iOS 10? What capabilities are you adding? Let’s get the discussion going; we’d like to hear from you.
And no, I don’t think it makes me look fat.
With the ability to quickly conjure up an online interactive survey thanks to software as a service (SaaS) technology, any developer or business can almost instantly start polling hundreds, or even millions, of respondents with dozens of questions. Do you really need all that data?
Designing surveys whose questions do not introduce bias or ambiguity on the part of the survey sponsor is not an easy task. It’s a skill, a profession. You’ve got to figure out what questions to ask and how to word them properly. You need to provide for all possible answers (including “prefer not to answer” and “don’t know/don’t care”). You can’t allow answer choices to overlap (0-10 and 10-20 instead of 11-20). If you’re tasked with building a survey, you might want to brush up on the seven sins of survey question writing and how to avoid them. Of course, there are hundreds of other online resources.
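That overlap rule is exactly the kind of constraint a developer can check mechanically. A minimal Swift sketch, purely illustrative and not from any survey toolkit:

```swift
// A minimal sketch: flag numeric answer buckets that overlap (0-10, 10-20)
// or leave gaps. Purely illustrative; not from any survey toolkit.
func problems(in buckets: [ClosedRange<Int>]) -> [String] {
    let sorted = buckets.sorted { $0.lowerBound < $1.lowerBound }
    var findings: [String] = []
    for (a, b) in zip(sorted, sorted.dropFirst()) {
        if a.upperBound >= b.lowerBound {
            findings.append("Overlap: \(a) and \(b)")    // e.g. 0...10 and 10...20
        } else if a.upperBound + 1 < b.lowerBound {
            findings.append("Gap between \(a) and \(b)") // e.g. 0...10 and 12...20
        }
    }
    return findings
}

// problems(in: [0...10, 10...20]) returns ["Overlap: 0...10 and 10...20"]
```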
Questioning the questions might not be an app developer’s job, but your logical mind is likely better-suited for finding the kinds of mistakes that could render a survey’s results useless.
What got me thinking about applying a developer’s keen eye to the logic of survey questions? It’s the surprisingly invasive nature of a survey I just took on the subject of interchangeable lens DSLR (digital single-lens reflex) cameras.
At 40 minutes, the survey was way too long and complex. But, what really bothered me was the invasive nature of some introductory questions: race, marital status, household income, number of children, technology devices in my home (including Segway — really?), and my favorite, “general statements about different attitudes you might have toward life in general.”
Are these really germane to a survey on cameras and lenses? Perhaps. Even if they are valid from a statistical analysis perspective, they sure seem nosy. Let me put it another way. If this were a survey about which cloud application development tools you like best, would you be willing to answer those very same questions?
Have you been asked to build surveys for your company? Did you find logic errors or other problems with any of the questions? If so, did you speak up? Share your thoughts; we’d like to hear from you.
When you write an app for Apple’s iOS, there’s no ambiguity. To say the operating system and its distribution are tightly controlled is an understatement. It’s Apple’s way or the highway. Not so with Android. Fragmentation, and device makers’ inconsistent delivery of corresponding OS updates, is out of control. It’s a complete mess for developers.
How bad is it? It’s bad enough for Salesforce to declare that it will support only Samsung Galaxy and Google Nexus devices. That’s pretty drastic and should send a clear and frightening message to all who play in the Android space to get their acts in sync. Indeed, Android fragmentation is hurting its case for widespread enterprise adoption, a problem that iOS does not face.
The problem is years in the making. With each maker of Android devices able to tweak the user interface as it sees fit, and to decide when — or even if — to ship any new OS version, the permutations of operating system versions and the cornucopia of devices they run on likely number in the thousands. It’s not something app developers should have to put up with.
Consider some recent research from Statista. For the period May 2 to May 9, 2016, the distribution of Android versions that accessed the Google Play store looked like this: KitKat 4.4, 32.5%; Lollipop 5.1, 19.4%; Lollipop 5.0, 16.2%; Jelly Bean 4.1.x, 7.2%; Jelly Bean 4.3, 2.9%; and smaller numbers from Gingerbread, Ice Cream Sandwich, and Froyo.
Yes, it’s a mess. It’s a lot of different versions being used actively and concurrently. What really sticks out is that the largest group, KitKat, is two generations behind the most-recent Android version.
For the very same weeklong period, the breakdown of iOS devices that accessed the Apple App Store looked like this: iOS 9, 84%; iOS 8, 11%; all earlier versions, 5%. (That last group includes my iPod touch 4th generation, which ceased getting updates after iOS 6.1.3.) As long as your app is compatible with iOS 8, you’ve got 95% of the market covered.
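Covering that 95% mostly means setting iOS 8 as the deployment target and guarding anything newer with Swift’s built-in availability checks. A minimal sketch (the alert helper is hypothetical):

```swift
import UIKit

// A minimal sketch: an iOS 8 deployment target with Swift's #available
// check guarding the newer API path. The helper function is hypothetical.
func showAlert(from host: UIViewController, message: String) {
    if #available(iOS 8.0, *) {
        let alert = UIAlertController(title: nil, message: message,
                                      preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
        host.present(alert, animated: true, completion: nil)
    } else {
        // Fall back to the pre-iOS 8 UIAlertView path for the last ~5%.
    }
}
```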
Android’s fragmentation is especially problematic because developers are writing for a base of billions of devices — 7.1 billion mobile phones worldwide in 2015, heading to 8.6 billion in 2021, according to Ericsson. That doesn’t include tablets, which add nearly another two billion. Contrast that with legacy corporate applications that were intended to run on a single mainframe.
Fragmentation is an old story
OS fragmentation isn’t anything new. Many years ago, at a trade event at the New York Marriott Marquis, between mouthfuls of baby lamb chops and jumbo shrimp, I asked Bill Gates about his perception of the then-current state of Unix and what it meant for developers. “Unix is a hundred different things that don’t talk to each other,” he said.
The sentiment, if not the exact number, was right, of course. With Hewlett-Packard’s HP-UX, IBM’s AIX, Silicon Graphics’ IRIX, Sun Microsystems’ Solaris, Compaq’s Tru64, Apple’s A/UX, AT&T’s System V, SCO’s UnixWare, and others all in competition with each other, writing an application that ran well on all platforms bordered on the impossible. Add the messy ownership wars with Unix System Laboratories and Novell into the mix, and it gets even uglier.
As a mobile app developer, what are the roadblocks you run into when developing for Android? Does the proliferation of different versions from different vendors create problems? And in comparison, what is your experience with iOS? Share your thoughts; we’d like to hear from you.