Head in the Clouds: SaaS, PaaS, and Cloud Strategy

February 15, 2017  10:08 AM

Is 30% of your cloud spending wasted? Survey says yes.

Profile: Joel Shore
DevOps, Docker, Hybrid cloud, IT spending

One of the things I look forward to each year at this time is the release of the annual State of the Cloud report from cloud services provider RightScale. That may say more about the quality of my social life than anything else; nevertheless, the study always contains great insight into the psyche of cloud computing technology professionals. Let’s dive into some key findings. The survey is being published today (2/15/2017) and is based on research undertaken in January 2017. A compendium of fresher opinions you’ll not find.

Hybrid up, private down: A multi-cloud strategy exists in 85% of surveyed enterprises, up from 82% in Jan. 2016. But, look at the other side: Private cloud adoption fell to 72% from 77%. What that means is momentum is swinging to the public cloud.

The cloud is just as wasteful as traditional IT: Here’s a real shocker: Survey respondents estimate that 30% of cloud spending is wasted. RightScale’s own research pegs waste even higher, between 30% and 45%. How in the world — or in the cloud — did this happen, and happen so quickly? According to RightScale, despite growing scrutiny on cloud cost management, few companies are actively spinning down unused resources or selecting lower-cost clouds or regions.

Survey respondents estimate that 30% of cloud spending is wasted. How in the world — or in the cloud — did this happen, and happen so quickly?

IT is taking control: So-called citizen IT and shadow IT aren’t going away, nor are no-code / low-code technologies that empower line-of-business departments to build solutions. Despite this, it is IT that selects public cloud providers (cited by 65% of respondents) and it is IT that decides which apps to migrate into the cloud (63%).

The majority of enterprise workloads are now in the cloud: The study revealed that 41% of workloads run in public clouds and 38% in private clouds. Among larger enterprises the mix shifts, with 32% of workloads in public clouds and 43% in private clouds.

It’s a multi-cloud world, after all: According to the study, public cloud users are already running apps in 1.8 public clouds while private cloud users are leveraging 2.3 private clouds. Those numbers strike me as being surprisingly low.

It’s a DevOps world, too: Whether you believe in DevOps or not, it’s here to stay, now embraced by 84% of large enterprises. The expansion of DevOps into BizDevOps that brings the business side into the mix is also on the rise, now used in 30% of enterprises, compared with 21% just a year ago.

It’s getting better every day: Even though the talent shortage remains the top challenge facing IT, it’s less of a concern than a year ago, falling to 25% from 32%. Security concerns also abated slightly, dropping to 25% from 29%. Mature cloud users cite managing costs as a key concern, while newbies worry more about security.

Docker is moving like a tremendous machine: Docker adoption is surging, leaving Chef and Puppet in the dust. Kubernetes use doubled. Interestingly, survey respondents indicated they’ve gone with the container-as-a-service approach: AWS ECS (35%), Azure Container Service (11%), and Google Container Engine (8%).

Azure cuts into AWS’s lead: Azure adoption soared to 34% from 20% a year ago as AWS stayed flat, but in the lead at 57%.

This annual study isn’t the only one out there, but it does provide a good snapshot of how the cloud changes from year to year. Now it’s time for you to respond. Where are you seeing wasteful cloud spending? Is your organization thoroughly immersed in DevOps and BizDevOps? Are your app development efforts all containers all the time? Finally, what cloud platforms are you using, to what extent, and how has that changed over time? Share your experiences with your counterparts; we’d like to hear from you.

January 30, 2017  5:08 PM

Is IoT turning into the Internet of Thugs?

Profile: Joel Shore
Application security, Hacking, IoT security

In a story that circulated worldwide last week, it was reported that a hotel in Austria, the Seehotel Jaegerwirt, had been attacked by hackers who disabled the guestroom cardkey system, locking guests in their rooms until the hotel paid a Bitcoin ransom. That’s not exactly accurate, but enough of it is true to merit some serious discussion.

According to published reports correcting the initial misinformation, hackers did take control of the cardkey system, but only to the extent that the encoding of new key cards was disabled for guests in the process of checking in. Doors were never immobilized and guests were never trapped. Nevertheless, a ransom was indeed paid in Bitcoin currency by the hotel to have its systems and data released. And it’s apparently not the first time. The bottom line is that the hotel reportedly is planning a return to good, old-fashioned metal keys.

Think devious, think scheming, think cunning, because the people writing malware are doing exactly that, and they’re doing it better than you.

This situation probably has little to do with any of the three big makers of hospitality cardkey systems, Onity, OpenKey, and Salto Systems. It’s likely more about the bad guys being invited in on a red carpet right through a hotel’s front door. We’ve all heard the stories before — clicking on an innocent-looking link in an email message, inserting into a USB port a flash drive that contains malware, network hardware configured with default passwords, unprotected ports, and so on. One thing is for sure: We’re not far from IoT becoming an acronym for the “Internet of Thugs.”

Of course, there’s a cloud and mobile application development angle to this. We’re well beyond magstripe key cards or ones with embedded RFID tags. Indeed, the newest advancement in room-access technology is the complete elimination of the card. An app on your smartphone that uses Bluetooth proximity to communicate with the door lock is very much a reality and is being installed in hotels worldwide. It’s yet another inevitable use of cloud and mobile computing technology.

While the vulnerability in this particular case may lie more in the area of network infrastructure management, it’s no less important for anyone cranking out lines of code to always keep security top of mind. It’s useful to approach any coding project with profound skepticism about its security and potential vulnerabilities. Think devious, think scheming, think cunning, because the people writing malware are doing exactly that, and they’re doing it better than you.

Consider this: According to a Dec. 2016 blog post by Amol Sarwate, director of vulnerability labs at security firm Qualys, Microsoft issued 155 security bulletins for the year, up 15% from 2015. Over the lifetime of Windows 7, that added up to many hundreds of security patches. If a smart company like Microsoft (or Apple, or Adobe, or Google, or Oracle, or any other company) can’t build software that’s secure, how in the world can you?

What glaring vulnerabilities were overlooked in the design of software that you coded? How were these vulnerabilities corrected and users notified? Share your horror stories; we’d like to hear from you.

January 19, 2017  5:05 PM

Why to use APIs, explained in 18 words

Profile: Joel Shore

Whether you read my site, SearchCloudApplications, another of the TechTarget family of websites, or any of the seemingly trillions of sites that write about application-development technology, three items stand atop the heap of coverage: containers, microservices, and APIs.

For the moment, let’s talk about APIs.

In a story I wrote this week about Shufflrr, a New York provider of SaaS-based presentation-management services, founder and CEO James Ontra revealed what’s under the hood. I always ask when interviewing a software company, because this is the sort of thing that other developers want to know.

Shufflrr’s business model is to provide businesses with a way to catalog and control their vast collections of PowerPoint and other presentations. Employees can view the entire archive and create new presentations through drag-and-drop of individual slides. When potential customers view a presentation online, highly detailed tracking shows which slides were viewed and for how long, along with other metrics. For enterprises with large salesforces, it’s a great idea.

“Every feature, function, and use is transmitted through APIs, which gives us the ability to grow our platform.”

Turns out this SaaS offering is hosted on Amazon Web Services. No surprise there. But, more interesting, the front end was built with Bootstrap, which was developed at Twitter and which I don’t recall any interviewee ever mentioning before. Bootstrap is an open-source front-end framework based on HTML and CSS design templates for building web-based and mobile applications that work on, and format properly for, any device. Beyond that, the Shufflrr ecosystem employs the Microsoft stack on .NET using SQL.

Here’s the gem: Ontra explains the entire Shufflrr site is run by APIs and goes on to say, “Every feature, function, and use is transmitted through APIs, which gives us the ability to grow our platform.”

And there you have it in 18 words. Through the pervasive use of APIs, development is simplified. Internal process workflows and connections to external data sources are handled in a consistent manner no matter who the code jockey is. Customers can write their own extensions, if desired. Cheaper. Faster. Better. Consistent. Secure.
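
Shufflrr hasn’t published its internals, but the API-first idea Ontra describes can be shown in a minimal sketch: every feature is reachable only through a registered endpoint, so the front end, internal workflows, and customer extensions all share one contract. All route and handler names below are hypothetical, not Shufflrr’s actual API.

```python
# Minimal illustration of an API-first design: features are exposed only
# through registered endpoints, and every caller uses the same dispatch point.

API_ROUTES = {}

def api(route):
    """Register a handler so the feature is reachable only via the API layer."""
    def register(handler):
        API_ROUTES[route] = handler
        return handler
    return register

@api("/presentations/list")
def list_presentations():
    return ["q1-sales.pptx", "roadmap.pptx"]

@api("/slides/track-view")
def track_view(slide_id, seconds):
    return {"slide": slide_id, "viewed_for": seconds}

def call(route, **kwargs):
    """Single dispatch point: UI, workflows, and extensions all go through here."""
    return API_ROUTES[route](**kwargs)
```

Calling `call("/slides/track-view", slide_id=7, seconds=42)` goes through the same dispatch point the UI would use, which is what makes such a platform easy to grow and extend.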

This, of course, is much easier when you are, like Shufflrr, a young company with zero legacy data and no legacy applications. The clean-sheet approach does have its advantages.

How pervasive is your company’s use of API technology? Share your thoughts on the good, the bad, and the ugly of designing, implementing, and managing APIs, either your own or those provided by third parties. We’d like to hear from you.

January 3, 2017  5:16 PM

Is application code walking out your door when developers jump ship?

Profile: Joel Shore
Application and data security, Application development, Application security

People change jobs. It’s a fact of life. And it’s dangerous.

While departing employees routinely stuff their pockets with Sharpies and paper clips to stock their home offices, it’s those piles of ones and zeroes walking out the door with them that should have us all terrified.

Consider this one finding cited in a brand new January 2017 white paper from Osterman Research: Fully 87% of departing employees take data they created and 28% take data created by others.

What are they taking, you ask? Nearly 90% took presentations or strategy documents, 31% took customer lists, and 25% took intellectual property. That last category is where program code fits. (And we’re not even talking about hackers.)

Some of this is intentional, some isn’t. The white paper notes that departmental so-called citizen developers are likely to have content on their personal devices. Part- or full-time telecommuters who use their home computers for work often have content stored locally. And yes, of course there are those who abscond with content on purpose. Limiting access does no good as these are the people who are supposed to have access.

Some cases are brazen. The white paper discusses one software developer who learned she was to be terminated and began downloading “trade secrets,” which I interpret as code. The company initiated emergency legal action to prevent competitors from accessing the data. It happened at Goldman Sachs and even at security vendor Symantec.

Bob Spurzem, the go-to-market guru at Archive360, notes that it is common for developers who leave a company to take code with them. Beyond merely protecting a business’s data and other intellectual property when employees leave, “software developers require special attention,” he says.

“While we would like to believe this would never happen, a disgruntled developer leaving a business organization could steal code that equates to months, even years of work — putting a company’s competitive edge at serious risk,” Spurzem says.  “These threats are very real.  Dismissing them to the back burner is a dangerous mistake.  Businesses must plan for and take the appropriate steps to mitigate the risk.”

As I see it, it’s not just access to code. It’s also about access to design specs, test scripts, and subscription-based public cloud platform-as-a-service development environments. It’s about spinning up servers and database instances. Who’s in charge of disabling the departed one’s accounts? Or is he or she still using these development tools? Who is administering the administrators?

Have you known colleagues to take application code? (Of course, you would never do this.) What did your company do about it? And what measures does your organization have in place to prevent theft of code? Share your thoughts, we’d like to hear from you.

December 16, 2016  4:59 PM

With AWS Managed Services, what will be left for IT to do?

Profile: Joel Shore
Applications management, DevOps, Managed Services

The cloud, to varying degrees, did away with the need to manage huge, on-premises IT infrastructures. Fortunately, IT staffers on company payrolls were still needed to migrate apps and data, and manage these new-fangled, cloud-based, virtual infrastructures. Now, with 2017 just days away, it’s fair to ask if that management role is on the cusp of disappearing, too.

Not surprisingly, it’s Amazon shaking things up again. On Dec. 12, 2016, Amazon launched AWS Managed Services (AWSMS), essentially Amazon’s offer to provide fee-based infrastructure operations management for your enterprise.

In his blog post announcing the service, AWS chief evangelist Jeff Barr said organizations want to “relieve their staff of as many routine operational duties as possible.” You’ve got to wonder if the CFO interprets that as “relieving as many staff as possible.”

Targeting the Fortune 1000 and Global 2000 enterprises (yes, it’ll trickle down eventually), AWSMS, according to Barr, is “backed up by a dedicated team of Amazon employees” ready to provide incident monitoring and resolution, change control, provisioning, patch management, security and access management, backup and restore, along with reporting. An IT department can connect AWSMS to its own management tools (if it still opts to have any) via a new API and command-line interface.

So, Amazon can host your entire IT operation and now manage every aspect of it. It can warehouse and fulfill customer orders for the products you sell. With its own in-the-making fleet of trucks, drones, and aircraft, it can package and ship to your customer’s door. It can provide credit-card processing.

With drone delivery now a reality after a successful tryout in the U.K., there isn’t much that Amazon can’t do, except, perhaps, for the actual act of coding new applications. And, of course, there are tools to vastly simplify that process, too.

After all of this, the only ones left standing could be application developers, despite — or thanks to — Amazon’s vast array of development tools. No matter how much of a business’s IT operation Amazon hosts, operates, or manages, Amazon can’t know what it is you want your application to do. For that reason, I can’t imagine AWS wanting to build applications for you.

Managed services were previously the domain of specialized IT staffers or third-party managed service providers (MSPs), typified by Rackspace, and that is the territory Amazon is now entering. Yet instead of cutting MSPs out of the ecosystem, AWSMS is positioned to embrace them. Partners have the opportunity to provide four different services specific to AWSMS, including onboarding, integration with customer ITSMs, application migration, and application operations.

Where do you come down on this? Is your organization ready to cede ops management to AWSMS? How does this change your IT plans for 2017 and beyond? No doubt you have pretty strong opinions about this. It’s the season for sharing, so share those opinions with us. We’d like to hear from you.

December 4, 2016  10:57 PM

Have you driven embedded software, lately?

Profile: Joel Shore
Embedded software

Here’s a line I’ve been writing for many years: Hardware is nothing more than software that breaks if you drop it. It’s true because everything from a toaster oven to a thermostat to, well, just about anything else is loaded with embedded software. Even today’s vehicles are essentially little more than highly complex mobile computers with seating for five and cargo space.

While we’re busy gushing about the latest mobile and cloud applications, it is the software embedded in dishwashers, IoT sensors, microwave ovens, digital cameras, vehicles, and even self-synchronizing wall clocks that may be the real stars. There’s a lot more to software than user-facing applications, after all.

According to data published in June 2016 by Global Market Insights, the embedded software market size, valued at $10.46 billion in 2015, is predicted to register a 7% compound annual growth rate (CAGR) through 2023, rising to about $18 billion.
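
The arithmetic behind that projection is easy to check; a quick sketch, assuming the 7% rate compounds annually from the 2015 base:

```python
def project(base, cagr, years):
    """Compound a base value at a fixed annual growth rate (CAGR)."""
    return base * (1 + cagr) ** years

# $10.46 billion in 2015, growing 7% a year through 2023 (8 years)
value_2023 = project(10.46, 0.07, 2023 - 2015)
print(round(value_2023, 2))  # 17.97, i.e. roughly $18 billion
```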

One key driver is automotive. According to Global Market Insights, the automotive embedded systems market accounted for roughly 22% share in 2014, with CAGR gains estimated at 5.5% from 2016 to 2023. Smart vehicles, navigation capability, and car-to-road communication, along with the rise of hybrid and electric vehicles are behind the growing numbers.

Another obvious growth market is wearable devices. “Growing use of wearable embedded equipment across many applications like medical, security, fitness and safety is predicted to promote embedded software industry trends,” the report notes. Increasing customer demand for electronic equipment like computers, tablets and smartphones is predicted to enhance the demand for the industry further.

The report defines embedded software as consisting of tools, middleware, and operating systems. It also points to a rise in the use of Java in mobile devices, driven by technologies that include near-field communication.

This is also about highly specialized real-time operating systems, such as VxWorks from Wind River, ThreadX from Express Logic, and the open-source Fusion Embedded RTOS from Unicoi Systems, for starters.

If you’ve worked on embedded software of any kind, we’d like to hear from you. What is the nature of the software you’ve written and on what kinds of devices is it running? There’s lots to talk about and plenty of opportunity for software engineers looking to expand their horizons. Join the conversation.

November 11, 2016  10:59 AM

Developers are driving the advancement of cloud culture

Profile: Joel Shore
Application development, IT culture

Everyone asks me about “the cloud.” My barber. The supermarket cashier. Neighbors. They’ve all heard of it, though none has a clear understanding of what it is, precisely. My comeback is that I don’t know what it is, precisely, either. But, I do know that the concept of the cloud and, by extension, cloud culture, has become part of our societal fabric. As we approach the holiday season with a new year just around the corner, it’s worth taking a moment to look at the increasingly prominent role developers play.

Think about what we’re building. Every mobile app. Text messaging. Streaming movies. Paying bills online. Christmas shopping. Remote medical patient monitoring. Factory floor process control. Home and commercial building environmental control and automation. IoT. And there are new technologies — cognitive computing and machine learning, to name two. We’re awash in APIs. New languages seem to appear monthly. Even the advent of no-code / low-code products is freeing developers from mundane projects to tackle those that are breaking new ground.

It’s all very good for developers. You get to continually look at new technologies, new languages, and new opportunities to profoundly impact a business’s operations and profitability.

It wasn’t that many years ago that developers were largely writing programs to do nightly batch updates of sales reports, inventory management, or statement rendering. Today, with exceptions becoming increasingly rare, transaction processing happens in real time with API calls that touch multiple data stores and systems, aggregate information on the fly, and present the results to an app with a carefully designed UI/UX.

There are downsides, of course.

The pressure is on to ship feature updates, often biweekly, with little time for thorough testing, fixing bugs, or optimizing code. Unfortunately, it’s part of functioning at “cloud speed.” And with developers now expected to take a larger collaborative role in working with business decision makers and IT operations, there’s precious little time to learn new skills. It’s the world of BizDevOps.

The news this week is filled with stories about hundreds of “fake apps” that have appeared in the Apple app store, pretending to be from well-known retailers, but which are total scams. (They’re not really fake apps — they are apps, after all, though of a fraudulent nature.)

Without a doubt, the role of developer is evolving. In your work as a cloud and mobile application developer, how have your responsibilities grown? What new technologies and languages are you working with? What are the new solutions that you’re being asked to build? Gaze into your crystal ball and share what you see ahead. We’d like to hear from you.



October 28, 2016  6:16 PM

Why are so few enterprises implementing automated software release management?

Profile: Joel Shore
Application deployment, Application performance, Release management

A key challenge in developing applications for the cloud age is dealing with the continually shrinking interval between updates. Why, then, is automation of release and deployment so rarely used?

In the mainframe days, applications were written to run on one and only one machine, not the billions of smartphones, tablets, and IoT devices we develop for today. Years could pass between updates. Even in the client-server days, apps were written to run on a small number of servers running a network operating system. Application updates to add new functionality were still spaced far apart. Not so much anymore.

Today, it’s common for apps to get updated biweekly for competition-driven feature enhancements and seemingly daily for bug fixes. And we’re writing for billions of devices running a bunch of different operating systems, whose features change radically with each new version, and all sporting a veritable cornucopia of screen sizes and resolutions.

It makes you wonder why we’re all breaking our necks to develop apps faster if we’re not any good at shipping the code out the door.

You’d think that a faster time to market would be competitively advantageous, or that rapid updates to fix the bugs that crept into yesterday’s update (due to inadequate testing) would drive any corporate or commercial developer to implement automated release management. But, no.

Theresa Lanowitz, CEO of the research firm voke (yes, with a lower-case “v”) opened my eyes to this in a new study just published by her firm called Market Snapshot Report: Release Management. In a lengthy conversation she expressed surprise that the use of automated release management isn’t more widespread.

Releasing software faster and with higher quality is a challenge for more than 60% of survey participants, Lanowitz said. Just 14% reported no issues. So, what are these challenges? Struggling to release faster was cited by two-thirds of respondents. Just behind was the struggle to release higher-quality software at 60%.

Separately, more than half of those surveyed admitted that their organizations had to delay one or more software versions due to problems with deployment or release. It makes you wonder why we’re all breaking our necks to develop apps faster if we’re not any good at shipping the code out the door.

For the very first time since voke started this recurring study, respondents indicated that quality is more important than release speed. Think about that.

It’s a classic case of a dog chasing its own tail. If apps were of better quality, they would likely not need to be released as often. And if you build something better in the first place, you have a better chance of satisfying the customer. Check your phone — does the near-daily frequency at which some apps release bug fixes lead to a fatigue factor among users? I think it does.

The voke survey also looked at the build and deploy phases of a development project. Regarding build approach, only 29% do continuous integration with automatic check-in of each build. Gated check-in, in which check-ins are accepted only if the changes merge and build successfully, was practiced by just 19%. Further down the list are manual, scheduled, and rolling builds. As for deployment, automation through scripts was performed by 32%, with manual scripts just behind at 31%. The use of containers, including Docker, CoreOS, LXD, Kubernetes, and others, lagged far behind at just 9%.
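
For readers unfamiliar with the gated model, the gate itself is easy to sketch. This is a toy illustration, not any vendor's implementation; in a real system each step would invoke the version control system, the compiler, and the test runner:

```python
def gated_checkin(merge, build, run_tests):
    """Accept a check-in only if merge, build, and tests all succeed.

    Each argument is a callable returning True on success.
    """
    for step in (merge, build, run_tests):
        if not step():
            return "rejected"   # the change never lands on the shared branch
    return "accepted"

# A change that merges and builds but fails its tests stays off the main branch:
result = gated_checkin(lambda: True, lambda: True, lambda: False)
print(result)  # rejected
```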

Lanowitz characterized the lack of automation as surprising as well as damaging to the business, given that release management is not new. I’d call those adoption rates shockingly low.

How well does your organization do when it comes to release management? Are you fully or partially automated? Or are you still completely or primarily manual? What impact do these practices have on bringing new versions to market quickly and on ensuring that new releases aren’t merely deployed to fix bugs in prior ones? Share your experiences with us; we’d like to hear from you.

October 11, 2016  1:26 PM

Is it acceptable to write sloppy, inefficient, bloated code — or just easy?

Profile: Joel Shore
Application development, Application performance, Cloud Applications

Decades ago, legend has it, many programmers got paid based on the number of lines of code they wrote. The more you produced, the better you were perceived to be. The inevitable result, not surprisingly, was mountains of bloated, inefficient code. Are we coming back to it?

Once organizations wised up to the foolhardy belief that those who produced the most code were the best, the push was on to write tight, concise, efficient code. After all, in the mainframe days when you typically had only 64 kilobytes of magnetic core memory to work with, throwing more iron at slow-running applications was a very expensive — and usually impossible — proposition. Though tools existed to exercise all the code in a program for logic errors (including hopefully never-used exception processing routines), analyzing code for inefficiencies — such as poorly designed “perform varying after” loops in Cobol — was something of a magic act.

What eventually changed was the plummeting cost of compute resources and memory. Once you were able to throw a shelf full of inexpensive Compaq Systempro servers and NetWare 2.15 at a problem, it was often easier to solve slow execution with more hardware than it was to hunt down poorly written lines of code. And now with megabytes of memory available for programs to run in, the need for memory management (anyone remember writing overlays?) began to disappear.

I fear the problem of bloated code, slow execution, and software quality is not getting better. We’ve made it easy — and perhaps necessary — to create inefficient code.

Today, we have business cycles that demand huge changes in application functionality almost weekly instead of once every two years. There’s simply not enough time to go back and fix inefficient code that was rushed out the door. Compute resources, including processing power, gigabytes of memory, and petabytes of storage, are so cheap as to be nearly free in comparison to mainframes. With cloud, it’s easy to scale infrastructure resources by orders of magnitude and do it almost instantaneously. No-code / low-code tools are generating code for us, but how good is that code? With streaming analytics we can examine everything, whether it’s central to the direct creation of revenue or not. Developers can easily tap into an enormous number of reusable libraries with a full understanding of what they do but no insight into how well they do it. Even with the API explosion that is upon us, we scrutinize their security while their performance efficiency likely is never called into question.
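
The kind of inefficiency that hides inside working code is easy to illustrate. A minimal example, not drawn from any particular codebase: both functions below produce identical results, and only a look at how they work reveals the difference in cost.

```python
def dedupe_slow(items):
    """O(n^2): a linear membership scan inside the loop."""
    seen = []
    for x in items:
        if x not in seen:   # rescans the whole list on every iteration
            seen.append(x)
    return seen

def dedupe_fast(items):
    """O(n): a set makes each membership test constant time on average."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Both return the same result; only the cost differs, which is exactly
# why the slow version ships: it works, and nobody looks back at it.
```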

Is the idea of writing phenomenally tight code simply passé? Are you continually under the gun to get your code working, ship it, and move on? Are you proud of the code you write? No doubt you’ve thought about this before. Share those thoughts with us; we’d like to hear from you.

October 4, 2016  10:52 AM

It’s time for developers to become experts in artificial intelligence

Profile: Joel Shore
Application development, Artificial intelligence, Windows Azure

We’ve written many stories over the past year about cognitive computing, machine learning, and artificial intelligence — which are all, for lack of a better term, kissin’ cousins of modern-day computing. These are all growing in importance and taking on a larger presence. That means the big boys are going all in.

This week’s entry into artificial intelligence (AI) is Microsoft, which just launched its new Microsoft AI and Research Group, staffed with more than 5,000 people. This closely follows Microsoft’s late-August acquisition of AI startup Genee and September’s revelation that field-programmable gate array (FPGA) chips are now deployed in its Azure datacenters worldwide. That translates into highly scalable AI.

If you are an applications developer, you must add AI, cognitive computing, machine learning, and analytics expertise to your skills portfolio.

According to Microsoft, the mission behind this major investment is the “democratization” of AI for individuals and organizations, broadening accessibility, increasing its usefulness, and “ultimately enabling new ways to solve some of society’s toughest challenges.” Keep that word, democratization, close at hand; Microsoft is using it frequently in its corporate communications.

The buzzword is fine, but what does democratization encompass? According to the company’s statements, it comprises four key aspects: agents, applications, services, and infrastructure. That doesn’t seem very different from the garden-variety cloud computing we’re all dealing with today, suggesting natural evolution.

  • Agents, such as Microsoft’s digital personal assistant Cortana, are intended to harness AI’s capabilities to change human and computer interaction.
  • Applications, ranging from smartphone photo apps to Skype and Office 365, will be infused with cognitive capabilities — vision and speech — though what that means in practical terms isn’t clear.
  • Services, including the aforementioned vision and speech, along with analytics, will be made available to application developers.
  • Infrastructure, essentially Azure-based AI supercomputers, will be available to any individual and organization.

What’s really going on here? A likely underlying strategy is to inject new life into the Windows universe. We are living at a time when the importance, influence, and ubiquity of Windows is on the wane. With the failure of the Windows phone platform (several times over), it’s easy for businesses to go all in on iOS and Android for their mobile computing needs. The Microsoft Surface hardware business is a last-gasp effort to keep Windows alive other than on the desktop.

We are living at a time when the importance, influence, and ubiquity of Windows is on the wane.

The path to future career success is coming into clear focus. Even the White House is requesting more information about AI, a clear indication of the technology’s importance. If you are an applications developer, you must add AI, cognitive computing, machine learning, and analytics expertise to your skills portfolio. Microsoft itself is going on a hiring frenzy to transform its AI vision into reality.

What is your comfort level with AI? Are you currently working on projects that involve AI and cognitive computing? What do you expect the future to look like? Share your thoughts and concerns; we’d like to hear from you.
