Microservices Matters


November 2, 2017  12:54 PM

Machine learning skills are lacking, CIOs lament

Fred Churchville
Artificial intelligence, CIO, Machine learning

Like it or not, it appears that the skills gap that continues to plague many corners of the software world, including development, testing and more, has found a new victim: digital transformation through the use of machine learning.

A survey conducted by ServiceNow looked at the eagerness of organizations to incorporate machine learning as part of their digital transformation. Mainly, senior executives want to buy into machine learning in order to support faster and more accurate decision making. But the survey also turned up some interesting numbers that point to a significant lack of the machine learning skills needed to manage intelligent machines within organizations.

The report shows that 72% of CIOs surveyed said they are leading their company’s digitalization efforts, and just over half agree that machine learning plays a critical role in that. Nearly half (49%) say their companies are already using machine learning, and 40% said that they plan to adopt it.

However, as ambitious as these CIOs are, a serious machine learning skills gap is emerging. Only 27% of those surveyed report having hired employees with skill sets related to intelligent machines, and just 40% of respondents have redefined job descriptions to highlight work with intelligent machines. Furthermore, 41% say they lack the skills to manage intelligent machines, and 47% of CIOs surveyed said they lack the budget for new skills development.

While a good portion of these companies appear to at least be making an effort to find the skills they need to make the most of machine learning, it’s striking that only just over half of those CIOs who strongly believe machine learning is essential have managed to acquire the skills they need to make it happen within their organizations.

But it’s no surprise that CIOs say they lack the budget for machine learning skills: According to Glassdoor.com, the national average salary in the U.S. for machine learning engineers is $128,000 a year. That’s a serious chunk of change, and hiring just one probably won’t be enough. You also have to consider the upfront cost of machine learning once you add in the money you will inevitably have to spend on the software, hardware and other services your engineers will need.

Once again, a bright sector of the software world suffers from the ongoing skills gap in the market, one that continues to drive up the cost of engineers and leave those who lack large budgets stuck behind the goliaths. And yet again, it shows that it’s high time to push for better computer science curricula in schools and to consider the value of software development and management talent coming out of developing countries, two things that may help both level the playing field and open up job opportunities for millions of people.

October 16, 2017  4:41 PM

Docker and HPE have an opinion on public cloud hosting

Fred Churchville
Deployment, Public Cloud

Mike Brito, sales engineering manager at HPE, explains to attendees of the Docker MTA Roadshow the basics of their MTA program.

Docker and Hewlett Packard Enterprise (HPE) hit the road this month to demonstrate their recently launched Modernize Traditional Applications (MTA) program, designed to help enterprises update their existing legacy apps and move forward with their plans for digital transformation. The presentation was focused on their new program, but Docker’s MTA Roadshow also revealed some common misconceptions C-level execs have when it comes to public cloud hosting for workloads.

One misconception, explained John Orton, is the idea that nearly every workload belongs in the public cloud; instead, organizations need to evaluate workloads in order to determine where they are best suited to live and will perform their functions most efficiently. While some have been in a rush to migrate almost all of their workloads to public clouds, Orton pointed out that organizations should not forget that there are a large number of applications that may work at least equally well, if not better, on a private cloud or even bare-metal infrastructure.

“I think there’s a lot of confusion in the market about workloads and where they should live,” Orton said, adding that too many companies who have migrated workloads to the public cloud did so without an “exit strategy” that allows them to easily move those workloads back to a private cloud or bare-metal infrastructure.

“Before we decide where these workloads live, let’s take a look at what existing value there is in our infrastructure,” Orton said.

Ken Lavoie, solution engineer at Docker, went so far as to liken C-level executives still pushing initiatives to pursue public cloud hosting to modern day “Don Quixotes,” chasing after a technology that he said has gone past its prime. By leveraging the power of container technology, he said, these enterprises can make use of private clouds or bare-metal infrastructure in a way that surpasses performance in the public cloud.

There’s certainly no reason to say that the days of public cloud hosting are over, but when a vendor like Docker, one that basically “covers the bases” when it comes to deploying on the public vs. private cloud, takes this position, it makes you think that maybe they’re onto something. That certainly does not bode well for those still deploying in the public cloud, or the vendors invested in it.


September 11, 2017  5:51 PM

Rounding up a simple definition of microservices

Fred Churchville
Microservices

Breaking your monolith out to a distributed architecture is certainly a complex task. But having a solid perspective of what microservices are at a basic level can go a long way in terms of forming your migration and development strategies as you shift to this new architectural paradigm.

We asked three software pros actively working with microservices to give us their simplest definition of microservices, and also to provide a little food for thought when it comes to microservice methodologies and planning. These engineers, architects and CTOs have all given presentations at software conferences about moving to microservices and have some fundamental advice for those getting started.



August 31, 2017  7:39 PM

Oracle OpenWorld may put spotlight on serverless

Fred Churchville
Infrastructure management, Infrastructure services, Serverless computing

We’ve barely caught our breath from the emergence of containers, and already serverless is picking up as a notable trend in the world of enterprise applications. And big vendors are not wasting any time in catching on. As software pros gear up for Oracle OpenWorld 2017, they may want to keep their ears open for announcements around what the goliath company plans to do around serverless.

“We do believe that serverless is the future,” said Bob Quillin, vice president of the Oracle container group, in an interview I recently conducted with him about the company’s plans around containers and other infrastructure trends. “There’s a whole set of technologies that we’re investing in currently. We’ll be making some announcements around that at OpenWorld, so stay tuned there.”

It’s interesting that the company is already making announcements around serverless when the industry is still in the throes of adopting Docker and other technologies needed for application container management. And while Quillin admits that the sample size for utilization is still relatively small, he said the potential makes it worthwhile for the company to support.

“Probably 5% to 10% of applications may fall into the serverless kind of patterns that you could use today,” Quillin said. “But it’s tremendously exciting, and I think it’s an area that we’re definitely investing in.”

Why serverless?

While it’s hard to say how much of a role serverless plays in handling enterprise-grade production application workloads, the buzz around this trend may be rapidly pushing it into enterprise application environments. In fact, 451 Research has already released a report that found that this infrastructure approach offers a lower cost of ownership than both virtual machines and containers for the majority of new applications — quite an endorsement.

“Obviously, it all depends on that precise situation,” explained Owen Rogers, research director at 451 Research. “But if you’ve built in a brand new application, maybe in that situation, serverless is going to be better value because you can build it the serverless way.”

So how does it save organizations cash? While it does not eliminate the need for servers, as the name might suggest, a serverless approach utilizes a third party’s servers for the purposes of application development. This means that the organization will not have to rent or provision servers or virtual machines to develop and run applications. The developers can write their code in response to events as they arise, and won’t have to worry about building and maintaining their own infrastructure.
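To make that concrete, here is a minimal sketch of what event-driven, serverless-style code can look like, written against AWS Lambda’s Java runtime. The RequestHandler interface is Lambda’s real handler API; the class name, event payload and processing logic are purely illustrative.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // The platform owns the servers, scaling and scheduling; the developer
    // supplies only the code that runs each time an event arrives.
    public class FileUploadedHandler implements RequestHandler<String, String> {
        @Override
        public String handleRequest(String objectKey, Context context) {
            // A real function might fetch the uploaded file, transform it and
            // write the result back to storage; here we simply log and return.
            context.getLogger().log("Handling upload event for: " + objectKey);
            return "processed:" + objectKey;
        }
    }

There is no server to rent, patch or decommission anywhere in that picture, which is where the savings Rogers describes come from.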

Tempering expectations

Though Rogers says that large organizations are putting serverless to use for some day-to-day operations, such as automating processes based on events, identifying and logging events and transferring files, it seems like serverless is still a little ways off from becoming the standard for enterprises. This is especially true, he said, considering that there are still plenty of organizations who have yet to make the transition from virtual machines to containers.

“I don’t think serverless is going to take on the world,” Rogers said. “People have been saying that about containers for the past year, but we still see virtual machines everywhere. There’s so much legacy, so much stuff on virtual machines and containers, not all of them are going to shift to serverless overnight.”

Quillin also said that since many are still playing catch up with other development trends, serverless may take its fair share of time to catch on. However, he is inspired by the work he sees being done even at a small scale by some of today’s organizations.

“It’s at breakneck pace, and many companies are just getting into Docker and DevOps. But there are people who are leveraging serverless in a very powerful way,” Quillin said. “The potential is huge; the less you have to deal with infrastructure in the long-term the better. But there is some work that needs to be done.”


August 11, 2017  4:37 PM

Agile, DevOps … are they only a dream?

Fred Churchville
Agile, Agile Methodologies, Agile software development, DevOps, DevOps - testing / continuous delivery

You can’t go to an application development conference these days without the terms Agile and DevOps being thrown around more than free t-shirts from vendors. But for all our talk about these prized development approaches and structures, how many of us actually implement Agile or DevOps in a way that helps development and isn’t just the waterfall approach with a fancy new title?

Such is the subject of senior TechTarget editor Valerie Silverthorne’s article on the state of Agile in 2017, where she questions industry analyst Jeffrey Hammond about the state of Agile today. His outlook, it seems, isn’t so pleasant when it comes to how people are actually implementing Agile: he said teams often get bogged down by focusing on “process purity” rather than the actual results, a practice that he thinks flies directly in the face of the Agile Manifesto. Hammond said that the conversations surrounding Agile are so focused on process that he can’t even stand going to Agile conferences anymore.

DevOps practitioners may fall into the same trap, though perhaps in a different way. Often you hear about companies implementing a “DevOps team” or hiring “a DevOps leader.” Both of those things run counter to what DevOps is meant to be: an ecosystem of small, independent teams that can carry an app from concept through production as a single unit without having to throw anything to another team.

Maybe the truth is that ideas like Agile and DevOps are just too romantic for reality. For instance, how many letters can we add on to DevOps until it actually encompasses every phase of the application lifecycle? BizDevOps? DevSecOps? BizDevSecOps? And for Agile — is there really going to be a case where development teams are allowed to simply do “whatever it takes” to continuously deploy apps at a breakneck pace? Companies are bound to have rules, and rules mean procedures. It seems to me that most organizations aren’t quite ready to completely let developers off the leash, lest they end up with a Mad Max-esque version of what used to be their software development department.

What do you think? Do you agree? Disagree? Let me know with your comments.


July 24, 2017  4:57 PM

What caused the applet to fall…and what’s next?

Fred Churchville
Applet, Applets, Java applets

Recently I was tasked with rewriting TechTarget’s definition for “applet,” one that I admittedly was not too familiar with. But after a little research, I realized I must have interacted with applets thousands, if not millions, of times before.

An applet, essentially, is a very small application designed to perform a very specific function within another application. Often these are web applications, and the applet runs through a plugin. They are typically used for things like checkboxes or buttons, but can be used to create small animations, fetch data, run threads, buffer videos and more — and they were a popular option for a long time.
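For those who never wrote one, here is a minimal sketch of the classic pattern: a tiny Java class, compiled and embedded in a web page, that the browser’s Java plugin instantiates and paints. (The java.applet API shown here has since been deprecated in modern JDKs.)

    import java.applet.Applet;
    import java.awt.Graphics;

    // The browser plugin creates the applet inside the page and calls paint()
    // whenever the embedded region needs to be redrawn.
    public class HelloApplet extends Applet {
        @Override
        public void paint(Graphics g) {
            g.drawString("Hello from an applet", 20, 20);
        }
    }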

But as is common in history, new technological players came along to usurp the web development throne. JavaScript, HTML5, JavaFX and other technologies have managed to beat out applets both in terms of browser support and extra functionality. Browser and device diversity have also hurt applets, as there are fewer guarantees than ever that users will have the right plugins installed. In fact, Google Chrome has already phased out support for certain plug-ins, which will render many applets pretty useless.

In addition to stiff competition, there are also some security-related issues surrounding applets. Applets, like many things, can be used for malicious purposes. As such, applets almost always trigger a security prompt asking the user if they want to run it, which may concern and turn away some users.

Of course, if you’re willing to do the legwork, it’s still possible to run a Java applet on a webpage. However, while it’s possible, the consensus surrounding their use seems to be: “Why would you?”

It seems like no matter how popular a technology becomes, there’s nothing that can last forever in the development world. It does make me wonder what will be the next methodology to bite the dust — and for what.

What do you think? Have you used applets in the past and ditched them in favor of new approaches? Do you think the applet is still viable today? What do you think is the next technology on the chopping block? Let us know with your comments.


July 7, 2017  6:34 PM

Digital transformation puts middleware on the mind

Fred Churchville

However organizations choose to interpret the meaning behind digital transformation, there is at least one thing that appears consistent: it’s got business decision makers thinking about their integration middleware.

A survey report titled The Great Middleware Transition, put out in cooperation between Aberdeen Group and the cloud-based integration provider Liaison Technologies, shows that the vast majority of organizations plan to make a change when it comes to their middleware. The report, authored by analyst Michael Caton, specifically found that:

  • 76% of those surveyed plan to fully or partially replace their integration middleware platforms
  • 84% of all middleware will be replaced in the next four years
  • 84% of companies surveyed have 50 or more business applications to integrate

Caton said that the high numbers garnered from the survey were surprising. However, it is not surprising that more organizations are looking to the cloud. As such, he said the industry should prepare to witness a paradigm shift in how organizations manage their software infrastructure.

“IT organizations are about to undertake one of the most dramatic infrastructure shifts we’ve seen in 20 years, and the transition will be to the cloud,” Caton said. “This shift makes sense as companies look for greater flexibility, scalability and predictable cost models.”

However, while the cloud appears to be a popular destination, the report also found that 30% of organizations are still considering on-premises middleware management services. It makes sense that there are plenty of security-conscious organizations out there — financial institutions and the like — who no doubt prefer the on-premises approach. Still, it seems a little surprising that organizations would actually plan a middleware migration to another on-premises system as part of their so-called digital transformation.

Why are organizations making the switch? The survey found that many IT departments feel pressure to keep integration costs down while the number of integrations that need to occur between increasingly disparate applications and data sources grows. With compliance and security also a growing concern, for many this means shifting from DIY approaches like iPaaS to leveraging managed services like Red Hat’s JBoss platform, Oracle’s Fusion platform or Liaison’s ALLOY platform.

This is certainly not the first we at SearchMicroservices.com have heard of organizations feeling the pressure to ditch their old application middleware, not just in favor of the cloud but also in favor of API-centric approaches. And it’s not just about keeping internal applications integrated; experts have been talking about the need for organizations to consider API-based approaches for B2B data integration as well.

But as organizations think about changing their middleware technology, they should also think about how that middleware ultimately fits into their application infrastructure. Specifically, organizations should think about how to keep middleware from becoming an application performance bottleneck. They should also think critically about who should be in charge of that middleware if an organization chooses to manage it themselves.

What do you think about the changing middleware landscape, or digital transformation in general? Let us know with your comments.


June 9, 2017  4:48 PM

OpenStack certification: Taking the COA exam

Fred Churchville

Are you using OpenStack? Maybe not yet, but it may be in your future. According to the OpenStack Foundation Annual Survey, the percentage of OpenStack deployments in full production rose from 49% in April 2015 to 65% in April 2017. And the trend shows no sign of significantly slowing down.

As the adoption of OpenStack increases, the need for management skills and tooling increases as well. Organizations will start looking for OpenStack expertise within their own ranks. And those with OpenStack skills listed on their résumé are likely to be in high demand.

Unfortunately, the OpenStack infrastructure can still be a little complex for newcomers to jump into, due to its variety of features and sheer scale. However, by utilizing the right tooling and educational resources, becoming an OpenStack expert is within reach.

If you think OpenStack may be on your doorstep soon, it’s worth looking into taking the Certified OpenStack Administrator (COA) exam. Even if you’ve been using OpenStack for a while, this is a good opportunity to verify your expertise and add another in-demand skill to your résumé.

What is the Certified OpenStack Administrator exam?

The COA exam is a vendor-neutral test of an IT professional’s familiarity and competency with the core components of OpenStack. The exam avoids the nuances found between the ever-changing versions of OpenStack and instead seeks to determine whether someone can use OpenStack at the most basic level.

How does it work?

The COA exam is a skills-based test. Takers are put into a miniature production environment and are required to perform tasks or solve problems using the command line interface and Horizon dashboard, based on OpenStack Liberty. Proctors monitor the exam by streaming audio, video and screen-sharing feeds. Candidates will not be graded on the specific commands they use, but rather on the final state of the environment.

The exam can be taken on SuSE or Ubuntu. Candidates are tested on 10 specific aspects of OpenStack:

  • Getting to know OpenStack
  • Identity management
  • Dashboard
  • Compute
  • Object storage
  • Block storage
  • Networking
  • Heat/Orchestration
  • Troubleshooting
  • Image management

Find more specifics about the requirements on the COA requirements page.

How to prepare

It’s recommended that candidates have six months of professional experience before they take the exam. However, there are plenty of training courses and classes available that can help you prepare, perhaps sooner if need be.

For example, the OpenStack Foundation marketplace offers training resources, including courses from vendors like HP. If you are new to OpenStack, The Linux Foundation offers an OpenStack course that provides videos, downloadable study guides and hands-on lab training aimed at OpenStack certification.

Requirements and OpenStack certification time

Test takers will have to provide their own front-end hardware with Chrome or Chromium browser, internet access and a microphone. You do not need your own Linux installation or VM. Use the compatibility check tool to verify that you meet the requirements.

Right now the exam costs $300, but check the exam info page to verify the cost in case that changes. Once scheduled, you are given 12 months to complete the exam, with one free retake allowed during the 12-month period.

The OpenStack certification lasts for three years. After that, candidates will have to re-take the COA exam to remain certified.


May 12, 2017  6:14 PM

Three questions to ask when forming an API strategy

Fred Churchville
API, API development, API management, APIs

There are plenty of tools out there promising to help make the most of your APIs. But without a proper API strategy, you risk spending time and money on processes and investments that don’t really serve your business as a whole and keep you from making the most of APIs.

Manfred Bortenschlager, director of business development for API management at Red Hat, gave a presentation at the company’s 2017 Summit in Boston where he talked to attendees about how they should think about their API strategy.

In his talk, Bortenschlager laid out three questions that you should ask yourself when it comes to your API strategy.

Question #1: Why do we want to implement APIs?

First, he explained, it’s important to find an answer to how the API can align with an organization’s goals. This includes identifying your most valuable use cases. For example, do you need mobile and IoT support? Are you worried about a partner or customer ecosystem? Are you going to charge directly for the use of your API if you make it available?

It helps to “think outside the box” when asking this question as well. For instance, consider the API use case of Amsterdam’s Schiphol Airport, which published its API platform offering in hopes that developers, including those from other travel companies and airlines, will use API access to improve the airline passenger experience in creative ways.

Question #2: What concrete outcomes do we want to achieve?

What exactly do you want your API to achieve? In order to answer this question, Bortenschlager explained, it needs to be thought about from two distinct perspectives: an external one and an internal one.

From an external perspective, is there an API available either through open source or licensing that can help you achieve a specific goal? Don’t spend your time reinventing the wheel if there is already an option out there that meets your needs.

From an internal perspective, do you have capabilities or unique data that could serve your company well from either a revenue or marketing perspective? If so, it may make sense to expose that data or those unique applications as APIs that the community can either use freely or license from you.
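As a rough illustration of that internal case, here is a minimal sketch of exposing a piece of internal data as a simple HTTP API using the JDK’s built-in com.sun.net.httpserver package. The endpoint name and payload are hypothetical, and a production API would layer authentication, versioning and management on top.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Serves a single read-only endpoint that exposes internal data as JSON.
    public class InventoryApi {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/inventory", exchange -> {
                byte[] body = "{\"widgets\": 42}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }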

Considering these two perspectives should help to establish the tactics you employ as part of your API strategy, such as your plans for operations or your marketing strategy.

Question #3: How will we execute the API program?

Once the need and concrete objectives of the API strategy have been established, it’s time to determine execution. Bortenschlager advised that there are a few factors to take into account here, including what the actual value of the API is, how the API will be delivered and how you will capitalize on the API.

All of these things, he said, should be covered in API management, which is a key part of making any API strategy work. But, he warned that too many companies fail to think about the entire API lifecycle, from concept to end-of-life. Comprehensive API management requires that an organization think not just about the conception, creation, distribution and marketing of that API, but how it will ultimately either be updated or, if necessary, retired. Otherwise, you risk creating a mish-mosh of either useless or poorly performing APIs that will hurt your business or may make your company seem inattentive.

Bortenschlager also pointed out the importance of utilizing a centralized API manager and creating a developer portal that is easily accessible to your software teams. Look for API management products that offer this centralized management and access.

Answering these three questions alone may not be enough to nail down your entire API strategy, but it can at least put you on the right path towards driving more business value through your APIs. Bortenschlager’s site offers a variety of insights and information regarding API strategies and management that are worth browsing and learning from, including roundups of API articles from around the web.


April 19, 2017  4:10 PM

Oracle nudges enterprises towards using Docker

Fred Churchville
Application containerization, containers, Docker

Oracle has pushed itself further into the Docker community by allowing developers to pull images of its flagship databases and developer tools through the Docker Store.

Effective immediately, developers can pull images of Oracle products including Oracle Database, Oracle MySQL, Oracle Java 8 SE Runtime Environment and Oracle Coherence. These arrive alongside over 100 images of Oracle products that are already available in the Docker Hub, including Open JDK and Oracle Linux. This move is reportedly occurring through the Docker Certification Program, a framework for partners to integrate and certify their technology to the Docker EE commercial platform.

Encouragement to the enterprise

According to Mark Cavage, vice president of software development at Oracle, this move is aimed at encouraging enterprises to lower their guard when it comes to using Docker for the building and deployment of mission-critical applications and systems.

“Docker is revolutionizing the way developers build and deploy modern applications, but mission-critical systems in the enterprise have been a holdout until now,” Cavage said. “Together with Docker, Oracle is bringing bedrock software to millions of developers enabling them to create enterprise-grade solutions that meet stringent security, performance and resiliency SLAs with the high level of productivity and low friction that they have come to expect from Dockerizing their application development stack.”

A hesitant enterprise, for better or worse

Surveys about the use of application containers in the enterprise have shown that a rapidly increasing number of enterprises are interested in Docker, with its popularity outpacing other explosive trends like PaaS and DevOps.

However, there has traditionally been hesitancy amongst enterprises to use application containers like Docker for mission-critical applications, due, for instance, to concerns pertaining to multi-tenant security and data persistence. As such, many have opted to either stick with virtual machines or find some sort of combination of the two rather than deploying solely using Docker containers.

However, Docker has made many changes to its services that have well prepared it for enterprise use, and the addition of high-level support from Oracle may just be the nudge enterprises need to come out from hiding behind their VMs.

What do you think? Does Oracle’s new availability in the Docker Hub change your mind about using Docker in the enterprise? Let us know with your comments.

Read the full copy of the Oracle press release about this new release.

