Written by Alan Earls, SearchMicroservices.com contributor
The movement towards Java-based microservices led some to wonder whether this development might curtail the use of .NET and negatively affect .NET developers. There is still a sprawling Java ecosystem, and many see Linux as the de facto standard for containers. There’s an argument that the future of Java and the Java virtual machine in a containerized, orchestrated world could eventually pose a threat to .NET.
“Java makes a much more compelling argument as the enterprise language for writing high-scale microservices,” said Tal Weiss, CTO and co-founder of OverOps, a software analytics company focused on large-scale Java and Scala code bases.
But there isn’t necessarily any evidence that microservices will drive .NET into obscurity. Microsoft made a concerted effort to improve the process of developing microservices in .NET with Azure.
“They are putting a lot of weight behind promoting .NET for Azure adoption, and they’ve made it easier to spin up a microservices environment in .NET than it was five years ago,” said David McCarter, a Microsoft consultant. For instance, Microsoft pushes .NET and its Azure platform for organizations that want to refactor applications and run them in the cloud.
But it’s important to keep in mind that many organizations want to remain vendor-agnostic and work with a broad scope of technology providers. It used to be that companies would select one or the other — or perhaps reserve one for only certain roles. But microservices are conceptual, and not tied to a specific application framework like Java or .NET. Expect Go, Python and Node.js to remain big names in the microservices space.
Industry experts say that ESBs are dead, but as Ira Gershwin wrote — it ain’t necessarily so.
Gartner VP Roy Schulte coined the term enterprise service bus (ESB) in 2002 during the technology’s early days, and just two years later some IT experts already declared it dead. More recently, ESB product sales have flatlined, as the centralized, hub-and-spoke IT environments ESBs support are replaced by loosely coupled cloud middleware, namely iPaaS and MWaaS, said Elizabeth Golluscio, a Gartner analyst.
But enterprises aren’t dumping ESB products willy-nilly, because ESB products handle requirements newcomer technologies can’t. Also, enterprise IT has found that breaking up with monolithic ESB-based middleware is hard to do.
Middleware revenues will peak at $30 billion this year, but despite the double-digit pace of iPaaS and MWaaS investments, their sales will account for only a fraction of that market. Most middleware spending will go to maintaining and upgrading ESB and SOA infrastructures, as large enterprises continue to do complex integrations and messaging that iPaaS can’t handle, such as data-intensive and hybrid integration processes, said Saurabh Sharma, principal analyst for Ovum, a UK IT research firm.
On the messaging side, an ESB product is best-suited for pub/sub messaging in asynchronous communications and message queues, Golluscio said. “The messaging model for integration or event handling is not something that the iPaaS tools do well yet,” she said. However, iPaaS vendors have pulled together classic integrations, API management and event mediations, and on the open source messaging side, Confluent and Kafka are coming on strong, she said.
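Golluscio's point about the pub/sub messaging model is easier to see in code. The toy broker below is a minimal, illustrative sketch only: it shows how publishers stay decoupled from subscribers, while real ESBs and systems like Kafka add the durability, ordering and delivery guarantees this deliberately omits. All names here are invented for the example.

```python
from collections import defaultdict

class ToyBroker:
    """Minimal in-memory pub/sub broker (illustration only)."""

    def __init__(self):
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher never knows who (if anyone) consumes the message;
        # that decoupling is the essence of the pub/sub model.
        for handler in self._subscribers[topic]:
            handler(message)

broker = ToyBroker()
received = []
broker.subscribe("orders", received.append)
broker.subscribe("orders", lambda m: received.append(m.upper()))
broker.publish("orders", "order-42 created")
```

Both subscribers receive the event independently, and a publish to a topic with no subscribers is simply a no-op, which is why adding a new consumer never requires touching the producer.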
Despite the strengths of on-premise ESB products, most of their capabilities will inevitably move to the cloud — but this is not a simple process. “It is very hard to move away from ESBs, which literally have tentacles connected to every important system in an enterprise IT environment,” Golluscio said. Change management factors and reaching consensus between business and IT ultimately slow down MWaaS adoption, Sharma noted. Both analysts say a shortage of ESB expertise is a hurdle.
To break up ESB product monoliths, some IT teams adopt a phased approach to focus on new integration projects with only MWaaS, Sharma said. Integration workloads that run on-premises gradually migrate to iPaaS, and developers will chip off ESB features into microservices.
In many cases, enterprise IT adds an iPaaS and an API management tool to their integration infrastructure, Golluscio said. They start with an ESB-based system, use an Extract-Transform-Load (ETL) tool for data integration, and then migrate to iPaaS and implement API management.
Reduced dependence on legacy ESB products is a challenge but pays dividends, with lower costs for both technology and staff. “The old joke was that the ESB middleware market was like the haves and the have-nots; only a certain number of companies had the money to buy and the resources to deal with ESBs,” Golluscio said.
iPaaS’ ease of use and pay-per-use attracts companies that lack the internal expertise or money for legacy ESBs. They certainly don’t require a deep bench of rocket scientists to use, Golluscio said.
“The democratization of enterprise tech has finally hit the integration space,” she said.
Some businesses start software modernization projects to add sophisticated features, such as artificial intelligence and mobility, but that may be akin to icing a moldy cake. Software teams can add new application architectures, technologies and platforms as separate projects, but experts say they should use modernization projects to address the core weaknesses of legacy applications.
Traverse Clayton, a Gartner analyst, explained the reasons organizations should tackle software modernization projects in a recent interview.
“The primary drivers for application modernization are encroaching obsolescence and massive amounts of technical debt,” he said. If a legacy app doesn’t foster agility, likely culprits are platform, language or developer skill obsolescence.
It’s a mistake to fold mobility and AI adoption into modernization projects, Clayton added. He said that less than five percent of current or in-flight modernization programs mark mobility as a key concern or goal. Likewise, modernization has almost nothing to do with AI.
“You can modernize the data from your application, which is then available for analytics and AI workloads,” Clayton said. But the only time AI and BI rear their heads is when the organization looks to do something different than it’s ever done before, which is a small percentage of companies, he added.
Who’s ready for software modernization?
Typically, large corporations have aging systems and massive amounts of technical debt, Clayton said. Because of that debt, many companies aren’t nearly ready to modernize. Their Agile and DevOps skills are not adequate to support modern architectures.
Some alternatives to software modernization include a rehost (e.g. lift and shift) or a replace strategy, since those require less of an engineering effort, Clayton said. However, there is some backlash to this approach, as rehost projects often fall short of their goals. Typically, stakeholders’ goals and expectations are greater than what a rehost can deliver.
Many enterprise DevOps teams go back to the drawing board and assess refactor, re-architect and rebuild strategies to realize better ROI and reap the benefits of cloud, Clayton said. This often involves an assessment of various PaaS options.
Software modernization tactics
In successful software modernization projects, teams prioritize the portfolio and rationalize which initiatives will provide the most value, Clayton said. During this process, project teams should use maturity assessment approaches, such as the Capability Maturity Model (CMM) and PACE (Product And Cycle-time Excellence). These assessments can help organizations create business value heat maps. Gartner features its own maturity assessment methodology called the Gartner IT Score.
When developers and architects modernize a particular platform, somebody needs to understand everything that is in the current environment, Clayton said. This is traditionally called applications rationalization. In software modernization, application rationalizations need to document every data flow, code flow and connection between every application.
“This data, which is normally stowed in a CMDB (configuration management database), needs to be overlaid with the business processes of the organization,” Clayton said.
Once business processes have been overlaid atop the CMDB, the software modernization team must determine which business flow uses particular assets within the company. These are the only assets an organization should move during modernization, Clayton said.
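Clayton's selection rule, keep only the assets that some business flow actually reaches, can be sketched as a simple traversal over the overlaid data. The CMDB extract, flow names and applications below are all hypothetical, and a real CMDB would require walking transitive dependencies rather than one level.

```python
# Hypothetical CMDB extract: application -> assets it depends on.
cmdb = {
    "billing-app": ["billing-db", "esb"],
    "crm-app": ["crm-db", "esb"],
    "old-report-tool": ["crm-db"],
}

# Business-process overlay: which applications each flow actually uses.
business_flows = {
    "order-to-cash": ["billing-app"],
    "customer-onboarding": ["crm-app"],
}

def assets_to_modernize(cmdb, business_flows):
    """Collect every application a business flow touches, plus its
    direct dependencies; everything else is not worth moving."""
    keep = set()
    for apps in business_flows.values():
        for app in apps:
            keep.add(app)
            keep.update(cmdb.get(app, []))
    return keep

keep = assets_to_modernize(cmdb, business_flows)
# Applications no flow reaches become retirement/replacement candidates.
retire = set(cmdb) - keep
```

Here `old-report-tool` falls out as a retirement candidate because no documented business flow depends on it, which mirrors the article's point: only assets a business process uses should move during modernization.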
Applications that are not slated for investment or modernization must be consolidated onto the platforms that remain in the ecosystem today. Each of these applications must then be targeted for retirement, replacement or integration into other applications, so the organization avoids running legacy platforms in a modern data center.
Quietly, serverless APIs have moved out of the “shiny new thing” phase to become enterprise software developers’ favorite way to bypass development overhead when delivering applications, APIs and microservice functions.
Rich Sharples, senior director of product management at Red Hat, said he was surprised to discover the popularity of serverless APIs at Red Hat Summit 2017 last May. During his presentation there, he remarked that he hadn’t spoken to many customers who were deploying business apps and APIs on serverless frameworks. That changed after his presentation.
“By the time I left the hall, I’d spoken to 15 customers who were using serverless APIs,” he said, adding that since then he’s talked to dozens more and heard of many other serverless API adopters. “Interest and adoption has ramped up very quickly.”
In serverless computing, a form of utility computing, a cloud service provider offers a pay-per-use compute runtime with a back-end that is invisible to the user. The cloud provider supplies and manages the servers and infrastructure, so that developers can focus on creating code.
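The division of labor described above shows up in the shape of the code a developer actually writes. The sketch below borrows the `(event, context)` handler convention popularized by AWS Lambda's Python runtime (the event fields are invented for illustration); everything about servers, scaling and billing is absent from the code, which is the point.

```python
def handler(event, context=None):
    """A function-as-a-service style handler: the platform passes in the
    triggering event; the developer writes only this business logic."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally we just call the function; on a serverless platform the
# provider invokes it per request and bills per invocation.
resp = handler({"name": "dev"})
```

Note there is no port to bind, no process to keep alive and no capacity to plan; the provider's invisible back end handles all of that, as the paragraph above describes.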
However, businesses are not building, for instance, their next airline booking system or CRM system using serverless, Sharples said.
“It isn’t at that level,” he explained. “But serverless is there in terms of doing critical business functions.”
What’s driving increased serverless API use? The main attractors are freedom from provisioning servers, autoscaling capabilities, a small learning curve for developers, increased speed of development and the pay-as-you-go model, Sharples said. With serverless, a developer just has to care about a small fragment of code.
“They can be more task-focused and really push the responsibility of scaling and managing the infrastructure to somebody else,” Sharples said.
Developers deploy APIs on serverless frameworks to bypass the complexity of creating and deploying microservices whenever possible, he said.
“As adoption of microservices increased, developers realized that building them is hard,” Sharples said, adding that they see an API can cover some functions or features that a microservice does.
Serverless APIs win when the choice is between building that function within minutes or hours as an API versus taking more time to build a microservice, Sharples explained.
“I’d challenge anybody to go build a resilient mesh of microservices in less than a few days,” Sharples said. He also said that serverless API users are making future plans around integration, design plans, event-driven architecture and next-generation applications.
Tempering his enthusiasm with reality, Sharples stressed that serverless is not a silver bullet. Deploying APIs on serverless frameworks is good for certain use cases that can benefit from the runtime efficiency.
“There is an envelope where serverless makes sense,” he said. “Outside the envelope, a more traditional long-running microservice would probably make a lot more sense.”
Many businesses have mature API management practices in place for integration via REST APIs and application APIs. Unfortunately, those API strategies can create data silos for most business intelligence and analytics practices. That’s a problem that will only grow as demand for advanced analytics increases, according to industry experts.
Today, analytics pros face the challenge of programmatically answering questions using a limited data set available from an API. API strategies at most organizations have a singular function: to give application developers the ability to extend functionality of applications through process integration.
“These APIs often have limited surface areas and leave many of the data developer roles behind [without resources], because they lack rich query capabilities and are not designed for the purposes of data integration,” said Sumit Sarkar, chief data evangelist for Progress, an independent software vendor.
Data APIs in 2018
Sarkar has seen businesses with well-built data API practices successfully provide continuous operations and real-time capabilities. These functions strengthen analytics efficiency by provisioning data access specifically designed for data managers and business analysts, he said.
There have been encouraging developments in the area of data API management, Sarkar said. There are three in particular that stand out.
One of these developments is the IT industry’s adoption of OData, an industry standard RESTful data API. OData provides standard query capabilities and works without additional code to access data from popular analytical tools, especially those in data visualization, such as Tableau, Qlik or Microsoft’s Power BI.
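OData's standard system query options ($filter, $select, $top and so on) are what let analytical tools query a data API without custom code. As a rough illustration of what a client assembles under the hood, here is a small query-URL builder; the service root and entity names are hypothetical, and the option syntax follows the OData URL conventions.

```python
from urllib.parse import urlencode

def odata_url(service_root, entity, filter_=None, select=None, top=None):
    """Build an OData query URL from the standard system query options."""
    params = {}
    if filter_:
        params["$filter"] = filter_          # e.g. "Amount gt 100"
    if select:
        params["$select"] = ",".join(select) # projection of properties
    if top:
        params["$top"] = str(top)            # limit result count
    # Keep "$" and "," readable rather than percent-encoded.
    query = urlencode(params, safe="$,")
    return f"{service_root}/{entity}" + (f"?{query}" if query else "")

url = odata_url("https://example.com/odata", "Orders",
                filter_="Amount gt 100", select=["Id", "Amount"], top=10)
```

A visualization tool issues exactly this kind of request, which is why, as Sarkar notes, tools like Tableau, Qlik and Power BI can consume an OData endpoint with no additional code.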
A second development is better support for third-party analytics tools. SQL access is now increasingly published to developer portals for cloud applications, such as Microsoft Dynamics, NetSuite or ServiceNow.
Finally, as API management strategies continue to centralize authentication, authorization and business logic processes, there are more enterprise demands for SQL access to enterprise REST APIs. To that end, analytics tool vendors may release products that support REST APIs directly this year. However, they must first figure out how to accommodate the wide range of differences between REST APIs. This includes support for different security schemes, varying payloads, invocation methods and query capabilities.
Strategies for data APIs
Organizations should determine the requirements from all stakeholders as a first step in their API strategy planning, Sarkar explained. Teams also need to come to a consensus on the processes that govern access to analytics. For example, APIs that expose detailed data for the purposes of data discovery or compatibility should be provisioned with enterprise reporting tools, such as SAP Business Objects.
“Expect those [stakeholder] teams to ultimately become consumers of your APIs, even if there are no use cases today,” he said.
There are a few questions Sarkar suggests teams ask when evaluating tools for managing data APIs. These include:
- Can it work with existing API strategies?
- Can it work together seamlessly with your existing security policies?
- Can it support necessary operational integrations to scale?
- Does it have support for open data standards such as Open Database Connectivity, Java Database Connectivity, and OData?
Sarkar predicts 2018 will bring a surge of interest among companies to merge management processes for app and data APIs. He also predicted more collaboration between business intelligence and analytics teams and the API design teams. By year’s end, the distinction between application-oriented APIs and data APIs will be blurred, which will topple data silos.
There’s a lot of information about DevOps out there, making it hard to determine exactly what it means. However, there are still key principles you can stick to in order to achieve success through DevOps. Here are four questions that, according to our experts, you should be asking when thinking about your DevOps strategy.
1. Have we established a zone of business impact?
DevOps tasks should be clearly connected to the applications they support, and businesses should in turn identify the business processes those applications support. This lets organizations map out what is referred to as a zone of business impact for each DevOps process, and it’s a fundamental part of DevOps documentation. That way, as expert Tom Nolle points out in his piece on the connection between DevOps and enterprise architecture, the impact any DevOps strategy has on business processes — even if the impact is simply a risk of disruption — can be planned for ahead of time. This will also ensure that development teams understand the business process lifecycle, or even lifecycles, that their application may impact.
2. Are we deploying continuously?
Implementing continuous delivery means that changes to your application are created and deployed on an ongoing basis. If your apps are not being delivered continuously, it’s important to start thinking about building a continuous delivery pipeline for them by making sure that everyone involved in app development and delivery is working together. According to expert Chris Tozzi in his piece on applying DevOps principles to app modernization, part of this means ensuring you have the infrastructure tools in place to roll out changes quickly from development to production.
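The gating behavior a continuous delivery pipeline enforces can be sketched in a few lines. This is purely illustrative (real pipelines live in CI/CD tooling, not hand-rolled scripts, and the stage names are invented): stages run in order, and a failure stops everything downstream so a broken change never reaches production.

```python
def run_pipeline(stages):
    """Run (name, stage) pairs in order; halt at the first failure."""
    completed = []
    for name, stage in stages:
        ok = stage()
        completed.append((name, ok))
        if not ok:
            break  # never run deploy after a failed test stage
    return completed

# A healthy run reaches deploy; a failing test gates the release.
good = run_pipeline([("build", lambda: True),
                     ("test", lambda: True),
                     ("deploy", lambda: True)])
bad = run_pipeline([("build", lambda: True),
                    ("test", lambda: False),
                    ("deploy", lambda: True)])
```

The `bad` run never executes its deploy stage, which is the property a continuous delivery pipeline exists to guarantee: changes flow to production continuously, but only when every earlier stage passes.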
3. Are non-IT departments involved?
Everyone who plays a role in software delivery, including those not in the IT department, should be plugged into a project, advises Tozzi in his piece on DevOps principles to apply to software architecture. Those in customer support, legal and HR departments have a stake in software production. A DevOps-friendly environment should ensure these non-IT stakeholders can collaborate with developers and IT operations staff as needed. Otherwise, nontechnical issues could hinder continuous delivery efforts.
4. Are we going all the way?
Those who do not go all out in terms of cultural acceptance and fully transitioning to all the required tools are going to have a hard time adopting a DevOps strategy, says Twain Taylor in his piece on common mistakes made when transitioning to DevOps. Some of this, he says, involves making significant changes to your infrastructure, like transitioning from servers and virtual machines to containers.
Like it or not, it appears that the skills gap that continues to plague many sections of the software world, including development, testing and more, has found a new victim: digital transformation through the use of machine learning.
A survey conducted by ServiceNow looked at the eagerness of organizations to incorporate machine learning as part of their digital transformation. Mainly, senior executives want to buy into machine learning in order to support faster and more accurate decision making. But the survey turned up some interesting numbers that point to what appears to be a significant lack of the machine learning skills needed to manage intelligent machines within organizations.
The report shows that 72% of CIOs surveyed said they are leading their company’s digitalization efforts, and just over half agree that machine learning plays a critical role in that. Nearly half (49%) say their companies are using machine learning, and 40% said that they plan to adopt it.
However, as ambitious as these CIOs are, a serious machine learning skills gap is occurring. Only 27% of those surveyed report having hired employees with skill sets related to intelligent machines, and just 40% of respondents have redefined job descriptions to highlight work with intelligent machines. Furthermore, 41% say they lack the skills to manage intelligent machines, and 47% of CIOs surveyed said they lack the budget for new skills development.
While a good portion of these companies appear to at least be making an effort to find the skills they need to make the most of machine learning, it’s striking that just over half of those CIOs who strongly believe machine learning is essential have managed to acquire the skills they need to make it happen within their organization.
But it’s no surprise that CIOs say they lack the budget for machine learning skills: According to Glassdoor.com, the national average salary in the U.S. for machine learning engineers is $128,000 a year. That’s a serious chunk of change, and hiring just one engineer probably isn’t enough. You also have to consider the upfront cost of machine learning when you add in the money you inevitably will have to spend on software, hardware or various other services your engineers will demand to use.
Once more, a bright sector of the software world suffers from the ongoing skills gap in the market, one that continues to drive up the cost of engineers and leave those who lack large budgets stuck behind the goliaths. And yet again, it shows that it’s high time to push for better computer science curricula in schools and to consider the value of software development and management talent coming out of developing countries, two things that may help both level the playing field and open up job opportunities for millions of people.
Docker and Hewlett Packard Enterprise (HPE) hit the road this month to demonstrate their recently launched Modernize Traditional Applications (MTA) program, designed to help enterprises update their existing legacy apps and move forward with their plans for digital transformation. The presentation was focused on their new program, but Docker’s MTA Roadshow also revealed some common misconceptions C-level execs have when it comes to public cloud hosting for workloads.
One misconception, explained John Orton, is that companies fail to assess workloads in order to determine where they are best suited to live and will perform their functions most efficiently. While some have been in a rush to migrate almost all of their workloads to public clouds, Orton pointed out that organizations should not forget that a large number of applications may work at least equally well, if not better, on a private cloud or even bare-metal infrastructure.
“I think there’s a lot of confusion in the market about workloads and where they should live,” Orton said, adding that too many companies who have migrated workloads to the public cloud did so without an “exit strategy” that allows them to easily move those workloads back to a private cloud or bare-metal infrastructure.
“Before we decide where these workloads live, let’s take a look at what existing value there is in our infrastructure,” Orton said.
Ken Lavoie, solution engineer at Docker, went so far as to liken C-level executives still pushing initiatives to pursue public cloud hosting to modern day “Don Quixotes,” chasing after a technology that he said has gone past its prime. By leveraging the power of container technology, he said, these enterprises can make use of private clouds or bare-metal infrastructure in a way that surpasses performance in the public cloud.
There’s certainly no reason to say that the days of public cloud hosting are over, but when a vendor like Docker, which basically “covers the bases” when it comes to deploying on the public vs. private cloud, takes this position, it makes you think that maybe they’re onto something. That certainly does not bode well for those still deploying in the public cloud, or the vendors invested in it.
Breaking your monolith out to a distributed architecture is certainly a complex task. But having a solid perspective of what microservices are at a basic level can go a long way in terms of forming your migration and development strategies as you shift to this new architectural paradigm.
We asked three software pros actively working with microservices to give us their simplest definition of microservices, and also to provide a little food for thought when it comes to microservice methodologies and planning. These engineers, architects and CTOs have all given presentations at software conferences about moving to microservices and have some fundamental advice for those getting started.
We’ve barely caught our breath from the emergence of containers, and already serverless is picking up as a notable trend in the world of enterprise applications. Big vendors are not wasting any time catching on. As software pros gear up for Oracle OpenWorld 2017, they may want to keep their ears open for announcements about what the goliath plans to do around serverless.
“We do believe that serverless is the future,” said Bob Quillin, vice president of the Oracle container group, in an interview I recently conducted with him about the company’s plans around containers and other infrastructure trends. “There’s a whole set of technologies that we’re investing in currently. We’ll be making some announcements around that at OpenWorld, so stay tuned there.”
It’s interesting that the company is already making announcements around serverless when the industry is still in the throes of adopting Docker and other technologies needed for application container management. And while Quillin admits that the sample size for utilization is still relatively small, he says the potential makes it worthwhile for the company to support.
“Probably 5% to 10% of applications may fall into the serverless kind of patterns that you could use today,” Quillin said. “But it’s tremendously exciting, and I think it’s an area that we’re definitely investing in.”
While it’s hard to say how much of a role serverless plays in handling enterprise-grade production application workloads, the buzz around this trend may be rapidly pushing it into enterprise application environments. In fact, 451 Research has already released a report that found that this infrastructure approach offers a lower cost of ownership than both virtual machines and containers for the majority of new applications — quite an endorsement.
“Obviously, it all depends on the precise situation,” explained Owen Rogers, research director at 451 Research. “But if you’re building a brand new application, maybe in that situation, serverless is going to be better value because you can build it the serverless way.”
So how does it save organizations cash? While it does not eliminate the need for servers, as the name might suggest, a serverless approach uses a third party’s servers for the purposes of application development. This means that the organization will not have to rent or provision servers or virtual machines to develop applications. The developers can write their code in response to events as they arise, and won’t have to worry about building and maintaining their own infrastructure.
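The pay-as-you-go savings come down to simple arithmetic: an always-on server bills for every hour whether or not it serves traffic, while a function bills per invocation. The prices below are made up purely for illustration, and a real serverless bill also includes per-GB-second compute charges that this sketch omits.

```python
def always_on_monthly_cost(hourly_rate, hours=730):
    """An always-on VM is billed for every hour in a ~730-hour month,
    regardless of how much traffic it actually serves."""
    return hourly_rate * hours

def serverless_monthly_cost(invocations, price_per_million):
    """A serverless function is billed per invocation (request pricing
    only; per-GB-second compute charges are omitted for simplicity)."""
    return invocations / 1_000_000 * price_per_million

# Hypothetical prices, chosen only to illustrate the shape of the math.
vm_cost = always_on_monthly_cost(hourly_rate=0.05)
fn_cost = serverless_monthly_cost(2_000_000, price_per_million=0.20)
```

For a lightly used function, the idle hours dominate the VM's bill, which is the cost argument 451 Research's report points to; for a service under constant heavy load, the per-invocation model can flip and become the more expensive option.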
Though Rogers says that large organizations are putting serverless to use for some day-to-day operations, such as automating processes based on events, identifying and logging events and transferring files, it seems like serverless is still a little ways off from becoming the standard for enterprises. This is especially true, he said, considering that there are still plenty of organizations who have yet to make the transition from virtual machines to containers.
“I don’t think serverless is going to take on the world,” Rogers said. “People have been saying that about containers for the past year, but we still see virtual machines everywhere. There’s so much legacy, so much stuff on virtual machines and containers, not all of them are going to shift to serverless overnight.”
Quillin also said that since many are still playing catch up with other development trends, serverless may take its fair share of time to catch on. However, he is inspired by the work he sees being done even at a small scale by some of today’s organizations.
“It’s at breakneck pace, and many companies are just getting into Docker and DevOps. But there are people who are leveraging serverless in a very powerful way,” Quillin said. “The potential is huge; the less you have to deal with infrastructure in the long-term the better. But there is some work that needs to be done.”