Microservices Matters


September 11, 2017  5:51 PM

Rounding up a simple definition of microservices

Fred Churchville
Microservices

Breaking your monolith out to a distributed architecture is certainly a complex task. But having a solid perspective of what microservices are at a basic level can go a long way in terms of forming your migration and development strategies as you shift to this new architectural paradigm.

We asked three software pros actively working with microservices to give us their simplest definition of microservices, and also to provide a little food for thought when it comes to microservice methodologies and planning. These engineers, architects and CTOs have all given presentations at software conferences about moving to microservices and have some fundamental advice for those getting started.


August 31, 2017  7:39 PM

Oracle OpenWorld may put spotlight on serverless

Fred Churchville
Infrastructure management, Infrastructure services, Serverless computing

We’ve barely caught our breath from the emergence of containers, and already serverless is picking up as a notable trend in the world of enterprise applications. And big vendors are not wasting any time in catching on. As software pros gear up for Oracle OpenWorld 2017, they may want to keep their ears open for announcements around what the goliath company plans to do around serverless.

“We do believe that serverless is the future,” said Bob Quillin, vice president of the Oracle container group, in an interview I recently conducted with him about the company’s plans around containers and other infrastructure trends. “There’s a whole set of technologies that we’re investing in currently. We’ll be making some announcements around that at OpenWorld, so stay tuned there.”

It’s interesting that the company is already making announcements around serverless when the industry is still in the throes of adopting Docker and other technologies needed for application container management. And while Quillin admits that the sample size for utilization is still relatively small, he says the potential makes it worthwhile for the company to support.

“Probably 5% to 10% of applications may fall into the serverless kind of patterns that you could use today,” Quillin said. “But it’s tremendously exciting, and I think it’s an area that we’re definitely investing in.”

Why serverless?

While it’s hard to say how much of a role serverless plays in handling enterprise-grade production application workloads, the buzz around this trend may be rapidly pushing it into enterprise application environments. In fact, 451 Research has already released a report that found that this infrastructure approach offers a lower cost of ownership than both virtual machines and containers for the majority of new applications — quite an endorsement.

“Obviously, it all depends on that precise situation,” explained Owen Rogers, research director at 451 Research. “But if you’ve built in a brand new application, maybe in that situation, serverless is going to be better value because you can build it the serverless way.”

So how does it save organizations cash? While it does not eliminate the need for servers, as the name might suggest, a serverless approach will utilize a third party’s servers for the purposes of application development. This means that the organization will not have to rent or provision servers or virtual machines to develop applications. The developers can write their code in response to events as they arise, and won’t have to worry about building and maintaining their own infrastructure.
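As a rough illustration of that event-driven model, below is a minimal sketch of a serverless function written as a Python function-as-a-service handler. The handler signature, event shape and field names are assumptions for illustration only; each provider defines its own conventions.

    import json

    def handle_upload(event, context):
        """Illustrative serverless handler: the provider runs this code only
        when an upload event fires, so no server sits idle in between."""
        # The event payload shape is an assumption; real providers define their own.
        record = event.get("detail", {})
        size_mb = record.get("size_bytes", 0) / 1e6

        # Business logic goes here; the provider bills only for this execution time.
        result = {"file": record.get("name"), "size_mb": round(size_mb, 2)}
        return {"statusCode": 200, "body": json.dumps(result)}

The point is that the developer writes and deploys only the handler; the servers that actually run it belong to, and are managed by, the provider.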

Tempering expectations

Though Rogers says that large organizations are putting serverless to use for some day-to-day operations, such as automating processes based on events, identifying and logging events and transferring files, it seems like serverless is still a little ways off from becoming the standard for enterprises. This is especially true, he said, considering that there are still plenty of organizations who have yet to make the transition from virtual machines to containers.

“I don’t think serverless is going to take on the world,” Rogers said. “People have been saying that about containers for the past year, but we still see virtual machines everywhere. There’s so much legacy, so much stuff on virtual machines and containers, not all of them are going to shift to serverless overnight.”

Quillin also said that since many are still playing catch up with other development trends, serverless may take its fair share of time to catch on. However, he is inspired by the work he sees being done even at a small scale by some of today’s organizations.

“It’s at breakneck pace, and many companies are just getting into Docker and DevOps. But there are people who are leveraging serverless in a very powerful way,” Quillin said. “The potential is huge; the less you have to deal with infrastructure in the long-term the better. But there is some work that needs to be done.”


August 11, 2017  4:37 PM

Agile, DevOps … are they only a dream?

Fred Churchville
Agile, Agile Methodologies, Agile software development, DevOps, DevOps - testing / continuous delivery

You can’t go to an application development conference these days without the terms Agile and DevOps being thrown around more than free t-shirts from vendors. But for all our talk about these prized development approaches and structures, how many of us actually implement Agile or DevOps in a way that helps development and isn’t just the waterfall approach with a fancy new title?

Such is the subject of senior TechTarget editor Valerie Silverthorne’s article on the state of Agile in 2017, in which she questions industry analyst Jeffrey Hammond about where Agile stands today. His outlook on how people are actually implementing Agile isn’t so pleasant: he says teams often get bogged down by focusing on “process purity” rather than actual results — a practice he thinks flies directly in the face of the Agile Manifesto. Hammond said the conversations surrounding Agile are so focused on process that he can’t even stand going to Agile conferences anymore.

DevOps practitioners may fall into the same trap, though perhaps in a different way. Often you hear about companies implementing a “DevOps team” or hiring “a DevOps leader.” Both of those things run opposite of what DevOps is meant to be: an ecosystem of small, independent teams that can carry an app from concept through production as a single unit without having to throw anything to another team.

Maybe the truth is that ideas like Agile and DevOps are just too romantic for reality. For instance, how many letters can we add on to DevOps until it actually encompasses every phase of the application lifecycle? BizDevOps? DevSecOps? BizDevSecOps? And for Agile — is there really going to be a case where development teams are allowed to simply do “whatever it takes” to continuously deploy apps at a breakneck pace? Companies are bound to have rules, and rules mean procedures. It seems to me that most organizations aren’t quite ready to completely let developers off the leash, lest they end up with a Mad Max-esque version of what used to be their software development department.

What do you think? Do you agree? Disagree? Let me know with your comments.


July 24, 2017  4:57 PM

What caused the applet to fall…and what’s next?

Fred Churchville
Applet, Applets, Java applets

Recently I was tasked with rewriting TechTarget’s definition for “applet,” one that I admittedly was not too familiar with. But after a little research, I realized I must have interacted with applets thousands, if not millions, of times before.

An applet, essentially, is a very small application designed to perform a very specific function within another application. Often these are web applications, and the applet runs through a plugin. They are typically used for things like checkboxes or buttons, but can be used to create small animations, fetch data, run threads, buffer videos and more — and they were a popular option for a long time.

But as is common in history, new technological players came along to usurp the web development throne. JavaScript, HTML5, JavaFX and other technologies have managed to beat out applets both in terms of browser support and extra functionality. Browser and device diversity have also hurt applets, as there are fewer guarantees than ever that users will have the right plugins installed. In fact, Google Chrome has already phased out support for certain plug-ins, which will render many applets pretty useless.

In addition to stiff competition, there are also some security-related issues surrounding applets. Applets, like many things, can be used for malicious purposes. As such, applets almost always trigger a security prompt asking the user if they want to run it, which may concern and turn away some users.

Of course, if you’re willing to do the legwork, it’s still possible to run a Java applet on a webpage. However, while it’s possible, the consensus surrounding their use seems to be: “Why would you?”

It seems like no matter how popular a technology becomes, there’s nothing that can last forever in the development world. It does make me wonder what will be the next methodology to bite the dust — and for what.

What do you think? Have you used applets in the past and ditched them in favor of new approaches? Do you think the applet is still viable today? What do you think is the next technology on the chopping block? Let us know with your comments.


July 7, 2017  6:34 PM

Digital transformation puts middleware on the mind

Fred Churchville

However organizations choose to interpret the meaning behind digital transformation, there is at least one thing that appears consistent: it’s got business decision makers thinking about their integration middleware.

A survey report titled The Great Middleware Transition, produced jointly by Aberdeen Group and cloud-based integration provider Liaison Technologies, shows that a vast majority of organizations plan to make a change when it comes to their middleware. The report, authored by analyst Michael Caton, specifically found that:

  • 76% of those surveyed plan to fully or partially replace their integration middleware platforms
  • 84% of all middleware will be replaced in the next four years
  • 84% of companies surveyed have 50 or more business applications to integrate

Caton said that the high numbers garnered from the survey were surprising. However, it is not surprising that more organizations are looking to the cloud. As such, he said the industry should prepare to witness a paradigm shift in how organizations manage their software infrastructure.

“IT organizations are about to undertake one of the most dramatic infrastructure shifts we’ve seen in 20 years, and the transition will be to the cloud,” Caton said. “This shift makes sense as companies look for greater flexibility, scalability and predictable cost models.”

However, while the cloud appears to be a popular destination, the report also found that 30% of organizations are still considering on-premises middleware management services. It makes sense that there are plenty of security-conscious organizations out there — financial institutions and the like — who no doubt prefer the on-premises approach. Still, it seems a little surprising that organizations would actually plan a middleware migration to another on-premises system as part of their so-called digital transformation.

Why are organizations making the switch? The survey found that many IT departments feel pressure to keep integration costs down while the number of integrations that need to occur between increasingly disparate applications and data sources grows. With compliance and security also a growing concern, for many this means shifting from DIY approaches like iPaaS to leveraging managed services like Red Hat’s JBoss platform, Oracle’s Fusion platform or Liaison’s ALLOY platform.

This is certainly not the first we at SearchMicroservices.com have heard of organizations feeling the pressure to ditch their old application middleware, not just in favor of the cloud but also in favor of API-centric approaches. And it’s not just about keeping internal applications integrated; experts have been talking about the need for organizations to consider API-based approaches for B2B data integration as well.

But as organizations think about changing their middleware technology, they should also think about how that middleware ultimately fits into their application infrastructure. Specifically, organizations should think about how to keep middleware from becoming an application performance bottleneck. They should also think critically about who should be in charge of that middleware if an organization chooses to manage it themselves.

What do you think about the changing middleware landscape, or digital transformation in general? Let us know with your comments.


June 9, 2017  4:48 PM

OpenStack certification: Taking the COA exam

Fred Churchville

Are you using OpenStack? Maybe not yet, but it may be in your future. According to the OpenStack Foundation Annual Survey, the share of OpenStack deployments running in full production rose from 49% in April 2015 to 65% in April 2017. And the trend shows no sign of significantly slowing down.

As the adoption of OpenStack increases, the need for management skills and tooling increases as well. Organizations will start looking for OpenStack expertise within their own ranks. And those with OpenStack skills listed on their résumé are likely to be in high demand.

Unfortunately, the OpenStack infrastructure can still be complex for newcomers to jump into, given its variety of features and sheer scale. However, with the right tooling and educational resources, becoming an OpenStack expert is within reach.

If you think OpenStack may be on your doorstep soon, it’s worth looking into taking the Certified OpenStack Administrator (COA) exam. Even if you’ve been using OpenStack for a while, this is a good opportunity to verify your expertise and add another in-demand skill to your résumé.

What is the Certified OpenStack Administrator exam?

The COA exam is a vendor-neutral test of an IT professional’s familiarity and competency with the core components of OpenStack. The exam avoids the nuances found between the ever-changing versions of OpenStack and instead seeks to determine whether someone can use OpenStack at the most basic level.

How does it work?

The COA exam is a skills-based test. Takers are put into a miniature production environment and are required to perform tasks or solve problems using the command line interface and Horizon dashboard, based on OpenStack Liberty. Proctors monitor the exam by streaming audio, video and screen-sharing feeds. Candidates will not be graded on the specific commands they use, but rather on the final state of the environment.

The exam can be taken on SuSE or Ubuntu. Candidates are tested on 10 specific aspects of OpenStack:

  • Getting to know OpenStack
  • Identity management
  • Dashboard
  • Compute
  • Object storage
  • Block storage
  • Networking
  • Heat/Orchestration
  • Troubleshooting
  • Image management

Find more specifics about the requirements on the COA requirements page.
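The exam itself is performed through the command line and the Horizon dashboard, but if you want to get familiar with these resource types ahead of time, the same objects can be inspected programmatically. Below is a minimal, illustrative sketch using the openstacksdk Python library; the cloud name "mycloud" is an assumption and refers to an entry in your clouds.yaml file.

    import openstack

    # "mycloud" is a placeholder; it must match an entry in your clouds.yaml.
    conn = openstack.connect(cloud="mycloud")

    # Compute: list servers and their status (maps to the Compute topic).
    for server in conn.compute.servers():
        print("server:", server.name, server.status)

    # Networking: list networks (maps to the Networking topic).
    for network in conn.network.networks():
        print("network:", network.name)

    # Identity: list projects (maps to the Identity management topic).
    for project in conn.identity.projects():
        print("project:", project.name)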

How to prepare

It’s recommended that candidates have six months of professional experience before they take the exam. However, there are plenty of training courses and classes available that can help you prepare, perhaps sooner if need be.

For example, the OpenStack Foundation marketplace offers training resources, including courses from vendors like HP. If you are new to OpenStack, The Linux Foundation offers an OpenStack course that provides videos, downloadable study guides and hands-on lab training aimed at OpenStack certification.

Requirements and OpenStack certification time

Test takers will have to provide their own front-end hardware with Chrome or Chromium browser, internet access and a microphone. You do not need your own Linux installation or VM. Use the compatibility check tool to verify that you meet the requirements.

Right now the exam costs $300, but check the exam info page to verify the cost in case that changes. Once scheduled, you are given 12 months to complete the exam, with one free retake allowed during the 12-month period.

The OpenStack certification lasts for three years. After that, candidates will have to re-take the COA exam to remain certified.


May 12, 2017  6:14 PM

Three questions to ask when forming an API strategy

Fred Churchville
API, API development, API management, APIs

There are plenty of tools out there promising to help make the most of your APIs. But without a proper API strategy, you risk spending time and money on processes and investments that don’t really serve your business as a whole and keep you from making the most of APIs.

Manfred Bortenschlager, director of business development for API management at Red Hat, gave a presentation at the company’s 2017 Summit in Boston where he talked to attendees about how they should think about their API strategy.

In his talk, Bortenschlager laid out three questions that you should ask yourself when it comes to your API strategy.

Question #1: Why do we want to implement APIs?

First, he explained, it’s important to find an answer to how the API can align with an organization’s goals. This includes identifying your most valuable use cases. For example, do you need mobile and IoT support? Are you worried about a partner or customer ecosystem? Are you going to charge directly for the use of your API if you make it available?

It helps to “think outside the box” when asking this question as well. For instance, consider the API use case of Amsterdam’s Schiphol Airport, which published its API platform offering in hopes that developers, including those from other travel companies and airlines, will use API access to improve the airline passenger experience in creative ways.

Question #2: What concrete outcomes do we want to achieve?

What exactly do you want your API to achieve? In order to answer this question, Bortenschlager explained, it needs to be thought about from two distinct perspectives: an external one and an internal one.

From an external perspective, is there an API available either through open source or licensing that can help you achieve a specific goal? Don’t spend your time reinventing the wheel if there is already an option out there that meets your needs.

From an internal perspective, do you have capabilities or unique data that could serve your company well from either a revenue or marketing perspective? If so, it may make sense to expose that data or those unique applications as APIs that the community can either use freely or license from you.

Considering these two perspectives should help to establish the tactics you employ as part of your API strategy, such as your plans for operations or your marketing strategy.

Question #3: How will we execute the API program?

Once the need and concrete objectives of the API strategy have been established, it’s time to determine execution. Bortenschlager advised that there are a few factors to take into account here, including what the actual value of the API is, how the API will be delivered and how you will capitalize on the API.

All of these things, he said, should be covered in API management, which is a key part of making any API strategy work. But, he warned that too many companies fail to think about the entire API lifecycle, from concept to end-of-life. Comprehensive API management requires that an organization think not just about the conception, creation, distribution and marketing of that API, but how it will ultimately either be updated or, if necessary, retired. Otherwise, you risk creating a mish-mosh of either useless or poorly performing APIs that will hurt your business or may make your company seem inattentive.
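As one small illustration of the retirement end of that lifecycle, an API can start signaling its own end-of-life in its responses so consumers get a warning before an endpoint disappears. The sketch below is hypothetical and uses Python’s Flask along with the proposed HTTP Sunset header; the route, date and successor link are made up.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/v1/orders/<int:order_id>")
    def get_order_v1(order_id):
        """Deprecated v1 endpoint: still works, but tells clients when it goes away."""
        response = jsonify({"id": order_id, "status": "shipped"})
        # Hypothetical retirement date; the proposed Sunset header warns consumers.
        response.headers["Sunset"] = "Sat, 30 Jun 2018 00:00:00 GMT"
        response.headers["Link"] = '</v2/orders>; rel="successor-version"'
        return response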

Bortenschlager also pointed out the importance of utilizing a centralized API manager and creating a developer portal that is easily accessible to your software teams. Look for API management products that offer this centralized management and access.

Answering these three questions alone may not be enough to nail down your entire API strategy, but it can at least put you on the right path towards driving more business value through your APIs. Bortenschlager’s site offers a variety of insights and information regarding API strategies and management that are worth browsing and learning from, including roundups of API articles from around the web.


April 19, 2017  4:10 PM

Oracle nudges enterprises towards using Docker

Fred Churchville
Application containerization, containers, Docker

Oracle has pushed itself further into the Docker community by allowing developers to now pull images of their flagship databases and developer tools through the Docker Store.

Effective immediately, developers can pull images of Oracle products including Oracle Database, Oracle MySQL, Oracle Java 8 SE Runtime Environment and Oracle Coherence. These arrive alongside over 100 images of Oracle products that are already available in the Docker Hub, including OpenJDK and Oracle Linux. This move is reportedly occurring through the Docker Certification Program, a framework for partners to integrate and certify their technology against the Docker EE commercial platform.
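For those who script their environments, pulling one of these images can be done with the Docker SDK for Python as well as with the docker CLI. The sketch below is illustrative only; the repository name and tag are assumptions, and the real names (plus any license-acceptance step) should be taken from the image’s Docker Store listing.

    import docker

    client = docker.from_env()

    # Repository and tag are placeholders; check the actual Docker Store listing.
    image = client.images.pull("store/oracle/database-enterprise", tag="12.2.0.1")
    print("pulled:", image.tags)

    # Confirm the image is now available locally.
    for img in client.images.list():
        print(img.tags)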

Encouragement to the enterprise

According to Mark Cavage, vice president of software development at Oracle, this move is aimed at encouraging enterprises to lower their guard when it comes to using Docker for the building and deployment of mission-critical applications and systems.

“Docker is revolutionizing the way developers build and deploy modern applications, but mission-critical systems in the enterprise have been a holdout until now,” Cavage said. “Together with Docker, Oracle is bringing bedrock software to millions of developers enabling them to create enterprise-grade solutions that meet stringent security, performance and resiliency SLAs with the high level of productivity and low friction that they have come to expect from Dockerizing their application development stack.”

A hesitant enterprise, for better or worse

Surveys about the use of application containers in the enterprise have shown that a rapidly increasing number of enterprises are interested in Docker, with its popularity outpacing other explosive trends like PaaS and DevOps.

However, there has traditionally been hesitancy amongst enterprises to use application containers like Docker for mission-critical applications, due, for instance, to concerns pertaining to multi-tenant security and data persistence. As such, many have opted to either stick with virtual machines or find some sort of combination of the two rather than deploying solely using Docker containers.

However, Docker has made many changes to its services that have well prepared it for enterprise use, and the addition of high-level support from Oracle may just be the nudge enterprises need to come out from hiding behind their VMs.

What do you think? Does Oracle’s new availability in the Docker Hub change your mind about using Docker in the enterprise? Let us know with your comments.

Read the full copy of the Oracle press release about this new release.


April 17, 2017  7:50 PM

Can DevOps help us save lives?

Fred Churchville

A former U.S. Marine wants software developers and architects to get more in touch with their feelings. In fact, their lives may very well depend on it.

Ken Mugrage, the former Marine who is also a technology evangelist at ThoughtWorks, presented a session at the 2017 O’Reilly Software Architecture Conference in New York City in which he talked about the burnout that software developers and managers can experience in enterprise environments – and how DevOps can help. It can also make it worse, by the way, but we’ll get to that later.

Why it matters

This isn’t just a plug for DevOps – it’s a serious issue. Look at the CDC watch list that names computer programming as an occupation whose workers are prone to suicide. Perhaps more importantly, read John Willis’ blog post titled Karōjisatsu, where he recounts the loss of extremely bright, talented developers to suicide. Burnout is more than stress; it’s deadly.

“Some researchers put burnout in the same clinical category of diseases like PTSD and depression,” Willis said in an interview about the subject. “In some cases the ultimate effect can be death; however, more common results are health related stress symptoms. Other work-life balance issues can arrive from unresolved burnout symptoms. For example, divorce, disconnectedness from family and friends.”


The psychological impact of burnout should not be ignored, Mugrage said.

What can we do?

Thankfully, burnout is fixable, Mugrage explained in his talk. And there are reasons beyond just preventing suicide to reduce the chance of burnout. At the end of the day, Mugrage said, people who are happier tend to be better at their jobs.

Professional mental health services should always be regarded as the most reliable route to treating the symptoms of professional burnout, especially if those symptoms are verging on depression or suicidal tendencies. But there are steps that both managers and teams can take to prevent burnout from happening.

One thing, Mugrage advised, is to measure how you feel about your job using professionally designed tests such as the Maslach Burnout Inventory, which has been recognized as the leading measure for burnout symptoms. It can be used by individuals, but corporate subscriptions exist as well. It may very well be worth the investment.

Mugrage gave other helpful tips for avoiding burnout, such as protecting your work/life balance and possibly getting mental health first aid training. He also said that the correct implementation of a DevOps culture may be able to help.

The DevOps effect

So how can DevOps help? When implemented correctly, DevOps may be able to address some of the identified causes of burnout, such as a lack of control over your own work and communication breakdowns, Mugrage explained.

For instance, good DevOps practices encourage organizations to keep teams as small as possible, Mugrage explained, citing the “two pizza rule” from Amazon. Everyone involved with the product is on the same team, and they share resources freely, creating a better back-and-forth between workers.


Ken Mugrage addressed a crowd at the O’Reilly Software Architecture Conference in New York City about the issue of burnout in software development.

The other thing DevOps can provide, Mugrage said, is a feeling of fairness in terms of project responsibility. In a true DevOps world, there is no “passing the buck,” so to speak, when something goes wrong – everyone is responsible for their own part of the project. Everyone is measured the same way. While it does mean that some people may feel more stressed, it makes people comfortable to know they work in an environment where they won’t get blamed for someone else’s mistake.

However, DevOps can’t solve all your problems, such as those related to work overload or any issues that arise from bad company values. And if DevOps isn’t practiced correctly – say, you established a completely siloed DevOps team – you may actually make some issues worse.

It’s a real issue that requires real solutions. Hopefully well-practiced DevOps, in addition to the other considerations listed above, can help. If you need help, talk to someone, and don’t forget that at the end of the day your job should make you happy, not miserable.


April 7, 2017  5:50 PM

Beware of complexity in API and microservices development

Jan Stafford
API management, Development, Microservices

Finding the causes of cost overruns across the software development lifecycle is Theresa Lanowitz’s bailiwick. So, it’s natural that our recent conversation about technical debt in app development expanded beyond that subject. In this post, I share her insights on the hidden costs of API and microservices development and management, as well as some technologies that help software pros reduce those costs.

Theresa Lanowitz, founder, voke inc.

A well-known and sought-after consultant and speaker, Lanowitz founded voke inc., an analyst firm focused on the evolving application lifecycle. Since its founding in 2006, voke’s surveys have uncovered hidden costs in Agile development, release management and other areas. Prior to founding voke, she served as a Gartner industry analyst and in leadership positions with Boeing, Borland Software and Sun Microsystems. She’s also a co-author, alongside voke COO Lisa Dronzek, of the book Lifecycle Virtualization.

API and microservices challenges

According to voke’s research, a minority of organizations automate API testing. Most often, the API owner or developer does some manual testing, releases the API and waits for feedback.

“Their attitude is, ‘If it works, great. If it doesn’t, we’ll deal with that later on,’” said Lanowitz. “With this write-and-ship mentality, there’s no time to think about the cost of rework.”

Automating API tests would save time and money by eliminating manual tests and catching flaws before APIs are released. More importantly, said Lanowitz, better software releases result in happier customers. For a relatively quick move toward automation, she said, look at cloud-based API test tools.
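As a concrete illustration, an automated check of the kind Lanowitz describes can be as small as the sketch below, written with Python’s requests library and runnable under pytest; the base URL, endpoint and response fields are hypothetical.

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical API under test

    def test_get_order_returns_expected_shape():
        """Catch contract regressions before the API ships, instead of waiting
        for consumer feedback after release."""
        resp = requests.get(BASE_URL + "/orders/42", timeout=5)
        assert resp.status_code == 200

        body = resp.json()
        # Assert the fields consumers depend on exist and have the right types.
        assert isinstance(body["id"], int)
        assert body["status"] in {"pending", "shipped", "delivered"}

Run on every build, a suite of checks like this replaces the manual, release-and-wait approach described above.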

Software quality vs. quick releases?

Speaking of better software releases, voke is seeing a shift away from a laser focus on fast time to market – a change in attitude from what the firm used to see among organizations.

“In a recent survey, we asked which mattered most: time to market, quality or cost,” said Lanowitz. “People said quality is most important, and then time to market.”

The focus on speed has had the adverse effect of making technical debt a top ALM problem.

“We had so many people comment to us that focusing only on time to market is making costs skyrocket out of control,” said Lanowitz. “It’s taken this reality for people to finally say: ‘Let’s step back and take a look at how much we’re actually spending on this.’”

Hidden costs of microservices

In both API and microservices implementation, hidden complexities can add to costs. Moving from developing and managing a single application to dealing with all the transactions that flow across microservices makes control very difficult.

“The challenge with microservices is that even though you take the complexity down a level, you still have that complexity,” Lanowitz said. “You still have to reassemble those transactions back at the top and make sure that they’re all handled correctly.”
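One common technique for that reassembly work, offered here as a generic illustration rather than anything Lanowitz prescribed, is to propagate a correlation ID with every downstream call so a single transaction can be stitched back together from each service’s logs. The header name and logging style below are assumptions.

    import uuid

    import requests

    CORRELATION_HEADER = "X-Correlation-ID"  # assumed header name; conventions vary

    def call_downstream(url, incoming_headers):
        """Forward (or mint) a correlation ID so every service touched by a
        transaction logs the same ID, letting you reassemble the end-to-end flow."""
        correlation_id = incoming_headers.get(CORRELATION_HEADER, str(uuid.uuid4()))
        print(correlation_id, "calling", url)  # stand-in for real structured logging
        return requests.get(url, headers={CORRELATION_HEADER: correlation_id}, timeout=5)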

Lanowitz has seen DevOps teams struggle to manage microservices due to a lack of understanding of the services lifecycle. Here’s the bottom line: Don’t get into microservices if your organization doesn’t have really good, tried-and-true microservices management in place.

Ways to cut the cost of API and microservices rework

There are technologies that can diminish the challenges in API and microservices implementation. Lanowitz’s advice was to evaluate service virtualization, virtual and cloud-based labs, data virtualization and test data virtualization technologies. The latter helps keep data secure and reduces the cost of rework, because it makes it possible to perform testing with data as close to production as possible. She covers these technologies in depth in her aforementioned book.

