Microservices Matters


July 24, 2017  4:57 PM

What caused the applet to fall…and what’s next?

Fred Churchville
Applet, Applets, Java applets

Recently I was tasked with rewriting TechTarget’s definition for “applet,” one that I admittedly was not too familiar with. But after a little research, I realized I must have interacted with applets thousands, if not millions, of times before.

An applet, essentially, is a very small application designed to perform a very specific function within another application. Often these are web applications, and the applet runs through a plugin. They are typically used for things like checkboxes or buttons, but can be used to create small animations, fetch data, run threads, buffer videos and more — and they were a popular option for a long time.
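
For readers who, like me until recently, never wrote one, a minimal applet looks something like the sketch below (the class name and page markup are illustrative, and it assumes a browser with a working Java plugin, which, as we'll see, is exactly what has been disappearing):

```java
import java.applet.Applet;
import java.awt.Graphics;

// A minimal applet sketch: the browser's Java plugin instantiates the class
// and drives its lifecycle (init, start, paint, stop, destroy). Here only
// paint is overridden to draw directly into the hosting page.
public class HelloApplet extends Applet {
    @Override
    public void paint(Graphics g) {
        g.drawString("Hello from an applet", 20, 20);
    }
}
```

A page would then reference it with markup along the lines of `<applet code="HelloApplet.class" width="200" height="60"></applet>`, which only renders in browsers that still support the plugin.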

But as so often happens, new technological players came along to usurp the web development throne. JavaScript, HTML5, JavaFX and other technologies have managed to beat out applets in terms of both browser support and functionality. Browser and device diversity have also hurt applets, as there are fewer guarantees than ever that users will have the right plugin installed. In fact, Google Chrome has already phased out support for the NPAPI plug-ins that applets rely on, rendering many of them useless.

In addition to stiff competition, there are also security-related issues surrounding applets. Like many technologies, applets can be used for malicious purposes. As such, they almost always trigger a security prompt asking users whether they really want to run them, which may concern and turn away some users.

Of course, if you’re willing to do the legwork, it’s still possible to run a Java applet on a webpage. However, while it’s possible, the consensus surrounding their use seems to be: “Why would you?”

It seems that no matter how popular a technology becomes, nothing lasts forever in the development world. It does make me wonder which technology will be next to bite the dust, and what will replace it.

What do you think? Have you used applets in the past and ditched them in favor of new approaches? Do you think the applet is still viable today? What do you think is the next technology on the chopping block? Let us know with your comments.

July 7, 2017  6:34 PM

Digital transformation puts middleware on the mind

Fred Churchville

However organizations choose to interpret the meaning behind digital transformation, there is at least one thing that appears consistent: it’s got business decision makers thinking about their integration middleware.

A survey report titled The Great Middleware Transition, produced jointly by Aberdeen Group and the cloud-based integration provider Liaison Technologies, shows that the vast majority of organizations plan to make a change when it comes to their middleware. The report, authored by analyst Michael Caton, specifically found that:

  • 76% of those surveyed plan to fully or partially replace their integration middleware platforms
  • 84% of all middleware will be replaced in the next four years
  • 84% of companies surveyed have 50 or more business applications to integrate

Caton said that the high numbers garnered from the survey were surprising. However, it is not surprising that more organizations are looking to the cloud. As such, he said the industry should prepare to witness a paradigm shift in how organizations manage their software infrastructure.

“IT organizations are about to undertake one of the most dramatic infrastructure shifts we’ve seen in 20 years, and the transition will be to the cloud,” Caton said. “This shift makes sense as companies look for greater flexibility, scalability and predictable cost models.”

However, while the cloud appears to be a popular destination, the report also found that 30% of organizations are still considering on-premises middleware management services. It makes sense that there are plenty of security-conscious organizations out there — financial institutions and the like — who no doubt prefer the on-premises approach. Still, it seems a little surprising that organizations would actually plan a middleware migration to another on-premises system as part of their so-called digital transformation.

Why are organizations making the switch? The survey found that many IT departments feel pressure to keep integration costs down while the number of integrations that need to occur between increasingly disparate applications and data sources grows. With compliance and security also growing concerns, for many this means shifting from DIY approaches like iPaaS to leveraging managed services like Red Hat’s JBoss platform, Oracle’s Fusion platform or Liaison’s ALLOY platform.

This is certainly not the first we at SearchMicroservices.com have heard of organizations feeling the pressure to ditch their old application middleware, not just in favor of the cloud but also in favor of API-centric approaches. And it’s not just about keeping internal applications integrated; experts have been talking about the need for organizations to consider API-based approaches for B2B data integration as well.

But as organizations think about changing their middleware technology, they should also think about how that middleware ultimately fits into their application infrastructure. Specifically, organizations should think about how to keep middleware from becoming an application performance bottleneck. They should also think critically about who should be in charge of that middleware if an organization chooses to manage it themselves.

What do you think about the changing middleware landscape, or digital transformation in general? Let us know with your comments.


June 9, 2017  4:48 PM

OpenStack certification: Taking the COA exam

Fred Churchville

Are you using OpenStack? Maybe not yet, but it may be in your future. According to the OpenStack Foundation Annual Survey, the share of OpenStack deployments in full production rose from 49% in April 2015 to 65% in April 2017. And the trend shows no sign of significantly slowing down.

As the adoption of OpenStack increases, the need for management skills and tooling increases as well. Organizations will start looking for OpenStack expertise within their own organization. And those with OpenStack skills listed on their résumé are likely to be in high demand.

Unfortunately, OpenStack infrastructure can still be complex for newcomers to jump into, given its variety of features and sheer scale. However, by using the right tooling and educational resources, becoming an OpenStack expert is within reach.

If you think OpenStack may be on your doorstep soon, it’s worth looking into taking the Certified OpenStack Administrator (COA) exam. Even if you’ve been using OpenStack for a while, this is a good opportunity to verify your expertise and add another in-demand skill to your résumé.

What is the Certified OpenStack Administrator exam?

The COA exam is a vendor-neutral test of an IT professional’s familiarity and competency with the core components of OpenStack. The exam avoids the nuances that differ between ever-changing versions of OpenStack and instead seeks to determine whether someone can use OpenStack at the most basic level.

How does it work?

The COA exam is a skills-based test. Takers are put into a miniature production environment and are required to perform tasks or solve problems using the command line interface and Horizon dashboard, based on OpenStack Liberty. Proctors monitor the exam by streaming audio, video and screen-sharing feeds. Candidates will not be graded on the specific commands they use, but rather on the final state of the environment.

The exam can be taken on SUSE or Ubuntu. Candidates are tested on 10 specific aspects of OpenStack:

  • Getting to know OpenStack
  • Identity management
  • Dashboard
  • Compute
  • Object storage
  • Block storage
  • Networking
  • Heat/Orchestration
  • Troubleshooting
  • Image management

Find more specifics about the requirements on the COA requirements page.

How to prepare

It’s recommended that candidates have six months of professional experience before they take the exam. However, there are plenty of training courses and classes available that can help you prepare, perhaps sooner if need be.

For example, the OpenStack Foundation marketplace offers training resources, including courses from vendors like HP. If you are new to OpenStack, The Linux Foundation offers an OpenStack course that provides videos, downloadable study guides and hands-on lab training aimed at OpenStack certification.

Requirements and OpenStack certification time

Test takers will have to provide their own front-end hardware with Chrome or Chromium browser, internet access and a microphone. You do not need your own Linux installation or VM. Use the compatibility check tool to verify that you meet the requirements.

Right now the exam costs $300, but check the exam info page to verify the cost in case that changes. Once scheduled, you are given 12 months to complete the exam, with one free retake allowed during the 12-month period.

The OpenStack certification lasts for three years. After that, candidates will have to re-take the COA exam to remain certified.


May 12, 2017  6:14 PM

Three questions to ask when forming an API strategy

Fred Churchville
API, API development, API management, APIs

There are plenty of tools out there promising to help you make the most of your APIs. But without a proper API strategy, you risk spending time and money on processes and investments that don’t really serve your business as a whole and that keep you from getting full value from your APIs.

Manfred Bortenschlager, director of business development for API management at Red Hat, gave a presentation at the company’s 2017 Summit in Boston where he talked to attendees about how they should think about their API strategy.

In his talk, Bortenschlager laid out three questions that you should ask yourself when it comes to your API strategy.

Question #1: Why do we want to implement APIs?

First, he explained, it’s important to figure out how the API can align with an organization’s goals. This includes identifying your most valuable use cases. For example, do you need mobile and IoT support? Are you worried about a partner or customer ecosystem? Are you going to charge directly for the use of your API if you make it available?

It helps to “think outside the box” when asking this question as well. For instance, consider the API use case of Amsterdam’s Schiphol Airport, which published its API platform offering in hopes that developers, including those from other travel companies and airlines, will use API access to improve the airline passenger experience in creative ways.

Question #2: What concrete outcomes do we want to achieve?

What exactly do you want your API to achieve? In order to answer this question, Bortenschlager explained, it needs to be thought about from two distinct perspectives: an external one and an internal one.

From an external perspective, is there an API available either through open source or licensing that can help you achieve a specific goal? Don’t spend your time reinventing the wheel if there is already an option out there that meets your needs.

From an internal perspective, do you have capabilities or unique data that could serve your company well from either a revenue or marketing perspective? If so, it may make sense to expose that data or those unique applications as APIs that the community can either use freely or license from you.
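
As a toy illustration of what “exposing data as an API” can mean at its most basic, the sketch below publishes a read-only endpoint using only the JDK’s built-in com.sun.net.httpserver package. The path and payload are invented for the example; a real offering would of course sit behind proper API management, authentication and versioning.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Toy example: expose an internal data set as a read-only JSON endpoint.
public class StoreLocationsApi {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/v1/store-locations", StoreLocationsApi::handle);
        server.start();
        System.out.println("Listening on http://localhost:8080/api/v1/store-locations");
    }

    private static void handle(HttpExchange exchange) throws IOException {
        // Hard-coded stand-in for data that would normally come from an internal system.
        byte[] body = "[{\"id\":1,\"city\":\"Boston\"},{\"id\":2,\"city\":\"Austin\"}]"
                .getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "application/json");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}
```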

Considering these two perspectives should help to establish the tactics you employ as part of your API strategy, such as your plans for operations or your marketing strategy.

Question #3: How will we execute the API program?

Once the need and concrete objectives of the API strategy have been established, it’s time to determine execution. Bortenschlager advised that there are a few factors to take into account here, including what the actual value of the API is, how the API will be delivered and how you will capitalize on the API.

All of these things, he said, should be covered in API management, which is a key part of making any API strategy work. But, he warned that too many companies fail to think about the entire API lifecycle, from concept to end-of-life. Comprehensive API management requires that an organization think not just about the conception, creation, distribution and marketing of that API, but how it will ultimately either be updated or, if necessary, retired. Otherwise, you risk creating a mish-mosh of either useless or poorly performing APIs that will hurt your business or may make your company seem inattentive.

Bortenschlager also pointed out the importance of utilizing a centralized API manager and creating a developer portal that is easily accessible to your software teams. Look for API management products that offer this centralized management and access.

Answering these three questions alone may not be enough to nail down your entire API strategy, but it can at least put you on the right path towards driving more business value through your APIs. Bortenschlager’s site offers a variety of insights and information regarding API strategies and management that are worth browsing and learning from, including roundups of API articles from around the web.


April 19, 2017  4:10 PM

Oracle nudges enterprises towards using Docker

Fred Churchville
Application containerization, containers, Docker

Oracle has pushed itself further into the Docker community by allowing developers to pull images of its flagship databases and developer tools through the Docker Store.

Effective immediately, developers can pull images of Oracle products including Oracle Database, Oracle MySQL, Oracle Java 8 SE Runtime Environment and Oracle Coherence. These arrive alongside more than 100 images of Oracle products already available in the Docker Hub, including OpenJDK and Oracle Linux. The move is reportedly occurring through the Docker Certification Program, a framework for partners to integrate and certify their technology for the Docker EE commercial platform.

Encouragement to the enterprise

According to Mark Cavage, vice president of software development at Oracle, this move is aimed at encouraging enterprises to lower their guard when it comes to using Docker for the building and deployment of mission-critical applications and systems.

“Docker is revolutionizing the way developers build and deploy modern applications, but mission-critical systems in the enterprise have been a holdout until now,” Cavage said. “Together with Docker, Oracle is bringing bedrock software to millions of developers enabling them to create enterprise-grade solutions that meet stringent security, performance and resiliency SLAs with the high level of productivity and low friction that they have come to expect from Dockerizing their application development stack.”

A hesitant enterprise, for better or worse

Surveys about the use of application containers in the enterprise have shown that a rapidly increasing number of enterprises are interested in Docker, with its popularity outpacing other explosive trends like PaaS and DevOps.

However, there has traditionally been hesitancy amongst enterprises to use application containers like Docker for mission-critical applications, due, for instance, to concerns pertaining to multi-tenant security and data persistence. As such, many have opted to either stick with virtual machines or find some sort of combination of the two rather than deploying solely using Docker containers.

However, Docker has made many changes to its services that have prepared it well for enterprise use, and the addition of high-level support from Oracle may be just the nudge enterprises need to come out from hiding behind their VMs.

What do you think? Does Oracle’s new availability in the Docker Hub change your mind about using Docker in the enterprise? Let us know with your comments.

Read the full Oracle press release about the announcement.


April 17, 2017  7:50 PM

Can DevOps help us save lives?

Fred Churchville

A former U.S. Marine wants software developers and architects to get more in touch with their feelings. In fact, their lives may very well depend on it.

Ken Mugrage, the former Marine who is also a technology evangelist at ThoughtWorks, led a session at the 2017 O’Reilly Software Architecture Conference in New York City in which he talked about the burnout that software developers and managers can experience in enterprise environments – and how DevOps can help. It can also make things worse, by the way, but we’ll get to that later.

Why it matters

This isn’t just a plug for DevOps – it’s a serious issue. Look at the CDC watch list, which includes computer programming among occupations where individuals are prone to suicide. Perhaps more importantly, read John Willis’ blog post titled Karōjisatsu, in which he recounts the life-changing losses of extremely bright, talented developers to suicide. Burnout is more than stress; it can be deadly.

“Some researchers put burnout in the same clinical category of diseases like PTSD and depression,” Willis said in an interview about the subject. “In some cases the ultimate effect can be death; however, more common results are health related stress symptoms. Other work-life balance issues can arrive from unresolved burnout symptoms. For example, divorce, disconnectedness from family and friends.”

The psychological impact of burnout should not be ignored, Mugrage said.

What can we do?

Thankfully, burnout is fixable, Mugrage explained in his talk. And there are reasons beyond just preventing suicide to reduce the chance of burnout. At the end of the day, Mugrage said, people who are happier tend to be better at their jobs.

Professional mental health services should always be regarded as the most reliable way to address the symptoms of professional burnout, especially if those symptoms are edging into depression or suicidal thinking. But there are steps that both managers and teams can take to prevent burnout from happening.

One thing, Mugrage advised, is to measure how you feel about your job using professionally designed tests such as the Maslach Burnout Inventory, which has been recognized as the leading measure for burnout symptoms. It can be used by individuals, but corporate subscriptions exist as well. It may very well be worth the investment.

Mugrage gave other helpful tips for avoiding burnout, such as protecting your work/life balance and possibly getting mental health first aid training. He also said that the correct implementation of a DevOps culture may be able to help.

The DevOps effect

So how can DevOps help? When implemented correctly, DevOps may be able to address some of the identified causes for burnout, such as feeling in control of your own work and communication breakdowns, Mugrage explained.

For instance, good DevOps practices encourage organizations to keep teams as small as possible, Mugrage explained, citing the “two pizza rule” from Amazon. Everyone involved with the product is on the same team, and they share resources freely, creating a better back-and-forth between workers.

Ken Mugrage addressed a crowd at the O’Reilly Software Architecture Conference in New York City about the issue of burnout in software development.

The other thing DevOps can provide, Mugrage said, is a feeling of fairness in terms of project responsibility. In a true DevOps world, there is no “passing the buck,” so to speak, when something goes wrong – everyone is responsible for their own part of the project. Everyone is measured the same way. While it does mean that some people may feel more stressed, it makes people comfortable to know they work in an environment where they won’t get blamed for someone else’s mistake.

However, DevOps can’t solve all your problems, such as those related to work overload or any issues that arise from bad company values. And if DevOps isn’t practiced correctly (say, you’ve established a completely siloed DevOps team), you may actually make some issues worse.

It’s a real issue that requires real solutions. Hopefully well-practiced DevOps, in addition to the other considerations listed above, can help. If you need help, talk to someone, and don’t forget that at the end of the day your job should make you happy, not miserable.


April 7, 2017  5:50 PM

Beware of complexity in API and microservices development

Jan Stafford
API management, Development, Microservices

Finding the causes of cost overruns in software development and the application lifecycle is Theresa Lanowitz’s bailiwick. So, it’s natural that our recent conversation about technical debt in app development expanded beyond that subject. In this post, I share her insights on the hidden costs of API and microservices development and management, as well as some technologies that help software pros reduce those costs.

Theresa Lanowitz, founder, voke inc.

A well-known and sought-after consultant and speaker, Lanowitz founded voke inc., an analyst firm focused on the evolving application lifecycle. Since its founding in 2006, voke’s surveys have uncovered hidden costs in Agile development, release management and other areas. Prior to founding voke, she served as a Gartner industry analyst and in leadership positions with Boeing, Borland Software and Sun Microsystems. She’s also a co-author, alongside voke COO Lisa Dronzek, of the book Lifecycle Virtualization.

API and microservices challenges

According to voke’s research, a minority of organizations automate API testing. Most often, the API owner or developer does some manual testing, releases the API and waits for feedback.

“Their attitude is, ‘If it works, great. If it doesn’t, we’ll deal with that later on,’” said Lanowitz. “With this write-and-ship mentality, there’s no time to think about the cost of rework.”

Automating API tests would save time and money by eliminating manual tests and catching flaws before APIs are released. More importantly, said Lanowitz, better software releases result in happier customers. To make a relatively quick move to automation, she said, look at cloud-based API test tools.
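
To make that concrete, here is a rough sketch of the kind of check that automation replaces; the endpoint and expected content type are invented for the example, and a real suite would live in a test framework or one of those cloud-based tools rather than a main method.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// A minimal automated API smoke test: call an endpoint and fail loudly if the
// basic contract (status code and content type) is not met, so flaws surface
// before the API ships rather than after.
public class OrderApiSmokeTest {
    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint used purely for illustration.
        URL url = new URL("https://api.example.com/v1/orders/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        int status = conn.getResponseCode();
        String contentType = conn.getContentType();

        if (status != 200 || contentType == null || !contentType.contains("application/json")) {
            throw new AssertionError("Unexpected response: status=" + status
                    + ", contentType=" + contentType);
        }
        System.out.println("API contract check passed");
    }
}
```

Wire a handful of checks like this into every build and the “we’ll deal with that later” rework Lanowitz describes gets caught before release instead of after.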

Software quality vs. quick releases?

Speaking of better software releases, voke is seeing a shift away from a laser focus on fast time to market, a notable change in attitude among organizations.

“In a recent survey, we asked which mattered most: time to market, quality or cost,” said Lanowitz. “People said quality is most important, and then time to market.”

The focus on speed has had the adverse effect of making technical debt a top ALM problem.

“We had so many people comment to us that focusing only on time to market is making costs skyrocket out of control,” said Lanowitz. “It’s taken this reality for people to finally say: ‘Let’s step back and take a look at how much we’re actually spending on this.’”

Hidden costs of microservices

In both API and microservices implementation, hidden complexities can add to costs. Moving from developing and managing a single application to dealing with all the transactions among microservices makes control very difficult.

“The challenge with microservices is that even though you take the complexity down a level, you still have that complexity,” Lanowitz said. “You still have to reassemble those transactions back at the top and make sure that they’re all handled correctly.”
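
A much-simplified sketch of what that reassembly looks like in code: one logical “order details” transaction stitched together from two separate services, either of which can fail independently. The service URLs are placeholders, and real systems add retries, timeouts, fallbacks and tracing on top.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

// One logical transaction reassembled from two microservices. The complexity
// Lanowitz describes lives in what happens when either call is slow or fails.
public class OrderDetailsAssembler {
    public static void main(String[] args) {
        try {
            String order = fetch("http://orders.internal/api/orders/42");          // placeholder URL
            String customer = fetch("http://customers.internal/api/customers/7");  // placeholder URL
            System.out.println("{\"order\":" + order + ",\"customer\":" + customer + "}");
        } catch (IOException e) {
            // In production: retries, circuit breakers, correlation IDs, compensating actions.
            System.err.println("Could not assemble the transaction: " + e.getMessage());
        }
    }

    private static String fetch(String address) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining());
        }
    }
}
```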

Lanowitz has seen DevOps teams struggle to manage microservices due to a lack of understanding of the services lifecycle. Here’s the bottom line: Don’t get into microservices if your organization doesn’t have really good, tried-and-true microservices management in place.

Ways to cut the cost of API and microservices rework

There are technologies that can diminish the challenges in API and microservices implementation. Lanowitz’s advice was to evaluate service virtualization, virtual and cloud-based labs, data virtualization and test data virtualization technologies. The latter helps keep data secure and reduces the cost of rework, because it makes it possible to perform testing with data as close to production as possible. She covers these technologies in depth in her aforementioned book.


March 31, 2017  7:22 PM

Do new development technologies put software architects on edge?

Fred Churchville
Application development, Architects, Enterprise Architects, Enterprise architecture

While new technologies like microservices may be an exciting prospect for those strictly working at the development and programming level, these trends may weigh a little more heavily on the minds of enterprise software architects.

Brian Foster, content lead at O’Reilly Media and co-chair of the O’Reilly Software Architecture Conference, said he recognizes the struggle today’s software architects face as businesses hustle to adopt the latest and greatest development technologies and methods.

“A lot of our core audience are people who have made the move to things like microservices, and are happy that they have, but they’re seeing the next wave of what they have to do,” he explained. “Now that they’ve spun up a few hundred services … what does it take to keep that running?”

Foster said that these architects also face a unique challenge in that they are often a bridge between C-level executives focused on business needs and development teams who want to change the way they work with software. Because of this, he said, these architects are often the ones tasked with deciding if the adoption of a certain technology or method truly is the answer to a particular business need.

“I think that’s the unique challenge of being an architect,” Foster said. “They have these great technology choices, and for some organizations it might be the right move to jump in. But for others, caution is necessary. [So] they want to understand the impacts not just from a technology perspective but also from a business perspective.”

If that’s not enough, many organizations also seem to have a lingering tension between developers who want to move forward and architects who want to do their due diligence. As Shawn Ryan, Axway’s vice president of its digital-as-a-service platform, pointed out in a Q&A about digital transformation needs, bridging that gap between software architects and developers is an integral part of making a digital transformation happen.

“The developers say: ‘Get out of my way, let me build what I need to build,'” Ryan explained. “And then the architect [is responsible for] security and implementing policy. So bridging both of those personas is a first step in talking about managing the full lifecycle.”

Do you find that there are significant gaps between developers and architects? What is the best way to bridge that gap? Let us know with your comments.


March 20, 2017  4:28 PM

What will web apps do to family board game night?

Fred Churchville
Application development, Custom Web applications, Games, mobile app development, Web, Web Application Designer, Web application development

Since the introduction of Monopoly in the early 1900s, board games have been a staple in households all around the world, remaining a huge form of entertainment through the radio, TV and even internet eras. But while many of our favorite games still live in cardboard-box form, computers have given them a new home. It started with email, floppy disks and CD-ROMs, but now web apps have disrupted the space as well. And now that applications, especially web apps, are easier to build than ever, how are those games’ paper-based counterparts going to fare?

One of the most famous examples of this transition is perhaps the game Civilization, which began as a board game of world domination. The game was eventually reworked into a PC format, which happened to suit its nature better and for the most part displaced its physical ancestor.

Of course, the computer doesn’t always win. I’m sure many people would still prefer to play board-based Monopoly or Risk over their computer-based versions, especially if it required a CD or even a computer download. However, the rise of easy-to-build web apps is, for lack of better wording, a game changer.

Take the game Diplomacy, for example. This is another domination game where seven players seek to control a 1900s-themed Europe through negotiation with other players and the strategic movement of game pieces. But even though the game has a strong fan base (Henry Kissinger supposedly once called it his favorite game), there are a few things that make it a pain in the a** to play in person.

First, the rules dictate that the game can only be played with seven people – no more, no less. That would be easier to accomplish if it weren’t for how long the game takes: playing in person is usually a four- to five-hour affair, and there is a lot of down time. Players found a solution to this problem early on: Diplomacy was the first commercially licensed board game to be played by mail (only chess saw significant play-by-mail action earlier). But in the 1980s, people started to look for healthier alternatives to play-by-mail Diplomacy, which led first to playing by email and, ultimately, to the creation of browser-based versions of Diplomacy such as Backstabbr.

But with web apps becoming even easier to build with the help of emerging web development tools that are available even through browsers like Firefox, are we essentially witnessing the last days of board games made of paper and plastic? Sure, playing a game in person may not provide the same experience as playing it over a screen (as in, you can’t yell at your opponents when they beat you), but at what point does convenience become more important than experience?

Personally, I think the makers of board games are probably too smart to let web applications win, and what we will see is some kind of hybrid model. Clearly, electronic board games are not a new thing, but I wonder if we will start seeing some kind of Wi-Fi-connected board game that lets you play with both those around you and those who are remote (maybe utilizing a web application-based connection?).

All I know is that as long as I can still yell “YAHTZEE!,” I’m happy.


March 15, 2017  6:30 PM

NGINX announces NGINX Plus R12

Fred Churchville
NGINX, Web platform

NGINX, Inc. has announced the availability of NGINX Plus Release 12 (R12), the latest release of its application delivery platform. According to a press release from NGINX, R12 significantly improves NGINX Plus’s high-performance load balancer, content cache and web server, providing application development and operations teams with new features for delivering applications.

NGINX Plus R12 focuses on configuration management within a cluster, enhanced programmability with nginScript, and deeper monitoring and instrumentation of key application resources. It also provides the ability to automatically scale load-balanced applications with proactive application-level health checks.

New capabilities in the NGINX Plus R12 release include a new process to reliably check and distribute load balancing and web serving configuration within a cluster of NGINX Plus servers. Additionally, the nginScript configuration language is fully supported in NGINX Plus. Advances in monitoring and instrumentation provide actionable insights on application performance and NGINX Plus tuning, and new caching features improve performance to enhance the end-user experience.

Finally, NGINX Plus load balancing has been enhanced with application level health checks to support the autoscaling of application resources in a safe, controlled fashion.

Check out the full list of new and improved features in NGINX Plus R12.

