There are plenty of tools out there promising to help you make the most of your APIs. But without a proper API strategy, you risk spending time and money on processes and investments that don’t really serve your business as a whole.
Manfred Bortenschlager, director of business development for API management at Red Hat, gave a presentation at the company’s 2017 Summit in Boston where he talked to attendees about how they should think about their API strategy.
In his talk, Bortenschlager laid out three questions that you should ask yourself when it comes to your API strategy.
Question #1: Why do we want to implement APIs?
First, he explained, it’s important to determine how an API can align with an organization’s goals. This includes identifying your most valuable use cases. For example, do you need mobile and IoT support? Are you focused on a partner or customer ecosystem? Are you going to charge directly for the use of your API if you make it available?
It helps to “think outside the box,” when asking this question as well. For instance, consider the API use case of Amsterdam’s Schiphol Airport, which published its API platform offering in hopes that developers, including those from other travel companies and airlines, will use API access to improve the airline passenger experience in creative ways.
Question #2: What concrete outcomes do we want to achieve?
What exactly do you want your API to achieve? In order to answer this question, Bortenschlager explained, it needs to be thought about from two distinct perspectives: an external one and an internal one.
From an external perspective, is there an API available either through open source or licensing that can help you achieve a specific goal? Don’t spend your time reinventing the wheel if there is already an option out there that meets your needs.
From an internal perspective, do you have capabilities or unique data that could serve your company well from either a revenue or marketing perspective? If so, it may make sense to expose that data or those unique applications as APIs that the community can either use freely or license from you.
Considering these two perspectives should help to establish the tactics you employ as part of your API strategy, such as your plans for operations or your marketing strategy.
Question #3: How will we execute the API program?
Once the need and concrete objectives of the API strategy have been established, it’s time to determine execution. Bortenschlager advised that there are a few factors to take into account here, including what the actual value of the API is, how the API will be delivered and how you will capitalize on the API.
All of these things, he said, should be covered in API management, which is a key part of making any API strategy work. But, he warned that too many companies fail to think about the entire API lifecycle, from concept to end-of-life. Comprehensive API management requires that an organization think not just about the conception, creation, distribution and marketing of that API, but how it will ultimately either be updated or, if necessary, retired. Otherwise, you risk creating a mish-mosh of either useless or poorly performing APIs that will hurt your business or may make your company seem inattentive.
Bortenschlager also pointed out the importance of utilizing a centralized API manager and creating a developer portal that is easily accessible to your software teams. Look for API management products that offer this centralized management and access.
Answering these three questions alone may not be enough to nail down your entire API strategy, but it can at least put you on the right path towards driving more business value through your APIs. Bortenschlager’s site offers a variety of insights and information regarding API strategies and management that are worth browsing and learning from, including roundups of API articles from around the web.
Oracle has pushed itself further into the Docker community by allowing developers to now pull images of their flagship databases and developer tools through the Docker Store.
Effective immediately, developers can pull images of Oracle products including Oracle Database, Oracle MySQL, Oracle Java 8 SE Runtime Environment and Oracle Coherence. These arrive alongside over 100 images of Oracle products that are already available in the Docker Hub, including Open JDK and Oracle Linux. This move is reportedly occurring through the Docker Certification Program, a framework for partners to integrate and certify their technology to the Docker EE commercial platform.
Encouragement to the enterprise
According to Mark Cavage, vice president of software development at Oracle, this move is aimed at encouraging enterprises to lower their guard when it comes to using Docker for the building and deployment of mission-critical applications and systems.
“Docker is revolutionizing the way developers build and deploy modern applications, but mission-critical systems in the enterprise have been a holdout until now,” Cavage said. “Together with Docker, Oracle is bringing bedrock software to millions of developers enabling them to create enterprise-grade solutions that meet stringent security, performance and resiliency SLAs with the high level of productivity and low friction that they have come to expect from Dockerizing their application development stack.”
A hesitant enterprise, for better or worse
Surveys about the use of application containers in the enterprise have shown that a rapidly increasing number of enterprises are interested in Docker, with its popularity outpacing other explosive trends like PaaS and DevOps.
However, there has traditionally been hesitancy amongst enterprises to use application containers like Docker for mission-critical applications, due, for instance, to concerns pertaining to multi-tenant security and data persistence. As such, many have opted to either stick with virtual machines or find some sort of combination of the two rather than deploying solely using Docker containers.
However, Docker has made many changes to its services that have well prepared it for enterprise use, and the addition of high-level support from Oracle may just be the nudge enterprises need to come out from hiding behind their VMs.
What do you think? Does Oracle’s new availability in the Docker Hub change your mind about using Docker in the enterprise? Let us know with your comments.
Read the full copy of Oracle’s press release for more details.
A former U.S. Marine wants software developers and architects to get more in touch with their feelings. In fact, their lives may very well depend on it.
Ken Mugrage, the former Marine who is also a technology evangelist at ThoughtWorks, led a session at the 2017 O’Reilly Software Architecture Conference in New York City in which he talked about the burnout that software developers and managers can experience in enterprise environments – and how DevOps can help. It can also make things worse, by the way, but we’ll get to that later.
Why it matters
This isn’t just a plug for DevOps – it’s a serious issue. Look at the CDC watch list of occupations whose workers are prone to suicide, which includes computer programming. Perhaps more importantly, read John Willis’ blog post titled Karōjisatsu, in which he recounts the life-changing losses of extremely bright, talented developers to suicide. Burnout is more than stress; it can be deadly.
“Some researchers put burnout in the same clinical category as diseases like PTSD and depression,” Willis said in an interview about the subject. “In some cases the ultimate effect can be death; however, more common results are health-related stress symptoms. Other work-life balance issues can arise from unresolved burnout symptoms – for example, divorce and disconnectedness from family and friends.”
What can we do
Thankfully, burnout is fixable, Mugrage explained in his talk. And there are reasons beyond just preventing suicide to reduce the chance of burnout. At the end of the day, Mugrage said, people who are happier tend to be better at their jobs.
Professional mental health services should always be regarded as the most reliable resource for treating symptoms of professional burnout, especially if those symptoms are entering the territory of depression or suicidal tendencies. But there are steps that both managers and teams can take to prevent burnout from happening.
One thing, Mugrage advised, is to measure how you feel about your job using professionally designed tests such as the Maslach Burnout Inventory, which has been recognized as the leading measure for burnout symptoms. It can be used by individuals, but corporate subscriptions exist as well. It may very well be worth the investment.
Mugrage gave other helpful tips for avoiding burnout, such as protecting your work/life balance and possibly getting mental health first aid training. He also said that the correct implementation of a DevOps culture may be able to help.
The DevOps effect
So how can DevOps help? When implemented correctly, DevOps may be able to address some of the identified causes for burnout, such as feeling in control of your own work and communication breakdowns, Mugrage explained.
For instance, good DevOps practices encourage organizations to keep teams as small as possible, Mugrage explained, citing the “two pizza rule” from Amazon. Everyone involved with the product is on the same team, and they share resources freely, creating a better back-and-forth between workers.
The other thing DevOps can provide, Mugrage said, is a feeling of fairness in terms of project responsibility. In a true DevOps world, there is no “passing the buck,” so to speak, when something goes wrong – everyone is responsible for their own part of the project. Everyone is measured the same way. While it does mean that some people may feel more stressed, it makes people comfortable to know they work in an environment where they won’t get blamed for someone else’s mistake.
However, DevOps can’t solve all your problems, such as those related to work overload or any issues that arise from bad company values. And if DevOps isn’t practiced correctly – i.e., you established a completely siloed DevOps team – you may actually make some issues worse.
It’s a real issue that requires real solutions. Hopefully well-practiced DevOps, in addition to the other considerations listed above, can help. If you need help, talk to someone, and don’t forget that at the end of the day your job should make you happy, not miserable.
Finding the causes of cost overruns in software development and lifecycle management is Theresa Lanowitz’s bailiwick. So, it’s natural that our recent conversation about technical debt in app development expanded beyond that subject. In this post, I share her insights on the hidden costs of API and microservices development and management, as well as some technologies that help software pros reduce those costs.
A well-known and sought-after consultant and speaker, Lanowitz founded voke inc., an analyst firm focused on the evolving application lifecycle. Since its founding in 2006, voke’s surveys have uncovered hidden costs in Agile development, release management and other areas. Prior to founding voke, she served as a Gartner industry analyst and in leadership positions with Boeing, Borland Software and Sun Microsystems. She’s also a co-author, alongside voke COO Lisa Dronzek, of the book Lifecycle Virtualization.
API and microservices challenges
According to voke’s research, a minority of organizations automate API testing. Most often, the API owner or developer does some manual testing, releases the API and waits for feedback.
“Their attitude is, ‘If it works, great. If it doesn’t, we’ll deal with that later on,’” said Lanowitz. “With this write-and-ship mentality, there’s no time to think about the cost of rework.”
Automating API tests would save time and money by eliminating manual tests and catching flaws before APIs are released. More importantly, said Lanowitz, better software releases result in happier customers. To make the move to automation relatively quickly, she said, look at cloud-based API test tools.
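As a rough illustration of what moving from manual checks to automated ones can look like, here is a minimal, self-contained Python sketch. The endpoint, payload and stub server are hypothetical stand-ins for a real API under test; in practice the checks would point at a staging deployment and run in CI:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stub standing in for the API under test (hypothetical endpoint and payload).
class StubAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def check_endpoint(base_url):
    """Automated checks that would otherwise be done by hand before release."""
    with urllib.request.urlopen(base_url + "/v1/status") as resp:
        assert resp.status == 200
        assert resp.headers["Content-Type"] == "application/json"
        assert json.loads(resp.read()) == {"status": "ok"}
    return True

# Spin up the stub on an ephemeral port and run the checks against it.
server = HTTPServer(("127.0.0.1", 0), StubAPIHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check_endpoint(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print("all checks passed" if ok else "checks failed")
```

The same assertions, run on every commit, catch the flaws that the write-and-ship approach only discovers after release.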
Software quality vs. quick releases?
Speaking of better software releases, voke is seeing organizations shift away from a laser focus on fast time to market – a marked change in attitude.
“In a recent survey, we asked which mattered most: time to market, quality or cost,” said Lanowitz. “People said quality is most important, and then time to market.”
The focus on speed has had the adverse effect of making technical debt a top ALM problem.
“We had so many people comment to us that focusing only on time to market is making costs skyrocket out of control,” said Lanowitz. “It’s taken this reality for people to finally say: ‘Let’s step back and take a look at how much we’re actually spending on this.’”
Hidden costs of microservices
In both API and microservices implementation, hidden complexities can add to costs. Changing from developing and managing an application to dealing with all the transactions in microservices makes control very difficult.
“The challenge with microservices is that even though you take the complexity down a level, you still have that complexity,” Lanowitz said. “You still have to reassemble those transactions back at the top and make sure that they’re all handled correctly.”
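The reassembly problem Lanowitz describes can be sketched in a few lines. The three service calls below are hypothetical stand-ins; in a real system each would be a network call that can fail or time out independently:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for three independent microservices; in a real
# system each would be an HTTP or gRPC call to a separately deployed service.
def fetch_customer(order_id):
    return {"customer": "C-42"}

def fetch_inventory(order_id):
    return {"in_stock": True}

def fetch_pricing(order_id):
    return {"total": 19.99}

def assemble_order_view(order_id):
    """Fan out to each service, then reassemble the partial results
    back into the single transaction the caller expects."""
    services = [fetch_customer, fetch_inventory, fetch_pricing]
    view = {"order_id": order_id}
    with ThreadPoolExecutor(max_workers=len(services)) as pool:
        futures = [pool.submit(svc, order_id) for svc in services]
        for future in futures:
            # .result() re-raises any downstream failure, so a broken
            # service call surfaces here instead of being silently lost.
            view.update(future.result())
    return view

order = assemble_order_view("ORD-1001")
print(order)
```

The complexity hasn’t disappeared – the aggregation layer now owns the failure handling, retries and timeouts that used to live inside a single application.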
Lanowitz has seen DevOps teams struggle to manage microservices due to a lack of understanding of the services lifecycle. Here’s the bottom line: Don’t get into microservices if your organization doesn’t have really good, tried-and-true microservices management in place.
Ways to cut cost of API and microservices rework
There are technologies that can diminish the challenges in API and microservices implementation. Lanowitz’s advice was to evaluate service virtualization, virtual and cloud-based labs, data virtualization and test data virtualization technologies. The latter helps keep data secure and reduces the cost of rework, because it makes it possible to perform testing with data as close to production as possible. She covers these technologies in depth in her aforementioned book.
While new technologies like microservices may be an exciting prospect for those strictly working at the development and programming level, these trends may weigh a little more heavily on the minds of enterprise software architects.
Brian Foster, content lead at O’Reilly Media and co-chair of the O’Reilly Software Architecture Conference, said he recognizes the struggle today’s software architects face as businesses hustle to adopt the latest and greatest development technologies and methods.
“A lot of our core audience are people who have made the move to things like microservices, and are happy that they have, but they’re seeing the next wave of what they have to do,” he explained. “Now that they’ve spun up a few hundred services … what does it take to keep that running?”
Foster said that these architects also face a unique challenge in that they are often a bridge between C-level executives focused on business needs and development teams who want to change the way they work with software. Because of this, he said, these architects are often the ones tasked with deciding if the adoption of a certain technology or method truly is the answer to a particular business need.
“I think that’s the unique challenge of being an architect,” Foster said. “They have these great technology choices, and for some organizations it might be the right move to jump in. But for others, caution is necessary. [So] they want to understand the impacts not just from a technology perspective but also from a business perspective.”
If that’s not enough, many organizations also seem to have a lingering tension between developers who want to move forward and architects who want to do their due diligence. As Shawn Ryan, Axway’s vice president of digital as a service platform, pointed out in a Q&A about digital transformation needs, bridging that gap between software architects and developers proves to be an integral part of making a digital transformation happen.
“The developers say: ‘Get out of my way, let me build what I need to build,'” Ryan explained. “And then the architect [is responsible for] security and implementing policy. So bridging both of those personas is a first step in talking about managing the full lifecycle.”
Do you find that there are significant gaps between developers and architects? What is the best way to bridge that gap? Let us know with your comments.
Since the introduction of Monopoly in the early 1900s, board games have been a staple in households all around the world, remaining a major form of entertainment through the radio, TV and internet eras. But while many of our favorite games still live in cardboard-box form, computers have given these games a new life. It started with e-mail, floppy disks and CD-ROMs, but now web apps have disrupted the space as well. And now that applications, especially web apps, are easier to build than ever, how will those games’ paper-based counterparts fare?
One of the most famous examples of this transition is perhaps the game Civilization, which began as a board-based game of world domination. The game was eventually adapted into a PC format, which happened to suit the nature of the game better and, for the most part, displaced its physical ancestor.
Of course, the computer doesn’t always win. I’m sure many people would still prefer to play board-based Monopoly or Risk over their computer-based versions, especially if it required a CD or even a computer download. However, the rise of easy-to-build web apps is, for lack of better wording, a game changer.
Take the game Diplomacy, for example. This is another domination game where seven players seek to control a 1900s-themed Europe through negotiation with other players and the strategic movement of game pieces. But even though the game has a strong fan base (Henry Kissinger supposedly once called it his favorite game), there are a few things that make it a pain in the a** to play in person.
First, the rules dictate that the game can only be played with seven people – no more, no less. That would be easier to accomplish if it weren’t for how long the game takes: playing in person is usually a four- to five-hour affair, and there is a lot of down time. Players found a solution to this problem early on: Diplomacy was the first commercially licensed board game to be played by mail (only chess saw significant play-by-mail action earlier). But in the 1980s, people started to find healthier alternatives to play-by-mail Diplomacy, which led first to playing by email and, ultimately, to the creation of browser-based versions of Diplomacy such as Backstabbr.
But with web apps becoming even easier to build with the help of emerging web development tools, some available right in browsers like Firefox, are we essentially witnessing the final days of board games made of paper and plastic? Sure, playing a game over a screen may not provide the same experience as playing in person (as in, you can’t yell at your opponents when they beat you), but at what point does convenience become more important than experience?
Personally, I think the makers of board games are probably too smart to let web applications win, and what we will see is some kind of hybrid model. Clearly, electronic board games are not a new thing, but I wonder if we will start seeing Wi-Fi-connected board games that let you play with both those around you and those who are remote (maybe utilizing a web application-based connection?).
All I know is that as long as I can still yell “YAHTZEE!,” I’m happy.
NGINX, Inc. has announced the availability of NGINX Plus Release 12 (R12), the latest release of its application delivery platform. According to a press release from NGINX, R12 significantly improves NGINX Plus’s high-performance load balancer, content cache and web server, providing application development and operations teams with new features for delivering applications.
NGINX Plus R12 focuses on configuration management within a cluster, enhanced programmability with nginScript, and deeper monitoring and instrumentation of key application resources. It also provides the ability to automatically scale load balanced applications with proactive application-level health checks.
New capabilities in the NGINX Plus R12 release include a new process to reliably check and distribute load balancing and web serving configuration within a cluster of NGINX Plus servers. Additionally, the nginScript configuration language is fully supported in NGINX Plus. Advances in monitoring and instrumentation provide actionable insights on application performance and NGINX Plus tuning, and new caching features improve performance to enhance the end-user experience.
Finally, NGINX Plus load balancing has been enhanced with application level health checks to support the autoscaling of application resources in a safe, controlled fashion.
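As a rough sketch of what such an active, application-level health check looks like, here is a hypothetical configuration fragment. The upstream servers and the /healthz endpoint are invented for illustration; the health_check directive itself is an NGINX Plus feature and requires a shared-memory zone on the upstream:

```nginx
upstream app_backend {
    zone app_backend 64k;          # shared memory so all workers agree on peer state
    server app1.example.com:8080;
    server app2.example.com:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;

        # Active health check (NGINX Plus only): probe /healthz every 5s;
        # mark a server down after 2 failures, back up after 2 passes.
        health_check uri=/healthz interval=5s fails=2 passes=2;
    }
}
```

Servers that fail the probe are taken out of rotation automatically, which is what makes safe, controlled autoscaling of backend resources possible.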
Check out the full list of new and improved features in NGINX Plus R12.
Enterprises may often be exposed to rhetoric about moving away from an SOA-based software strategy and adopting a more microservices-focused approach. Whether you see microservices as the next evolution of SOA or even just SOA by another name, the fact is that enterprises aren’t necessarily required to choose between one and the other.
But combining microservices and SOA is more than a technological decision. It requires developers and operations people to have a deep understanding of the fundamental things that differentiate the original concepts surrounding SOA from the REST model that surrounds microservices. In his article on bringing the worlds of SOA and microservices together, software development pro Tom Nolle explains how REST is conceptually different from SOA particularly because of the shift from using modules to depending on location-independent resources. To me it seems akin to using a service like Spotify to listen to your music rather than exclusively listening to music saved on your computer’s hard drive.
This difference, Nolle explained, is the primary thing that developers will have to think about when trying to jam SOA and microservices together.
“Microservices evolve from the REST model, so they represent a different software architecture paradigm than SOA did initially,” he says in his article. “To put SOA under a microservices front, we’d have to implement it partly using SOA, which means harmonizing SOA with the RESTful front-end interface.”
Nolle also warns in his article on the links between SOA, web services and microservices that developers should consider advancements in the cloud and web services important in the transition from SOA to microservices, as microservices share many of the characteristics of RESTful, web-like, functional components. With this in mind, using a cloud-hosted API gateway can help developers solve latency issues related to deploying microservices in the cloud. While it is possible to present a microservice as a RESTful API without a manager, that approach may present problems, particularly when scaling the microservices under heavy loads. The API gateway can provide load balancing capabilities that help with this scaling. An alternative – a technology being further developed over the next year – is the use of load balancing DNS services such as NGINX Plus, the commercial edition of the popular open source NGINX server.
Wherever your journey on the road to microservices takes you, just remember: dream big, but think small.
What are your tips for developers trying to put microservices and SOA together? Let us know with your comments below.
We’re excited to inform you about an upcoming change to our site. On Thursday, January 19th, SearchSOA.com will officially change to SearchMicroservices.com. Likewise, the SOA Talk blog will become the Microservices Minute blog. You’ll also see some other changes, such as a new Twitter handle, @MicroservicesTT, and a new RSS feed called “News on microservices and service management.”
Don’t worry — we are not going to stop producing content focused on application development within the enterprise, which in many cases still revolves around concepts like SOA and web services. However, we will give the site and blog a more forward-looking direction heading into the new year. As a vital part of how DevOps teams are accelerating application deployment and change management, microservices has earned increased focus in our coverage.
What’s our mission with the new site? First, we want to focus on the wide breadth of microservices and related application development technologies, such as containers, cloud and APIs. We also want to provide decision makers with an independent perspective on how best to evaluate microservices trends and features. Finally, we want to act as a resource for microservices professionals to gain further knowledge about the technology and interact with other engaged members of the community.
Thanks for your continued readership, and please don’t hesitate to reach out with any questions about the site.
– The editors at SearchSOA.com (soon to be SearchMicroservices.com)
As we get closer to the year’s end, let’s take a look at some of the most popular topics we’ve covered on SOA Talk in 2016. Based on your readership, here are the top five most popular posts this year:
Our top SOA Talk blog post this year took a look at HTML5 as it celebrated the one-year anniversary of its official release, mainly asking the question: What makes HTML5 so great? It’s by no means perfect, as there are still questions about how much security HTML5 can provide and its usability on platforms like Android. But advocates still claim that it is part of a broader movement toward combining web apps and smart devices.
You could consider this post the obituary for traditional software-based enterprise middleware. Here we explore the thoughts and feelings around the industry when it comes to traditional middleware in the era of APIs, including what prompted OpenLegacy CEO Romi Stein to publish “An Ode to Middleware.” This post also explores why companies have a hard time accepting the reality of traditional middleware’s demise and hold on tightly to their existing middleware infrastructure.
It’s tough to find tech talent in the Valley. Tough competition in San Francisco for an elite class of qualified software engineers has led companies like Zendesk to adopt a distributed engineer approach that spans the globe. It’s not the easiest approach, but here we explore how Zendesk makes it work and how it is helping them survive the talent shortage.
One of the themes consistently trumpeted during this year’s presidential election was the demand for change. This post takes a look at why it’s important for developers to think about what they can do to instill real change within their organizations through the applications they create. We explore how free application development platforms and tools can help make this change happen – and why some developers may not be comfortable doing this.
As developer and writer Tom Nolle pointed out in his piece on microservices management, microservices are great – provided they are managed correctly. Otherwise, they can be a real problem. Whether they are already on their way or just starting their journey to microservices, one of the things developers will have to think about is how well their documentation keeps up with new styles and new rates of development. This post takes a look at why microservices can create a documentation nightmare and the tools developers should look to for help.