Software quality tools company SmartBear has now announced support for OpenAPI Specification (OAS) in its AlertSite product.
So let’s break that down and think about what it means in terms of the way the so-called API economy is developing.
The OpenAPI specification is a definition format to describe RESTful APIs (a web services interoperability standard) — it makes APIs easier to a) develop and b) integrate into a wider application structure because it maps out all the resources and operations associated with the API itself.
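To make that concrete, here is a minimal sketch in Python of what an OAS description boils down to (the 'Pet Store' paths and summaries are invented for illustration). Walking the paths and operations like this is roughly what any OAS-aware tool does to enumerate an API's surface – or, in AlertSite's case, to generate monitors from it:

```python
# A minimal OpenAPI 3.0 document, expressed as a Python dict for brevity
# (real OAS files are usually YAML or JSON; the endpoints here are made up)
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Pet Store", "version": "1.0.0"},
    "paths": {
        "/pets": {"get": {"summary": "List pets"},
                  "post": {"summary": "Create a pet"}},
        "/pets/{id}": {"get": {"summary": "Fetch one pet"}},
    },
}

# Enumerate every operation the spec maps out
for path, ops in spec["paths"].items():
    for method, op in ops.items():
        print(f"{method.upper()} {path}: {op['summary']}")
```

Because every resource and operation is declared up front, tooling can discover the whole API without ever calling it.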
AlertSite is a performance monitoring tool for web, mobile, cloud and APIs that simulates key user journeys from real browsers — this means that developers can monitor applications and website performance from the end user’s perspective.
So in summary… SmartBear has put higher level open specifications support into its own API monitoring tool.
API economy, API teams
The company says that the API economy continues to grow — and now we are seeing actual ‘API teams’ — who are increasingly adopting the OpenAPI Specification in order to standardize their APIs and improve maintenance, adoption, and consumption.
According to SmartBear VP Anand Sundaram, “However [despite the above being true], many teams lack the time or expertise to implement an API monitoring tool to ensure their APIs function as designed once they are deployed. With the new OpenAPI support in AlertSite, development and operations teams can instantly create API monitors from their existing OAS files, thereby extending OAS standardisation from design to deployment and gaining actionable insights into the performance and availability of their APIs.”
The new support for the OAS in AlertSite builds on existing API monitoring integrations with SoapUI and SoapUI Pro, and is the first of several new API monitoring integrations and functionalities planned for 2018.
As part of a continuing set of analysis posts dedicated to examining major developments across the major (and some lesser) open source Linux distributions, we consider 2017 at open German softwarehaus SUSE.
Back in March we saw SUSE complete the acquisition of OpenStack IaaS and Cloud Foundry PaaS ‘talent and technology’ assets from HPE.
SUSE said it plans to use the acquired assets to expand its OpenStack Infrastructure as a Service (IaaS) solution and accelerate the company’s play into the Cloud Foundry Platform-as-a-Service (PaaS) market.
Also in the early part of the year we saw Huawei and SUSE announce that SUSE Linux Enterprise Server is the preferred standard operating system for Huawei’s KunLun RAS 2.0.
May arrived and it brought news of SUSE unveiling its SUSE OpenStack Cloud Monitoring open source software. The product monitors and manages the health and performance of enterprise OpenStack cloud environments and workloads.
“Based on the OpenStack Monasca project, SUSE OpenStack Cloud Monitoring makes it easy for operators and users to monitor and analyse the health and performance of complex private clouds, delivers reliability, performance and high service levels for OpenStack clouds, and reduces costs by simplifying, automating and pre-configuring cloud monitoring and management,” said the company.
Back to school, Linux style
In May, SUSE opened the SUSE Academic Program which shares no – or low – cost open source software with students globally. This includes a training curriculum, tools and support to help schools, universities, teaching hospitals and academic organisations use, teach and develop open source software.
In June we saw SUSE launch CaaS (Container as a Service) Platform – a development and hosting platform for container-based applications and services. SUSE CaaS Platform lets IT operations and developers provision, manage and scale container-based applications and services to meet business goals faster.
In July, SUSE made SUSE Linux Enterprise Server for SAP Applications available as the operating system for SAP solutions on Google Cloud Platform (GCP).
“Customers can use it to leverage high performance virtual machines with proven price/performance advantages for SAP HANA workloads on GCP powered by SUSE Linux Enterprise Server for SAP Applications. It is the first supported Linux for SAP HANA on Google Cloud,” said the company.
CaaS Platform 2
Shortly after announcing CaaS Platform, SUSE launched CaaS Platform 2, a container management platform based on Kubernetes technology. SUSE also previewed SUSE Cloud Application Platform, which is based on Cloud Foundry and Kubernetes technologies.
“SUSE Openstack Cloud and SUSE Enterprise Storage make up key elements of SAP Cloud Platform – providing robust, enterprise-grade infrastructure services for running applications that allow businesses to collect, manage, analyse and leverage information of all types,” said the firm.
There was even more Huawei collaboration later in the year. Huawei and SUSE announced that they will collaborate to build a more reliable Mission Critical Server.
The server will support memory module hot swap, helping to slash unplanned maintenance time while keeping production systems up and running.
October saw SUSE announce that SUSE Linux Enterprise Server for SAP Applications will be available as an operating system for SAP solutions on the IBM Cloud. Additionally, IBM Cloud is now a SUSE Cloud Service Provider giving customers a supported open source platform.
And finally… November saw SUSE launch SUSE Cloud Application Platform, combining what it calls the world’s leading application delivery platform in Cloud Foundry with the most widely adopted container management framework in Kubernetes.
These have been combined to help application development and operations groups take better advantage of both technologies to accelerate application delivery and increase business agility.
It was in 2014 that SUSE entered its latest guise as part of the Micro Focus family, after a history of corporate changeovers. Despite this, it appears, the core Linux-driving mission is (if just a little more corporate these days)… basically the same.
Pass me a chameleon.
Computer Weekly Open Source Insider continues its analysis and deconstruction of major open source distros this month with a series of personal conversations inside Red Hat.
Stormy Peters, senior manager for the Red Hat community team has reflected on what open source has accomplished over the years.
Peters insists that the open source community has changed not only how software is developed but how companies collaborate. Pointing to the number of times she has read about things in science fiction books and then gone on to use them in everyday life, she says that even when new solutions are not completely built on open source, they very typically run on various pieces of open source infrastructure.
Her opinion is that the speed of innovation has only been possible because we are all cooperating at a tremendous level.
A catalyst force
Nick Hopman is senior director for emerging technology practices at Red Hat. He says that he views open source as much more than just a process to develop and expose technology – he sees it as a catalyst to drive change in every facet of society.
“Government, policy, medical diagnostics, process re-engineering, you name it. All these things can leverage open principles that have been perfected through the experiences of open source software development to create communities that drive change and innovation,” said Hopman.
Mainstreaming joint innovation
Jim Whitehurst, president and CEO of Red Hat, insists that over the next decade we will see entire industries based on open source concepts, as the sharing of information and joint innovation become mainstream.
“We’ll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realise sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world,” said Whitehurst.
A personal developer experience
Chris Wright, vice president and chief technology officer at Red Hat explains how he first got started in the open source community.
“Late 1995 or early 1996, I was out of university and working in my first “real” job. During school I had discovered UNIX and loved it. At work, our product was UNIX-based, and I was hungry to learn as much as I could about the programming language. Because of that, one thing was missing for me… and that was the ability to play with UNIX at home. UNIX itself was expensive and even more expensive was the hardware it ran on,” said Wright.
“A friend of mine worked at an ISP and he suggested I try Linux as a way to create a UNIX-like environment on a PC I had at home. So I dialed-up, connected to the Internet and began downloading over 50 floppy disks of Slackware, an early Linux distribution. The installation was tough – and configuring X for a graphical desktop was truly mysterious. But even this was interesting as I had to learn about Linux and my hardware to make it all work. Linux was still rough around the edges, but to me it was fun,” added Wright.
For Wright, the appeal of open source has always been the chance to collaborate with the community – indeed, he says he read the GNU Manifesto and was excited by the idea of software freedom.
Open source has, obviously, changed from being something hobbyist programmers do for fun – it has become a real working paradigm for software application development.
If there’s one thing that Linux needs to aid its march onwards it is (arguably) more enterprise robustness.
Actually, if there’s one thing that Linux needs for enterprise success it’s firms like Microsoft stating that they love Linux, but we’ve already experienced that epiphany, so what else can we hope for?
Openly open source German softwarehaus SUSE reflects the need for enterprise critical tech in its latest moves which see it expand its portfolio of business-critical computing infrastructure software.
The goal here is to provide so-called ‘workload predictability’ for enterprise users.
In terms of actual product, the company is offering SUSE Linux Enterprise Live Patching for IBM Power Systems and SUSE Linux Enterprise Real Time 12 Service Pack 3.
What is live (hot) patching?
Using live patching (sometimes called hot patching), sysadmins can apply patches to a Linux kernel without rebooting the system. Applications keep running during updates because the patching is independent of the applications running on the Linux kernel.
So is it really that easy? You just turn live patching on? Well, not exactly. Attempts at live patching Linux have been around for some years, but the fact that SUSE live patching has now been certified for IBM Power Systems and for SAP HANA is some evidence that this current generation of patching works effectively, at the enterprise level.
According to Forrester’s Richard Fichera, “Early versions of Linux hot patching has been around for several years, most notably in a company called Ksplice, acquired a few years ago by Oracle. But the real change happened earlier [in 2016] when SUSE declared that its hot-patch capability, kGraft, previously in limited availability, was now in GA, suitable for all production workloads.”
These offerings are part of SUSE’s software-defined infrastructure and application delivery products. SUSE says that customers are demanding live patching as it helps them improve business continuity and lower costs by reducing downtime, increasing service availability and enhancing security and compliance.
A working example
“The live patching function on SUSE Linux Enterprise Server is almost invisible: it just runs and there are no reboots,” said Hans Lenting, IT architect for SVHW, a government service organisation in the Netherlands that runs a large number of interconnected applications to support important municipal services. “It enables us to apply major maintenance and security patches with no downtime. Live patching is a huge benefit for the hypervisor layer, which we need to keep ‘in the air’ for as long as we can. Without this, we would be faced with bringing down all 40+ virtual machines each time we needed to apply critical patches – at least once a month.”
SUSE Linux Enterprise Real Time 12 SP3 is an open source real time operating system with process and task prioritisation and scheduling.
This latest release ships with an updated real time kernel for what we could define as advanced application workloads that require precise timing and synchronisation.
The development version of openSUSE Leap 15 has reached its beta phase builds and snapshots are available for testers.
As a free and open source (FOSS) operating system, Leap is derived from the source code of SUSE Linux Enterprise (abbreviated SLE, not SEL) and so is positioned in much the same space as CentOS (from Red Hat) and Ubuntu (from Canonical).
Leap was announced at SUSECon 2015.
In terms of Leap’s relationship with its enterprise mothership, Leap is upstream of SLE – that is, when enterprise functions are packaged and brought into SLE, Leap’s proximity to the motherlode will allow it to benefit from this higher level development.
“Exactly like the rolling development model used to make openSUSE Leap 42.3, Leap 15.0 will use the same model until its final build. No concrete milestones will be used building up to the final release, which is expected in late Spring. As bugs are fixed and new packages introduced or excluded, snapshots of the latest beta phase builds will be released once they pass openQA testing,” notes OpenSUSE writer Douglas DeMaio.
DeMaio mentions Spring – it doesn’t take a clairvoyant to find the dates for the openSUSE conference in Prague this May 2018.
The first beta version build (Build 109.3) of openSUSE Leap 15 was recently released and there are currently two follow-on beta builds that will feature minor improvements if they pass openQA.
Is Leap a winner?
Developers, users and open source industry watchers say they like Leap’s stability (it is said to be as stable as Debian) and its overall robustness and applicability to server-grade deployment (a space that CentOS has been well-suited for)… and, not forgetting, home users, who need a stable PC operating system.
It is the server space that open source operating systems must (for this decade at least) really do battle on if they want to win the hearts and minds of not just enthusiasts, but enterprise sysadmins… so this is perhaps the best long term gauge to look for to assess market penetration.
GitLab is expanding… but what is its position in the total source code repository management universe?
Let’s draw a couple of lines first with a nod to the SESYNC research support community for its clarification.
GitHub is open source and free.
All code hosted on GitHub must be made publicly available (unless it forms part of a paid account) and any developer or software engineer is permitted to a) push code to GitHub and b) offer suggestions designed to enhance, improve or alter the service.
A load of gits
GitLab is like GitHub, but not completely the same.
GitLab is used by commercial organisations for internal management of their own Git repositories.
Git itself is a source code versioning system in its own right designed to allow developers to track changes and push or pull changes from remote resources.
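Under the hood, that tracking works by content-addressing: Git stores every file as a ‘blob’ object named by the SHA-1 hash of a short header plus the file’s contents. A minimal Python sketch reproduces the digest that `git hash-object` computes:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    # Git names each blob by the SHA-1 of "blob <size>\0" + contents,
    # which is why identical files are only ever stored once.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The same digest `git hash-object` reports for a file containing "hello\n"
print(git_blob_hash(b"hello\n"))
```

Every commit, tree and tag is addressed the same way, which is what makes pushing and pulling between remote repositories cheap and verifiable.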
GitLab exists as one of the most popular services of its kind, although it should be noted that Bitbucket also exists in this space.
Definitions over then, GitLab Inc (the company behind the product) calls itself an integrated product for the entire DevOps lifecycle.
GitLab Inc has now acquired Gemnasium, a company that provides software to help developers mitigate security vulnerabilities in open source code.
GitLab says Gemnasium’s security scanning functionality will fit natively into GitLab’s CI/CD pipelines – as in Continuous Integration / Continuous Delivery – to perform automated application security testing.
Dependency tree [roots]
According to GitLab, as the dependency tree [roots] of open source software goes deeper, it can be daunting or even impossible for developers to keep track of which software they are using and what ramifications its use may have on the business.
Sid Sijbrandij, CEO of GitLab has said that GitLab has already begun adding native security functionality, such as the addition of Static Application Security Testing (SAST) in the 10.3 release, along with Dynamic Application Security Testing (DAST) and Container Scanning in the 10.4 release.
Red Hat has been out shopping.
The enterprise focused open source platform company has bought CoreOS, a specialist technology player known for its Kubernetes and container-native solutions.
Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications — it works by grouping containers (that go to make up an application) into logical units so that they can be a) discovered and b) subsequently managed.
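That grouping is done with labels and selectors: each container workload carries key/value labels, and other components find their members by matching on them. A toy Python version of the matching logic (the pod names and labels are invented):

```python
def select(pods, selector):
    """Return pods whose labels include every key/value pair in selector --
    a toy version of Kubernetes label selection."""
    return [p for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db"}},
]

print([p["name"] for p in select(pods, {"app": "web"})])  # ['web-1', 'web-2']
```

In a real cluster this is how a Service discovers its backing pods and how a Deployment knows which replicas it manages.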
Why did Red Hat buy CoreOS?
Because it wants to champion the “any application deployed in any environment” line much loved among many key tech firms including Microsoft, Oracle and others… and, logically, more containerisation should open the door to more “any” deployment scenarios.
But Red Hat already has a “broad” (its words, not ours) Kubernetes and container-based portfolio, including Red Hat OpenShift (an open source container application platform based on top of Docker containers and the Kubernetes container cluster manager), so why does it need CoreOS specifically?
Put simply, Red Hat seeks to acquire CoreOS because CoreOS has a whole heap of hard core container management DNA in it, and Red Hat might just want to cross-pollinate (some would say absorb and fold into) some of its own container stack with the capabilities in CoreOS.
CoreOS container DNA
What has CoreOS produced that has gotten Red Hat salivating?
Let’s start with CoreOS Tectonic – this is an open source Kubernetes platform for automated operations that enables portability across private and public cloud providers.
Also in the DNA stream, it offers CoreOS Quay, a container registry.
CoreOS is also a leading contributor to both Kubernetes itself and Container Linux, a lightweight Linux distribution created and maintained by CoreOS that automates software updates and is streamlined for running containers.
Not bored yet? CoreOS is also known for etcd, a distributed data store for Kubernetes; and rkt, an application container engine, donated to the Cloud Native Computing Foundation (CNCF), that helped drive the current Open Container Initiative (OCI) standard.
Alex Polvi, CEO (for now) of CoreOS has said that Red Hat and CoreOS’s relationship began many years ago as open source collaborators developing some of the key innovations in containers and distributed systems, helping to make automated operations a reality.
Want to be an upstream (i.e. close to distributed source code maintainer/owner) hybrid cloud player with extreme container automation controls across distributed data stores with the power of registry control that feeds container-specific application engines? Well, Red Hat does.
Automated software container security company Twistlock claims to be passionate about open source contributions.
Company CTO John Morello points out that all too few firms actually contribute with ‘code commits’ despite many claiming to be open source advocates, or openly stating their use of open technologies.
Spin or substance here then?
Well… Twistlock has more technical tutorials on its blog than it has press releases on its media pages, so there may be some value here.
Morello suggests that although the rise of open source software (OSS) is a good thing, he bemoans the lack of firms actively contributing back to community contribution model code bases.
“The biggest challenge for OSS is that so many successful SaaS platforms are built and extended with OSS with few contributions back out to the world. Some companies are really good about contributing their internally focused innovations, but it’s not universal and there are many projects that would greatly benefit from increased interaction from SaaS vendors, especially in the areas of automation, scale and security,” said Morello.
Automation, scale & security
The three areas highlighted here are telling, i.e. why should automation, scale and security be the most (potentially) pertinent areas for sharing code?
- Scale: growing pains are always tough and taking software from what was conceived to be sufficient for its initial deployment upwards to what may be a radically increased scale is a ‘learning’ in architectural terms that should be shared back to the community.
- Security: similar to the above, firms should share best practices back with the community and perhaps concentrate on particularly focused engagement inside the same defined industry verticals where technology use cases are most similar.
Twistlock’s Morello thinks that the most significant accomplishment (of the open source software movement in general) is the fact that right now, from any cheap PC, you can download, build and run the exact same software used to run Google, Facebook, Amazon and other leading edge organisations, add to it, and then have your contributions reused by those organisations.
Isn’t that enough to make you want to contribute?
Facebook doesn’t usually make technical headlines unless the social site is being criticised over the implementation of its algorithm structure and questioned over whether it is unfairly promoting certain aspects of content to its users.
Unusually then, Facebook appears to have bucked that trend with a new development designed to define a precise moment of time.
A ‘flick’ (short for frame-tick) is precisely 1/705,600,000 of a second.
We have hours, minutes and seconds — and we even have nanoseconds, which, if you need a reminder, is 1/1,000,000,000 of a second – so why do we need flicks?
In simple terms, this measure is purported to be useful in video and audio production where time segments are defined and spliced down to this scale.
According to the Facebook Oculus GitHub page: “This unit of time is the smallest time unit which is LARGER than a nanosecond, and can in integer quantities exactly represent a single frame duration for [various Hz values]. When working creating visual effects for film, television and other media, it is common to run simulations or other time-integrating processes which subdivide a single frame of time into a fixed, integer number of subdivisions. It is handy to be able to accumulate these subdivisions to create exact 1-frame and 1-second intervals, for a variety of reasons.”
The flick is open source technology and is defined in the C++ programming language — as suggested above, its use will enable video production engineers to sync video materials using whole integers.
At the highest usable resolution, nanoseconds fail to evenly divide common film & media ‘framerates’ and so now we have the flick.
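The arithmetic is easy to check; a quick Python sketch shows that one second’s worth of flicks divides evenly by common video and audio rates, where a second’s worth of nanoseconds often does not:

```python
FLICKS_PER_SECOND = 705_600_000  # one flick = 1/705,600,000 of a second

# Common video frame rates and audio sample rates (in Hz)
for rate in (24, 25, 30, 48, 60, 120, 44_100, 48_000):
    flick_exact = FLICKS_PER_SECOND % rate == 0
    ns_exact = 1_000_000_000 % rate == 0
    print(f"{rate} Hz: {FLICKS_PER_SECOND // rate} flicks per frame "
          f"(exact: {flick_exact}); nanoseconds exact: {ns_exact}")
```

A 24 fps frame, for example, is exactly 29,400,000 flicks, whereas in nanoseconds it is the non-integer 41,666,666.67 ns — which is precisely the rounding problem the flick was designed to remove.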
The Linux operating system (OS) will turn 30 in the year 2021.
We know that Linus Torvalds first penned (typed) his work plans for what turned out to be Linux on a Usenet posting as follows:
“I’m doing a (free) operating system (just a hobby, won’t be big and professional like GNU) for 386(486) AT clones,” wrote Torvalds.
No brief history of Linux is needed here, there are plenty of write ups detailing the origins of UNIX, MINIX, the birth of GNU and Richard Stallman’s creation of the GNU General Public License.
Rather than nostalgia then, betwixt the 25th and 30th anniversaries as we stand in 2018, let’s look at how far the major distributions (and the smaller but still mighty ones too) have come over the last year and what they might be able to bring forward between now and the 30th celebrations.
The thousand yard stare
Okay then, let’s just have one little piece of history before we look forward.
There’s a fabulous moment at the very start of the ‘Revolution OS’ movie when open source advocate, software programmer and technical author Eric Raymond describes meeting a Microsoft executive in an elevator.
They exchange pleasantries, but Raymond suggests that the be-suited Redmond exec looks down on him somewhat as he asks him: “So what do you do?”… wearing scruffy hacker clothes, as he was.
“I just looked at him and gave him the thousand yard stare and said ‘I’m your worst nightmare’,” said Raymond.
We’ve come a long way since the time of thousand yard stares, obviously.
In an era when even Microsoft ‘loves’ Linux, even the Redmond team appear to have largely exonerated themselves from any accusations of mere openwashing and much of the proprietary past – even if the firm’s new religion open strategy does align all roads to the Azure cloud, ideally.
The big three (& the others)
So to the state of the Linux nation then.
Today in 2018 we know that the ‘big three’ open (but commercially licensed, maintained and served with appropriate support options) Linux distributions are:
- Red Hat Enterprise Linux (RHEL) – by Red Hat.
- Ubuntu for enterprise – by Canonical.
- SUSE Linux – by SUSE, owned by Micro Focus.
The big three are joined by a second group of in some cases quite well-known other operating systems. Never ever referred to as also-rans, lower-tier or lesser in any real sense, the ‘other’ open source operating systems that come to mind include names like Debian, Linux Mint, Arch Linux, CentOS and the list goes on.
Enough intros to the distros already… let’s dive in and note the most interesting recent developments and ask what to expect on the road ahead.
To be continued in other stories…