Open Source Insider


March 13, 2018  12:21 PM

InfluxData: Fundamentally, all sensor data is time series data

Adrian Bridgwater

An IoT developer is now ‘a thing’ – well, a person, a defined entity and a software engineering sub-discipline.

In the rush and drive to provide this genre of programmer/developers with tools and mechanics, we find InfluxData.

InfluxData describes itself as an open source firm specifically dedicated to ‘metrics and events’ for developers to build IoT analytics and monitoring applications.

The San Francisco-headquartered company has now joined the PTC Partner Network; PTC itself is a firm known for its Product Lifecycle Management (PLM) approach.

InfluxData will integrate with PTC’s ThingWorx IoT package.

As part of the PTC Partner Network, InfluxData will be accessible via the ThingWorx Marketplace for developers to use its time series data technology in their IoT applications.

The idea is that developers will be able to store, analyse and act on IoT data in real time.
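
To make that concrete, here is a minimal sketch of the workflow being described, writing one sensor reading as a time series point and reading it back, assuming the open source ‘influxdb’ Python client (1.x API) and a local InfluxDB instance; the database, tag and field names are purely illustrative.

```python
# A minimal sketch: one temperature reading stored as a time series
# point in InfluxDB. Assumes the 'influxdb' Python client (1.x API)
# and a local instance; 'iot', 'room-7' and 'celsius' are illustrative.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="iot")
client.create_database("iot")  # idempotent in recent InfluxDB releases

client.write_points([{
    "measurement": "temperature",
    "tags": {"sensor": "room-7"},       # indexed metadata
    "time": "2018-03-13T12:21:00Z",     # every point carries a timestamp
    "fields": {"celsius": 21.4},        # the actual reading
}])

# Act on the data: pull the last hour of readings back out with InfluxQL
result = client.query("SELECT celsius FROM temperature WHERE time > now() - 1h")
print(list(result.get_points()))
```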

Sensor truth 101

“Fundamentally, all sensor data is time series data,” said Brian Mullen, VP of business development at InfluxData.

“Any meaningful IoT application should be accompanied by a data platform built specifically for that purpose. With this in mind, we are excited to partner with PTC to make InfluxData more compatible with the ThingWorx industrial innovation platform. We are solving the time series data challenge for IoT developers so they can focus on their primary objective of building applications,” said Mullen.

PTC says it has built up its partner network to assemble a lineup of helpful and specialised tools that accelerate the development of IoT applications.

Apps to microservices — systems to sensors

ThingWorx users can now use InfluxData to build monitoring, alerting and notification applications for ThingWorx-connected devices and sensors — and also, IoT applications supporting millions of events per second.

InfluxData has built its developer and customer base across industries including manufacturing, financial services, energy and telecommunications.

The firm claims that its essentially open source platform helps users get to data-driven real-time actions and a consolidated single view of their infrastructure – from applications to microservices and from systems to sensors.


February 23, 2018  8:02 AM

Canonical Ubuntu 2017 milestones, a year in the rulebook

Adrian Bridgwater

Rounding out our analysis of some of the major Linux platform developments seen throughout 2017, let’s turn our attention to Canonical and its Ubuntu distro.

As we know, in programming, canonical means ‘according to the rules’ — and non-canonical means ‘not according to the rules’… and in the early Christian church, the ‘canon’ was the officially chosen text.

So has Canonical been breaking rules with Ubuntu in 2017, or has it been writing its own rulebook?

Back in April we saw an AWS-tuned kernel of Ubuntu launched; the move to cloud is unstoppable, clearly. We also saw Ubuntu version 17.04 released, with Unity 7 as the default desktop environment. This release included optimisations for environments with low-powered graphics hardware.

“Especially suited to virtual machines or remote desktop situations. These changes have made their way back in to 16.04 LTS to be supported until April 2021,” said Canonical.

The early part of the summer brought certified Ubuntu images being made available on the Oracle Bare Metal Cloud platform. Following this came the launch of Canonical Kernel Livepatch and the releases of Conjure-up 2.2.0 and Juju 2.2.0, both fresh from the kitchen.

Windows 10 loves Ubuntu

In July we were told that Windows 10 loves Ubuntu – this meant that Ubuntu was now available as an app from the Windows Store.

Also in summer, Canonical launched new enterprise Kubernetes packages – Kubernetes Discoverer and Explorer.

As we reached September 21st, we saw Canonical release an Azure-tuned kernel for Ubuntu; Ubuntu 16.04 was also selected for Samsung Artik gateway modules.

Entering October we saw version 17.10 released, featuring the return to the GNOME desktop and Wayland as the default display server, with the option of Xorg — and into November we saw the Up2 Grove IoT development kit with Ubuntu launched.

Rounding out the year in November and December, Rancher Labs and Canonical announced their Cloud Native Platform, and Canonical achieved FIPS certification for the Ubuntu 16.04 release.

As we turn into 2018 we can see Canonical pushing to extend its partner network, more snaps being published (the Skype snap is the major release of note) and a Storage Made Easy charm published to the Juju ecosystem.

Desktop delights

Commenting on the desktop division, Will Cooke, engineering director for Ubuntu desktop at Canonical, has said that 18.04 LTS (codenamed Bionic Beaver) is the next LTS release of Ubuntu, due in April 2018.

“It will feature GNOME Shell as the desktop environment on the desktop and will be supported for five years. It includes the latest versions of Firefox and LibreOffice as well as a host of other applications and games which make it the secure environment for developers, business and home users alike. The subsequent release 18.10 will be an opportunity to explore more options for the desktop default package selection and features. We will be distributing more and more applications as snaps giving users an easy way to ensure that their favourite apps are always up to date and we will be working with key application vendors to bring a wider choice of software to their desktop,” said Cooke.

Canonical’s Ubuntu doesn’t appear to spend more than a couple of weeks without a noteworthy release or update of some form. Its development team would probably say that micro-releases are happening almost constantly, such is the nature of Continuous Delivery (CD) in the modern age, especially with regard to operating systems.

Will the Linux home user desktop ever become a default reality as a result of all this work? Well, even Microsoft loves Linux, so anything can happen.

February 22, 2018  8:00 AM

Splunk competitor Logz.io open sources two log analytics tools

Adrian Bridgwater

Splunk startup competitor Logz.io has been rolling out new tools and new projects on the back of the seemingly healthy venture funding injections that came in last year.

The log analysis firm has now come forward with two new open source projects, Sawmill and Apollo.

The two tools were created and used in Logz.io’s own development and data ingestion pipeline, but are now being released to the developer community to build and run log analysis environments. Logz.io’s log analytics platform combines machine learning with the open source ELK (Elasticsearch, Logstash, Kibana) stack to synthesise machine data, user behaviour and community knowledge into analytics results.
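
The hosted Logz.io platform is not itself what is being open sourced here, but the ELK building blocks beneath it are. As a minimal sketch of that foundation, assuming a recent version of the official ‘elasticsearch’ Python client and a local node (the index and field names are made up):

```python
# Index one log event into Elasticsearch (the 'E' in ELK), then search
# it back. Assumes a local node; 'app-logs' and the fields are made up.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="app-logs", document={
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "ERROR",
    "service": "checkout",
    "message": "payment gateway timed out",
})
es.indices.refresh(index="app-logs")  # make the document searchable now

# Kibana normally sits on top; the same search is available via the API
hits = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
print(hits["hits"]["total"])
```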

“Our team worked endlessly to ensure these tools make data ingestion, processing and building applications easier and more scalable,” said Logz.io CEO Tomer Levy.

Two new data tools

Sawmill is a high-performance Java library created for processing, parsing and manipulating large datasets in a horizontally scalable manner. It can be used for various purposes, from machine data to time series data and business data.
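
Sawmill itself is a Java library, so the snippet below is not its API; it is just a toy Python equivalent of the kind of pipeline step such a tool encodes: parse a raw line, normalise one field, rename another for downstream tools. The log format is hypothetical.

```python
import re

# One grok-style pattern: timestamp, level, free-text message
LINE = re.compile(r"(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)")

def process(raw_line: str) -> dict:
    """Turn one raw log line into a structured document."""
    doc = LINE.match(raw_line).groupdict()
    doc["level"] = doc["level"].upper()   # normalise the severity field
    doc["@timestamp"] = doc.pop("ts")     # rename for downstream tools
    return doc

print(process("2018-02-22T08:00:00Z warn disk usage at 91%"))
# {'level': 'WARN', 'msg': 'disk usage at 91%', '@timestamp': '2018-02-22T08:00:00Z'}
```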

Apollo is an advanced visual layer built on top of Kubernetes created to automate the process of deploying containers. The tool enables container orchestration for microservices and so is used to build and deploy complex applications.
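
Apollo’s own interface is visual and its internals are its own, but the Kubernetes deployment it automates looks roughly like this with the official ‘kubernetes’ Python client; a sketch only, with illustrative names and image.

```python
# Create a two-replica Deployment on a cluster reachable via
# ~/.kube/config. Assumes the official 'kubernetes' Python client.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo", image="nginx:1.13")]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```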

February 20, 2018  9:52 AM

Can one ‘multi-model’ database rule them all?

Adrian Bridgwater

In open source, we trust community – and as such, we might reasonably trust benchmarking studies that have been driven by community groups, in theory at least.

ArangoDB’s open source NoSQL performance benchmark series is one such open study.

The native ‘multi-model’ NoSQL database company has even published the necessary scripts required to repeat the benchmark.

A multimodel (or multi-model) database is a data processing platform that supports multiple data models, which define the parameters for how the information in a database is organised and arranged. Being able to incorporate multiple models into a single database lets users meet various application requirements without needing to deploy different database systems.
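
As a minimal sketch of that idea, assuming the ‘python-arango’ driver against a local ArangoDB with pre-created collections (‘users’ as a document collection, ‘knows’ as an edge collection, all names illustrative), documents and graph edges live in one database and are queried with one language, AQL:

```python
from arango import ArangoClient

# Assumes a local ArangoDB with database 'shop' and collections
# 'users' (document) and 'knows' (edge) already created.
db = ArangoClient().db("shop", username="root", password="pass")

db.collection("users").insert({"_key": "alice", "age": 29})  # document model
db.collection("users").insert({"_key": "bob", "age": 31})
db.collection("knows").insert({"_from": "users/alice", "_to": "users/bob"})  # graph model

# One AQL query traverses the graph and reads document attributes
cursor = db.aql.execute(
    "FOR friend IN 1..1 OUTBOUND 'users/alice' knows RETURN friend.age"
)
print(list(cursor))  # [31]
```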

The goal of the benchmark is to measure the performance of each database system when there is no cache used.

For the 2018 benchmark, three single-model database systems were compared against ArangoDB: Neo4j for graph; MongoDB for document; and PostgreSQL for relational database.

Additionally, it tested ArangoDB against a multi-model database, OrientDB.

Benchmark parameters

The benchmark used NodeJS 8.9.4. The operating system for the servers was Ubuntu 16.04, including the OS-patch 4.4.0-1049-aws — this includes Meltdown and Spectre V1 patches. Each database had an individual warm-up.
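
ArangoDB publishes its actual benchmark scripts separately, so the sketch below is not those; it simply shows the shape of the methodology described above: warm each system up individually, then time a large batch of uncached single reads.

```python
import time

def bench(single_read, keys, warmup=1_000, runs=100_000):
    for k in keys[:warmup]:                # individual warm-up phase
        single_read(k)
    timed = keys[warmup:warmup + runs]     # fresh keys, no cache hits
    start = time.perf_counter()
    for k in timed:
        single_read(k)
    return len(timed) / (time.perf_counter() - start)  # reads per second

# usage sketch: bench(lambda k: db.get(k), keys)  # 'db.get' is hypothetical
```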

What ArangoDB has been trying to suggest (would ‘spin’ be too cruel?) is how a multi-model database compares to single-model databases in their specialities.

In fundamental queries like Single Read, Single Write and Single Write Sync, ArangoDB says its technology outperformed PostgreSQL.

Claudius Weinberger, CEO of ArangoDB, said: “One of our main objectives, when conducting the benchmark, is to demonstrate that a native multi-model database can compete with single-model databases on their home turf. To get more developers to buy-in to the multi-model approach, ArangoDB needs to continually evolve and innovate.”

The company lists a series of similarly “positive” (its term, not ours) performance stats in areas including document aggregation, computing statistics about age distribution and benchmark results that profile data, shortest path and memory usage.

Need for debate

We’ve been talking about multi-model databases for perhaps half a decade now. The promise is an end to the ‘polyglot persistence’ scenario, where an IT team has to use a variety of databases for different data model requirements and so ends up with multiple storage and operational requirements — and then the additional task of integrating that stack and ensuring fault tolerance controls are applied across the spectrum. Multi-model does indeed provide a means of alleviating those concerns… but we need to hear some balancing arguments put forward by the single-model cognoscenti in order for us to judge more broadly for all use cases.

February 19, 2018  9:21 AM

Canonical’s got Juju eyeballs for storage

Adrian Bridgwater

Canonical is mixing new potions in its Juju charm store.

Juju is Canonical’s open source modelling tool for cloud software — it handles operations designed to deploy, configure, manage, maintain and scale applications via the command line interface, or through its optional GUI.

The Juju charm store is an ‘online marketplace’ where charms (and bundles of charms) can be uploaded, released (published) and optionally shared with other users.

Recommended charms have been vetted and reviewed by a so-called ‘Juju Charmer’ and all updates to the charm are also vetted prior to landing — there is also a ‘Community’ section of charms that have not enjoyed the same ratification process.
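
For a flavour of what actually lives inside a charm, here is a minimal sketch of one hook written against the era-appropriate reactive framework; ‘myapp’ and its state names are hypothetical, and a real charm would fetch and configure its workload where the ellipsis sits.

```python
# A toy reactive charm layer: run once, mark the state, report status.
from charms.reactive import when_not, set_state
from charmhelpers.core import hookenv

@when_not("myapp.installed")
def install_myapp():
    hookenv.status_set("maintenance", "installing myapp")
    # ... fetch, install and configure the workload here ...
    set_state("myapp.installed")
    hookenv.status_set("active", "myapp ready")
```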

New to the charm store this month is the Storage Made Easy (SME) Enterprise File Fabric charm.

This technology is designed to give operations engineers access to Storage Made Easy’s data store unification and governance technology.

“Storage Made Easy’s participation in the Charm Partner Program and the release of our own Juju charm aligns with our mandate to make storage easier to use and secure whether on-premises, or in private or public clouds. This is an important milestone in our partnership with Canonical offering solutions that help customers deploy their applications quickly, securely and at scale,” said Steven Sweeting, director of product management at Storage Made Easy. “We’re also pleased to support JAAS – Juju-as-a-Service – for even faster deployment”.

Juju provides reusable, abstracted operations across hybrid cloud and physical infrastructures. Integration points and operations are encoded in these charms by the vendors and community members who know an app best.

Looks like Canonical has Juju eyeballs for storage.

February 15, 2018  8:32 AM

API teams are now ‘a thing’

Adrian Bridgwater

Software quality tools company SmartBear has now announced support for OpenAPI Specification (OAS) in its AlertSite product.

So let’s break that down and think about what it means in terms of the way the so-called API economy is developing.

The OpenAPI specification is a definition format to describe RESTful APIs (a web services interoperability standard) — it makes APIs easier to a) develop and b) integrate into a wider application structure because it maps out all the resources and operations associated with the API itself.
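
The specification itself is language-neutral (typically YAML or JSON); below is a minimal, illustrative OpenAPI 3.0 description of a one-endpoint API, built as a Python dict purely for demonstration.

```python
import json

# A minimal OpenAPI 3.0 document: metadata plus one GET operation
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{id}": {
            "get": {
                "summary": "Fetch one order",
                "parameters": [{
                    "name": "id", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The order"}},
            }
        }
    },
}
print(json.dumps(spec, indent=2))
```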

AlertSite is a performance monitoring tool for web, mobile, cloud and APIs that simulates key user journeys from real browsers — this means that developers can monitor applications and website performance from the end user’s perspective.

So in summary… SmartBear has put higher level open specifications support into its own API monitoring tool.

API economy, API teams

The company says that the API economy continues to grow — and now we are seeing actual ‘API teams’, who are increasingly adopting the OpenAPI Specification in order to standardise their APIs and improve maintenance, adoption and consumption.

According to SmartBear VP Anand Sundaram, “However [despite the above being true], many teams lack the time or expertise to implement an API monitoring tool to ensure their APIs function as designed once they are deployed. With the new OpenAPI support in AlertSite, development and operations teams can instantly create API monitors from their existing OAS files, thereby extending OAS standardisation from design to deployment and gaining actionable insights into the performance and availability of their APIs.”
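
AlertSite’s monitor generation is proprietary, but the underlying idea can be sketched generically: walk the paths in an OAS document (such as the ‘spec’ dict above) and record status and latency for each GET operation. The base URL and parameter fill below are hypothetical.

```python
import time
import requests

def monitor(spec, base_url="https://api.example.com"):
    """Poll every GET operation described in an OAS dict."""
    for path, ops in spec["paths"].items():
        if "get" not in ops:
            continue
        url = base_url + path.replace("{id}", "123")  # toy parameter fill
        start = time.perf_counter()
        resp = requests.get(url, timeout=10)
        latency_ms = (time.perf_counter() - start) * 1000
        print(f"GET {path}: {resp.status_code} in {latency_ms:.0f} ms")
```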

The new support for the OAS in AlertSite builds on existing API monitoring integrations with SoapUI and SoapUI Pro, and is the first of several new API monitoring integrations and functionalities planned for 2018.

February 13, 2018  8:53 AM

SUSE 2017 milestones, a year in the kernel

Adrian Bridgwater

As part of a continuing set of analysis posts dedicated to examining major developments across the major (and some lesser) open source Linux distributions, we consider 2017 at open German softwarehaus SUSE.

Back in March we saw SUSE complete the acquisition of OpenStack IaaS and Cloud Foundry PaaS ‘talent and technology’ assets from HPE.

SUSE said it plans to use the acquired assets to expand its OpenStack Infrastructure as a Service (IaaS) solution and accelerate the company’s play into the Cloud Foundry Platform-as-a-Service (PaaS) market.

Also in the early part of the year we saw Huawei and SUSE announce that SUSE Linux Enterprise Server is the preferred standard operating system for Huawei’s KunLun RAS 2.0.

May arrived and it brought news of SUSE unveiling its SUSE OpenStack Cloud Monitoring open source software. The product monitors and manages the health and performance of enterprise OpenStack cloud environments and workloads.

“Based on the OpenStack Monasca project, SUSE OpenStack Cloud Monitoring makes it easy for operators and users to monitor and analyse the health and performance of complex private clouds, delivers reliability, performance and high service levels for OpenStack clouds, and reduces costs by simplifying, automating and pre-configuring cloud monitoring and management,” said the company.

Back to school, Linux style

In May, SUSE opened the SUSE Academic Program which shares no – or low – cost open source software with students globally. This includes a training curriculum, tools and support to help schools, universities, teaching hospitals and academic organisations use, teach and develop open source software.

In June we saw SUSE launch CaaS (Container as a Service) Platform – a development and hosting platform for container-based applications and services. SUSE CaaS Platform lets IT operations and developers provision, manage and scale container-based applications and services to meet business goals faster.

In July, SUSE made SUSE Linux Enterprise Server for SAP Applications available as the operating system for SAP solutions on Google Cloud Platform (GCP).

“Customers can use it to leverage high performance virtual machines with proven price/performance advantages for SAP HANA workloads on GCP powered by SUSE Linux Enterprise Server for SAP Applications. It is the first supported Linux for SAP HANA on Google Cloud,” said the company.

CaaS Platform 2

Shortly after announcing CaaS Platform, SUSE launched CaaS Platform 2, a container management platform based on Kubernetes technology. SUSE also previewed SUSE Cloud Application Platform, which is based on Cloud Foundry and Kubernetes technologies.

“SUSE Openstack Cloud and SUSE Enterprise Storage make up key elements of SAP Cloud Platform – providing robust, enterprise-grade infrastructure services for running applications that allow businesses to collect, manage, analyse and leverage information of all types,” said the firm.

There was even more Huawei collaboration later in the year. Huawei and SUSE announced that they will collaborate to build a more reliable Mission Critical Server.

It will lead the way as a mission critical server that supports memory module hot swap, helping slash unplanned maintenance time while keeping customers’ production systems up and running.

SAP apps

October saw SUSE announce that SUSE Linux Enterprise Server for SAP Applications will be available as an operating system for SAP solutions on the IBM Cloud. Additionally, IBM Cloud is now a SUSE Cloud Service Provider giving customers a supported open source platform.

And finally… November saw SUSE launch SUSE Cloud Application Platform to provide enterprises with Cloud Foundry, the world’s leading application delivery platform, and Kubernetes, the most widely adopted container management framework.

These have been combined to help application development and operations groups take better advantage of both technologies to accelerate application delivery and increase business agility.

It was in 2014 that SUSE went into its next guise as part of the Micro Focus family, after a history of corporate changeovers. Despite this, it appears, the core Linux-driving mission is (if just a little more corporate these days)… basically the same.

Pass me a chameleon.

February 7, 2018  7:43 AM

Red Hat: from open source genesis to mainstreaming revelations

Adrian Bridgwater

Computer Weekly Open Source Insider continues its analysis and deconstruction of major open source distros this month with a series of personal conversations inside Red Hat.

Stormy Peters, senior manager for the Red Hat community team, has reflected on what open source has accomplished over the years.

Peters insists that the open source community has changed not only how software is developed but how companies collaborate. Pointing to the number of times she has read about things in science fiction books and then gone on to use them in everyday life, she says that even when new solutions are not completely built on open source, they very typically run on various pieces of open source infrastructure.

Her opinion is that the speed of innovation has only been possible because we are all cooperating at a tremendous level.

A catalyst force

Nick Hopman is senior director for emerging technology practices at Red Hat. He says that he views open source as much more than just a process to develop and expose technology – he sees it as a catalyst to drive change in every facet of society.

“Government, policy, medical diagnostics, process re-engineering, you name it. All these things can leverage open principles that have been perfected through the experiences of open source software development to create communities that drive change and innovation,” said Hopman.

Mainstreaming joint innovation

Jim Whitehurst, president and CEO of Red Hat, insists that over the next decade we will see entire industries based on open source concepts, as the sharing of information and joint innovation become mainstream.

“We’ll see this impact every sector, from non-profits, like healthcare, education and government, to global corporations who realise sharing information leads to better outcomes. Open and participative innovation will become a key part of increasing productivity around the world,” said Whitehurst.

A personal developer experience

Chris Wright, vice president and chief technology officer at Red Hat, explains how he first got started in the open source community.

“Late 1995 or early 1996, I was out of university and working in my first “real” job. During school I had discovered UNIX and loved it. At work, our product was UNIX-based, and I was hungry to learn as much as I could about the programming language. Because of that, one thing was missing for me… and that was the ability to play with UNIX at home. UNIX itself was expensive and even more expensive was the hardware it ran on,” said Wright.

“A friend of mine worked at an ISP and he suggested I try Linux as a way to create a UNIX-like environment on a PC I had at home. So I dialed-up, connected to the Internet and began downloading over 50 floppy disks of Slackware, an early Linux distribution. The installation was tough – and configuring X for a graphical desktop was truly mysterious. But even this was interesting as I had to learn about Linux and my hardware to make it all work. Linux was still rough around the edges, but to me it was fun,” added Wright.

For Wright, the appeal of open source has always been the chance to collaborate with the community – indeed, he says he read the GNU Manifesto and was excited by the idea of software freedom.

Open source has, obviously, changed from being something hobbyist programmers do for fun – it has become a real working paradigm for software application development.

February 6, 2018  7:24 AM

SUSE serves up Linux kernel patching, live & hot

Adrian Bridgwater

If there’s one thing that Linux needs to aid its march onwards it is (arguably) more enterprise robustness.

Actually, if there’s one thing that Linux needs for enterprise success it’s firms like Microsoft stating that it loves Linux, but we’ve already experienced that epiphany, so what else can we hope for?

Openly open source German softwarehaus SUSE reflects the need for enterprise-critical tech in its latest moves, which see it expand its portfolio of business-critical computing infrastructure software.

The goal here is to provide so-called ‘workload predictability’ for enterprise users.

In terms of actual product, the company is offering SUSE Linux Enterprise Live Patching for IBM Power Systems and SUSE Linux Enterprise Real Time 12 Service Pack 3.

What is live (hot) patching?

Using live patching (sometimes called hot patching), sysadmins can apply patches to a Linux kernel without rebooting the system. Applications keep running during updates because the patching is independent of the applications running on the Linux kernel.

So is it really that easy, i.e. you just turn live patching on? Well no, not exactly. Attempts at live patching Linux have been around for some years, but the fact that SUSE live patching has now been certified for IBM Power Systems and for SAP HANA is some evidence that this current generation of patching works effectively at the enterprise level.

According to Forrester’s Richard Fichera, “Early versions of Linux hot patching has been around for several years, most notably in a company called Ksplice, acquired a few years ago by Oracle. But the real change happened earlier [in 2016] when SUSE declared that its hot-patch capability, kGraft, previously in limited availability, was now in GA, suitable for all production workloads.”

These offerings are part of SUSE’s software-defined infrastructure and application delivery products. SUSE says that customers are demanding live patching as it helps them improve business continuity and lower costs by reducing downtime, increasing service availability and enhancing security and compliance.

A working example

“The live patching function on SUSE Linux Enterprise Server is al­most invisible: it just runs and there are no reboots,” said Hans Lenting, IT architect for SVHW, a government service organisation in the Netherlands that runs a large number of interconnected applications to support important municipal services. “It enables us to apply major maintenance and security patches with no downtime. Live patching is a huge benefit for the hypervisor layer, which we need to keep ‘in the air’ for as long as we can. Without this, we would be faced with bringing down all 40+ virtual machines each time we needed to apply critical patches – at least once a month.”

SUSE Linux Enterprise Real Time 12 SP3 is an open source real time operating system with process and task prioritisation and scheduling.

This latest release ships with an updated real time kernel for what we could define as advanced application workloads that require precise timing and synchronisation.
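
None of this is SUSE-specific code, but as a toy illustration of the task prioritisation a real time kernel builds on, Linux’s SCHED_FIFO scheduling class can be requested straight from Python’s standard library (root or CAP_SYS_NICE required):

```python
import os

# Move this process into the real time FIFO scheduling class,
# static priority 50 (valid range is 1..99; pid 0 means 'self').
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
print(os.sched_getscheduler(0) == os.SCHED_FIFO)  # True once applied
```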

February 5, 2018  7:11 AM

SUSE polishes openSUSE Leap 15

Adrian Bridgwater

The development version of openSUSE Leap 15 has reached its beta phase, and builds and snapshots are available for testers.

As a free and open source (FOSS) operating system, Leap is derived from the source code of SUSE Linux Enterprise (known not as SEL, but SLE) and so is positioned in much the same space as CentOS (from Red Hat) and Ubuntu (from Canonical).

Leap was announced at SUSECon 2015.

In terms of Leap’s relationship with its enterprise mothership, Leap sits upstream of SLE – that is, when enterprise functions are packaged and brought into SLE, Leap’s proximity to the motherlode will allow it to benefit from this higher level development.

“Exactly like the rolling development model used to make openSUSE Leap 42.3, Leap 15.0 will use the same model until its final build. No concrete milestones will be used building up to the final release, which is expected in late Spring. As bugs are fixed and new packages introduced or excluded, snapshots of the latest beta phase builds will be released once they pass openQA testing,” notes OpenSUSE writer Douglas DeMaio.

DeMaio mentions spring – and it doesn’t take a clairvoyant to connect that to the dates of the openSUSE conference in Prague in May 2018.

The first beta version build (Build 109.3) of openSUSE Leap 15 was recently released, and two follow-on beta builds featuring minor improvements will follow if they pass openQA testing.

Is Leap a winner?

Developers, users and open source industry watchers say they like Leap’s stability (it is said to be as stable as Debian) and its overall robustness and applicability to server-grade deployment (a space that CentOS has been well-suited for)… and, not forgetting, home users, who need a stable PC operating system.

It is the server space that open source operating systems must (for this decade at least) really do battle on if they want to win the hearts and minds of not just enthusiasts, but enterprise sysadmins… so this is perhaps the best long term gauge to look for to assess market penetration.

