Open Source Insider


April 11, 2018  10:30 AM

Red Hat goes big on interoperability-ness

Adrian Bridgwater

Red Hat changes its tagline from time to time, but this year the firm appears happy to be labelled as ‘the world’s leading provider of open source solutions’ — perhaps, with Microsoft and so many others picking up the flame, Red Hat feels it needs to state its aim with such simplicity.

Branding shenanigans notwithstanding, Red Hat Enterprise Linux (RHEL) must go through its release cycles and now we reach version 7.5 in all its glory.

Big themes this release include a view of the operating system as a) a foundation for hybrid cloud environments, b) a platform with enhanced security and compliance controls and c) a system with further integration with Microsoft Windows infrastructure, both on-premises and in Microsoft Azure.

The hybrid play here is, of course, because organisations are frequently seeking to pair existing infrastructure and application investments with both bare metal and public clouds.

But hybrid brings with it ‘multiple deployment footprints’, so Red Hat is aiming to align security controls for that aspect.

Security playbooks

A major component of these controls is security automation through the integration of OpenSCAP with Red Hat Ansible Automation. This is designed to enable the creation of Ansible Playbooks directly from OpenSCAP scans which can then be used to implement remediations more rapidly and consistently across a hybrid IT environment.
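
The scan-to-playbook step is driven by the oscap command line, which can emit Ansible remediation tasks directly from SCAP content. Below is a minimal sketch of wrapping that step in Python; it assumes the openscap-scanner and scap-security-guide packages are installed, and the profile ID and datastream path are illustrative examples rather than Red Hat-prescribed values.

# Generate an Ansible remediation playbook from SCAP content with oscap;
# the resulting YAML can then be applied across hosts with ansible-playbook.
# Assumes openscap-scanner and scap-security-guide are installed; the
# profile ID and datastream path below are illustrative examples.
import subprocess

DATASTREAM = "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"
PROFILE = "xccdf_org.ssgproject.content_profile_pci-dss"

subprocess.run(
    [
        "oscap", "xccdf", "generate", "fix",
        "--fix-type", "ansible",
        "--profile", PROFILE,
        "--output", "remediation-playbook.yml",
        DATASTREAM,
    ],
    check=True,
)
print("Wrote remediation-playbook.yml; review it, then run ansible-playbook.")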

This release also includes storage optimisation. Virtual Data Optimizer (VDO) technology reduces data redundancy and improves storage capacity through de-duplication and compression of data before it lands on a disk.

There’s love for Linux systems administrators, troubleshooters and developers too, through enhancements to the Cockpit administrator console.

Windows integration

New functionality and integration with Windows-based infrastructure is now offered via improved management and communication with Windows Server implementations, more secure data transfers with Microsoft Azure… and performance improvements for complex Microsoft Active Directory architectures.

Overall, says Red Hat, this can help to provide a smoother transition for organisations seeking to bridge the scalability and flexibility of Red Hat Enterprise Linux 7.5 implementations with existing Windows-based IT investments.

Red Hat Enterprise Linux 7.5 also adds full support for Buildah, an open source utility designed to help developers create and modify Linux container images without a full container runtime or daemon running in the background.

This enables IT teams to build and deploy containerised applications more quickly, reducing the attack surface by removing the need to run a full container engine on systems not intended to do so in production.

Denise Dumas, vice president for platform engineering at Red Hat, provides the summary comment.

“The future of enterprise IT doesn’t exist solely in the datacentre or in the public cloud, but rather as a fusion of environments spread across IT’s four footprints: physical, virtual, private cloud, and public cloud. Red Hat Enterprise Linux serves as a scalable, flexible and robust bridge across these footprints,” said Dumas.

Interoperability-ness

Goodness! Footprint fusion and cross-platform containerised interconnectedness inside hybrid development spheres — this all points to some new umbrella term… interoperability-ness isn’t a word yet, but it could be, so you have been warned.

 

April 5, 2018  9:52 AM

Nginx gets granular on managed microservices

Adrian Bridgwater

Open source at its heart and essentially a web server technology, Nginx (pronounced: engine X) is the company that would like to have its name capitalised in the media but can’t, because it’s not an acronym.

Branding police histrionics and so-called ‘marketing guidelines’ notwithstanding, Nginx does have some arguably interesting work going on in the microservices space.

The firm is now working to bolster its application platform technology with enhancements focused on API gateways, Kubernetes Ingress controllers and service meshes.

API gateways

So let’s define those technologies first.

An API gateway is responsible for request routing, composition and protocol translation — it provides each of an application’s clients with a custom API.

According to an Nginx blog detailing the use of API gateways, “When you choose to build your application as a set of microservices, you need to decide how your application’s clients will interact with the microservices. With a monolithic application there is just one set of (typically replicated, load‑balanced) endpoints. In a microservices architecture, however, each microservice exposes a set of what are typically fine‑grained endpoints.”

An API gateway is a server that is the single entry point into the system — it encapsulates the internal system architecture and provides an API that is tailored to each client.

It should be noted that an API gateway may also have other responsibilities such as authentication, monitoring, load balancing, caching, request shaping and management, as well as static response handling.
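
To make the ‘single entry point’ idea concrete, here is a toy gateway sketched in Python with Flask and requests (both assumed installed). It is not Nginx’s implementation, just an illustration of routing and forwarding; the service names and URLs are invented for the example.

# A toy API gateway: the single entry point that routes /orders/* and
# /users/* to separate microservices and relays their responses.
# Authentication, caching or rate limiting could also be handled here.
# Service names and backend URLs are illustrative only.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

BACKENDS = {
    "orders": "http://orders-service:8081",
    "users": "http://users-service:8082",
}

@app.route("/<service>/<path:rest>", methods=["GET", "POST", "PUT", "DELETE"])
def gateway(service, rest):
    backend = BACKENDS.get(service)
    if backend is None:
        return Response("unknown service", status=404)
    # Protocol translation or composition would happen here; this sketch
    # simply forwards the request and relays the upstream response.
    upstream = requests.request(
        method=request.method,
        url=f"{backend}/{rest}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        params=request.args,
        timeout=5,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8080)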

Ingress & service meshes

Looking briefly at Kubernetes Ingress controllers, we will note that Ingress can provide load balancing, SSL termination and name-based virtual hosting. Also noted in the Nginx news above are service meshes: as defined here by William Morgan, a service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast and reliable, and if you’re building a cloud native application, he argues, you need one.
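
As a rough illustration of the Ingress piece, the official kubernetes Python client (assumed installed and pointing at a cluster that already runs an ingress controller, Nginx’s own for instance) can declare the sort of name-based routing rule described above; the host, service and namespace names are illustrative.

# Create a minimal Ingress that routes example.local/ to a 'web' Service.
# Assumes the official 'kubernetes' Python client, a reachable cluster and
# an ingress controller (e.g. the NGINX Ingress Controller) already installed.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-ingress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="example.local",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="web",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )
    ]),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
print("Ingress 'demo-ingress' created")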

Nginx CEO Gus Robertson claims that his firm’s technologies span the data and control planes and are infrastructure-independent, working across bare metal, VMs, containers and any public cloud.

The firm’s application platform collapses ten disparate functions into a single tool: web server, application server, load balancer, reverse proxy, CDN, web application firewall, API gateway, ingress controller, sidecar proxy and service mesh controller.

“Nginx eliminates the operational complexity and management frameworks needed to orchestrate these technologies, accelerating microservices adoption for enterprises that lack the skills and resources to manage point solutions,” said the company, in a press statement.

As part of this news, Nginx is tabling a new control-plane to manage both legacy and microservices apps. New data-plane features aim to simplify the microservices application stack — and, finally, a new app server aims to improve performance for modern microservices code.

As we get used to the notion of microservices and the ‘composable’ nature of more granular application architectures, we need to start understanding the internal mechanics of microservices themselves so that we can precision-engineer their connection points and management effectively. With its higher-level control plane approach, Nginx is attempting to take some of that complexity away, and it is (arguably) doing so admirably, but surely the smart money is still on making sure we understand what is going on at the API gateway and service mesh coalface.


April 3, 2018  10:55 AM

Microsoft Serial Console: how to fix a ‘broken’ cloud

Adrian Bridgwater

Microsoft makes Windows, but Microsoft isn’t really as focused on Windows these days as it is on its Azure cloud platform and new extensions that emanate into the world of Artificial Intelligence (AI).

Redmond isn’t exactly official in terms of the above statement, but week after week we see new announcements coming forward that further fluff the feathers of Microsoft’s already very cloud-centric strategy.

This month (March 2018 actually, but who’s counting?) saw the firm launch the public preview of ‘Serial Console’ access for both Linux and Windows Virtual Machines (VM).

But what is Microsoft Serial Console?

This Microsoft technology provides a console interface (a text-based control screen accessed from the Azure portal, basically) that allows cloud developers and their operations counterparts to manage virtual machines on Azure.

But it’s not just management, Serial Console is dedicated to fixing broken clouds.

Broken clouds

When we say broken clouds, we mean virtual machines that have got themselves locked up, frozen and inaccessible. This fault typically happens as a result of misconfiguration during initial virtual machine creation, or because of changes applied to the cloud instance that do not fit 100% with the DNA of the virtual machine being served.

Reasons for cloud breakage (yes, it’s now a ‘thing’) can also include misconfiguration of cloud firewall policies, network changes that make the cloud inaccessible and incorrect file system table (fstab) syntax.
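
Since a single malformed fstab line is enough to leave a Linux VM stuck at boot, a quick sanity check before rebooting is cheap insurance. Here is a minimal sketch in Python; the field-count and ‘nofail’ heuristics are illustrative rules of thumb, not an Azure-prescribed validation.

#!/usr/bin/env python3
# Minimal fstab sanity check: flags lines with a suspicious field count and
# non-root mounts lacking 'nofail' (so a missing disk cannot block boot).
# The heuristics are illustrative only, not an Azure-prescribed validation.
import sys

def check_fstab(path="/etc/fstab"):
    problems = []
    with open(path) as f:
        for lineno, raw in enumerate(f, start=1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            fields = line.split()
            if len(fields) < 4 or len(fields) > 6:
                problems.append(f"line {lineno}: expected 4-6 fields, found {len(fields)}")
                continue
            device, mountpoint, fstype, options = fields[:4]
            if mountpoint not in ("/", "/boot") and "nofail" not in options.split(","):
                problems.append(f"line {lineno}: {mountpoint} has no 'nofail' option")
    return problems

if __name__ == "__main__":
    issues = check_fstab()
    for issue in issues:
        print("WARNING:", issue)
    sys.exit(1 if issues else 0)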

Microsoft Azure corporate VP Corey Sanders says he knows that managing and running virtual machines (VMs) can be hard.

Sanders points to the fact that Microsoft already offers tools to help manage and secure VMs, including patching management, configuration management, agent-based scripting, automation, SSH/RDP connectivity — and support for DevOps tooling like Ansible, Chef, and Puppet.

“However, we have learned from many of you that sometimes this isn’t enough to diagnose and fix issues. Maybe a change you made resulted in an fstab error on Linux and you cannot connect to fix it. Maybe a bcdedit change you made pushed Windows into a weird boot state. Now, you can debug both with direct serial-based access and fix these issues with the tiniest of effort,” said Sanders, in a blog post.

Microsoft claims that this technology will feel like having a keyboard plugged into the server in the Azure datacentre, but in the comfort of your office or home.

Serial Console is configured for a variety of Linux distros supported on Azure including CentOS, CoreOS, Oracle Linux, Red Hat, SuSE and Ubuntu.


March 29, 2018  12:17 PM

Linux ‘glued’ to Microsoft: Windows Subsystem for Linux (WSL)

Adrian Bridgwater

Microsoft loves Linux, yeah okay we know that bit.

Microsoft loves Linux, because it wants to create a wider transept of interoperability with its core stack, initially perhaps via its well-known personal computer operating system (you may have heard of Windows) and extended services and tools offerings such that all data workload roads ultimately lead to the Microsoft Azure cloud.

Well, that’s a bit strong, Microsoft has done a lot of admirable work in open source and anyway the above is far too long to fit on a t-shirt.

Regardless then, what Microsoft does next with specific Linux distribution (distro) interoperability should make for an interesting murder-mystery suspense novel.

Windows Subsystem for Linux (WSL)

Windows Subsystem for Linux (WSL) is a software offering that works as a compatibility layer between Windows itself and the binary executables of a given Linux distro – it is, essentially, a glue.

In terms of form and function, WSL provides a Linux-compatible kernel interface developed by Microsoft, which can then run a GNU-userland (the term userland or user space refers to all code that runs outside the operating system’s kernel) version of a Linux distro on top.
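
One visible consequence of that arrangement is that, inside a WSL distro, the kernel identifies itself as Microsoft’s compatibility layer rather than a stock Linux build. A minimal sketch of checking for this follows; the ‘microsoft’ marker in /proc/version is observed WSL behaviour, not a documented contract.

# Detect whether the current Linux environment is running under WSL by
# inspecting the kernel version string; the 'microsoft' marker is observed
# behaviour of WSL builds, not a documented guarantee.
from pathlib import Path

def running_under_wsl() -> bool:
    try:
        version = Path("/proc/version").read_text().lower()
    except OSError:
        return False  # no Linux-style /proc filesystem at all
    return "microsoft" in version

if __name__ == "__main__":
    print("Running under WSL" if running_under_wsl() else "Native Linux (or no /proc)")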

Which Linux distros can you run like this?

Microsoft started with Ubuntu way back in 2016… and now we can also run SuSE, Debian or Kali distros of Linux on top of Windows in the same way.

Fedora is not yet available, although Microsoft has stated openly that it is working to make it so.

As Peter Bright clarifies on Ars Technica, “Theoretically, anyone could take a distribution of their choice and package it for the [Microsoft] Store, but Microsoft says that it will only accept such packages from distribution owners.  The [WSL] tool is aimed at two groups: distribution owners (so they can produce a bundle to ship through the Microsoft Store) and developers (so they can create custom distributions and sideload them onto their development systems).”

Microsoft has also provided an open source tool called Microsoft WSL / DistroLauncher for users who want to build their own Linux package where a particular distribution is either a) not yet available or b) available, but the user wants to apply a greater degree of customisation than comes as standard.

Will all these moves, machinations and monkey business ever lead us to a time when Windows itself is open sourced?

Some argue logically that, one day, it must be… but others argue that Windows inherently lacks the community drive at kernel level that ‘true’ open source distributions have been built with from the ground up.

It is, for now, more likely that Microsoft will continue to use a high degree of open source software to develop Windows itself while also contributing weighty chunks of code back to the community – but the company will still stop short of fully open-sourcing Windows for the foreseeable future.

Disclosure: This story was driven by Peter Bright’s initial coverage as linked above.


March 26, 2018  11:13 AM

Cloud, without workload management, doesn’t work

Adrian Bridgwater

When you want cloud, you just turn it on (or off), right? That’s the beauty of the eminently controllable datacentre-driven services-centric cloud computing model of application processing and information storage, right?

Well yes, but no, it’s kind of not quite like that, at least not at the coalface.

A major issue within cloud is workload management, i.e. the science of controlling which workloads (processing, analytics, storage, other) are executed where.

In a (cloud computing) world where containers and microservices form a potentially more intricate landscape and computing topography than we have ever known at any time in the past, the need to manage (data and application processing) workloads across shared cloud resources (on-premises, hybrid and public cloud infrastructures) is key to building enterprise compute that will actually scale.

This (above) is not the exact branded company mantra or corporate mission of Univa, but it could be.

The firm, headquartered in Chicago with offices in Canada and Germany, provides on-premises and hybrid cloud workload management solutions for enterprise HPC customers.

Univa has now contributed its Navops Launch (formerly Unicloud) product to the open source community as Project Tortuga, under an Apache 2.0 license.

While the software is largely used in enterprise HPC environments as of now, Project Tortuga is a general purpose cluster and cloud management framework with applicability to a broad set of applications including high performance computing, big data frameworks, Kubernetes and scale-out machine learning / deep learning environments.

Tortuga automates the deployment of these clusters in local on-premise, cloud-based and hybrid-cloud configurations through (and here’s the crucial bit) repeatable templates.

The technology in Tortuga can provision and manage both virtual and bare-metal environments and includes cloud-specific adapters for AWS, Google Cloud, Microsoft Azure, OpenStack and Oracle Cloud Infrastructure with support for bring-your-own image (BYOI).

Gary Tyreman, president and CEO of Univa, explains that the built-in policy engine allows users to dynamically create, scale and tear down cloud-based infrastructure in response to changing workload demand.

Management, monitoring and accounting of cloud resources are the same as for local servers. The open source project is available now at https://github.com/UnivaCorporation/tortuga.


March 13, 2018  12:21 PM

InfluxData: Fundamentally, all sensor data is time series data

Adrian Bridgwater

An IoT developer is now ‘a thing’ – well, a person, a defined entity and a software engineering sub-discipline.

In the rush and drive to provide this genre of programmer/developers with tools and mechanics, we find InfluxData.

InfluxData describes itself as an open source firm specifically dedicated to ‘metrics and events’ for developers to build IoT analytics and monitoring applications.

The San Francisco-headquartered company has now joined the PTC Partner Network; PTC itself is a firm known for its Product Lifecycle Management (PLM) approach.

InfluxData will integrate with PTC’s ThingWorx IoT package.

As part of the PTC Partner Network, InfluxData will be accessible via the ThingWorx Marketplace for developers to use its time series data technology in their IoT applications.

The idea is that developers will be able to store, analyse and act on IoT data in real time.
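
To make the time series angle concrete, here is a minimal sketch using the open source influxdb Python client to store and query a sensor reading. It is not the ThingWorx integration itself, and the host, database and measurement names are illustrative.

# Write one IoT sensor reading into InfluxDB and read it back.
# Assumes the open source 'influxdb' Python client and a local InfluxDB 1.x
# instance; database, measurement and tag names are illustrative.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="iot")
client.create_database("iot")  # no-op if the database already exists

point = [{
    "measurement": "temperature",
    "tags": {"device": "sensor-42", "site": "factory-1"},
    "fields": {"celsius": 21.7},
}]
client.write_points(point)

result = client.query("SELECT * FROM temperature WHERE site = 'factory-1'")
for row in result.get_points():
    print(row)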

Sensor truth 101

“Fundamentally, all sensor data is time series data,” said Brian Mullen, VP of business development at InfluxData.

“Any meaningful IoT application should be accompanied by a data platform built specifically for that purpose. With this in mind, we are excited to partner with PTC to make InfluxData more compatible with the ThingWorx industrial innovation platform. We are solving the time series data challenge for IoT developers so they can focus on their primary objective of building applications,” said Mullen.

PTC says it has tried to build its partner network up to assemble a lineup of helpful and specialised tools to accelerate the development of IoT applications.

Apps to microservices — systems to sensors

ThingWorx users can now use InfluxData to build monitoring, alerting and notification applications for ThingWorx-connected devices and sensors — and also, IoT applications supporting millions of events per second.

InfluxData has built its developer and customer base across industries including manufacturing, financial services, energy and telecommunications.

The firm claims that its essentially open source platform helps users get to data-driven real-time actions and a consolidated single view of their infrastructure – from applications to microservices and from systems to sensors.

 


February 23, 2018  8:02 AM

Canonical Ubuntu 2017 milestones, a year in the rulebook

Adrian Bridgwater

Rounding out our analysis of some of the major Linux platform developments seen throughout 2017, let’s turn our attention to Canonical and its Ubuntu distro.

As we know, in programming, canonical means ‘according to the rules’ — and non-canonical means ‘not according to the rules’… and in the early Christian church, the ‘canon’ was the officially chosen text.

So has Canonical been breaking rules with Ubuntu in 2017, or has it been writing its own rulebook?

Back in April we saw an AWS-tuned kernel of Ubuntu launched, the move to cloud is unstoppable, clearly. We also saw Ubuntu version 17.04 released, with Unity 7 as the default desktop environment. This release included optimisations for environments with low powered graphics hardware.

“Especially suited to virtual machines or remote desktop situations. These changes have made their way back in to 16.04 LTS to be supported until April 2021,” said Canonical.

The early part of the summer brought certified Ubuntu images to the Oracle Bare Metal Cloud platform. Following this came the launch of Canonical Kernel Livepatch and the release of Conjure-up 2.2.0 and Juju 2.2.0, both fresh from the kitchen.

Windows 10 loves Ubuntu

In July we were told that Windows 10 loves Ubuntu – this meant that Ubuntu was now available as an app from the Windows Store.

Also in summer, Canonical launched new enterprise Kubernetes packages – Kubernetes Discoverer and Explorer.

As we reached September 21st, we saw Canonical release an Azure-tuned kernel for Ubuntu, and Ubuntu 16.04 was selected for Samsung Artik gateway modules.

Entering October we saw version 17.10 released, featuring the return to the GNOME desktop and Wayland as the default display server, with the option of Xorg — and into November we saw the Up2 Grove IoT development kit with Ubuntu launched.

Rounding out the year in November and December, Rancher Labs and Canonical announced their Cloud Native Platform and Canonical achieved FIPS certification for the Ubuntu 16.04 release.

As we turn into 2018 we can see Canonical pushing to extend its partner network, more snaps being published (the Skype snap is the major release of note) and a Storage Made Easy charm published to the Juju ecosystem.

Desktop delights

Commenting on the desktop division, Will Cooke, engineering director for Canonical Ubuntu desktop, has said that 18.04 LTS (codenamed Bionic Beaver) is the next LTS release of Ubuntu, due in April 2018.

“It will feature GNOME Shell as the desktop environment on the desktop and will be supported for five years. It includes the latest versions of Firefox and LibreOffice as well as a host of other applications and games which make it the secure environment for developers, business and home users alike. The subsequent release 18.10 will be an opportunity to explore more options for the desktop default package selection and features. We will be distributing more and more applications as snaps giving users an easy way to ensure that their favourite apps are always up to date and we will be working with key application vendors to bring a wider choice of software to their desktop,” said Cooke.

Canonical’s Ubuntu doesn’t appear to spend more than a couple of weeks without a noteworthy release or update of some form. Its development team would probably say that micro-releases are happening almost constantly, such is the nature of Continuous Delivery (CD) in the modern age, especially with regard to operating systems.

Will the Linux home user desktop ever become a default reality as a result of all this work? Well, even Microsoft loves Linux, so anything can happen.


February 22, 2018  8:00 AM

Splunk competitor Logz.io open sources two log analytics tools

Adrian Bridgwater

Splunk startup competitor Logz.io has been rolling out new tools and new projects on the back of its seemingly healthy venture funding injections, which came in last year.

The log analysis firm has now come forward with two new open source projects, Sawmill and Apollo.

The two tools were created and used in Logz.io’s own development and data ingestion pipeline, but are now being released to the developer community to build and run log analysis environments.

Logz.io log analytics combines machine learning with the open source ELK (Elasticsearch, Logstash, Kibana) stack to synthesise machine data, user behaviour and community knowledge into analytics results.
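
For readers less familiar with the ELK side of that stack, here is a minimal sketch of indexing and searching a log event with the open source Elasticsearch Python client. This is the raw building block rather than Logz.io’s hosted service; a recent version of the client is assumed and the index and field names are illustrative.

# Index a log event in Elasticsearch (the 'E' in ELK) and search it back.
# Assumes a recent 'elasticsearch' Python client and a local node; index
# and field names are illustrative, and this is not Logz.io's hosted API.
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

es.index(index="app-logs", document={
    "timestamp": datetime.utcnow().isoformat(),
    "level": "ERROR",
    "service": "checkout",
    "message": "payment gateway timeout",
})

es.indices.refresh(index="app-logs")  # make the new document searchable now
hits = es.search(index="app-logs", query={"match": {"level": "ERROR"}})
print(hits["hits"]["total"])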

“Our team worked endlessly to ensure these tools make data ingestion, processing and building applications easier and more scalable,” said Logz.io CEO Tomer Levy.

Two new data tools

Sawmill is a high-performance Java library created for processing, parsing and manipulating large datasets in a horizontally scalable manner. It can be used for various purposes, from machine data to time series data and business data.

Apollo is an advanced visual layer built on top of Kubernetes created to automate the process of deploying containers. The tool enables container orchestration for microservices and so is used to build and deploy complex applications.


February 20, 2018  9:52 AM

Can one ‘multi-model’ database rule them all?

Adrian Bridgwater

In open source, we trust community – and as such, we might reasonably trust benchmarking studies that have been driven by community groups, in theory at least.

ArangoDB’s open source NoSQL performance benchmark series is one such open study.

The native ‘multi-model’ NoSQL database company has even published the necessary scripts required to repeat the benchmark.

A multimodel (or multi-model) database is a data processing platform that supports multiple data models, which define the parameters for how the information in a database is organised and arranged. Being able to incorporate multiple models into a single database lets users meet various application requirements without needing to deploy different database systems.
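
As a small illustration of the multi-model idea, the python-arango driver (assumed installed, with server details and names invented for the example) lets a single database hold JSON documents queried via AQL, while the same instance could also hold edge collections for graph traversals.

# One database, more than one data model: store JSON documents and query
# them with AQL; the same ArangoDB instance could also hold edge collections
# for graph traversals. Assumes the python-arango driver and a local server;
# credentials, database and collection names are illustrative.
from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")
db = client.db("example", username="root", password="passwd")

if not db.has_collection("devices"):
    db.create_collection("devices")

devices = db.collection("devices")
devices.insert({"_key": "sensor-42", "type": "thermometer", "site": "factory-1"})

cursor = db.aql.execute(
    "FOR d IN devices FILTER d.site == @site RETURN d.type",
    bind_vars={"site": "factory-1"},
)
print(list(cursor))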

The goal of the benchmark is to measure the performance of each database system when there is no cache used.

For the 2018 benchmark, three single-model database systems were compared against ArangoDB: Neo4j for graph; MongoDB for document; and PostgreSQL for relational database.

Additionally, it tested ArangoDB against another multi-model database, OrientDB.

Benchmark parameters

The benchmark used NodeJS 8.9.4. The operating system for the servers was Ubuntu 16.04, including the OS-patch 4.4.0-1049-aws — this includes Meltdown and Spectre V1 patches. Each database had an individual warm-up.

What ArangoDB has been trying to suggest (would ‘spin’ be too cruel?) is how a multi-model database competes with single-model databases in their specialities.

In fundamental queries like Single Read, Single Write and Single Write Sync, ArangoDB says its technology outperformed PostgreSQL.

Claudius Weinberger, CEO of ArangoDB, said: “One of our main objectives, when conducting the benchmark, is to demonstrate that a native multi-model database can compete with single-model databases on their home turf. To get more developers to buy-in to the multi-model approach, ArangoDB needs to continually evolve and innovate.”

The company lists a series of similarly “positive” (its term, not ours) performance stats in areas including document aggregation, computing statistics about age distribution and benchmark results that profile data, shortest path and memory usage.

Need for debate

We’ve been talking about multi-model databases for perhaps half a decade now, and the promise is an end to the ‘polyglot persistence’ scenario, where an IT team has to use a variety of databases for different data model requirements and so ends up with multiple storage and operational requirements — and then the additional task of integrating that stack and ensuring fault tolerance controls are applied across the spectrum. Multi-model does indeed provide a means of alleviating those concerns… but we need to hear some balancing arguments put forward from the single-model cognoscenti in order for us to judge more broadly for all use cases.


February 19, 2018  9:21 AM

Canonical gets Juju eyeballs for storage

Adrian Bridgwater

Canonical is mixing new potions in its Juju charm store.

Juju is Canonical’s open source modelling tool for cloud software — it handles operations designed to deploy, configure, manage, maintain and scale applications via the command line interface, or through its optional GUI.

The Juju charm store is an ‘online marketplace’ where charms (and bundles of charms) can be uploaded, released (published) and optionally shared with other users.

Recommended charms have been vetted and reviewed by a so-called ‘Juju Charmer’ and all updates to the charm are also vetted prior to landing — there is also a ‘Community’ section of charms that have not enjoyed the same ratification process.

New to the charm store this month is the Storage Made Easy (SME) Enterprise File Fabric charm.

This technology is designed to give operations engineers access to Storage Made Easy’s data store unification and governance technology.

“Storage Made Easy’s participation in the Charm Partner Program and the release of our own Juju charm aligns with our mandate to make storage easier to use and secure whether on-premises, or in private or public clouds. This is an important milestone in our partnership with Canonical offering solutions that help customers deploy their applications quickly, securely and at scale,” said Steven Sweeting, director of product management at Storage Made Easy. “We’re also pleased to support JAAS – Juju-as-a-Service for even faster deployment”.

Juju provides reusable, abstracted operations across hybrid cloud and physical infrastructures. Integration points and operations are encoded in these charms by vendors and community members who are familiar with an app.

Looks like Canonical has Juju eyeballs for storage.


