Open Source Insider


March 21, 2019  10:11 AM

What to expect from SUSECON 19

Adrian Bridgwater

The Computer Weekly Developer Network and Open Source Insider team enjoy hardcore programming discussion, truly vibrant open communities where code commits fly around in a swarm, well-cooked American barbeque ribs and Taylor Swift in her country music years.

Bizarrely then, the stars have aligned and all of the above factors have come together at once, because SUSECON 19 is being held in Nashville, Tennessee… and the team has packed its bags.

SuSE origins

Wikipedia tells us that SuSE takes its name from the German “Gesellschaft für Software und Systementwicklung mbH”, S.u.S.E. being an acronym for Software- und System-Entwicklung (Software and Systems Development).

What does SUSE do? For those that need a reminder… the company provides an enterprise-grade open source software-defined infrastructure and a set of application delivery tools.

That term enterprise-grade is important: this is open source, but commercial service and support options are available on top.

So why pick this event?

The chameleons over at SUSE tell us that the 2019 event offers attendees the chance to learn the latest developments in enterprise-class Linux, OpenStack, Ceph storage, Kubernetes, Cloud Foundry and other open source projects from technical experts, ecosystem partners and peers.

In terms of what’s happening at the event, following the #SUSECON hashtag linked here is probably a good start.

An open, open source culture

SUSE likes to champion open source as a ‘complete culture’ and insists upon freedom from vendor lock-in; as evidence, the company points to its 100+ open source projects and the 650 staff it has actively working in research and development.

This is a show with a LOT of certification and training… with SUSE Certified Administrator (SCA) Exams and SUSE Certified Engineer (SCE) Exams billed at the top of the list.

“At its heart, SUSECON 2019 has been designed to showcase SUSE’s commitment to open source and its enabling technologies, providing access to the people that make it all happen. It will bring together technical experts from our ecosystem of customers and partners, along with those responsible for developing innovative solutions for digital transformation,” said Matt Eckersall, regional director, EMEA West, SUSE.

Eckersall suggests that as all companies navigate the process of IT transformation, they are competing to increase agility, manage complexity and reduce cost… so what does he think SUSE and its partner network can do to help organisations steer through these increasingly complex interconnected waters?

“To keep up with the pace of today’s business environment, organisations are increasingly relying on new technologies such as artificial intelligence, containers, the Internet of Things and software-defined storage without vendor lock in. Alongside its partners, SUSE is working closely with open source project communities including Kubernetes, OpenStack, Ceph, openATTIC and Cloud Foundry to deliver innovative enterprise solutions and provide companies with all-important freedom of choice. The SUSECON 2019 event will act as an invaluable networking opportunity for our customers and partners, allowing them to expand their knowledge around the latest advances in Linux, open source software-defined infrastructure and application delivery. Through the sessions and keynotes, attendees will gain new insight into the technologies they need to successfully implement digital transformation – directly from those shaping the future of open source,” said Eckersall.

Doc Day Afternoon

Of special interest is SUSE’s Doc Day — a defined period of time when a group of people comes together, virtually or physically, to collaborate on writing documentation on one or more given topics.

According to SUSE, “Documentation is an essential part of any product (software or otherwise, presumably) – above all when it comes to software. You hardly [rarely] find a ‘self-explanatory’ software tool. There is no product which is so simple to use and maintain, that it doesn’t require any description, introduction or examples. Most software solutions only become usable thanks to detailed documentation.”

Also worth tracking is the SUSE blog linked here — at the moment the blog is showcasing a few session previews such as the following:

An Introduction to Microservices Architecture. Organisations are hearing the word microservices a lot. But what actually are they?

The show literature suggests that speakers will, “Discuss the various types of microservice architectures and how they fit into the software-defined infrastructure and cloud paradigm and explain how [developers] can take an existing business application or product and break it down into its component services.”

SUSE says it wants to enable developers to start architecting 12-factor apps today — from codebase, dependencies, configuration, backing services, [build, release, run], processes, port binding, concurrency, disposability, dev/prod parity, logs and admin processes…  you can read through the component parts of the 12-factor app methodology here.
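
To make one of those factors concrete, factor III of the methodology keeps configuration in the environment rather than in the codebase. Below is a minimal sketch in Python; the variable names are illustrative and not drawn from any SUSE material.

```python
import os

# Factor III: read deploy-specific configuration from the environment
# rather than hard-coding it, so the same codebase runs unchanged in
# dev, staging and production.
DATABASE_URL = os.environ["DATABASE_URL"]

# Factor VII: the app exports its service via port binding.
PORT = int(os.environ.get("PORT", "8080"))

print(f"Connecting to {DATABASE_URL}, listening on port {PORT}")
```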

So… and finally, will this be an uber-geeky show? Well yes, that’s why we’re going. Pass the BBQ sauce please, Miss Swift.

March 18, 2019  9:12 AM

Big Switch builds open source network operating system

Adrian Bridgwater

Big Switch Networks describes itself as the cloud-first networking company.

The firm focuses on public cloud-style networking matched with hybrid cloud consistency.

News this month sees the firm launch an open source network operating system.

Big Switch says that key points to zone in on here include automation, zero-touch provisioning and visibility.

The NOS itself is an integration of the Microsoft-led Software for Open Networking in the Cloud (SONiC) and Big Switch’s own Open Network Linux (ONL).

In terms of visibility, this technology makes use of a DevOps-centric Ansible workflow and SDN-centric controller workflows.

NOTE: SONiC is an open source network operating system based on Linux that runs on switches from multiple vendors and ASICs — it offers a full suite of network functionality, like BGP and RDMA, that has been production-hardened in the data centers of some of the largest cloud-service providers.

“Leveraging SONiC and ONL to build an open source NOS combines the benefits of OCP’s two important open source software stacks, which are deployed in large-scale production networks,” said Prashant Gandhi, chief product officer, Big Switch Networks.

NOS commoditisation

Gandhi says that the emergence of SONiC coupled with ONL signals that lower layers of the NOS stack are being commoditised… and vendor-agnostic innovations will be moving to upper layers including multi-system automation, real-time telemetry, and predictive analytics.

According to 650 Group, the open networking market (excluding hyperscalers) is expected to reach $1.35 billion by 2023, so this is a tech zone with a strong compound annual growth rate.

SONiC has some arguably solid backing within the industry, including Microsoft, LinkedIn, Alibaba and Tencent, for deploying and popularising an open source NOS on open networking switch hardware.


March 12, 2019  9:30 AM

F5 caps off growth plans with acquisition of Nginx

Adrian Bridgwater

The privately held Nginx will be acquired for a total enterprise value of approximately £500 million.

Nginx points to its open source community as one of the most attractive elements of the combination of the two firms.

F5 also says that open source (and user community interaction) is a core part of F5’s multi-cloud strategy and a driver for F5’s next phase of innovation.

As such, F5 expects the combination with Nginx will accelerate its product integrations with [some leading but unspecified] open source projects and will enhance its strong technology partnerships with open source vendors.

F5 has said that it will maintain Nginx operations in San Francisco, California and other locations globally.

F5 president and CEO François Locoh-Donou says that his firm’s application security and application services portfolio will work well with Nginx’s software application delivery and API management solutions and its open source user base.

F5 explains that it intends to enhance Nginx’s current offerings with F5 security solutions and integrate F5 cloud-native innovations with Nginx software load balancing technology — F5 will also use its global sales force, channel infrastructure, and partner ecosystem to scale Nginx selling opportunities.

No comment was made as to whether or not F5 would start spelling Nginx without capitalisation, given that the company name is neither an acronym nor an initialism of any form.

Nginx, not an acronym


March 7, 2019  10:21 AM

Cloud Foundry CTO on the key to DevOps transformation

Adrian Bridgwater

This is a guest post for the Computer Weekly Open Source Insider blog written by Chip Childers, CTO of the Cloud Foundry Foundation.

Cloud Foundry is an open source project with an open contribution and open governance model that claims to give users ‘maximum flexibility’ to avoid vendor lock-in. 

Explaining that adopting a ‘DevOps approach to digitisation’ can be disruptive, Childers argues that more and more organisations are doing it because it’s practical and delivers measurable benefits.

Childers writes as follows…

By its nature, DevOps improves collaboration among teams (business, dev and ops) by increasing transparency and revealing working practices – and this is essential for effective decision making.

Essentially, it applies Agile principles throughout the development cycle, leading to faster development of software, at higher quality, to ensure faster and more frequent delivery.

DevOps and Agile are both fundamentally about achieving a more responsive approach to technology changes.

Agile is about the development process of getting ideas manufactured into software, and DevOps is about being nimble with both deployment and changes to IT environments.

Specifically for software, DevOps essentially extends Agile software development methodologies into the realms of operations and infrastructure, leading to a more holistic development cycle.

Legacy lethargy

There are of course many challenges here.

In transitioning to a more modern, DevOps-driven approach, companies with a legacy infrastructure may encounter issues involving resistance to change that are both technological and cultural. There are risks in both inertia and impatience – attempting to modernise too quickly can lead to technical disruption or even outages… and can also create cultural friction within an organisation.

However, failing to modernise at a steady rate can lead to an organisation being left behind if it continues with the same outdated approach while its competitors choose DevOps and digitise more rapidly.

Ultimately, you can’t buy DevOps.

It’s not a tool or a product, even though there are plenty of companies determined to try to sell it to you.

Adopting a DevOps approach is about a cultural shift… and it requires implementing the technologies, tools and products that are actually a fit, in the context of that culture, between new processes and existing systems. The best way to do it will be different for every organisation, and identifying the details of that approach is crucial to a successful implementation.

Take automation, for example: DevOps is fundamentally about culture, process and tools working together to achieve a better ‘flow’ through the business.

Getting changes deployed quickly, with high quality and with as little effort as possible, naturally leads to automation. That said, there are risks in automating too much too quickly, or attempting to automate more nuanced tasks.

Certain tasks require more adaptable approaches or are more error-prone, so attempting to automate those can end up creating challenges rather than generating benefits.

Changing the culture

In making the move to DevOps, don’t begin with tools and technology – start with people and culture. It’s vital for a smooth transformation to get everyone talking and understanding each other.

Ensure that everyone involved really knows what the business is about and how they fit into it. Establishing shared understanding and trust among traditionally siloed roles is the hardest and most important step to take – and it may take a little time.

Once that’s underway, the approach for each organisation will be different depending on the stage of its digitisation journey.

For example, a small start-up with brand new applications and infrastructure can be a lot more ambitious in its approach to DevOps than an enterprise with an existing, older infrastructure, which will be much more difficult to turn around.

One thing that’s important for all companies to understand about DevOps is that no matter where they fall on that spectrum, they must have a realistic approach based on their own resources, teams and technology stack.

Childers: Ultimately, you can’t buy DevOps.



February 19, 2019  8:28 AM

Rubrik Build open source cloud data management for ‘any’ contributor

Adrian Bridgwater

Rubrik has announced Rubrik Build, a new open source community built around the organisation’s own cloud data management platform.

The community created is said to be 100% public and 100% open source — meaning that contributors can use existing software development kits, tools and use cases or contribute their own ideas, code, documentation and feedback.

This new formalised initiative is intended to help developers take advantage of Rubrik RESTful APIs.

Additional details of this announcement include access to SDKs in the Rubrik GitHub repository. The firm says that contributors can create new applications, tooling and integrations that simplify monitoring, testing, development and automated workflows.

The company says that it has always built its product around an API-first architecture and so has made its API a first-class citizen.

“Our goal is to establish a community around consuming Rubrik’s APIs to quickly get started with pre-built use cases, quick start guides, and integrations with popular tooling. The Build program was designed with our customers in mind, easing their transition to consuming APIs,” notes the company, in a corporate blog post.
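
For a flavour of what consuming a REST API like this looks like from a script, here is a minimal Python sketch; the endpoint path, cluster address and token are hypothetical placeholders rather than anything taken from Rubrik’s documentation.

```python
import requests

# Hypothetical cluster address and API token, for illustration only.
BASE_URL = "https://rubrik-cluster.example.com/api/v1"
TOKEN = "REPLACE_WITH_API_TOKEN"

# Fetch a listing from a hypothetical endpoint and print each item's name.
resp = requests.get(
    f"{BASE_URL}/objects",  # illustrative path, not a documented route
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("data", []):
    print(item.get("name"))
```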

The company also notes that many in the tech community do not come from a traditional software engineering background — and that this can make contributing to open source seem daunting.

Small wins, from anyone

According to the announcement blog, the firm is not daunted by this and is a big believer in ‘small wins’ to break large goals into manageable chunks.

“Over the last few months, I have watched teammates, from Rubrik’s first employee to our content marketing manager, learn and master using Git and GitHub to contribute ideas, edits and updates to project documentation,” notes the team.

Rubrik’s chief technologist Chris Wahl is keen to tweet and discuss contributions.


February 14, 2019  11:14 AM

Bow to your Sensu

Adrian Bridgwater

As all self-respecting geeks know, it’s important to bow to your sensei.

When it comes to open source cloud-native user-centric monitoring, it may soon be important to bow to your Sensu.

Sensu Go is described as a scalable and user-centric monitoring event pipeline, designed to improve (infrastructure) visibility and streamline workflows.

When Sensu talks infrastructure visibility, the company means infrastructure from Kubernetes to bare metal.

The firm aims to provide a single source of truth among application and infrastructure monitoring tools.

“With a distributed architecture, updated dashboard, a newly designed API, direct support for automated and live deployment of monitoring plugins to traditional and cloud-native environments (and designed with Kubernetes in mind) Sensu Go brings an elevated level of flexibility and integration for enterprises,” said the firm, in a press statement.

Sensu Go supports the integration of industry-standard formats, including Prometheus, StatsD and Nagios, as well as integration with enterprise products such as Splunk, Elastic, ServiceNow, Slack, InfluxDB etc.
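
For a sense of how lightweight the StatsD side of that is: StatsD metrics are plain-text UDP datagrams. A minimal Python sketch follows, assuming a listener on the conventional StatsD port of 8125 (the port is the StatsD convention, not a detail confirmed by Sensu’s announcement).

```python
import socket

# StatsD metrics are plain-text UDP datagrams in "name:value|type" form;
# "|c" marks a counter. Port 8125 is the StatsD convention and is an
# assumption here.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"myapp.requests:1|c", ("localhost", 8125))
```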

“Sensu Go empowers businesses to automate their monitoring workflows, offering a comprehensive monitoring solution for their entire infrastructure,” said Caleb Hailey, CEO of Sensu. “This latest release features human-centered design, with an emphasis on ease of use and quick time to value.”

The new versioned API is redesigned to establish a stable API contract and is said to be powerful enough to configure the entire Sensu Go instance, as well as being built to enable users who want to extend Sensu’s capabilities.

Sensu Go now provides direct support for downloading and installing server plugins and agent plugins with no additional tooling; i.e., no dependency on configuration management or custom Docker images.

Finally, there’s also Jira integration (enterprise only) – Jira is Atlassian’s issue and project tracking software and another enterprise integration requested by the Sensu community. The Jira handler, also a Sensu event handler, creates and updates Jira issues.
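
Event handlers of this kind are typically fed the event as JSON on standard input. Below is a minimal sketch of a custom pipe handler in Python; it is an illustrative stand-in, not the Jira handler itself, and the ticket-creation step is left as a placeholder.

```python
import json
import sys

# A Sensu pipe handler receives the event as JSON on stdin.
event = json.load(sys.stdin)

check = event["check"]["name"]
status = event["check"]["status"]  # 0 = OK, 1 = warning, 2 = critical

if status != 0:
    # Placeholder for the real action, e.g. creating or updating an issue.
    print(f"Would open a ticket: check '{check}' returned status {status}")
```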


February 13, 2019  8:28 AM

IBM’s Code and Response is open source tech for natural disasters

Adrian Bridgwater

IBM used its Think 2019 conference this month to announce Code and Response, a $25 million, four-year initiative to put open source technologies developed as part of coding challenges in the communities where they are needed most.

Code and Response is supported by IBM, a number of international (but predominantly US) governmental bodies, as well as NGO partners.

A connected partnership with the Clinton Global Initiative University is intended to equip college-age developers with skills in line with this initiative.

In its first year, Code and Response will pilot Project Owl, the winning solution from Call for Code 2018, in disaster-struck regions like Puerto Rico, North Carolina, Osaka, and Kerala.

IBM senior vice president for cognitive applications and developer ecosystems Bob Lord has noted that every year natural disasters affect close to 160 million people worldwide.

“To take a huge leap forward in effective disaster response, we must tap into the global open source ecosystem to generate sustainable solutions we can scale and deploy in the field. But we cannot do it alone,” said Lord.

IBM chairman, president and CEO Ginni Rometty announced Call for Code 2019 as part of the company’s five-year, $30 million commitment to social impact coding challenges.

The goal is once again to unite the world’s 23 million developers and data scientists to unleash the power of cloud, AI, blockchain and IoT technologies to create sustainable (and scalable) open source technologies. The emphasis this year is on the health and wellbeing of individuals and communities threatened by natural disasters.


February 12, 2019  10:47 AM

Furnace turns up heat on data streaming apps

Adrian Bridgwater

Furnace is a free and open source platform for the creation of streaming data (you may prefer to say data streaming) applications.  

Launched by the warmly named Furnace Ignite Ltd, Furnace itself is targeting the new breed of ‘data-rich’ organisations that need data streaming apps.

Data streaming apps might typically feature in smart cities, IoT, finance, marketing and other data intensive sectors and scenarios.

Furnace is an infrastructure- and language-agnostic serverless framework that can be instantiated and made operational in minutes.

It is aligned with the GitOps methodology.

As defined here by WeaveWorks, GitOps is a way to do Continuous Delivery — it works by using Git as a single source of truth for declarative infrastructure and applications.  

“With Git at the centre of your delivery pipelines, every developer can make pull requests and use Git to accelerate and simplify both application deployments and operations tasks to Kubernetes. By using familiar tools like Git, developers are more productive and can focus their attention on creating new features rather than on operations tasks,” notes WeaveWorks.
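
The mechanics behind that idea can be sketched in a few lines: a loop that treats a Git repository as the desired state and applies whatever lands on the tracked branch. The following is a toy Python illustration of the pattern under assumed paths, not how Weaveworks’ tooling (or Furnace) is actually implemented.

```python
import subprocess
import time

REPO_DIR = "/srv/deploy-config"  # hypothetical clone of the config repo

# Toy GitOps reconciliation loop: Git is the single source of truth, so
# whatever is merged to the tracked branch gets applied to the cluster.
while True:
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    subprocess.run(["kubectl", "apply", "-f", f"{REPO_DIR}/manifests/"], check=True)
    time.sleep(60)  # crude polling; real operators watch for changes instead
```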

Back to Furnace: Ovum analyst Rik Turner says that, from smart cities to marketing and security, vast new pools of data are unused or underemployed, both because of the scarcity of developer talent and the enormous commitment of time and money currently required to bring a complex application to market.

Fusion of data streams

Furnace is designed to reverse the trend of escalating complexity and costs of big data processing and storage.

As such, initial use-cases of Furnace include rapid and inexpensive fusion of data streams from disparate sources — plus also data filtration, sanitisation and storage, for reasons such as legal compliance.

“The platform has been architected in a way that allows it to be deployed into various infrastructures, be that cloud, on-premise or within hybrid environments, with the ability to ingest huge volumes of data from different sources, in various formats, so developers can take that data and make it useable. Furnace will continue to be developed to suit the needs of the open source community and we welcome user feedback on the platform,” said Danny Waite, head of engineering, Furnace Ignite.

Future key features will include: running natively in Microsoft Azure and Google Cloud; additional coding languages, including Python and Golang; and constructs to connect cloud-based applications to legacy on-premises platforms.


February 7, 2019  2:17 PM

MapR ecosystem pack amplifies Kubernetes connections

Adrian Bridgwater

Data analytics firm MapR Technologies has sealed the cellophane on the MapR Ecosystem Pack (MEP) at its 6.1 version iteration.

The toolpack is meant to give developers (and data scientists, unless they happen to be the same person) flexibility in how they access data and build AI/ML real-time analytics, plus flexibility for building stateful containerised applications.

MEP 6.1 also expands on the Kafka ecosystem, adds new language support for the MapR document database and support for Container Storage Interface (CSI).

“MapR was first to solve the stateful container challenge – first with Persistent Application Client Containers (PACC) for Docker containers, then with Flex-volume driver for Kubernetes,” said Suzy Visvanathan, director, product management, MapR.  

Visvanathan says this release is all about helping developers achieve greater independence between Kubernetes releases and underlying storage.

The CSI Driver leverages MapR volumes to provide scalable, distributed persistent storage for stateful applications — meaning that storage is no longer tightly coupled to, or interdependent with, Kubernetes releases.

“Implementation of CSI provides a persistent data layer for Kubernetes and other Container Orchestration (CO) tools, such as Mesos and Docker Swarm,” said Visvanathan.
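
To see what that persistent data layer looks like from the developer’s side, here is a minimal sketch using the official Kubernetes Python client to request a volume through a CSI-backed StorageClass. The class name “mapr-csi” is a hypothetical placeholder, not the actual name of MapR’s driver.

```python
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

# Claim persistent storage through a CSI-backed StorageClass; the name
# "mapr-csi" is an illustrative placeholder.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="mapr-csi",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```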

Released on a quarterly basis, MEPs are intended to give users access to the latest open source innovations in this space.

The company says that MEPs also ensure that these updates run in supported configurations with the MapR Data Platform and other interconnected projects.

For the MapR Database… there are new language bindings for Go and C#, giving developers a chance to build a broader set of new applications in the language of their choice on the MapR document database. Existing languages include Java, Python and Node.js.

There are also Oozie 5.1 enhancements (Oozie is a workflow scheduler system for managing Apache Hadoop jobs): the embedded web server dependency moves from Tomcat to Jetty, which is much more lightweight (and, most would agree, more secure), and the Oozie launcher is updated to run as a generic YARN application rather than in the MapReduce format.


February 6, 2019  11:19 AM

Intel Nauta: for Deep Learning on Kubernetes

Adrian Bridgwater

Enterprises are still exploring use cases to augment their business models with artificial intelligence (AI)… this is a market that is very much still nascent.

Magical analyst house Gartner has conjured up figures to suggest that the value of real-world AI deployments could reach nearly $4tn by 2022… and Deep Learning (DL) is key to that growth.

But, while DL in the enterprise is palpable, it is still a complex, risky and time-consuming proposition because it is tough to integrate, validate and optimise DL software solutions.

In an attempt to answer these challenges, we can look to Nauta as a new open source platform for distributed DL using Kubernetes.

What is Nauta?

Nauta (from Intel) provides a multi-user, distributed computing environment for running DL model training experiments on Intel Xeon Scalable processor-based systems.

Results can be viewed and monitored using a command line interface, web UI and/or TensorBoard.
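
For a flavour of the TensorBoard part of that workflow: training code writes metrics to a log directory, which TensorBoard then visualises. A minimal TensorFlow sketch follows; the directory name and loss values are illustrative only.

```python
import tensorflow as tf

# Write scalar metrics to a log directory that TensorBoard can read.
writer = tf.summary.create_file_writer("logs/experiment-1")

with writer.as_default():
    for step in range(100):
        loss = 1.0 / (step + 1)  # placeholder for a real training loss
        tf.summary.scalar("loss", loss, step=step)

# Then inspect the run with: tensorboard --logdir logs
```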

Developers can use existing data sets, proprietary data, or downloaded data from online sources and create public or private folders to make collaboration among teams easier.

For scalability and management, Nauta uses components from the Kubernetes orchestration system, using Kubeflow and Docker for containerised machine learning at scale.

DL model templates are available (and customisable) on the platform — for model testing, Nauta also supports both batch and streaming inference.

Intel has said that it created Nauta with the workflow of developers and data scientists in mind.

“Nauta is an enterprise-grade stack for teams who need to run DL workloads to train models that will be deployed in production. With Nauta, users can define and schedule containerised deep learning experiments using Kubernetes on single or multiple worker nodes… and check the status and results of those experiments to further adjust and run additional experiments, or prepare the trained model for deployment,” said Intel, in a press statement.

The promise here is that Nauta gives users the ability to use shared best practices from seasoned machine learning developers and operators.

At every level of abstraction, developers still have the opportunity to fall back to Kubernetes and use primitives directly.

Essentially, Nauta gives newcomers to Kubernetes the ability to experiment – while maintaining guard rails.

