Open Source Insider


February 19, 2019  8:28 AM

Rubrik Build: open source cloud data management for ‘any’ contributor

Adrian Bridgwater

Rubrik has announced Rubrik Build, a new open source community built around the organisation’s own cloud data management platform.

The community is said to be 100% public and 100% open source — meaning that contributors can use existing software development kits, tools and use cases, or contribute their own ideas, code, documentation and feedback.

This new formalised initiative is intended to help developers take advantage of Rubrik RESTful APIs.

Additional details of this announcement include access to SDKs in the Rubrik GitHub repository. The firm says that contributors can create new applications, tooling and integrations that simplify monitoring, testing, development and automated workflows.
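For the curious, consuming a RESTful API of this kind typically looks something like the Go sketch below. The endpoint path and token are placeholders for illustration only; the SDKs in Rubrik’s GitHub repository document the real API surface.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical endpoint and token, for illustration only.
	req, err := http.NewRequest("GET", "https://rubrik.example.com/api/v1/cluster/me", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <api-token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode the JSON response and pick out a single field.
	var body map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	fmt.Println(body["version"])
}
```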

The company says that it has always built its product around an API-first architecture, making its API a first-class citizen.

“Our goal is to establish a community around consuming Rubrik’s APIs to quickly get started with pre-built use cases, quick start guides, and integrations with popular tooling. The Build program was designed with our customers in mind, easing their transition to consuming APIs,” notes the company, in a corporate blog post.

The company also notes that many in the tech community do not come from a traditional software engineering background — and that this can make contributing to open source seem daunting.

Small wins, from anyone

According to the announcement blog, the firm is not daunted by this and is a big believer in ‘small wins’ to break large goals into manageable chunks.

“Over the last few months, I have watched teammates, from Rubrik’s first employee to our content marketing manager, learn and master using Git and GitHub to contribute ideas, edits and updates to project documentation,” notes the team.

Rubrik’s chief technologist Chris Wahl is keen to tweet and discuss contributions.

February 14, 2019  11:14 AM

Bow to your Sensu

Adrian Bridgwater

As all self-respecting geeks know, it’s important to bow to your sensei.

When it comes to open source cloud-native user-centric monitoring, it may soon be important to bow to your Sensu.

Sensu Go is described as a scalable and user-centric monitoring event pipeline, designed to improve (infrastructure) visibility and streamline workflows.

When Sensu talks infrastructure visibility, the company means infrastructure from Kubernetes to bare metal.

The firm aims to provide a single source of truth among application and infrastructure monitoring tools.

“With a distributed architecture, updated dashboard, a newly designed API, direct support for automated and live deployment of monitoring plugins to traditional and cloud-native environments (and designed with Kubernetes in mind) Sensu Go brings an elevated level of flexibility and integration for enterprises,” said the firm, in a press statement.

Sensu Go supports the integration of industry-standard formats, including Prometheus, StatsD and Nagios, as well as integration with enterprise products such as Splunk, Elastic, ServiceNow, Slack, InfluxDB etc.

“Sensu Go empowers businesses to automate their monitoring workflows, offering a comprehensive monitoring solution for their entire infrastructure,” said Caleb Hailey, CEO of Sensu. “This latest release features human-centered design, with an emphasis on ease of use and quick time to value.”

The new versioned API is redesigned to establish a stable API contract; it is said to be powerful enough to configure an entire Sensu Go instance and is built for users who want to extend Sensu’s capabilities.

Sensu Go now provides direct support for downloading and installing server plugins and agent plugins with no additional tooling; i.e., no dependency on configuration management or custom Docker images.

Finally, there’s also Jira integration (enterprise only). Jira is Atlassian’s issue and project tracking software and another enterprise integration requested by the Sensu community. The Jira handler, a Sensu event handler, creates and updates Jira issues.
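For context, a Sensu Go event handler is simply an executable that receives the event as JSON on stdin. The minimal Go sketch below decodes a few common event fields and prints a summary; a production handler such as the Jira one would call an external API instead, and the exact field names should be checked against the Sensu Go documentation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// A subset of the Sensu Go event payload, decoded from stdin.
type event struct {
	Entity struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"entity"`
	Check struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status uint32 `json:"status"`
		Output string `json:"output"`
	} `json:"check"`
}

func main() {
	var ev event
	if err := json.NewDecoder(os.Stdin).Decode(&ev); err != nil {
		fmt.Fprintln(os.Stderr, "failed to decode event:", err)
		os.Exit(1)
	}
	// A real handler would create or update a ticket here.
	fmt.Printf("%s/%s exited %d: %s\n",
		ev.Entity.Metadata.Name, ev.Check.Metadata.Name, ev.Check.Status, ev.Check.Output)
}
```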


February 13, 2019  8:28 AM

IBM’s Code and Response is open source tech for natural disasters

Adrian Bridgwater

IBM used its Think 2019 conference this month to announce Code and Response, a $25 million, four-year initiative to put open source technologies developed as part of coding challenges in the communities where they are needed most.

Code and Response is supported by IBM, alongside a number of international (but predominantly US) governmental and NGO partners.

A connected partnership with the Clinton Global Initiative University is intended to equip college-age developers with skills in line with this initiative.

In its first year, Code and Response will pilot Project Owl, the winning solution from Call for Code 2018, in disaster-struck regions like Puerto Rico, North Carolina, Osaka, and Kerala.

IBM senior vice president for cognitive applications and developer ecosystems Bob Lord has noted that every year natural disasters affect close to 160 million people worldwide.

“To take a huge leap forward in effective disaster response, we must tap into the global open source ecosystem to generate sustainable solutions we can scale and deploy in the field. But we cannot do it alone,” said Lord.

IBM chairman, president and CEO Ginni Rometty announced Call for Code 2019 as part of the company’s five-year, $30 million commitment to social impact coding challenges.

The goal is once again to unite the world’s 23 million developers and data scientists to unleash the power of cloud, AI, blockchain and IoT technologies to create sustainable (and scalable) open source technologies. The emphasis this year is on the health and wellbeing of individuals and communities threatened by natural disasters.


February 12, 2019  10:47 AM

Furnace turns up heat on data streaming apps

Adrian Bridgwater

Furnace is a free and open source platform for the creation of streaming data (you may prefer to say data streaming) applications.  

Launched by the warmly named Furnace Ignite Ltd, Furnace itself is targeting the new breed of ‘data-rich’ organisations that need data streaming apps.

Data streaming apps might typically feature in smart cities, IoT, finance, marketing and other data intensive sectors and scenarios.

Furnace is an infrastructure- and language-agnostic serverless framework that can be instantiated and made operational in minutes.

It is aligned with GitOps methodology.

As defined by WeaveWorks, GitOps is a way to do Continuous Delivery — it works by using Git as a single source of truth for declarative infrastructure and applications.

“With Git at the centre of your delivery pipelines, every developer can make pull requests and use Git to accelerate and simplify both application deployments and operations tasks to Kubernetes. By using familiar tools like Git, developers are more productive and can focus their attention on creating new features rather than on operations tasks,” notes WeaveWorks.
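At its core, that pattern is a reconcile loop: an agent continuously diffs the desired state held in Git against the observed state of the cluster and converges the two. The Go sketch below is purely conceptual; the function names are invented for illustration and stand in for real Git and Kubernetes API calls.

```go
package main

import (
	"fmt"
	"time"
)

// Stand-ins for real Git and cluster queries (illustrative only).
func fetchDesiredStateFromGit() string  { return "deployment:v2" }
func fetchObservedClusterState() string { return "deployment:v1" }
func apply(state string)                { fmt.Println("applied", state) }

func main() {
	// A real agent would loop forever; three iterations keep the sketch finite.
	for i := 0; i < 3; i++ {
		desired := fetchDesiredStateFromGit()
		observed := fetchObservedClusterState()
		if desired != observed {
			fmt.Printf("drift detected: reconciling %q -> %q\n", observed, desired)
			apply(desired)
		}
		time.Sleep(time.Second)
	}
}
```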

Back to Furnace: Ovum analyst Rik Turner says that, from smart cities to marketing and security, vast new pools of data are unused or underemployed, both because of the scarcity of developer talent and the enormous commitment of time and money currently required to bring a complex application to market.

Fusion of data streams

Furnace is designed to reverse the trend of escalating complexity and costs of big data processing and storage.

As such, initial use-cases of Furnace include rapid and inexpensive fusion of data streams from disparate sources — plus also data filtration, sanitisation and storage, for reasons such as legal compliance.
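To make the fusion and filtration idea concrete, the Go sketch below merges two event sources into one stream and drops records that fail a compliance filter. It illustrates the pattern only; it is not Furnace’s actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// fuse drains each source channel into a single output stream.
func fuse(sources ...<-chan string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for _, src := range sources {
			for msg := range src {
				out <- msg
			}
		}
	}()
	return out
}

func main() {
	a := make(chan string, 2)
	b := make(chan string, 1)
	a <- "sensor-1: temp=21"
	a <- "sensor-1: ssn=123-45-6789" // should be dropped for compliance
	close(a)
	b <- "sensor-2: temp=22"
	close(b)

	for msg := range fuse(a, b) {
		if strings.Contains(msg, "ssn=") {
			continue // filtration: drop non-compliant records
		}
		fmt.Println(msg)
	}
}
```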

“The platform has been architected in a way that allows it to be deployed into various infrastructures, be that cloud, on-premise or within hybrid environments, with the ability to ingest huge volumes of data from different sources, in various formats, so developers can take that data and make it useable. Furnace will continue to be developed to suit the needs of the open source community and we welcome user feedback on the platform,” said Danny Waite, head of engineering, Furnace Ignite.

Future key features will include: running natively in Microsoft Azure and Google Cloud; additional coding languages, including Python and Golang; and constructs to connect cloud-based applications to legacy on-premise platforms.


February 7, 2019  2:17 PM

MapR ecosystem pack amplifies Kubernetes connections

Adrian Bridgwater

Data analytics firm MapR Technologies has sealed the cellophane on the MapR Ecosystem Pack (MEP) at its 6.1 version iteration.

The toolpack is meant to give developers (and data scientists, unless they happen to be the same person) flexibility in terms of how they access data and build AI/ML real-time analytics and, also, flexibility for building stateful containerised applications.

MEP 6.1 also expands on the Kafka ecosystem, adds new language support for the MapR document database and support for Container Storage Interface (CSI).

“MapR was first to solve the stateful container challenge – first with Persistent Application Client Containers (PACC) for Docker containers, then with Flex-volume driver for Kubernetes,” said Suzy Visvanathan, director, product management, MapR.  

Visvanathan says this release is all about helping developers achieve greater independence between Kubernetes releases and underlying storage.

The CSI Driver leverages MapR volumes to provide a scalable, distributed persistent storage for stateful applications — and so this means that storage is no longer tightly coupled or interdependent with Kubernetes releases.
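From the developer’s side, that decoupling shows up as nothing more exotic than a PersistentVolumeClaim naming a storage class served by the CSI driver. The Go sketch below builds such a claim with the standard Kubernetes API types; the storage class name is an assumption for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical StorageClass name backed by a CSI driver.
	class := "mapr-csi"
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "stateful-app-data"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &class,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}
	// Print the claim a cluster admin would submit to Kubernetes.
	out, _ := json.MarshalIndent(pvc, "", "  ")
	fmt.Println(string(out))
}
```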

“Implementation of CSI provides a persistent data layer for Kubernetes and other Container Orchestration (CO) tools, such as Mesos and Docker Swarm,” said Visvanathan.

Released on a quarterly basis, MEPs are intended to give users access to the latest open source innovations in this space.

The company says that MEPs also ensure that these updates run in supported configurations with the MapR Data Platform and other interconnected projects.

For the MapR document database, there are new language bindings for Go and C#, giving developers a chance to build a broader set of new applications in the language of their choice. Existing bindings include Java, Python and Node.js.
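As a flavour of what document-database code in Go looks like, here is a toy in-memory sketch. The types and methods below are invented for illustration and are not MapR’s actual Go binding.

```go
package main

import "fmt"

// Document is a schemaless JSON-style record.
type Document map[string]interface{}

// Store is a toy in-memory stand-in for a document store.
type Store struct{ docs map[string]Document }

func (s *Store) InsertOrReplace(id string, doc Document) { s.docs[id] = doc }
func (s *Store) FindByID(id string) Document             { return s.docs[id] }

func main() {
	store := &Store{docs: make(map[string]Document)}
	store.InsertOrReplace("user0001", Document{"name": "Ada", "country": "UK"})
	fmt.Println(store.FindByID("user0001")["name"]) // Ada
}
```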

There are also Oozie 5.1 enhancements (Oozie is a workflow scheduler system for managing Apache Hadoop jobs): the embedded web server moves from Tomcat to the much more lightweight (and, most would agree, more secure) Jetty, and the Oozie launcher becomes a generic YARN application rather than a MapReduce job.


February 6, 2019  11:19 AM

Intel Nauta: for Deep Learning on Kubernetes

Adrian Bridgwater

Enterprises are still exploring use cases to augment their business models with artificial intelligence (AI)… this is a market that is very much still nascent.

Magical analyst house Gartner has conjured up figures to suggest that real-world AI deployments could reach nearly $4tn by 2022… and Deep Learning (DL) is key to that growth.

But, while DL in the enterprise is palpable, it is still a complex, risky and time-consuming proposition because it is tough to integrate, validate and optimise DL software solutions.

In an attempt to answer these challenges, we can look to Nauta as a new open source platform for distributed DL using Kubernetes.

What is Nauta?

Nauta (from Intel) provides a multi-user, distributed computing environment for running DL model training experiments on Intel Xeon Scalable processor-based systems.

Results can be viewed and monitored using a command line interface, web UI and/or TensorBoard.

Developers can use existing data sets, proprietary data, or downloaded data from online sources and create public or private folders to make collaboration among teams easier.

For scalability and management, Nauta uses components from the Kubernetes orchestration system, leveraging Kubeflow and Docker for containerised machine learning at scale.

DL model templates are available (and customisable) on the platform — for model testing, Nauta also supports both batch and streaming inference.

Intel has said that it created Nauta with the workflow of developers and data scientists in mind.

“Nauta is an enterprise-grade stack for teams who need to run DL workloads to train models that will be deployed in production. With Nauta, users can define and schedule containerised deep learning experiments using Kubernetes on single or multiple worker nodes… and check the status and results of those experiments to further adjust and run additional experiments, or prepare the trained model for deployment,” said Intel, in a press statement.

The promise here is that Nauta gives users the ability to use shared best practices from seasoned machine learning developers and operators.

At every level of abstraction, developers still have the opportunity to fall back to Kubernetes and use primitives directly.

Essentially, Nauta gives newcomers to Kubernetes the ability to experiment – while maintaining guard rails.


February 5, 2019  9:41 AM

Service Mesh: what is it, where are we… and where are we going?

Adrian Bridgwater

This is a contributed post for the Computer Weekly Developer Network written by Ranga Rajagopalan in his capacity as CTO and co-founder of Avi Networks.

Avi Networks is known for its Intelligent Web Application Firewall (iWAF) technology.

The firm offers a software-only product that operates as a centrally managed fabric across datacentres, private clouds and public clouds.

Rajagopalan writes…

Cloud-native applications – shorthand these days for containers and microservices – have a lot going for them, but most [or at least many] of the benefits come from their ability to accelerate dev and test processes, reducing the time it takes to bring applications online, fix bugs, add new features and so on.

Move those applications out of the dev & test sandbox and into the wider production world, however, and cloud-native applications [often] add new issues in terms of scalability, security and management… so potentially wiping out those benefits altogether.

The solution is a service mesh [A service mesh is a configurable infrastructure layer for a microservices application that can work alongside an orchestration tool such as Kubernetes], but it’s not a simple product you can just bolt on to make cloud-native applications production ready.

It’s more a framework, which can be used to connect cloud-native components to the services they need… and one which can be delivered in a variety of ways.

A matter of scale

Scalability is absolutely fundamental to the problems posed by cloud-native technologies, which work by breaking applications down into much smaller parts (microservices) each wrapped in its own lightweight and very portable virtual environment (container).

So, whereas a conventional web application might span a handful of virtual machines, a cloud-native app can comprise a collection of hundreds or even thousands of microservices, each in its own container running anywhere across a hybrid cloud infrastructure.

On the plus side, containers can be turned on and off, patched, updated and moved around very rapidly and without impacting on the availability of the application as a whole.

Each, however, also needs to find and communicate both with its companions and with shared load balancing, management, security and other application services. And that’s far from straightforward given the sheer number of containers involved and their potentially high turnover rates.

This need to communicate adds too much weight to cloud-native apps and would be a nightmare to manage at scale through traditional means. Hence the development of service mesh, a dedicated infrastructure layer for handling service-to-service requests, effectively joining up the dots for cloud-native apps by providing a centrally managed service ecosystem ready for containers to plug into and do their work.
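To make that concrete, consider the Go sketch below: without a mesh, every microservice carries its own timeout, retry and backoff logic for each service-to-service call, as here; with a sidecar proxy, the application makes a plain request and the mesh applies those policies uniformly. The service hostname is a placeholder for illustration.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// callWithRetry is the kind of per-call plumbing a service mesh
// would otherwise handle outside the application.
func callWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	var err error
	for i := 0; i < attempts; i++ {
		resp, e := client.Get(url)
		switch {
		case e != nil:
			err = e
		case resp.StatusCode >= 500:
			err = fmt.Errorf("server error: %s", resp.Status)
			resp.Body.Close()
		default:
			return resp, nil
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // crude backoff
	}
	return nil, fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	// "orders.internal" is a placeholder service name.
	resp, err := callWithRetry("http://orders.internal/api/v1/orders", 3)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```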

Project Istio, open source

Despite the relative immaturity of this market, there’s a lot going on to put this all into practice, both by vendors in the services space (particularly application delivery, traffic and security management solutions) and the big-name cloud providers. This has led to the development of a number of proprietary service mesh ‘products’.

But, of [perhaps] greater interest is Istio, an open source initiative originally led by Google, IBM and Lyft, but now with an ever-growing list of other well-known names contributing to and supporting its development, including Cisco, Pivotal, Red Hat and VMware.

Istio is now almost synonymous with service mesh, just as Kubernetes is with container orchestration. Not surprisingly, Istio’s initial implementations are very much bound to Kubernetes and cloud native application architecture. The promise of service mesh is alive today within a Kubernetes cluster, but the value will grow exponentially when a service mesh can be applied to all applications across clusters and clouds.  

Where next for service mesh?

As mentioned above, more and more companies are joining the service mesh conversation under the Istio banner. This is the type of support that helped Kubernetes evolve from a project to the de facto container orchestration solution in just a few short years.

The rich combination of established technology giants and [perhaps more] innovative startups will continue to foster and develop the Istio service mesh to include more features and support more applications. By extending Istio with other proven technologies, you can easily apply cross-cluster and even cross-cloud communication — opening the door to apply the value of service mesh to existing applications in the datacentre.

The promise of granular application services delivered close to the application is an idea that is readily applicable to traditional applications running on virtual machines or bare metal infrastructure. With the disaggregation of application components made possible by the containerised microservices architecture, this mode of service delivery is a necessity and will eventually become ubiquitous across all application types.

Companies looking to adopt cloud-native technologies will, almost certainly, need a service mesh of some description and the smart money is on Istio being part of that solution. Whatever format it takes, however, the chosen solution needs to deliver value that fits with new and existing applications.

Avi CTO Rajagopalan: big fan of clouds, all kinds.


January 30, 2019  7:05 PM

Nginx: managing monolithic app traffic is an API game

Adrian Bridgwater

Nginx is the company that likes to be called NGINX, except it’s not… because it’s not an acronym, it’s supposed to say “engine-X”, which is a cool snazzy name, right?

Actually, Nginx would only ever rank as Nginx, because almost all reputable press outlets only allow acronyms up to a maximum of three letters.

There’s always an exception that proves the rule and SuSE might be the fly in the ointment. Or could it be TIBCo (who would prefer we say TIBCO, for The Information Bus Company) that makes this an imperfect rule?

Either way, it’s tough to read NGINX news without automatically self-editing yourself back to Nginx, which might be a shame… because the firm’s application delivery platform has just been augmented by the availability of its API Management Module for Nginx Controller.

Nginx Controller manages application and web server performance. The API management module is special (Nginx would say ‘unique’) in that it is capable of separating runtime (day-to-day) traffic from management traffic, which is (very arguably) rather neat given the reliance web-connected firms now place on API management and the pressing need to improve API response times.

“The Nginx API management solution enables infrastructure and operations (I&O) teams to define and publish APIs, manage traffic to and secure backend applications, monitor performance issues and analyse API usage,” notes the company, in a press statement.

The Nginx API Management Module is built on an architecture that separates control-plane functionality (the module itself) from the data plane, with Nginx Plus acting as the API gateway.

The firm reminds us that Nginx is also a component in many traditional API management solutions, providing the underlying gateway for Axway, IBM DataPower, Kong, MuleSoft, Red Hat 3Scale, and others.

This technology provides what is said to be a ‘simple interface’ to define APIs, manage upstream groups and backend servers, route resources to upstreams, and publish the resulting API definitions to Nginx Plus gateways.
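As a rough illustration of that define-and-publish workflow, the Go sketch below posts a JSON API definition to a controller endpoint. Both the endpoint and the payload shape are invented for illustration; the real interface is documented by Nginx.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical API definition payload, for illustration only.
	def := map[string]interface{}{
		"name": "orders-api",
		"routes": []map[string]string{
			{"path": "/orders", "upstream": "orders-backend"},
		},
	}
	body, _ := json.Marshal(def)

	// Hypothetical controller endpoint.
	resp, err := http.Post("https://controller.example.com/api/definitions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("publish failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("published:", resp.Status)
}
```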

Both Nginx Controller and Nginx Plus are flexible and, thanks to their small footprint, can be deployed in any environment: bare metal, VMs, containers, and public, private and hybrid clouds.



January 29, 2019  2:59 PM

Alfresco dines out on framework extensions

Adrian Bridgwater

Open source process automation, content management and information governance software company Alfresco has hit version 3.0 of its ADF (Application Development Framework).

Company CTO John Newton claims that his firm has added some major extensibility features to Alfresco ADF to simplify building web apps.

ADF itself builds on the firm’s own Alfresco Digital Business Platform — it provides a set of reusable Angular-based UI (user interface) components and services, command-line tooling and client APIs (application programming interfaces) that surface Alfresco process, content and governance services.

“We believe a digital business needs more than a content repository, it needs a Digital Business Platform that developers can extend and customise. Enterprises today need a single source of truth for information across their end-user apps and back-end systems. That’s the power of process-led, cloud-native content management – a single platform to manage, secure, and collaborate on content,” said Alfresco CTO Newton.

Developers will now be able to extend the main ADF components with their own code. This new extensibility mechanism helps maintain code customisations while remaining up-to-date with the latest ADF versions.

Future-proof

By including this new extension framework, developers can isolate their custom work and upgrade (“future proof”) to later versions of ADF without losing their original code.

Alfresco has rewritten the JavaScript API in TypeScript so that packaging of app source code is optimised for better performance. TypeScript, designed for large app development, is also the primary language of Angular.

This is said to ensure performance improvements of ADF apps when deployed in production.

Alfresco’s ADF 3.0 also now supports the newest version of the cloud-native, open source BPM project, Activiti 7.0.

With support for Angular 7.0, developers will gain access to its performance improvements and several new design features, including virtual scrolling and drag and drop.


January 29, 2019  2:38 PM

Dynatrace goes for Go (Golang)

Adrian Bridgwater

Software applications need management, monitoring, testing and continual levels of deep tissue massage to ensure they run as intended and deliver to the user requirements for which they were initially built.

This is the space in which Application Performance Monitoring (APM) specialist Dynatrace has been working to position itself since the company was first formed in Austria in 2005.

The company’s application monitoring (and testing) tools are available as cloud-based SaaS services or as on-premises software… and this month we see the firm extend its work to provide automatic code level insights into applications that rely on the popular Go programming language.

Go Golang

Sometimes referred to as Golang, Go was initially developed by Google as an open source, general-purpose programming language.

It is statically typed (variable types are declared upfront and checked at compile time) and explicit (code must be manually structured to execute specific tasks).
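A two-line example makes the point: in the Go sketch below, types are fixed when the program is compiled, so a type mistake fails the build rather than surfacing at runtime.

```go
package main

import "fmt"

func main() {
	var count int = 3 // type declared upfront...
	var label string = "retries"

	// ...and enforced at compile time: uncommenting the next line
	// fails the build, rather than failing while the program runs.
	// count = "three" // cannot use "three" (untyped string constant) as int value

	fmt.Printf("%s: %d\n", label, count)
}
```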

Looking at the latest news from Dynatrace, the company insists that [when alterations need to be made to an application] there is no need to inject code into thousands of microservices or change the code of a Go application… instead, Dynatrace automatically discovers and monitors Go components.

With this feature, Dynatrace extends its scope of AI capabilities for cloud native platforms including Cloud Foundry, Kubernetes and Red Hat OpenShift.

Why go for Go?

So why the focus on Go?

Because (based on 2018 GitHub usage) Dynatrace says it has recognised how fast Go is growing in popularity – it is estimated to be used by nearly two million developers today.

“Go is lightweight, suited to microservices architectures and fast becoming the programming language of the cloud. Yet most monitoring tools are blind to Go which has left organisations having to do manual development and configuration to get any sort of insight,” said Steve Tack, SVP of product management at Dynatrace. “The Dynatrace platform is built for the cloud and now automatically picks up and shines a light into Go components, ensuring that Go isn’t creating an enterprise cloud blind spot.”

Looking inside a cloud

This new capability is said to be particularly important to organisations that are using high-profile cloud platforms built using Go.

They now have visibility into the performance of the source code of these platforms which means they can utilise the AI from Dynatrace to automatically surface the root cause of performance problems… and this, in turn, enables DevOps teams to focus on optimisation rather than troubleshooting.

The same applies to those developing in Go, who gain the additional benefit of more accurate performance insights.


