Open Source Insider


October 15, 2018  10:49 AM

Container-native, it’s now ‘a thing’

Adrian Bridgwater

San Francisco headquartered software analytics company New Relic has acquired Belgian container and microservices monitoring firm CoScale.

Neither firm is open source at its core, but the technologies in play here certainly are.

CoScale’s expertise is in monitoring container and microservices environments, with a special focus on Kubernetes — the open source container orchestration system for automating deployment, scaling and management of containerized applications originally designed by Google.

New Relic notes that Kubernetes has become the de facto standard for orchestrating containerized applications, which it indeed has.

Container-native

The company claims that CoScale has been a leader in providing container-native (not a term we have used much up until now) monitoring for Docker, Kubernetes and OpenShift, always with the aim of providing full-stack container visibility in production environments.

Key CoScale team members will join New Relic and relocate to its European Development Center in Barcelona, Spain.

As of now, a Google search for “what is container native” auto-completes to “what is container native storage”, but that may be because Red Hat directly brands a product in the space as Container Native Storage (CNS).

We could quite reasonably suggest that this may soon change to “what is container native development”, as we look to this increasingly de facto form to govern the way we use cloud resources in live production software application development environments.

October 3, 2018  6:56 PM

DataStax code leader: how to be a good guru

Adrian Bridgwater

Developers need thought leadership; it’s a fact.

It is for this reason that tech vendors focused on software application development often employ futurists, evangelists and advocates.

Computer Weekly Open Source Insider asked DataStax ‘solutions architect / vanguard’ (Ed: wow, that’s a new one) Patrick Callaghan what he thinks it takes to be a good visionary lead in the developer space.

DataStax itself is a data management company with roots in open source — the company is a (if not ‘the’) key contributor to Apache Cassandra.

CW OSI: What do developer relations teams and the advocates that lead them really have to concern themselves with today?

Callaghan: It’s all about engaging the user community as peers and equals.

Developer relations teams have to act as advocates for users within their companies, as well as creating all the content that provides insight into all the innovative work that can be done with software tools over time.

This makes it easier and faster for developers to adopt solutions in the right ways.

CW OSI: How can DevRel teams and evangelists avoid being tarred with the ‘marketing’ brush?

Callaghan: It’s about being truthful about what your product does, how it works and, equally, what it can’t be used for.

You are working with engineers – they want to understand how things work, and why different approaches to problems should be considered.

CW OSI: Can you give us an example?

Callaghan: Say you have to cross a river – a bridge and a boat both help you cross the water, but whether you choose one or the other depends on your needs.

Software developers face similar challenges around picking the right tools from cloud databases to software languages for their projects. Making the right choices helps you avoid re-platforming or changing in the near future, so good advice is essential.

You can’t fake that approach.



September 21, 2018  10:30 AM

CloudBees pours honey on Jenkins support

Adrian Bridgwater

CloudBees are (is) a bunch of busy social bees (presumably why the firm called itself that).

The enterprise DevOps company used the DevOps World | Jenkins World event to announce CloudBees Jenkins Support, a subscription service for commercial Jenkins support.

Jenkins is an open source automation server with what has been claimed to be as much as 62 percent market share and millions of active users — it is, essentially, a means of achieving continuous delivery for DevOps teams.

CloudBees says its engineers contribute about 80 percent of the source code for Jenkins.

Additionally, Kohsuke Kawaguchi, chief technology officer at CloudBees, is the original developer of Jenkins.

This whole package is available as a ‘support-only’ subscription from CloudBees — which comes with technical support and maintenance for Jenkins, backed by an SLA.

The firm promises enhanced Jenkins stability, risk-free upgrades via the CloudBees Assurance Program and a ‘rigorous’ vetting process to verify Jenkins core and identify CloudBees-certified plugins.


September 20, 2018  9:50 AM

IBM open source ‘anti-bias’ AI tool to combat racist robot fear factor

Adrian Bridgwater

Developers using Artificial Intelligence (AI) ‘engines’ to automate any degree of decision making inside their applications will want to ensure that the brain behind that AI is free from bias and unfair influence.

Built with open source DNA, IBM’s new Watson ‘trust and transparency’ software service claims to be able to automatically detect bias.

No more ‘racist robots’ then, as one creative subheadline writer suggested?

Well it’s early days still, but (arguably) the effort here is at least focused in the right direction.

IBM Research will also release an AI bias detection and mitigation toolkit to open source.

Fear factor

Big Blue’s GM for Watson AI, Beth Smith, has pointed to the fear factor that exists across the business world when it comes to the use of applications that incorporate AI.

She highlights research by IBM’s Institute for Business Value, which suggests that while 82 percent of enterprises are considering AI deployments, 60 percent fear liability issues and 63 percent lack the in-house talent to confidently manage the technology.

So how does it work?

IBM says that its new cloud-based trust and transparency capabilities work with models built in a wide variety of machine learning frameworks and AI build environments, such as Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.

This automated software service is designed to detect bias (in so far as it can) in AI models at runtime, as decisions are being made. More interestingly, it also automatically recommends data to add to the model to help mitigate any bias it has detected… so we should (logically) get better at this as we go forward.

For the user, explanations are provided in easy to understand terms, showing which factors weighted the decision in one direction vs. another, the confidence in the recommendation, and the factors behind that confidence.

AI Fairness toolkit

In addition, IBM Research is making available to the open source community the AI Fairness 360 toolkit – a library of algorithms, code and tutorials for data scientists to integrate bias detection as they build and deploy machine learning models.

According to an IBM press statement, “While other open-source resources have focused solely on checking for bias in training data, the IBM AI Fairness 360 toolkit created by IBM Research will help check for and mitigate bias in AI models. It invites the global open source community to work together to advance the science and make it easier to address bias in AI.”
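For a feel of the kind of arithmetic involved, here is a minimal sketch of one classic fairness metric, disparate impact: the ratio of favourable-outcome rates between an unprivileged and a privileged group. AI Fairness 360 itself is a Python library; the TypeScript below is purely illustrative, and the data shape and the 0.8 rule-of-thumb threshold are assumptions rather than anything IBM prescribes.

```typescript
// Disparate impact: favourable-outcome rate of the unprivileged group divided
// by that of the privileged group. Values well below 1.0 (a common rule of
// thumb flags anything under 0.8) suggest the model may be biased.
type Example = { privileged: boolean; favourableOutcome: boolean };

function favourableRate(examples: Example[], privileged: boolean): number {
  const group = examples.filter(e => e.privileged === privileged);
  return group.length === 0
    ? 0
    : group.filter(e => e.favourableOutcome).length / group.length;
}

function disparateImpact(examples: Example[]): number {
  return favourableRate(examples, false) / favourableRate(examples, true);
}

// Example: 30% favourable outcomes for the unprivileged group vs. 60% for the
// privileged group gives a disparate impact of 0.5, i.e. worth flagging.
```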

It’s good, it’s broad, but it is (very arguably) still not perfect, is it? But then (equally arguably) neither are we humans.

Image source: IBM


September 19, 2018  2:17 PM

Experimental ‘foundry’ divisions are now tech de rigueur

Adrian Bridgwater

Pardon us… your company doesn’t have a technology experimental division?

Okay sorry, it might be called the prototyping department, the Proof of Concept (PoC) laboratory, the exploration ‘foundry’, the alpha (as in pre-beta) division or it might even be unceremoniously labelled the mock-up shop.

Whatever it’s called, the presence of an exploratory division or department now appears to be de rigueur for all self-respecting technology enterprises.

General Electric (GE) runs so-called foundries (Digital Industrial Foundry sites, to give them their full name) in San Francisco, Paris and Shanghai.

SAP has built workspaces to help populate and promote its SAP.iO Foundry programme, a network of foundry locations dedicated to helping build software that exists both inside and outside of SAP.

According to the firm, “The SAP.iO Fund invests in external early-stage enterprise software startups that strategically expand the SAP ecosystem.”

Current SAP.iO foundries are in major startup hubs, including Paris, Berlin, Tel Aviv, New York City and San Francisco.

IBM’s alphaWorks is a long established example of the same kind of emerging technology exploration — although it’s now called IBM developerWorks Open.

Newest (perhaps) to the table is GitHub.

GitHub has launched Experiments, a collection of demonstrations highlighting its ‘most exciting’ (its term, not ours) research projects and the ideas behind them.

The first demo is Semantic Code Search.

Semantic Code Search allows you to find code through meaning instead of keyword matching. That means the best search results don’t necessarily contain the words you searched for.

GitHub has used machine learning to build semantic representations of code that allow users to use natural language to search for code by intent, rather than just keyword matching.
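To make the idea concrete, here is a minimal sketch of embedding-based search: given vector representations of code snippets and of a query (produced by some trained model, which is assumed rather than shown here), results are ranked by cosine similarity instead of keyword overlap. This illustrates the general technique, not GitHub’s actual implementation.

```typescript
// Each snippet carries a vector embedding produced by a (hypothetical) model.
type Embedded = { snippet: string; vector: number[] };

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank snippets by similarity to the query vector: the top results may share
// no keywords with the query at all, only a nearby spot in embedding space.
function semanticSearch(queryVector: number[], corpus: Embedded[], topK = 5): Embedded[] {
  return corpus
    .map(entry => ({ entry, score: cosine(queryVector, entry.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map(scored => scored.entry);
}
```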

You can find out more about Semantic Code Search in this linked blog post.

Someone once said that if at least 10% (or so) of the products and services you launch don’t fail, then you (as a company) are simply not testing yourself… it must be true.

Image: Dr. Bunsen Honeydew (source Wikipedia)


September 17, 2018  10:37 AM

Pulumi offers ‘code-based approach’ to cloud native apps

Adrian Bridgwater

Pulumi Corporation has come forward with new libraries and tools for Kubernetes, the open source container orchestration system.

Pulumi who? Ah yes sorry, Pulumi is quite new (founded in 2017 and pronounced ‘puh-loo-me’) and describes itself as a cloud native development platform company that provides frameworks and libraries to build, deploy and manage cloud services using familiar programming languages (including JavaScript, TypeScript, Python and Go).

The new libraries and tools enable what the firm calls a ‘code-based approach’ to coding (yeah, okay, that part was obvious), but also to deploying and managing applications across clouds including Amazon EKS, Microsoft AKS and Google GKE, in addition to on-premises and hybrid environments.

How does it work?

As well as providing full access to the Kubernetes APIs, Pulumi enables the execution of all Kubernetes operations through ‘config as code’, an approach that also opens the door to code reuse and abstraction.
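As a rough illustration of what ‘config as code’ looks like in practice, the sketch below declares a Kubernetes Deployment with Pulumi’s TypeScript SDK. The resource names and container image are placeholders rather than anything Pulumi prescribes.

```typescript
import * as k8s from "@pulumi/kubernetes";

// A Kubernetes Deployment declared in ordinary TypeScript rather than YAML.
const appLabels = { app: "nginx" };

const deployment = new k8s.apps.v1.Deployment("nginx", {
  spec: {
    selector: { matchLabels: appLabels },
    replicas: 2,
    template: {
      metadata: { labels: appLabels },
      spec: {
        containers: [{ name: "nginx", image: "nginx:1.15" }],
      },
    },
  },
});

// Because this is code, the Deployment can be wrapped in a function or class
// and reused across stacks targeting EKS, AKS, GKE or an on-premises cluster.
export const deploymentName = deployment.metadata.name;
```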

“Kubernetes is rapidly standardising compute across all public and private clouds and paves the way for a true cloud native development platform. Pulumi simplifies defining, deploying and managing modern cloud native applications,” said Joe Duffy, CEO, Pulumi.

Deploy a canary

This technology also supports advanced, complex service management functionality. For instance, Pulumi can deploy a canary, then run Prometheus-based health checks, before progressing to a production rollout or rollback if necessary.

As noted by Octopus here, canary [i.e. early warning] deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it and then roll the change out to the rest.
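A sketch of the health-check gate in that pattern might look something like the following: query Prometheus for the canary’s error rate and decide whether to promote or roll back. The Prometheus address, metric name and threshold are hypothetical placeholders, and this shows the general pattern rather than Pulumi’s own implementation (it assumes Node 18+ for the built-in fetch).

```typescript
// Hypothetical Prometheus endpoint and error-rate query for the canary pods.
const PROMETHEUS_URL = "http://prometheus.example.internal:9090";
const ERROR_RATE_QUERY =
  'sum(rate(http_requests_total{deployment="myapp-canary",status=~"5.."}[5m]))' +
  ' / sum(rate(http_requests_total{deployment="myapp-canary"}[5m]))';

async function canaryErrorRate(): Promise<number> {
  const url = `${PROMETHEUS_URL}/api/v1/query?query=${encodeURIComponent(ERROR_RATE_QUERY)}`;
  const body: any = await (await fetch(url)).json();
  // Prometheus instant queries return [timestamp, "value"] pairs.
  const result = body.data.result[0];
  return result ? parseFloat(result.value[1]) : 0;
}

async function canaryGate(threshold = 0.01): Promise<void> {
  const errorRate = await canaryErrorRate();
  if (errorRate < threshold) {
    console.log(`Canary healthy (error rate ${errorRate}); promote to full rollout.`);
  } else {
    console.log(`Canary unhealthy (error rate ${errorRate}); roll back.`);
  }
}

canaryGate().catch(console.error);
```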

The code reuse element here is also said to ease previously difficult operations, such as orchestrating complex multi-stage cloud application deployments or combining Kubernetes deployments with other cloud resources such as datastores.


September 13, 2018  3:58 PM

SAP SuccessFactors builds open co-creation community

Adrian Bridgwater

SAP is bringing a new element of automation to Human Resources (HR) with a new open community designed to create purpose-built and easy-to-consume HR applications.

The hope is that organisations of all sizes from enterprises to startups will come together to co-create applications (and smaller micro-apps) in this space.

“We believe this wave of innovation will result in a ‘human revolution’ that will allow businesses to focus time, talent and energy on the thing that really matters: the people that lead to business outcomes. With this community, we can help assemble a complementary set of solutions for our customers’ diverse needs. And, if they don’t exist yet, we can co-create them together,” said SAP SuccessFactors president Greg Tomb.

This initiative comes on the back of the SAP App Center, SAP’s own somewhat similar app-store-style collection of enterprise software applications.

The new community is organised around apps that fall into six initial pillars:

  • well-being
  • pay equity
  • real-time feedback
  • unbiased recruiting
  • predictive performance
  • internal mobility

SAP promises ‘more pillars’ in the coming quarters.

As an example, SAP SuccessFactors is working with Arianna Huffington’s science-based stress-lowering project Thrive Global to attempt to ‘operationalize’ a culture of well-being and improve the employee experience overall.

“Reducing bias in recruiting takes more than the best of intentions,” said Jim McCoy, chief revenue officer and general manager at Scout Exchange, a company known for its data-driven software designed to connect employers and search firms to fill jobs.

“[This action to reduce bias] requires data-backed analysis, process, behavioural changes and comprehensive strategies. As a first step, employers should commit to tracking, reporting and achieving diversity recruiting progress against a set of defined metrics. It’s also important to benchmark performance against industry peers so you’re comparing to more than just your own internal improvements.”

People issues

Additional partners comprising this new community are focused on providing solutions to other core ‘people issues’ including:

  • Helping employees dial down financial stress with Best Money Moves
  • Ensuring employees are paid equitably with PayScale
  • Developing next-generation leaders with AI-powered coaching from Cultivate
  • Providing insightful feedback to enhance employee engagement with Culture Amp
  • Achieving diversity goals with Blendoor
  • Eliminating recruiting bias with Brilliant Hire by SAP employees in the SAP.iO Venture Studio
  • Hiring, growing and retaining top talent using AI to build a deep talent database with Plum
  • Hiring internal, external and contingent talent more effectively with AI from HiredScore
  • Mobilising the workforce to cover understaffing with Andjaro

In October, SAP says it will launch a new SAP.iO Foundry in San Francisco, where SAP SuccessFactors will provide support to select startups with access to curated mentorship, exposure to SAP technology and Application Programming Interfaces (APIs) and opportunities to collaborate with SAP enterprise customers.


September 13, 2018  3:39 PM

Going to Hyperledger school

Adrian Bridgwater

Hyperledger (or the Hyperledger project) is an umbrella project of open source blockchains and related tools.

The project was founded by the Linux Foundation at the end of 2015 with the intention of encouraging the collaborative development of blockchain-based distributed ledgers.

In a play to help developers learn how to use this technology, the Linux Foundation has now announced the LFD271 – Hyperledger Fabric Fundamentals training course.

Additionally, Certified Hyperledger Fabric Administrator and Certified Hyperledger Sawtooth Administrator exams will be released later in 2018.

“Blockchain technology adoption is increasing at a rapid pace and [media reports suggest that] blockchain jobs are the second-fastest growing in today’s labour market – leading to a shortage of professionals who are qualified to implement and manage it on an enterprise scale,” said Linux Foundation general manager for training & certification Clyde Seepersad.

Seepersad says that after seeing more than 100,000 students take the foundation’s free introductory Hyperledger course, they knew it was time for more advanced training options and certification exams to demonstrate the extent of professionals’ knowledge.

The Hyperledger Fabric Fundamentals course introduces the fundamental concepts of blockchain and distributed ledger technologies, as well as the core architecture and components that make up typical decentralised Hyperledger Fabric applications.

The course is designed for application developers to learn how business logic is implemented in Hyperledger Fabric through chaincode (Hyperledger Fabric’s smart contracts) and review the various transaction types used to read from and write to the distributed ledger.

Application developers will be shown how their applications can invoke transactions using the Hyperledger Fabric JavaScript SDK.
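As a flavour of what that looks like, here is a minimal sketch using the fabric-network package (the Hyperledger Fabric SDK for Node.js, usable from TypeScript). The connection profile, wallet path, identity, channel, chaincode and transaction names below are placeholder assumptions in the style of the Fabric samples, not material from the course itself.

```typescript
import { Gateway, Wallets } from "fabric-network";
import { readFileSync } from "fs";

async function main(): Promise<void> {
  // Connection profile and wallet locations are illustrative placeholders.
  const connectionProfile = JSON.parse(readFileSync("connection-org1.json", "utf8"));
  const wallet = await Wallets.newFileSystemWallet("./wallet");

  const gateway = new Gateway();
  await gateway.connect(connectionProfile, {
    wallet,
    identity: "appUser",
    discovery: { enabled: true, asLocalhost: true },
  });

  try {
    const network = await gateway.getNetwork("mychannel");
    const contract = network.getContract("asset-chaincode");

    // Write: submitTransaction sends the proposal for endorsement and ordering.
    await contract.submitTransaction("CreateAsset", "asset1", "blue", "5");

    // Read: evaluateTransaction queries the ledger without going to the orderer.
    const result = await contract.evaluateTransaction("ReadAsset", "asset1");
    console.log(result.toString());
  } finally {
    gateway.disconnect();
  }
}

main().catch(console.error);
```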


September 11, 2018  5:19 PM

Sauce Labs coding lead: how open source contribution should work

Adrian Bridgwater

Sauce Labs’ head of open source Isaac Murchie talks to Computer Weekly Open Source Insider to deconstruct open source, ask what happens next in terms of what matters, examine who is (and isn’t) contributing and question which myths we should dispel.

Sauce Labs is known for its Continuous Testing (CT) technology and the company is a devoted adherent to open source — it provides a continuous testing cloud that allows developers to verify that their web and mobile native apps work correctly across a variety of devices using the open source testing protocols Selenium and Appium.  

Computer Weekly OSI: What concerns you most in open source today?

Murchie: The biggest issue that the open source community should keep an eye on in the years ahead is the tendency of companies to use open source systems but not contribute to them. At Sauce Labs, we know that the open source ecosystem can only survive if people give back. If maintenance and development is ignored, software systems will falter and fade.

Computer Weekly OSI: Why are some individuals failing to contribute to open source?

Murchie: There are legitimate reasons, such as work that leverages existing closed-source technology, or code that is key to a company’s market differentiation. But for the most part it seems to be a misguided idea of controlling and securing the code. In this mindset, any work that has been done is proprietary and potentially makes money, so it should not be disclosed to ‘competitors’.

While this is bad for the ecosystem outside the company, as the work has to be repeated, it is also bad internally, since it makes upgrading difficult. If work has been done to patch and fix a particular version, then as things change in the open source system the code that was fixed internally can diverge… and the company becomes locked into the particular version it got working. If the changes were contributed back to the project, any subsequent work would have been done with that code in it.

Computer Weekly OSI: How can firms support and encourage their developers to contribute to open source?

Murchie: By allowing them the time, or by supporting projects financially, whether that means paying developers or paying for the services needed to maintain good software (continuous integration services, for instance). It can also mean legal support, helping employees navigate the contributor licensing agreements that many projects have. And it can be moral support, acknowledging that the work is significant and valued.

Computer Weekly OSI: Which open source myths do you want to dispel?

Murchie: I would say that the largest myth is that contributing back to the community gives up some purported competitive advantage. At the lower end of the scale this is ridiculous. If a company’s competitive advantage lies in the fact that it has fixed a bug in an open source system it uses, then that advantage is tenuous at best. At that point the excuse is just covering an unwillingness to go through the corporate legal hoops to get permission, and, as I said before, it will bite back when the underlying system changes.

In larger-scale work, the competitive advantage argument may make more sense. However, when it is used as a blanket excuse not to allow any open source work, it covers work that could easily, and fruitfully, be contributed back to the public. In particular, work that is used by customers and users will often be helped by their having access to the code. Not only do they tend to fix problems, but studies have shown that the code becomes more secure. In addition, we have found that users will, given the code, make use of it in novel and exciting ways, which actually helps them use our system.

Computer Weekly OSI: How much of the success in open source do you think hinges on contributors/maintainers maintaining existing projects vs. companies releasing proprietary code or new projects to open source?

Murchie: I think both are necessary. Old projects are inevitably the building blocks of new ones… and as such need to be maintained. This becomes more true the more ‘basic’ the projects are… libraries and frameworks are the ones that get depended on, while application software tends to change as user needs change. That is not to say that newer ways of doing something, no matter how basic, should not supplant an old way, as technologies and approaches evolve. New projects, on the other hand, drive the adoption and utility of open source in general.

Open source democracy

As an end note here, Sauce Labs says it’s also about culture and the firm insists that contributions come all the way from Charles Ramsay, the CEO, down.

Murchie has said that this also highlights that open source is not just about lines of code. Every expertise that is useful within a company is also useful in the open source community.

Sauce Labs’ Murchie: Let’s all tune in, contribute and play nice now please.


September 10, 2018  4:58 PM

The ‘scramble’ for cloud repatriation

Adrian Bridgwater

There are networks, there are special interest groups, there are consortia, there are working groups, there are foundations, there are alliances and there are coalitions.

There are also common enterprise deployment model initiatives.

This is what has brought Hortonworks, IBM and Red Hat together this month; the new three-way union is called the Open Hybrid Architecture Initiative.

This collaboration has come about to attempt to provide a common enterprise deployment model to enable big data workloads to run in a hybrid manner across on-premises, multi-cloud and edge architectures.

It is, to put it simply, all about getting big data to work in more places… and about repatriating from public cloud in instances where public deployments have failed to deliver.

As the initial phase of the initiative, the companies plan to work together to optimize Hortonworks Data Platform, Hortonworks DataFlow, Hortonworks DataPlane and IBM Cloud Private for Data for use on Red Hat OpenShift, the container and Kubernetes application platform.

The firms say they see customers moving to hybrid cloud environments that use lightweight microservices in the most efficient manner possible.

As such, Hortonworks plans to certify Hortonworks Data Platform (HDP), Hortonworks DataFlow (HDF) and Hortonworks DataPlane as Red Hat Certified Containers on Red Hat OpenShift and looks forward to achieving “Primed” designation.

In addition, Hortonworks will enhance HDP to adopt a cloud-native architecture for on-premises deployments by separating compute and storage and containerising all HDP and HDF workloads.

IBM, for its part, has begun the Red Hat OpenShift certification process for IBM Cloud Private for Data and has achieved “Primed” designation as the first phase.

The move is intended to give the OpenShift community of developers fast access to analytics, data science, machine learning, data management and governance capabilities across hybrid clouds.

The firms say that their customers are “scrambling” to bring applications once designed for public cloud back behind the firewall for greater control, lower costs, greater security and easier management.

Repatriation situation

An IDC Cloud and AI Adoption Survey suggests that more than 80 percent of respondents plan to move or repatriate data and workloads from public cloud environments to hosted private clouds or on-premises locations behind the firewall over the next year, because the initial expectations of a single public cloud provider were not realised.

This, then, is the kind of result that vendors are producing in response to the need for cloud repatriation.

Wikimedia Commons

