Open Source Insider

January 20, 2020  10:42 AM

Open source licence series – Perforce: it was NEVER about the money

Adrian Bridgwater

Open source grew, it proliferated… and it became something that many previously proprietary-only software vendors embraced as a key means of development… but the issue of how open source software is licensed is still the stuff of some debate.

The Computer Weekly Open Source Insider team now features a series of guest posts examining the ups & downs and ins & outs of open source software licensing.

Rod Cope, CTO of Perforce Software, and Justin Reock, chief architect of OpenLogic at Perforce Software, write from this point forward.

Rod Cope: not in it for the money

I’ve been using open source software since the ’80s, long before we called it open source. Back then, it was mainly developers ‘scratching their own itch’ and coming up with creative solutions they wanted to share with their peers.

It was more about doing it right (where each developer was free to use their own definition of ‘right’) than it was about schedules, business value, or any thought of making money. Over time, communities of these likeminded people formed to tackle larger projects that were beyond the scope of a single developer (in most cases).

The final results were often very good, thanks to an unlimited time budget, peer reviews and a meritocracy where only the ideas mattered and no one in business management could come along and force a hasty decision.  

Again, in those early days (and for many developers now, still) profit wasn’t a motivator.  It was all about creating the best software possible and the freedom to try new approaches, architectures, languages, project management strategies and anything else that might improve the final result.

In many cases, contributors were professional software developers by day who worked on things like accounting software that weren’t as exciting to them as their nights and weekends spent perfecting an open source messaging system or web framework or NoSQL data store. They really enjoyed the freedom open source gave them to fail and innovate again as many times as they had the patience and energy for. 

Commercial cloud doesn’t hurt, as such

Today, we often think of open source as a grander idea about sharing software with the world and that all users should contribute back for the benefit of all.  

The rise of SaaS and cloud vendors that use open source without giving back obviously upsets people with the share-alike mindset, but does that really hurt open source?  Developers are still free to work on what they like, scratch their own itch, innovate, experiment, fail and try again. They can still use any tools they like, take as much time as they like and not worry about a business manager asking them to go in a direction contrary to the developer’s vision. Developers will continue to be creative, earn respect from their peers and know that they made something good that is improving the lives of their users, even if those users are paying a third party for a service based on their work.

After all, it was never about getting paid or requiring users to give back in the first place.

Justin Reock: open fairness – is reciprocity required?

The modern software deployment landscape is worlds away from what was predicted when the first free software project was released over forty years ago. Since that time, advances such as cloud-native applications have presented challenges to the altruistic obligations that complement free software.  

While the Free Software Foundation has always maintained that monetisation of the means of delivery of a piece of software isn’t strictly against their mission, would that sentiment have applied knowing the scale at which people would consume free software via the cloud? 

Is it time to look at the original intent of the GPL, the moral fabric of free software and see how it holds up against the reality of software deployment in 2020? Perhaps… and that’s another story. 


January 19, 2020  10:06 AM

Open source licence series – R3: The world needs audit licenses

Adrian Bridgwater

Open source grew, it proliferated… and it became something that many previously proprietary-only software vendors embraced as a key means of development… but the issue of how open source software is licensed is still the stuff of some debate.

The Computer Weekly Open Source Insider team now features a series of guest posts examining the ups & downs and ins & outs of open source software licensing.

Mike Hearn, lead platform engineer at enterprise blockchain company R3 writes from this point forward.

Why the world needs audit licenses

Many [software] programs grant the right to share and modify their source code.

The rapid spread of this model has enabled the software industry to scale up to larger codebases that would have been completely impractical if every component required a complex approval and purchase process. Just as importantly, it helped mitigate lock-in risk, enabling organisations to utilise ever bigger and more powerful platforms without associated exposure to vendor exploitation or stagnation.

I want to look at why the world needs audit licences.

Walking the open core tightrope

The so-called ‘open core’ model is hard to get right.

[As we know, the open-core model primarily involves offering a “core” or feature-limited version of a software product as free and open-source software, while offering “commercial” versions or add-ons as proprietary software.]

A common error is to open too much, leading to a Docker-style situation in which your commercial version is duplicated by other firms, leaving you with no business and a large maintenance bill. Other firms bet on becoming a managed service provider but find themselves forced into a license change when big cloud operators prove better at selling services than them.

The key is for open source platforms to get the balance right. Corda, an open source, decentralised database platform, is one example.

Editorial flag: Corda is developed by R3, so Hearn is talking about his own company’s product. The software is an enterprise blockchain platform that allows developers to write applications deployed on open source Corda; firms running the enterprise version can interoperate with those using the open source edition.

Corda focuses on the needs of the largest companies and thus comes with an extended commercial version. Although still in its early days, it appears to be walking on the right side of the open core tightrope. As a result, customers choose to support the ecosystem and themselves by purchasing the enhanced version. Yet, some users do go live on the fully open source edition, pointing to a low lock-in risk.

Developers behind open source platforms need to recognise the importance of keeping customers happy, or else, they may simply leave the platform. By collectively insisting on open core licensing, enterprise blockchain users put themselves into a powerful position over vendors, befitting the decentralised ethos of the space.

I’d also point out that better security needs a new approach to licensing: with security demands increasingly coming to the fore across all sectors, licensing must keep pace with this trend.

Enclave-oriented computing

Bringing enclave-oriented computing to the mainstream could be the security solution needed.

Enclave technology like Intel SGX enables a client to audit the code running on a remote server. A cryptographic ‘handshake’ reveals the hash of the program you’re securely connected to. Enclaves allow the removal of trust from a service operator: now anyone can audit the workings of a service and prove to themselves how data gets used. Think of it as an automatically enforced privacy policy. It is even possible to build services where the service provider doesn’t see any data at all.
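The attestation idea described above can be sketched in a few lines. This is a deliberately simplified illustration in Python, not real SGX code: the client trusts a remote service only if the ‘measurement’ (hash) it reports matches the hash of the audited build the client expects. All names and the sample program bytes are made up.

```python
import hashlib

def measure(program_bytes: bytes) -> str:
    """Compute a 'measurement' of the program, analogous to the code
    hash an SGX-style enclave reports during remote attestation."""
    return hashlib.sha256(program_bytes).hexdigest()

def client_trusts(reported_measurement: str, expected_measurement: str) -> bool:
    """The client proceeds only if the remote code is exactly the
    audited build it expects."""
    return reported_measurement == expected_measurement

# The client has audited (and reproducibly built) this exact program.
audited_program = b"def handle(request): return summarise(request.data)"
expected = measure(audited_program)

# The enclave reports the hash of whatever is actually running:
genuine = client_trusts(measure(audited_program), expected)   # trusted
tampered = client_trusts(measure(b"tampered code"), expected)  # rejected
```

This is why the source must be readable and reproducibly buildable: without that, the client has no `expected` value to compare against, which is the gap audit licences would fill.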

Editorial flag: Hearn’s comments are valid and interesting, but it is worth noting that he is again steering the conversation towards technology that his company develops. As noted on Ledger Insights, Conclave – a play on enclave – is the name for R3’s research product which hopes to make ‘enclave-oriented computing’ (EoC) accessible to developers. 

For this scheme to work, users must be able to read and compile the source code of the service. Open source licenses automatically meet this requirement but it’s unreasonable to expect all enclaves to be fully open source. We need audit licenses – new agreements that allow understanding and replication of the enclave build without granting distribution or modification rights.

Trade secrets must be protected without creating awkward processes. There’s no reusable out-of-the-box license that meets these needs as it’s rarely been a requirement before. However, developing such licenses and open source text can enable a new era of enclave-oriented computing for everyone.

R3’s Hearn: there’s a balance to strike in open source, as in all things.



January 17, 2020  8:20 AM

MariaDB goes bigly on cloud-native smart apps

Adrian Bridgwater

MariaDB Corporation is upping its cloud-native playbook.

At the same time, MariaDB is aiming to up its approach to so-called ‘smart’ applications, so before we define the parameters at play here, let’s look at the news.

The database company’s mysteriously named MariaDB Platform X4 is new to the table and is described as a cloud-native open source database for developers to build modern applications using smart transactions and cloud-native data storage. 

We know that modern applications (that aspire to be smart) require access to vast amounts of data — and that data needs to be optimised for analytical queries and Machine Learning (ML) models.

In this way, transactions can be augmented with data insights, turning them into smart transactions.

“The use of mobile devices and the rapid pace of technology has fundamentally changed how we interact with applications and what we expect from them,” said Gregory Dorman, vice president of distributed systems and analytics, MariaDB Corporation.

Dorman suggests that the trick that developers should be looking to pull off here is the ability to add the ‘smarts’ (plural) elements without impacting the performance of transactions, so this is why the company implemented a dual storage layout for data.

How does a dual storage layout for data work? It’s row-based for transactions and column-based for ‘true’ analytics.
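The row-versus-columnar trade-off can be illustrated with a toy sketch. This is not MariaDB code, just the underlying idea in plain Python with made-up data: a row store keeps each record together (cheap single-record writes), while a column store keeps each column together (cheap scans over one field).

```python
# Toy row-store vs column-store layouts for the same table.

rows = [  # row store: each record kept together -> cheap point writes
    {"id": 1, "product": "widget", "price": 9.99},
    {"id": 2, "product": "gadget", "price": 19.99},
    {"id": 3, "product": "widget", "price": 9.99},
]

columns = {  # column store: each column kept together -> cheap scans
    "id": [1, 2, 3],
    "product": ["widget", "gadget", "widget"],
    "price": [9.99, 19.99, 9.99],
}

# OLTP-style point update touches a single record in the row store:
rows[1]["price"] = 17.99

# OLAP-style aggregate reads one contiguous column and never touches
# the other fields -- this locality is what makes columnar fast:
total = sum(columns["price"])
```

A dual layout keeps both representations so each workload hits the shape that suits it.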

MariaDB insists that today, most web and mobile applications run on ‘dumb transactions’ or simple create, read, update and delete (CRUD) operations with a few complex queries. 

Smart transactions defined

With smart transactions, applications take advantage of what MariaDB calls “true” analytics before, during and/or after a transaction. 

So these are applications using smart transactions that can anticipate user needs, create context to be more helpful and take advantage of vast historical records to predict outcomes such as on-time flight performance, best pricing options or sales forecasts for better decision-making or automation. 
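A ‘smart transaction’ of this kind can be sketched with SQLite from the Python standard library. This is an illustrative toy, not MariaDB’s implementation, and the schema is invented: an analytical query over history runs inside the same transaction that records a new sale, so the write is informed by the insight.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (product TEXT, price REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("widget", 10.0), ("widget", 12.0)])

with con:  # one transaction: analyse history, then act on the insight
    (avg_price,) = con.execute(
        "SELECT AVG(price) FROM sales WHERE product = ?", ("widget",)
    ).fetchone()
    new_price = 9.0
    # e.g. flag the sale as a discount if it undercuts the average
    discounted = new_price < avg_price
    con.execute("INSERT INTO sales VALUES (?, ?)", ("widget", new_price))

count = con.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
```

In a CRUD-only (‘dumb’) application, the INSERT would happen without the analytical step; the point of a smart transaction is that the two happen together.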

“Similar to newer analytical solutions such as Snowflake, MariaDB Platform X4 implements a cloud-native disaggregated architecture for analytics using an API compatible with AWS S3 for up to 70% cost savings over block storage, 99.999999999% durability and 99.99% availability, storage across multiple availability zones and unlimited storage capacity,” said the company, in a press statement.

Unlike pure analytical cloud-native solutions, MariaDB Platform X4 also uses block storage, such as AWS EBS, for fast transactions along with object storage, such as AWS S3, for fast, scalable analytics.

Developer enablement

MariaDB is publishing new material to try and help developers build modern applications using smart transactions. This new enterprise documentation includes install and deployment guides and outlines new platform functionality. 

MariaDB is also creating sample applications available on GitHub.

January 15, 2020  9:47 AM

Dynatrace ‘traces’ route to standardised (OpenTelemetry) observability with Google & Microsoft 

Adrian Bridgwater

Dynatrace has connected, collaborated, corroborated and cooperated on a new software huddle with Google and Microsoft.

The organisation that calls itself a ‘software intelligence company’ is working with the two tech giants (Dynatrace is no tiddler; the firm now has over 2,000 employees) on the OpenTelemetry project to shape the future of open standards-based observability.

Dynatrace says it is contributing ‘transaction tracing’ know-how and manpower (Ed – we think they mean ‘personpower’) to the project. 

As detailed here, “A transaction trace records the available function calls, database calls, and external calls [that an application makes]. You [developers, sysadmins or other engineering staff] can use transaction traces to troubleshoot performance issues and to get detailed low-level insight into how your app is working.”

OpenTelemetry is an open source observability framework focused on providing standardised transaction-level observability through the generation, collection and description of telemetry data for distributed cloud-native systems. It is a CNCF Sandbox member, formed through a merger of the OpenTracing and OpenCensus projects. The goal of OpenTelemetry is to provide a general-purpose API, SDK and related tools required for the instrumentation of cloud-native software, frameworks and libraries. The term observability stems from the discipline of control theory and refers to how well a system can be understood on the basis of the telemetry that it produces.

OpenTelemetry provides a single set of APIs, libraries, agents and collector services to capture distributed traces and metrics from an application. You can analyze them using Prometheus, Jaeger and other observability tools.
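The core data model behind such tracing can be sketched in a few lines. The following is not the real OpenTelemetry API, just a hand-rolled toy illustrating what a transaction trace records: named spans with timings and parent links, which backends like Jaeger then assemble and visualise.

```python
import time
import uuid

class Span:
    """A minimal span: an operation name, timing, and a parent link."""
    def __init__(self, name, trace, parent=None):
        self.name, self.trace, self.parent = name, trace, parent
        self.span_id = uuid.uuid4().hex[:8]
        self.start = self.end = None
    def __enter__(self):
        self.start = time.monotonic()
        self.trace.append(self)
        return self
    def __exit__(self, *exc):
        self.end = time.monotonic()
        return False

trace = []  # the collected spans for one transaction
with Span("handle_request", trace) as root:
    with Span("db_query", trace, parent=root):
        pass  # the database call would run here
    with Span("external_call", trace, parent=root):
        pass  # e.g. an outbound HTTP request

names = [s.name for s in trace]
```

Real OpenTelemetry instrumentation looks similar in shape (a tracer yielding nested spans) but adds context propagation across process boundaries, which is exactly the ‘universal Trace Context’ work mentioned below.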

Cloud observability

As OpenTelemetry becomes more widely adopted, it will serve as an additional data source that further extends the breadth of ‘cloud observability’ (that term the industry is really loving this year in 2020), including expanding the reach of what the Dynatrace Software Intelligence Platform already collects and ingests into Davis™, the firm’s ‘explainable’ AI engine. 

“Our goal is to ensure ‘run the business’ software underpinning digital enterprises works perfectly, so we feel it’s important to contribute our expertise to this open source project to improve and advance observability in a broader manner,” said Alois Reitbauer, chief technical strategist and head of the Dynatrace Innovation Lab. 

The road to standardised observability

Reitbauer says that the OpenTelemetry initiative will enable developers of cloud-native applications to build ‘standardised observability’ into their software. 

“As this gains momentum, observability will be increasingly differentiated by what can be done with data, versus simply how much data can be collected. That’s why we’re excited for the day when OpenTelemetry is widely adopted, as it will increase the breadth of the data and scope of the cloud ecosystem that organisations can observe,” he added.

Morgan McLean, product manager at Google, thinks that the ultimate goal of OpenTelemetry is to become the default way that developers and operators capture performance information from their services. The Microsoft spokesperson said something about Dynatrace… and seemed happy, so that’s good.

Dynatrace is working with Microsoft, Google and others as a core contributor to OpenTelemetry in areas including: 

  • Higher-level instrumentation APIs: offering higher-fidelity tracing code to enable developers to build observability into cloud-native applications and reduce monitoring blind-spots as new methodologies and programming languages emerge.
  • Integration of universal Trace Context: supporting the availability of transactional context across hybrid multi-clouds to maintain end-to-end observability.
  • Runtime management: helping organizations ensure the resources needed to gain observability into the individual components and software libraries underpinning their cloud-native applications are dynamically available.




January 14, 2020  12:24 PM

The open source licence debate: what we need to know

Adrian Bridgwater

As we have already noted on Computer Weekly Open Source Insider, open source grew, it proliferated… and it became something that many previously proprietary-only software vendors embraced as a key means of development.

But the issue of how open source software is licensed is still the stuff of some debate.

Open Source Insider has already looked at the issues relating to dead projects (that are still walking and running) and the need for workable incentivisation models. 

Chief operating officer (COO) for GitHub Erica Brescia noted that, from her perspective, she is seeing an “increasing tension” between open source projects and those that are building services on top of open source, such as cloud vendors with their database services. 

Brescia notes that licenses applied to open source projects a decade ago did not consider the possibility of a cloud vendor delivering an as-a-Service SaaS layer using the project without contributing back to it, which is leaving some open source companies in a difficult position.

Computer Weekly’s Cliff Saran wrote ‘With friends like AWS, who needs an open source business?’ — and noted that a New York Times article suggested that Amazon Web Services (AWS) was strip-mining open source projects by providing managed services based on open source code, without contributing back to the community.

Security sources

We have also looked at the security aspects of open source licensing.

Rado Nikolov is exec VP at software intelligence company Cast and, for his money, the open source licensing debate also has a security element to it.

“Large organisations using open source code from GitHub, xs:code and other sources range from Walmart to NASA, collectively holding billions of pieces of sensitive data. Although open source code packages can be obtained at low or no cost, their various intellectual property and usage stipulations may lead to expensive legal implications if misunderstood or ignored,” said Nikolov.

Ilkka Turunen, global director of solutions architecture at DevSecOps automation company Sonatype further reminded us that there are 1001 ways of commercialising open source software — but when releasing open source, the developer has a choice of publishing it under a license that is essentially a contract between them and the end user.

A multiplicity of complexities

So there’s security, there’s the question of fair and just contributions back to the community, there’s commercial layering on top of open code, there’s the complexity of just so many open source licences to choose from and there are even concerns over whether trade sanctions can affect open source projects and see them bifurcated along national borders. 

Open source is supposed to be built around systems of meritocracy and to be for the benefit of all. We must work hard to ensure that we can do this and shoulder the nuances of licensing to keep open source software as good as it should be… let the debate continue.


January 9, 2020  9:25 AM

The open source licence debate: comprehension consternations & stipulation frustrations

Adrian Bridgwater

As we have noted here, open source grew, it proliferated… and it became something that many previously proprietary-only software vendors embraced as a key means of development — but the issue of how open source software is licensed is still the stuff of some debate.

Rado Nikolov is exec VP at software intelligence company Cast and, for his money, the open source licensing debate also has a security element to it.

“Large organisations using open source code from GitHub, xs:code and other sources range from Walmart to NASA, collectively holding billions of pieces of sensitive data. Although open source code packages can be obtained at low or no cost, their various intellectual property and usage stipulations may lead to expensive legal implications if misunderstood or ignored,” said Nikolov.

Stipulation situation

Nikolov argues that the crux of the matter lies in the fact that (whatever licensing agreement open source software is brought in under), the most ‘important stipulations’ are often lost over time.

“The case of Artifex v Hancom shows the risk of being held liable for improper use of source code, even when it’s open source. Company executives need to ensure they are covered for the code they use, wherever they get it from. Ignorance of the law is no defence. Regularly using software intelligence for automating the analysis of open source usage is one way to significantly reduce such risk exposures,” said Nikolov.

Ilkka Turunen is global director of solutions architecture at DevSecOps automation company Sonatype.

Turunen reminds us that, generally speaking, there are 1001 ways of commercialising open source software — but when releasing open source, the developer has a choice of publishing it under a license that is essentially a contract between them and the end user.

“These licenses vary from fairly restrictive (i.e. must associate where the open source came from and publish source code) to fairly liberal (buy the author a beer if you like the software). It’s important to understand that all open source is licensed under some terms at all times,” said Turunen.

He notes that there are then several ways of adding commercial components on top of that and, indeed, many commercial companies leverage fairly permissive licence types so that they can add their own commercial code on top and spin out commercial offerings.

Comprehension consternations

“Fundamentally, it boils down to open source software licensing being generally hard to [comprehend and] understand. Most devs start these projects as a passion project and just publish it with some basic license they might live to regret later when they consider their options. Fundamentally, this is another avenue for them to gain funding, but would imagine there are limits to the scalability of what can be achieved,” added Turunen.


January 8, 2020  9:41 AM

The open source licence debate: dead project walking & incentive models

Adrian Bridgwater

Open source grew, it proliferated… and it became something that many previously proprietary-only software vendors embraced as a key means of development.

If you don’t accept the options offered by the community contribution model of development, then you risk becoming a Proprietary 2.0 behemoth… or so the T-shirt slogan might go.

But the issue of how open source software is licensed is still the stuff of some debate.

Erica Brescia is chief operating officer (COO) of GitHub.

Brescia has pointed out that the industry is witnessing rising levels of tension between open source projects (and open source development shops) and those commercially motivated organisations that are building services on top of open source, such as cloud vendors with their database services.

So how do we move forward with open source?

Dead project walking

Matthew Jacobs, director, legal counsel at Synopsys Software Integrity Group, reinforces the suggestion that avoiding licence compliance issues, and avoiding use of any software, open source included, that carries vulnerability risks, is extremely important.

“However, many companies fail to consider the operational risks associated with the open source they are using. By this I mean the risk that a company will decide to leverage open source from a dead open source project or one that is failing to maintain a critical mass of contributors who are actively maintaining and improving that project. The viability of the project is only as good as the people behind it and those people need to support themselves,” said Jacobs.

He argues that it is important to provide avenues for developers to continue doing what they enjoy, and from which we all benefit, in a way that allows them to earn something along the way.

New incentive models, please

Shamik Mishra is Altran’s AVP of technology and innovation.

Mishra points out that in newer software development models, nobody really tries to reinvent the wheel and instead focuses on solving their own business problems – the ‘wheel’ comes from those pre-existing open source projects.

He says that many large open source projects survive because they enjoy a degree of investment from a supporting business entity to keep the community going as they hire experts and developers, but several brilliant projects have lost their momentum and have never come to fruition due to a lack of support.

“But, the industry badly needs incentive models. GitHub sponsor is a great example but still relies on the ‘donation’ mind-set. The other problem that organisations face is that they don’t exactly know which developer really contributed to that piece of brilliance that the organisation monetised, particularly within large projects. Collaborative models where developers can be compensated by interested organisations through smart contracts based on the level of contribution is perhaps the way forward,” said Mishra.

It seems clear that developers should also have a choice of providing licensed versions of open source and still have the ability to switch licences… but this subject is far from decisively closed as of 2020.




January 7, 2020  11:20 AM

commercetools: how GraphQL works for front-end developers

Adrian Bridgwater

GraphQL (query language) was the brainchild of Facebook and was open sourced in 2015. 

Many of the apps and websites we use are built on GraphQL, including Twitter, AWS and GitHub.

Kelly Goetsch, chief product officer at commercetools and author of GraphQL for Modern Commerce (O’Reilly, 2020), argues that developers need to take notice of GraphQL in 2020.

Goetsch writes as follows…

GraphQL is a layer that sits on top of REST APIs, applications or data stores — and it makes the process of retrieving and extracting data across multiple APIs easy.

Say you’re a developer for a retailer tasked with rendering a page for a product. You’ve already built a catalogue of 300 REST APIs and now need your product detail page to access data including product description, price and similar item information.

It may be 10 APIs, could be 200.

You could individually call each API one-by-one, but that could take a while… and calling many different APIs, each of which may exist with minor or major variations, can be difficult in a microservices environment. You might not know which ones to call and which ones would provide the freshest data – the warehouse management system, or the Enterprise Resource Planning (ERP) system, or another source?

One query to rule them all

With GraphQL, to consume REST APIs, you simply submit one query describing the information you need… and then the GraphQL layer does the legwork, making direct calls to the individual APIs.

As a result, you get back one JSON object with the exact data you requested, no less, no more. You can think of it like a SQL query where you make a request to a database, ‘select X from table 1 and join it with table 2’.
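That ‘exact data you requested’ behaviour can be sketched with a toy resolver. This is not a real GraphQL implementation, just the idea in plain Python with invented product data: the layer fans out to the underlying sources, merges the results and returns one JSON object containing only the requested fields.

```python
import json

# Hypothetical backing "APIs" -- in reality these would be REST calls
# to separate services (catalogue, pricing, and so on).
catalog_api = {"sku-1": {"name": "Trainers", "description": "Running shoe"}}
pricing_api = {"sku-1": {"price": 59.99, "currency": "GBP"}}

def resolve(sku, requested_fields):
    """Fan out to the sources, merge, and return only what was asked for."""
    merged = {**catalog_api[sku], **pricing_api[sku]}
    return {f: merged[f] for f in requested_fields}  # no over-fetching

# The product page asks for three fields and gets back exactly three:
result = resolve("sku-1", ["name", "price", "currency"])
payload = json.dumps(result)
```

The client never receives the unrequested `description` field, which is the over-fetching problem discussed below; equally, one query replaces the N separate calls that would risk under-fetching.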

GraphQL solves a lot of headaches for developers. As well as being able to render webpages, app screens and other experiences in the first instance faster, there is no problem of under- or over-fetching data.

Under & over-fetching data

Under-fetching data can be a common issue with REST APIs and it will especially affect devices with limited processing power, like old smartphones connected to high-latency, low-bandwidth cellular networks. Making lots of HTTP requests can mean significantly increased page load times.

Data over-fetching can cause severe performance issues too; for example, when building a product page for a smartwatch, you’d only need the product name, image and price, but could get back a hundred fields.

GraphQL offers numerous advantages.

Since it is the GraphQL layer that calls all the APIs and not the developer, there is less code to maintain. Plus, as GraphQL makes all its requests within a datacentre where latency is almost zero and computing power is virtually endless, applications are loaded faster for the end-user.

And because GraphQL is the layer that decouples back-ends from front-ends, it is easy and quick for developers to change things, useful for IT teams under pressure to continually test and launch new improvements.

What else ya got?

What else should developers know about GraphQL?

GraphQL is not a product or implementation, it is a specification, so it makes no difference which programming language you use. You as the developer write the code that conforms to the specification. Imagine it like HTML whereby individual browsers implement the code that renders a web page. Furthermore, it is a supplement to REST APIs – it doesn’t replace them.

As is the case with most tech, there are downsides. GraphQL is a layer that needs to be maintained, the user is responsible for security, and it can be a challenge to combine several GraphQL endpoints and schemas.

However, the benefits of GraphQL far outweigh the costs.

In an increasingly competitive marketplace, commerce players need to leverage the tools that enable them to be as agile as possible, save time and deliver superior customer experience.

When building commerce applications, GraphQL is the ideal tool for the job.

The platform: commercetools offers a scalable cloud platform with a flexible commerce API at its core, which supports a modern microservice-based architecture and offers a wide range of integrations.

Kelly Goetsch: enjoy the freedom from decoupling back-ends from front-ends… and breathe easy.

December 19, 2019  7:35 AM

New moon rising: DataStax Luna is subs-based support for Cassandra

Adrian Bridgwater

DataStax has taken the Christmas wrapping paper off DataStax Luna, a subscription-based support offering for open source Cassandra.

The company says it is offering this service due to the rapid growth of Apache Cassandra. 

According to the DB-Engines Ranking:

  • Cassandra is the 10th most popular database management system (out of 350 systems)
  • Cassandra is used by 40% of the Fortune 100
  • Cassandra has grown 252% in popularity from 2013 to 2019

“Enterprises and developers tell us that they want the power and flexibility of Cassandra for a wide range of compelling use cases to impact everything from business optimisation to consumer apps,” said Jonathan Ellis, co-founder and CTO of DataStax. “They want Cassandra to be easier to use, backed by experts, and available in a range of options depending on their business and app needs.”

DataStax Luna is supposed to address these (above) needs and is available via a self-service website for purchasing and scaling.

The company claims to be working on the world’s largest Cassandra implementations, contributing to open source Cassandra and pioneering advances that extend the power of Cassandra for the needs of the enterprise.

This includes free downloads of the DataStax Apache Kafka Connector and Bulk Loader for all Cassandra users to make loading and unloading data faster and easier.

Organizations are finding that running open source projects for important applications without professional support is a significant risk.

Analyst firm Gartner has been very explicit on the matter.

“Gartner does not recommend unsupported open source offerings for production applications,” said Merv Adrian, Gartner vice president and analyst, in the report, State of the Open-Source DBMS Market, 2019.

DataStax Luna is available now.


December 10, 2019  4:28 PM

Fairwinds navigates straighter open course towards SaaS-y Kubernetes 

Adrian Bridgwater

Cloud-native infrastructure company Fairwinds recently launched a SaaS product for DevOps teams so that they can manage multiple Kubernetes clusters.

The almost-eponymously named called Fairwinds Insights, uses an extensible architecture and has been launched with a curated set of open source security, reliability and auditing tools. 

The initial suite of tools includes Fairwinds Polaris, Fairwinds Goldilocks and Aqua Security’s Kube-hunter.

Fairwinds Insights claims to be able to solve a few common problems faced by DevOps teams. 

First, it eliminates the time-intensive process of researching, learning and deploying the Kubernetes auditing tools that are available. 

Second, it automatically organises and normalises data from each tool, so engineers get prioritised recommendations across all clusters. 

Finally, it enables DevOps teams to proactively manage the hand-off from development to production. 

NOTE: For the record, we can define normalised data as relational database data which has been through a process of structuring in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. By other definitions, data normalization ensures all of your data looks and reads the same way across all records in any given database (although typically a relational one).
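The redundancy-reduction point can be shown with a tiny sketch. This is purely illustrative Python with made-up cluster and check names, not Fairwinds code: repeated cluster details are split out into their own table and referenced by key, so a fact is stored once.

```python
# Denormalised: the cluster's region is repeated on every result row.
denormalised = [
    {"check": "cpu-limits", "cluster": "prod-1", "cluster_region": "us-east"},
    {"check": "liveness",   "cluster": "prod-1", "cluster_region": "us-east"},
]

# Normalised: cluster facts live in one place; results reference them.
clusters = {"prod-1": {"region": "us-east"}}
results = [
    {"check": "cpu-limits", "cluster_id": "prod-1"},
    {"check": "liveness",   "cluster_id": "prod-1"},
]

# Updating the region now touches one row instead of every result row,
# which is the integrity benefit normalisation buys:
clusters["prod-1"]["region"] = "us-west"
regions = {clusters[r["cluster_id"]]["region"] for r in results}
```

In the denormalised shape, the same update would have to be applied to every result row, and a missed row would leave the data internally inconsistent.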

Misconfigurations situations

The platform can integrate into deployment pipelines so misconfigurations can be identified and fixed before releasing to production.

“Many DevOps teams have sprawling Kubernetes environments and want to get a handle on it, but with lack of resources and expertise, it’s not a priority. Fairwinds Insights is the first platform that solves this problem by leveraging community-built open source tooling and operationalising it in a way DevOps teams can use at scale,” said Joe Pelletier, Fairwinds’ VP of strategy. 

Fairwinds Insights is in public beta and free for any early adopter who wants to try the software during the beta period. The free tier is limited to a seven-day history for results and up to two clusters. 


