Open Source Insider


August 7, 2019  8:06 AM

DataStax: what is a ‘progressive’ cloud strategy?

Adrian Bridgwater

With its roots and foundations in the open source Apache Cassandra database, Santa Clara headquartered DataStax insists that it likes to keep things open.

As such, the company is opening a wider aperture on its collaboration with VMware by offering DataStax production support on VMware vSAN, now in hybrid and multi-cloud configurations.

For the record, VMware vSAN (the artist formerly known as Virtual SAN) is a hyper-converged software-defined storage (SDS) technology that ‘pools’ direct-attached storage devices across a VMware vSphere cluster to create a distributed, shared data store.

So, think about it… DataStax is known for its ability to provide an always-on distributed hybrid cloud database for real-time applications at scale — and VMware is known (at least with vSAN) for its ability to coalesce distributed storage resources.
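
To make that concrete, here is a minimal sketch (not lifted from DataStax documentation) of an application talking to a Cassandra/DataStax Enterprise cluster via the DataStax Python driver; the contact points, data centre name and keyspace are invented for illustration.

```python
# A minimal sketch, assuming the DataStax Python driver (pip install cassandra-driver).
# Contact points, the local data centre name and the keyspace are hypothetical.
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

# Prefer replicas in the 'local' data centre, wherever that happens to run
# (an on-premises vSphere/vSAN cluster or a public cloud region).
profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc="on_prem_dc1")
)

cluster = Cluster(
    contact_points=["10.0.0.11", "10.0.0.12"],
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
)
session = cluster.connect("demo_keyspace")

# The same application code runs unchanged whichever environment hosts the nodes.
for row in session.execute("SELECT release_version FROM system.local"):
    print(row.release_version)

cluster.shutdown()
```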

Consistent infrastructure

The end result of the two technologies combined should, in theory, if not in practice, deliver a more consistent infrastructure and data/application management experience across on-premises, hybrid and multi-cloud applications. 

The software engineering here is hybrid and multi-cloud-ready with capabilities to deliver operational and deployment consistency. There is built-in enterprise-grade availability here too. 

The firms claim that customers can avoid cloud lock-in with unified operations between environments and across clouds with a single interface for end-to-end security and infrastructure management.

Progressive cloud: defined

So then, what is a ‘progressive’ cloud strategy?

A progressive cloud strategy (in the context of this discussion at least) is one that seeks to run essentially distributed database resources (plural) uniformly from development to production across the essentially distributed multi-cloud world of the hybrid cloud — and across different departmental zones, digital workflows, world regions, datacentres and device endpoints…

… and this (as above) is what the two firms here are seeking to achieve.

“For enterprises with a progressive cloud strategy, our expanded collaboration enables them to prevent cloud vendor lock-in, improve developer productivity by being able to easily test use cases in minutes, and ultimately, rely on DataStax for enterprise data management and VMware as the platform for modern applications,” said Kathryn Erickson, senior director of strategic partnerships at DataStax. 

Erickson insists that DataStax is focused on making it easy for developers to use and manage DataStax by expanding VMware vSAN’s footprint to show that distributed systems do not need special treatment in their software stack.

DataStax Enterprise and DataStax Distribution of Apache Cassandra now span anywhere VMware is deployed — from on-premises data centers to public cloud.

As a result, customers can deploy VMware-hosted applications on-premises or on public clouds including Amazon Web Services, Microsoft Azure and IBM Cloud.

August 6, 2019  3:53 PM

Red Hat Enterprise Linux 7.7: it’s a bit better than 7.6

Adrian Bridgwater

Red Hat… no, wait, stop there — not Red Hat the IBM company, actually just Red Hat — that’s how the company is still putting out news stories.

We’ll start again: open source enterprise software company Red Hat has announced a point release for Red Hat Enterprise Linux (RHEL), which now hits version 7.7.

But what, may we ask, could Red Hat have put into version 7.7 that it failed to markedly address in version 7.6?

The company points to terms like ‘enhanced consistency and control’ across cloud infrastructures (plural) for IT operations teams.

There are also ‘modern supported container creation tools’ for enterprise application developers — as opposed to the old-fashioned ones that shipped in 7.6, presumably.

This version also moves to what Red Hat calls Maintenance Phase I, which does sound like the workshop power-down time that Star Wars TIE fighters need to go through in order to recharge their nuclear cells.

Infrastructure stability

In reality, Maintenance Phase I is all about Red Hat working to try and ‘maintain infrastructure stability’ for production environments and enhancing the reliability of the operating system.

Red Hat doesn’t go into detail to explain how it works to maintain infrastructure stability, but we can guess that this means looking at how the operating system behaves when exposed to different application types, running different data workloads, requiring different compute, storage, Input/Output actions, analytics engine calls (and so on and so on)… and then firming up the core build of the kernel itself so that it’s strong and flexible enough to handle life in the real world.

Future minor releases of Red Hat Enterprise Linux 7 will now focus solely on retaining and improving this stability rather than what have been called ‘net-new’ features.

Toolkit treats

Red Hat Enterprise Linux subscribers are able to migrate across platform versions as support and feature needs dictate. To help with the process, Red Hat offers tools, including in-place upgrades, which help to streamline and simplify migrating from one Red Hat version to another.

NOTE: Let’s remember that Red Hat Enterprise Linux version 8.0 does already exist, so this is Red Hat updating a version at a slightly lower level for customers who wish to progress their upgrade paths one point at a time.

Other more tangible new features include: 

  • Red Hat Insights, Red Hat’s expertise-as-a-service offering, which helps users proactively detect, analyze and remediate a variety of potential software security and configuration issues before they cause downtime or other problems.
  • Support for image builder, a Red Hat Enterprise Linux utility that enables IT teams to build cloud images for major public cloud infrastructures, including Amazon Web Services, Microsoft Azure and Google Cloud Platform.

Red Hat Enterprise Linux 7.7 also introduces support for live patching the underlying Linux kernel. Live patching support enables IT teams to apply kernel updates to remediate Critical or Important Common Vulnerabilities and Exposures (CVEs) while reducing the need for system reboots.


August 5, 2019  10:25 AM

NordVPN offers NordLynx for Linux, built around WireGuard

Adrian Bridgwater

Virtual Private Network (VPN) company NordVPN has introduced NordLynx technology built around the WireGuard protocol. 

WireGuard is thought to be shaking up the VPN space as a new type of protocol because of its approach to cryptography and speed — an approach said to blow other protocols in this space, such as OpenVPN and IPSec, out of the water.

According to the WireGuard team, this technology is designed as a general purpose VPN for running on [anything from] embedded interfaces [up to] super computers alike, fit for many different circumstances. 

Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable — it is currently said to be under ‘heavy development’.

The new technology from NordVPN combines WireGuard’s high-speed connection and NordVPN’s custom-made double Network Address Translation (NAT) system, a technology which aims to safeguard user privacy.

Linux first

At the moment, NordLynx is available for Linux users.

“In fall 2018, we invited a small group of our users to take our WireGuard implementation for a test drive. [Now], after months of further development and testing, we’re ready to present NordLynx – our solution for a fast, private, and secure VPN connection,” said Ruby Gonzalez, head of communication at NordVPN.

NordVPN openly states that NordLynx is faster than the current leading protocols (the above-mentioned OpenVPN and IPSec), helped by the fact that WireGuard consists of only around 4,000 lines of code, which also makes it easier to deploy and audit.

Although WireGuard is easy to implement and manage, its ability to secure users’ privacy often comes up as a point for discussion. 

WireGuard does not dynamically assign IP addresses to everyone connected to a server. It therefore has to store at least some user data on the server, compromising users’ privacy.

Double NAT is natty

Conversely, the double NAT system from NordVPN creates two local network interfaces for each user. 

The first interface assigns the same local IP address to all users connected to a server. Once the secure VPN tunnel is established, the second network interface with a dynamic NAT system kicks in. Dynamic local IP addresses remain assigned only while the session is active, which means no identifiable data needs to be stored on the server.

The NordLynx technology is now available for all users of NordVPN for Linux — to switch to NordLynx, install WireGuard, open the terminal and enter ‘nordvpn set technology NordLynx’.

As a final note, NordVPN has completed an industry-first audit of its no-logs policy. The audit was performed by PricewaterhouseCoopers AG, Zurich, Switzerland.

 

 

 


August 5, 2019  9:03 AM

What is Kubernetes-as-a-Service?

Adrian Bridgwater

According to wikis, hacker forum discussions and the team itself, Kubernetes is so named because it comes from the Greek κυβερνήτης, meaning governor, helmsman or captain — and, further, the related Latin ‘gubernare’ means to steer or govern.

Which all makes perfect sense.

Because Kubernetes is an open source orchestration technology used to manage Linux containers across private, public and hybrid cloud environments.

Or… in the words of the people behind the technology: Kubernetes is a portable, extensible, open source platform for managing containerised workloads and services, that facilitates both declarative configuration and automation. 

So if that’s Kubernetes, what is Kubernetes-as-a-Service?

Kubernetes-as-a-Service (KaaS) is a way of getting hold of that containerised workload orchestration and management know-how, but in smaller (ahem, containerised) chunks where the ability to manage and tear down on-premise and cloud-based container clusters is supplied on-demand on a project-by-project basis.
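
To picture what on-demand, project-by-project container management can look like programmatically, here is an illustrative sketch using the official Kubernetes Python client; this is not Kazuhm’s own API, and the deployment name, image and namespace are hypothetical.

```python
# Illustrative only: standing up and tearing down a workload with the official
# Kubernetes Python client (pip install kubernetes). Not Kazuhm's API.
from kubernetes import client, config

config.load_kube_config()  # use local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-nginx"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "demo-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-nginx"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.17")]
            ),
        ),
    ),
)

# Stand the workload up for the duration of a project...
apps.create_namespaced_deployment(namespace="default", body=deployment)

# ...and tear it down again when the project is done (here immediately, for brevity).
apps.delete_namespaced_deployment(name="demo-nginx", namespace="default")
```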

Who’s doing KaaS?

One firm based in San Diego is, and it’s called Kazuhm — pronounced ‘kah-zoom’.

Resource allocation (& recapture)

Kazuhm suggests that tech teams want to use more containers to manage compute-heavy workloads, but container management solutions do not [typically] address concerns about resource allocation to successfully manage containers in multi-cloud, hybrid cloud, and on-premise distributed computing environments — specifically allowing users to run workloads on desktops. 

“With today’s focus on agility, it is more important than ever to be able to quickly and easily stand up and tear down clusters without headaches and excessive costs,” said Kazuhm CEO Tim O’Neal. “While adoption of Kubernetes is rapidly increasing, many people who recognise its benefits still do not know where to start, so they haven’t yet implemented it. We believe resource recapture and a highly user-friendly experience are two keys to making containers and distributed computing accessible to all.”   

Kazuhm allows organizations to recapture existing IT resources and unused processing power and manage workloads across a fabric of desktops, datacentres, cloud and edge. 

This creates a Kubernetes on-demand environment with organisations’ existing hardware and/or cloud resources. Kazuhm says it eliminates dedicated hardware requirements and delivers savings on cloud costs. 

Kazuhm KaaS is agnostic across clouds. Regardless of the organisation’s preferred cloud vendor, the user interface remains consistent. 

Calming command complexity

As a result, the user does not need to know commands or the specifics of how to manage containers in different clouds (e.g. AWS vs. Google Cloud). 

There is also ‘push-button’ application deployment here, so users outside of the DevOps team (such as data scientists or data engineers) can deploy container-based applications after they have been set up.  

Kazuhm’s KaaS offering is available at no cost for up to 10 on-premise nodes at the time of writing — and unlimited cloud resources can be managed with the Kazuhm Basic product.

 


August 2, 2019  9:52 AM

Redgate acquires (but commits to widening) open source Flyway

Adrian Bridgwater

Database development company Redgate has been to the shops.

The Cambridge, UK-based firm has bought eggs, fresh bloomers (no, the bread kind) and, direct from the meat counter, a US$10 million portion (i.e. all of it) of cross-platform database migrations tool, Flyway.

Redgate’s mission in life is to enable the database to be included in DevOps, whatever database its customers are working on.

The company thinks that database development is often a stumbling block in the development of virtuous DevOps cycles and so it has a natural interest in software (like database migration tools) that allows the database to be included.

Open source pedigree

Flyway was originally developed as an open source project by Axel Fontaine to make database migrations easy on multiple platforms — it has seen an estimated 23 million downloads of its free community edition. 

A paid-for commercial edition also exists.

Flyway supports a wide range of databases, from Oracle to MySQL, PostgreSQL to Amazon Redshift.

It is hoped the acquisition will enable Redgate to extend its product roadmap beyond database DevOps for SQL Server to new database platforms.

Redgate wants its ambitious plans to reinforce Flyway’s place as the open source database migrations tool of choice.

 “We’ve spent the last five years developing a portfolio of SQL Server tools that enable developers to include the database in DevOps and we want to give those same advantages to every developer on any platform,” said Simon Galbraith, CEO and co-founder of Redgate.

Axel Fontaine will now work on Flyway alongside a development team at Redgate.

“Redgate has years of experience in the database market and also has the resources to further develop and enhance Flyway many times faster than I can. This will make both the community and the commercial editions better for everyone,” said Fontaine.

Open pledge

Redgate will continue to maintain a free version of Flyway, available under the open source Apache v2 license. The firm is committed to supporting and growing the open source community that has helped in its development.

It already offers free versions of tools that work with open source software, like MySQL Compare and was instrumental in backing the development of Glimpse, the open source diagnostic platform for the web.

 

 


July 31, 2019  12:16 PM

NuoDB 4.0 beats drum for cloud-native cloud-agnosticism

Adrian Bridgwater

Distributed SQL database company NuoDB has reached its version 4.0 iteration… and aligned further to core open source cloud platform technologies.

The new release expands cloud-native and cloud-agnostic capabilities with support for Kubernetes Operators and Google Cloud and Azure public clouds. 

This includes the recently announced Kubernetes Operator, a technology designed to simplify and automate database deployments in Red Hat OpenShift. 

The Operator uses NuoDB Admin, a simplified management tier that includes a REST API designed to improve database lifecycle management of NuoDB for cloud and container environments. 

NOTE: According to CoreOS, an Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. You can think of Operators as the runtime that manages this type of application on Kubernetes.
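
As a rough illustration of the pattern (this is not NuoDB’s Operator, just the general shape of one, sketched with the kopf framework and an invented custom resource), an Operator watches for custom resources and reconciles the cluster to match them:

```python
# Not the NuoDB Operator: a minimal sketch of the Operator pattern with kopf
# (pip install kopf). The resource group, version and plural are invented.
import kopf

@kopf.on.create("example.com", "v1", "demodatabases")
def create_database(spec, name, namespace, logger, **kwargs):
    """React to a new 'DemoDatabase' custom resource by (pretending to)
    provision the database it describes."""
    replicas = spec.get("replicas", 1)
    logger.info(f"Provisioning {name} in {namespace} with {replicas} replica(s)")
    # A real Operator would create StatefulSets, Services, secrets and so on here.
    return {"phase": "Provisioning"}
```

Such a handler would typically be run with ‘kopf run operator.py’ against a cluster where the matching CustomResourceDefinition already exists.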

Also in this release, in addition to supporting Amazon Web Services, NuoDB is now certified for both Azure and Google Cloud Platform… which, arguably, is cloud-agnosticism in motion. 

In addition to the new REST API, NuoDB Admin includes easier database restart by automatically managing the restart order, more granular and simpler client connection load balancing through the use of process labels, plus improved diagnostics and domain state metrics.

Say yes to the index

NuoDB 4.0 also includes indexing improvements, such as added support for online index creation and expression-based indexes. 

NOTE: Online index creation enables users to create indexes without impacting application availability. Expression-based indexes enable users to create indexes based on general expressions and functions, improving performance for queries using expressions or functions.
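
A hedged sketch of what that might look like in practice, assuming NuoDB’s Python DB-API driver (pynuodb) and illustrative SQL that should be checked against the NuoDB 4.0 documentation:

```python
# Assumes NuoDB's DB-API driver (pip install pynuodb); connection details and
# exact index syntax are illustrative, not lifted from NuoDB's documentation.
import pynuodb

conn = pynuodb.connect(
    database="demo", host="localhost", user="dba", password="secret"
)
cur = conn.cursor()

# An expression-based index: queries that filter on LOWER(email) can use it
# rather than scanning the whole table.
cur.execute("CREATE INDEX idx_users_email_lower ON users (LOWER(email))")

conn.commit()
conn.close()
```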

“From an operational perspective, NuoDB 4.0 includes improvements to how NuoDB is packaged and distributed. Delivery of the database is now separate from all drivers and some client utilities. A new client package now provides all supported drivers and client utilities, including full support for LDAP authentication. In addition, network encryption has been upgraded to TLS 1.2 using customer provided certificate keys. Learn how to set up TLS in your NuoDB database,” said Ariff Kassam, VP of product at NuoDB.

Finally, Kassam notes that 4.0 offers improved index creation performance, up to 50% faster than previous versions of the database. 

 

 


July 29, 2019  8:42 AM

Neo4j charts tighter grip on graph data-at-rest 

Adrian Bridgwater

San Mateo headquartered graph database company Neo4j (with roots in open source) is working with French defence company Thales (pronounced ta-less).

A graph database is a database designed to treat the relationships between data as equally important to the data itself — it is intended to hold data without constricting it to a pre-defined model… instead, the data is stored showing how each individual entity connects with or is related to others.
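
A minimal sketch with the official Neo4j Python driver shows the idea; the URI, credentials and tiny data model below are invented for illustration.

```python
# A minimal sketch, assuming the official Neo4j Python driver (pip install neo4j).
# URI, credentials and the data model are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Store two entities and the relationship between them as first-class data.
    session.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (c:Company {name: $company}) "
        "MERGE (p)-[:WORKS_FOR]->(c)",
        person="Ada", company="Example Corp",
    )
    # Query by traversing the relationship rather than joining tables.
    result = session.run(
        "MATCH (p:Person)-[:WORKS_FOR]->(c:Company {name: $company}) RETURN p.name",
        company="Example Corp",
    )
    for record in result:
        print(record["p.name"])

driver.close()
```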

A new integration has been created between Neo4j Enterprise Edition and Thales Vormetric Transparent Encryption with the aim of providing ‘data-at-rest’ encryption.

What is data-at-rest?

In basic terms, data-at-rest is data that is stored physically in any digital form in a database, data lake, spreadsheet, tape, disk or any other form of storage media or repository — it is, of course, the opposite of data-in-transit.

The firms say that the new integration provides ‘industrial-strength’ encryption-at-rest for the Neo4j graph database and helps Neo4j users meet more stringent security and compliance requirements.

Magical analyst house Gartner states, “The application of graph processing and graph databases will grow at 100% annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science.”

Neo4j explains that its challenge comes from the real-time nature and extreme performance requirements of many of its mission-critical enterprise deployments, where sacrificing performance for security isn’t an option. 

VP of products at Neo4j Philip Rathle points to security-sensitive industries such as financial services, insurance and healthcare, where this kind of encryption will be needed most.

The Neo4j and Thales integration is meant to ensure enterprise policy and regulatory compliance for Neo4j instances including data-at-rest encryption with centralised key management, privileged user access control and security intelligence to meet compliance reporting requirements. 

Developers & graph databases

The solution is deployed without any changes to infrastructure so developers and data engineers working in security teams can implement encryption with minimal disruption.

The integration protects data wherever it resides: on-premises, across multiple clouds or within big data and container environments. 


July 29, 2019  7:25 AM

YugaByte goes 100% open under Apache

Adrian Bridgwater

Open source distributed SQL database company YugaByte has confirmed that its eponymously named YugaByte DB is now 100 percent open source under the Apache 2.0 license.

The additional homage to open source-ness means that previously commercial features now move into the open source core.

YugaByte says it hopes that this will directly create more opportunities for open collaboration between users, who will have their hands on 100% open tools.
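
For anyone who wants to kick the tyres, YugaByte DB’s YSQL API is PostgreSQL-compatible, so a standard PostgreSQL driver can talk to it; the sketch below assumes a local node on the default YSQL port (5433) and hypothetical credentials.

```python
# Illustrative sketch: YugaByte DB's YSQL API speaks the PostgreSQL wire
# protocol, so psycopg2 (pip install psycopg2-binary) can connect to it.
# Host, port, database and credentials here are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1", port=5433, dbname="yugabyte", user="yugabyte", password="secret"
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])
cur.close()
conn.close()
```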

The company had previously kept some features closed source, including distributed backups, data encryption and read replicas.

NOTE: As detailed on Quora by Shivam Gulati of AWS, a read replica is an additional copy of a database instance that you can set up to get more efficient, better-performing reads if your workload requires it.

“You can consider it as somewhat a slave server but not exactly the same,” notes Gulati.

The source code for YugaByte DB Platform is now available in the same GitHub repository as YugaByte DB under a new free trial-only, source available license developed by the Polyform Project.

YugaByte’s rebranded commercial offering exists with self-managed database-as-a-service (DBaaS) capabilities.

“Using proprietary, non-compete licenses for database features to ward off cloud providers from offering a commercial version as-a-service is short-sighted and damaging to the foundational principles of open source software. Vendors like MongoDB and Cockroach Labs who have moved to such licenses for not just add-on features but for their core database have disowned the same developers whose initial trust lifted their previously open source project off the ground,” said Kannan Muthukkaruppan, co-founder and CEO, YugaByte.

Default build

The default build target in the GitHub repository generates only the open source software binary to ensure that users who are not interested in the commercial DBaaS features can continue to have a frictionless experience.

For users interested in collaborating with the committers on the commercial features, YugaByte suggests that this change allows a more open forum to work together including discussing issues, offering design feedback and even submitting their own fixes upstream.

This YugaByte blog details more of the reasons behind the company’s decision to move to 100% open source.


July 26, 2019  9:26 AM

Fairwinds Polaris is an open source Kubernetes diagnostics tool

Adrian Bridgwater

Cloud-native infrastructure solutions provider Fairwinds has been shooting the breeze, blowing fresh air, firing up the wind turbines, enjoying the scent of… (Ed — enough already, we get it, the company has a windy name) … news coming out of its production department detailing the availability of Polaris, an open source tool to keep Kubernetes environments performing optimally.

Polaris automates configuration best practices based on Fairwinds’ previous experience patterns logged through building and managing cloud-native production deployments. 

It is now available to the Kubernetes community as a free service known as Polaris Snapshot or as an open source download. 

Fairwinds Polaris consists of two key components: a dashboard that provides ratings for clusters that are currently deployed; and a webhook that can help prevent future configuration errors based on defined standards.

Small errors, big problems

“Keeping cloud-native environments healthy is a challenge. Even the smallest errors in Kubernetes deployment configurations can lead to major issues ranging from poor container performance to production outages and security breaches,” said Bill Ledingham, CEO at Fairwinds. 

Ledingham claims that Polaris is built on a set of well-tested standards and so provides a way to identify configuration issues early, fix them and prevent future problems.

“Using Fairwinds Polaris to scan Kubernetes workloads allows teams to spot potential issues early in the application lifecycle and stay aligned with best practices. Result categorisation by Kubernetes namespace in Polaris helps application and infrastructure teams to address configuration issues in parallel, resulting in faster time to production,” said Debashis Das, VP of architecture at Veracode.

Polaris Dashboard

Polaris runs a variety of checks on Kubernetes deployments against established best practices and tested standards. 

It presents a dashboard that scores a cluster’s health, provides reports for each individual workload… and breaks out results by category, namespace and deployment.

Each check includes links to corresponding documentation and further resources on the topic. In addition to an overview of the current state of deployments, Polaris provides a roadmap for making improvements.
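
As an illustration of the kind of configuration check Polaris automates (this is not Polaris’s own code, just a sketch using the official Kubernetes Python client), consider flagging deployment containers that ship without resource limits:

```python
# Not Polaris itself: an illustrative Python check in the same spirit, flagging
# deployment containers with no CPU/memory limits (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    for container in dep.spec.template.spec.containers:
        resources = container.resources
        if resources is None or not resources.limits:
            print(
                f"warning: {dep.metadata.namespace}/{dep.metadata.name} "
                f"container '{container.name}' has no resource limits set"
            )
```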

Polaris Webhook

The webhook provides a way to automatically enforce a configuration standard for all future cluster deployments. Once an issue on the dashboard is addressed, the webhook is deployed to ensure that future configurations adhere to that standard. When deployed in a cluster, the webhook will prevent any deployments that have “error” level configuration violations. 

Polaris is Fairwinds’ most recent open source software development project.

Others include RBAC Manager for simplifying the management of Role Bindings and Service Accounts; rok8s Scripts, for managing application deployment life cycles in Kubernetes with scripts; and Reckoner, for streamlining the installation and management of multiple Helm chart releases.


July 24, 2019  8:26 AM

What is Artificial General Intelligence (AGI)?

Adrian Bridgwater

Microsoft is investing $1 billion in OpenAI, a San Francisco-based non-profit focused on open source Artificial Intelligence (AI).

OpenAI has worked on open source AI advancements in areas including robotic hand development. 

Other areas of AI work by the company are heavily aligned towards what we might call the ‘social implications’ of AI in real world deployments.

OpenAI’s mission is to ensure that so-called Artificial General Intelligence (AGI) benefits all of humanity — the OpenAI Charter describes the principles that guide it as it executes on this mission.

What is Artificial General Intelligence (AGI)?

AGI is AI that is designed to work with people to help solve currently intractable multidisciplinary problems, including global challenges such as climate change, more personalised healthcare and education etc. Modern AI systems work well for the specific problem on which they’ve been trained, but getting AI systems to help address some of the hardest problems facing the world today is argued to require generalisation and deep mastery of multiple AI technologies.

Microsoft and OpenAI say they will focus on building a computational platform in Azure of ‘unprecedented scale’, which will train and run increasingly advanced AI models and include hardware technologies that build on Microsoft’s supercomputing technology.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Sam Altman, CEO, OpenAI. “Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI. We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed. We are excited about how deeply Microsoft shares this vision.”

Satya Nadella, CEO, Microsoft added a comment to confirm that OpenAI technology will work with Microsoft Azure AI supercomputing technologies — and that his ambition is to democratise AI — while always keeping AI safety front and centre — so everyone can benefit.

All of it will adhere to the two companies’ shared principles on ethics and trust, apparently.

Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman at the Microsoft campus in Redmond, Wash. on July 15, 2019. (Photography by Scott Eklund/Red Box Pictures)

 

 

 

