Open Source Insider


August 21, 2019  9:39 AM

Golang or go home: how Curve is taking Golang to new heights

Adrian Bridgwater

This is a guest post for the Computer Weekly Open Source Insider blog written by Matt Boyle in his capacity as lead software engineer at Curve.

Curve allows users to spend money from all their accounts with one Curve card – and hopes to simplify your finance through one secure mobile app.

Boyle writes…

Emerging only in 2009, Golang is still relatively new and not as widely used as other mainstream coding languages.

This young language was incubated inside Google, and has already been proven to perform well on a massive scale. We wanted to share with you a few reasons why we love Golang (Go) and how Curve is using it.

Go has excellent characteristics for scalability and services written using it typically have very small memory footprints. Because code is compiled into a single static binary, services can also be containerised with ease, making it much simpler to build and deploy. These attributes make Go an ideal choice for companies building microservices, as you can easily deploy into a highly available and scalable environment such as Kubernetes.

Go has everything you need to build APIs as part of its standard library.

It has an easy-to-use, performant HTTP server out of the box, which eliminates some of the exploration and paralysis that can occur when teams are faced with designing a new project. With other languages such as Java or Node.js, this choice is often a significant obstacle in a team dynamic.

Automated formatter

There’s also another way it makes for smoother group workflow: code formatting is a first class concern, with an automated formatter [Ed: yes, that’s now a word] built into the language. With other languages, a lot of time and energy can be wasted agreeing on code formatting and which style guide to follow.

Go completely removes the need for this conversation.

Go is very easy to learn. Although finding engineers with significant production Go experience can be challenging, at Curve we have had great success with hiring people from Java and PHP backgrounds and upskilling them in Go.

It usually only takes about a week or two to begin actively contributing production-ready code. We have also found that developers end up preferring Go. It really is simple yet effective: Go favours “what you see is what you get” – which means readable, clear code with few complex abstractions.

This makes peer review a much easier task, whether it’s a colleague’s code or even a huge open source project such as Kubernetes.

We are strong advocates of TDD and Go has a fantastic test framework built into the language. Just by naming a file with a _test.go suffix and adding some test functions within that file, Go can automatically run all of your unit tests at lightning speed. This makes TDD easy to learn and use as part of the development cycle.

Kinky, in places

There are still a few kinks to work out, but we’ve found they don’t take away from the functionality of Go.

For example, one particularly contentious feature is that it does not have explicit interfaces. Opinions are divided on this: many developers are used to the concept, and it can make it tricky to determine which interfaces your struct satisfies, because you do not write X implements Y as you would in other languages. However, it is something you quickly learn to be okay with.
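A sketch of what implicit satisfaction looks like (the types here are invented for illustration), including the common compile-time assertion trick that recovers some of the safety of an explicit `implements`:

```go
package main

import "fmt"

// Notifier is an interface; nothing below ever says "implements Notifier".
type Notifier interface {
	Notify(msg string) string
}

// EmailSender satisfies Notifier simply by having the right method set.
type EmailSender struct {
	Address string
}

func (e EmailSender) Notify(msg string) string {
	return "email to " + e.Address + ": " + msg
}

// Compile-time check: this line fails to build if EmailSender
// ever stops satisfying Notifier.
var _ Notifier = EmailSender{}

func main() {
	var n Notifier = EmailSender{Address: "ops@example.com"}
	fmt.Println(n.Notify("deploy finished"))
}
```

The `var _ Notifier = EmailSender{}` line is a widely used idiom precisely because the language itself never states the relationship.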

Dependency management was also originally overlooked by the team developing Go at Google. As such, the open source community stepped in and created Glide and Dep. Both were admirable attempts at solving dependency management but also came with their own set of problems.

As of Go 1.11, support has been added for modules and this has become the official dependency management tool. It has received mixed feedback, and there are definitely more improvements to be made in this area.
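For illustration, a minimal go.mod of the sort Go 1.11 modules introduced (the module path and pinned dependency version here are invented for this sketch):

```
module github.com/example/payments

go 1.12

require github.com/google/uuid v1.1.1
```

The file is checked into the repository, and `go build` resolves and verifies the listed versions without any separate tool such as Glide or Dep.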

Vibrant open source community

Despite these growing pains, what really takes Go above and beyond is its vibrant community. In London there is a great meetup community that is very welcoming and open to collaboration. Everyone is friendly, helpful and keen to develop Go further, together. The Go open source community is thriving — some game-changing projects such as Istio, Kubernetes and Docker are all written in Go and available to download, contribute to and extend on GitHub.

It is this dynamic and innovative yet straightforward makeup that makes Go the ideal coding language for developing a company like our own.

Curve attended the Gophercon Golang conference in the UK this year… details of the event are shown in the link above.

Boyle: keen on community spirit.

August 14, 2019  7:22 AM

Codefresh freshens produce at the Kubernetes code marketplace

Adrian Bridgwater

Codefresh bills itself as the first Kubernetes-native CI/CD technology, with CI denoting Continuous Integration and CD denoting Continuous Delivery, obviously.

The organisation has this month worked to improve its open source marketplace with features that focus on faster code deployment.

First deployed in December 2018, the Codefresh Marketplace [kind of like an app store] allows developers to find commands without having to learn a proprietary API — this is because every step, which is browsable in the pipeline builder, is a simple Docker image.

The Marketplace contains a set of pipeline steps provided both by Codefresh and partners, such as Blue-Green and Canary deployment steps for Kubernetes, Aqua security scanning and Helm packaging and deployment.

As Octopus Deploy reminds us here, “Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren’t impacted.”
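The “subset of users” idea can be sketched in a few lines of Go (illustrative only; in production the split is usually done by the load balancer or service mesh rather than application code):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// canary decides whether a request should be routed to the new
// release. It hashes the user ID so each user consistently sees
// the same version; percent is the canary's share of traffic.
func canary(userID string, percent uint32) bool {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return h.Sum32()%100 < percent
}

func main() {
	// Route roughly 10% of users to the canary release.
	for _, u := range []string{"alice", "bob", "carol"} {
		fmt.Println(u, canary(u, 10))
	}
}
```

If error rates climb for the canary cohort, `percent` is dropped back to zero and only that small subset of users was ever affected.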

Blue-Green deployment (as defined by Cloud Foundry here) is a technique that reduces downtime and risk by running two identical production environments, one called Blue and one called Green.

“At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle,” notes Cloud Foundry at the above link.

Private steps

Additional new functionality in Codefresh includes the ability to create private steps for a specific team, a new section for items maintained by Codefresh and automatic scanning and security checking of Marketplace additions.

“Our steps Marketplace provides building blocks for your pipelines. It is very easy to search for a keyword and see if there is a step for that method,” said Dan Garfield, Chief Technology Evangelist for Codefresh. “We look forward to communities adding more plugins as the adoption of Docker within companies skyrockets and the benefits of Docker-based tooling become more clear.”

All plugins are open source and users can contribute to the collection by creating a new plugin.

The Marketplace can be found at https://steps.codefresh.io/.


August 13, 2019  8:51 AM

Facebook open sources Hermes JavaScript engine

Adrian Bridgwater

Hermes is the Greek god of trade, heraldry and commerce… but also the Greek god of thieves and trickery.

Facebook was presumably thinking of Hermes’ more virtuous qualities when it named its JavaScript engine project after the Greek deity.

Hermes was built to make native Android apps built using the React Native framework load faster.


Now open sourced by Facebook under an MIT licence, Hermes is supposed to supercharge startup times, drain less memory and result in a smaller overall application code footprint.

Why focus on startup times? 

Because application startup times impact what the tech industry likes to call Time To Interaction, or TTI (a measure of the period between an application being launched and the user being able to use it)… and that’s a real make-or-break factor for software houses that pump out mass market applications.

How does it do it?

Part of the secret sauce in Hermes is its ability to execute what is known as bytecode precompilation.

Bytecode precompilation allows code to be processed employing a technique known as Ahead Of Time (AOT) compilation.

“Commonly, a JavaScript engine will parse the JavaScript source after it is loaded, generating bytecode. This step delays the start of JavaScript execution. To skip this step, Hermes uses an Ahead Of Time compiler which runs as part of the mobile application build process. As a result, more time can be spent optimising the bytecode, so the bytecode is smaller and more efficient. Whole-program optimisations can be performed, such as function deduplication and string table packing,” noted Facebook, in a technical statement.

Facebook itself says that as mobile applications are growing larger and more complex, larger apps using JavaScript frameworks often experience performance issues as developers add features and complexity.

According to a summary press statement, “To increase the performance of Facebook’s apps, we have teams that continuously improve our JavaScript code and platforms. As we analysed performance data, we noticed that the JavaScript engine itself was a significant factor in startup performance and download size. With this data in hand, we knew we had to optimise JavaScript performance in the more constrained environments of a mobile phone compared to a desktop or laptop.” 

Hermes currently targets the ES6 specification and the team intends to keep current with the JavaScript specification as it evolves. 



August 7, 2019  8:06 AM

DataStax: what is a ‘progressive’ cloud strategy?

Adrian Bridgwater

With its roots and foundations in the open source Apache Cassandra database, Santa Clara headquartered DataStax insists that it likes to keep things open.

As such, the company is opening a wider aperture on its collaboration with VMware by offering DataStax production support on VMware vSAN, now in hybrid and multi-cloud configurations.

For the record, VMware vSAN (the artist formerly known as Virtual SAN) is a hyper-converged software-defined storage (SDS) technology that ‘pools’ direct-attached storage devices across a VMware vSphere cluster to create a distributed, shared data store.

So, think about it… DataStax is known for its ability to provide an always-on distributed hybrid cloud database for real-time applications at scale — and, VMware is known (at least with vSAN) for its ability to coalesce distributed storage resources.

Consistent infrastructure

The end result of the two technologies combined should, in theory, if not in practice, deliver a more consistent infrastructure and data/application management experience across on-premises, hybrid and multi-cloud applications. 

The software engineering here is hybrid and multi-cloud-ready with capabilities to deliver operational and deployment consistency. There is built-in enterprise-grade availability here too. 

The firms claim that customers can avoid cloud lock-in with unified operations between environments and across clouds with a single interface for end-to-end security and infrastructure management.

Progressive cloud: defined

So then, what is a ‘progressive’ cloud strategy?

A progressive cloud strategy (in the context of this discussion at least) is one that seeks to run essentially distributed database resources (plural) uniformly from development to production across the essentially distributed multi-cloud world of the hybrid cloud — and across different departmental zones, digital workflows, world regions, datacentres and device endpoints…

… and this (as above) is what the two firms here are seeking to achieve.

“For enterprises with a progressive cloud strategy, our expanded collaboration enables them to prevent cloud vendor lock-in, improve developer productivity by being able to easily test use cases in minutes, and ultimately, rely on DataStax for enterprise data management and VMware as the platform for modern applications,” said Kathryn Erickson, senior director of strategic partnerships at DataStax. 

Erickson insists that DataStax is focused on making it easy for developers to use and manage DataStax by expanding VMware vSAN’s footprint to show that distributed systems do not need special treatment in their software stack.

DataStax Enterprise and the DataStax Distribution of Apache Cassandra now span anywhere VMware is deployed — on-premises datacentres and public cloud.

As a result, customers can deploy VMware-hosted applications on-premises or on public clouds including Amazon Web Services, Microsoft Azure and IBM Cloud.


August 6, 2019  3:53 PM

Red Hat Enterprise Linux 7.7,  it’s a bit better than 7.6

Adrian Bridgwater

Red Hat… no, wait, stop there — not Red Hat the IBM company, actually just Red Hat — that’s how the company is still putting out news stories.

We’ll start again, open source enterprise software company Red Hat has announced a point release for Red Hat Enterprise Linux (RHEL) as it now hits its 7.7 version.

But what could Red Hat have put into version 7.7 that it failed to markedly address in version 7.6 may we ask?

The company points to terms like ‘enhanced consistency and control’ across cloud infrastructures (plural) for IT operations teams.

There’s also ‘modern supported container creation tools’ for enterprise application developers — as opposed to the old fashioned ones, that shipped in 7.6, presumably.

This version also moves to what Red Hat calls Maintenance Phase I, which does sound like the workshop power-down time that Star Wars TIE fighters need to go through in order to recharge their nuclear cells.

Infrastructure stability

In reality, Maintenance Phase I is all about Red Hat working to try and ‘maintain infrastructure stability’ for production environments and enhancing the reliability of the operating system.

Red Hat doesn’t go into detail to explain how it works to maintain infrastructure stability, but we can guess that this means looking at how the operating system behaves when exposed to different application types, running different data workloads, requiring different compute, storage, Input/Output actions, analytics engine calls (and so on and so on)… and then firming up the core build of the kernel itself so that it’s strong and flexible enough to handle life in the real world.

Future minor releases of Red Hat Enterprise Linux 7 will now focus solely on retaining and improving this stability rather than what have been called ‘net-new’ features.

Toolkit treats

Red Hat Enterprise Linux subscribers are able to migrate across platform versions as support and feature needs dictate. To help with the process, Red Hat offers tools, including in-place upgrades, which help to streamline and simplify migrating from one Red Hat version to another.

NOTE: Let’s remember that Red Hat Enterprise Linux version 8.0 does already exist, so this is Red Hat updating a version at a slightly lower level for customers who wish to progress their upgrade paths one point at a time.

Other more tangible new features include: 

  • Red Hat Insights, Red Hat’s expertise-as-a-service offering, which helps users proactively detect, analyze and remediate a variety of potential software security and configuration issues before they cause downtime or other problems.
  • Support for image builder, a Red Hat Enterprise Linux utility that enables IT teams to build cloud images for major public cloud infrastructures, including Amazon Web Services, Microsoft Azure and Google Cloud Platform.

Red Hat Enterprise Linux 7.7 also introduces support for live patching the underlying Linux kernel. Live patching support enables IT teams to apply kernel updates to remediate Critical or Important Common Vulnerabilities and Exposures (CVEs) while reducing the need for system reboots.


August 5, 2019  10:25 AM

NordVPN offers NordLynx for Linux, built around WireGuard

Adrian Bridgwater

Virtual Private Network (VPN) company NordVPN has introduced NordLynx technology built around the WireGuard protocol. 

WireGuard is thought to be shaking up the VPN space as a new type of protocol, with an approach to cryptography and speed said to blow other protocols in this space, such as OpenVPN and IPsec, out of the water.

According to the WireGuard team, this technology is designed as a general purpose VPN for running on [anything from] embedded interfaces [up to] super computers alike, fit for many different circumstances. 

Initially released for the Linux kernel, it is now cross-platform (Windows, macOS, BSD, iOS, Android) and widely deployable, though it is still said to be under ‘heavy development’.

The new technology from NordVPN combines WireGuard’s high-speed connection with NordVPN’s custom-made double Network Address Translation (NAT) system, a technology which aims to safeguard user privacy.

Linux first

At the moment, NordLynx is available for Linux users.

“In fall 2018, we invited a small group of our users to take our WireGuard implementation for a test drive. [Now], after months of further development and testing, we’re ready to present NordLynx – our solution for a fast, private, and secure VPN connection,” said Ruby Gonzalez, head of communication at NordVPN.

NordVPN states that NordLynx is faster than the current leading protocols (the above-mentioned OpenVPN and IPsec), helped by the fact that WireGuard consists of only around 4,000 lines of code, which also makes it easier to deploy and audit.

Although WireGuard is easy to implement and manage, its ability to secure users’ privacy often comes up as a point for discussion. 

Because it does not dynamically assign IP addresses to everyone connected to a server, it requires at least some user data to be stored on the server, compromising users’ privacy.

Double NAT is natty

Conversely, the double NAT system from NordVPN creates two local network interfaces for each user. 

The first interface assigns the same local IP address to all users connected to a server. Once the secure VPN tunnel is established, the second network interface with a dynamic NAT system kicks in. Dynamic local IP addresses remain assigned only while the session is active, which means no identifiable data needs to be stored on the server.

The NordLynx technology is now available for all users of NordVPN for Linux — to switch to NordLynx, install WireGuard, open the terminal and enter ‘nordvpn set technology NordLynx’.

As a final note, NordVPN has completed an industry-first audit with its no-logs policy. The audit was performed by PricewaterhouseCoopers AG, Zurich, Switzerland.



August 5, 2019  9:03 AM

What is Kubernetes-as-a-Service?

Adrian Bridgwater

According to wikis, hacker forum discussions and the team itself, Kubernetes is so-named because it derives from the Greek κυβερνήτης, meaning governor, helmsman or captain; the related Latin ‘gubernare’ means to steer or govern.

Which all makes perfect sense.

Because Kubernetes is an open source orchestration technology used to manage Linux containers across private, public and hybrid cloud environments.

Or… in the words of the people behind the technology: Kubernetes is a portable, extensible, open source platform for managing containerised workloads and services, that facilitates both declarative configuration and automation. 

So if that’s Kubernetes, what is Kubernetes-as-a-Service?

Kubernetes-as-a-Service (KaaS) is a way of getting hold of that containerised workload orchestration and management know-how, but in smaller (ahem, containerised) chunks, where the ability to manage and tear down on-premise and cloud-based container clusters is supplied on-demand on a project-by-project basis.

Who’s doing KaaS?

One firm based in San Diego is and it’s called Kazuhm — pronounced ‘kah-zoom’.

Resource allocation (& recapture)

Kazuhm suggests that tech teams want to use more containers to manage compute-heavy workloads, but container management solutions do not [typically] address concerns about resource allocation to successfully manage containers in multi-cloud, hybrid cloud, and on-premise distributed computing environments — specifically allowing users to run workloads on desktops. 

“With today’s focus on agility, it is more important than ever to be able to quickly and easily stand up and tear down clusters without headaches and excessive costs,” said Kazuhm CEO Tim O’Neal. “While adoption of Kubernetes is rapidly increasing, many people who recognise its benefits still do not know where to start, so they haven’t yet implemented it. We believe resource recapture and a highly user-friendly experience are two keys to making containers and distributed computing accessible to all.”   

Kazuhm allows organizations to recapture existing IT resources and unused processing power and manage workloads across a fabric of desktops, datacentres, cloud and edge. 

This creates a Kubernetes on-demand environment with organisations’ existing hardware and/or cloud resources. Kazuhm says it eliminates dedicated hardware requirements and delivers savings on cloud costs. 

Kazuhm KaaS is agnostic across clouds. Regardless of the organisation’s preferred cloud vendor, the user interface remains consistent. 

Calming command complexity

As a result, the user does not need to know commands or the specifics of how to manage containers in different clouds (e.g. AWS vs. Google Cloud). 

There is also ‘push-button’ application deployment here, so users outside of the DevOps team (such as data scientists or data engineers) can deploy container-based applications after they have been set up.  

Kazuhm’s KaaS offering is available at no cost for up to 10 on-premise nodes at the time of writing — and unlimited cloud resources can be managed with the Kazuhm Basic product.

 


August 2, 2019  9:52 AM

Redgate acquires (but commits to widening) open source Flyway

Adrian Bridgwater

Database development company Redgate has been to the shops.

The Cambridge, UK-based firm has bought eggs, fresh bloomers (no, the bread kind) and, direct from the meat counter, a US$10 million portion (i.e. all of it) of cross-platform database migrations tool, Flyway.

Redgate’s mission in life is to enable the database to be included in DevOps, whatever database its customers are working on.

The company thinks that database development is often a stumbling block in the development of virtuous DevOps cycles, and so it has a natural interest in software (like database migrations tools) that allows the database to be included.

Open source pedigree

Flyway was originally developed as an open source project by Axel Fontaine to make database migrations easy on multiple platforms — it has seen an estimated 23 million downloads of its free community edition. 

A paid-for commercial edition also exists.

Flyway supports a wide range of databases, from Oracle to MySQL, PostgreSQL to Amazon Redshift.

The acquisition should enable Redgate to extend its product roadmap beyond database DevOps for SQL Server to new database platforms.

Redgate wants its ambitious plans to reinforce Flyway’s place as the open source database migrations tool of choice.

 “We’ve spent the last five years developing a portfolio of SQL Server tools that enable developers to include the database in DevOps and we want to give those same advantages to every developer on any platform,” said Simon Galbraith, CEO and co-founder of Redgate.

Axel Fontaine will now work on Flyway alongside a development team at Redgate.

“Redgate has years of experience in the database market and also has the resources to further develop and enhance Flyway many times faster than I can. This will make both the community and the commercial editions better for everyone,” said Fontaine.

Open pledge

Redgate will continue to maintain a free version of Flyway, available under the open source Apache v2 license. The firm is committed to supporting and growing the open source community that has helped in its development.

It already offers free versions of tools that work with open source software, like MySQL Compare, and was instrumental in backing the development of Glimpse, the open source diagnostic platform for the web.



July 31, 2019  12:16 PM

NuoDB 4.0 beats drum for cloud-native cloud-agnosticism

Adrian Bridgwater

Distributed SQL database company NuoDB has reached its version 4.0 iteration… and aligned further to core open source cloud platform technologies.

The new release expands cloud-native and cloud-agnostic capabilities with support for Kubernetes Operators and Google Cloud and Azure public clouds. 

This includes the recently announced Kubernetes Operator, a technology designed to simplify and automate database deployments in Red Hat OpenShift. 

The Operator uses NuoDB Admin, a simplified management tier that includes a REST API designed to improve database lifecycle management of NuoDB for cloud and container environments. 

NOTE: According to CoreOS, an Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. You can think of Operators as the runtime that manages this type of application on Kubernetes.

Also in this release, in addition to supporting Amazon Web Services, NuoDB is now certified for both Azure and Google Cloud Platform… which, arguably, is cloud-agnosticism in motion. 

In addition to the new REST API, NuoDB Admin includes easier database restart by automatically managing the restart order, more granular and simpler client connection load balancing with the use of process labels, plus also… there’s improved diagnostics and domain state metrics.

Say yes to the index

NuoDB 4.0 also includes indexing improvements, such as added support for online index creation and expression-based indexes. 

NOTE: Online index creation enables users to create indexes without impacting application availability. Expression-based indexes enable users to create indexes based on general expression and functions, improving performance for queries using expressions or functions. 

“From an operational perspective, NuoDB 4.0 includes improvements to how NuoDB is packaged and distributed. Delivery of the database is now separate from all drivers and some client utilities. A new client package now provides all supported drivers and client utilities, including full support for LDAP authentication. In addition, network encryption has been upgraded to TLS 1.2 using customer provided certificate keys. Learn how to set up TLS in your NuoDB database,” said Ariff Kassam, VP of product at NuoDB.

Finally, Kassam notes that 4.0 offers improved index creation performance, up to 50% faster than previous versions of the database. 



July 29, 2019  8:42 AM

Neo4j charts tighter grip on graph data-at-rest 

Adrian Bridgwater

San Mateo headquartered graph database company Neo4j (with roots in open source) is working with French defence company Thales (pronounced ta-less).

A graph database is a database designed to treat the relationships between data as equally important to the data itself — it is intended to hold data without constricting it to a pre-defined model… instead, the data is stored showing how each individual entity connects with or is related to others.
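As a loose illustration of that idea (this is not Neo4j’s API, just a sketch with invented types), a property graph stores relationships as records in their own right, rather than reconstructing them through joins:

```go
package main

import "fmt"

// Node and Rel sketch a property graph: relationships are
// first-class records alongside the nodes they connect.
type Node struct {
	ID    int
	Label string
	Props map[string]string
}

type Rel struct {
	From, To int
	Type     string
}

// neighbours walks relationships of a given type out of one node,
// the basic traversal step a graph database optimises for.
func neighbours(rels []Rel, from int, relType string) []int {
	var out []int
	for _, r := range rels {
		if r.From == from && r.Type == relType {
			out = append(out, r.To)
		}
	}
	return out
}

func main() {
	rels := []Rel{
		{From: 1, To: 2, Type: "KNOWS"},
		{From: 1, To: 3, Type: "KNOWS"},
		{From: 2, To: 3, Type: "WORKS_WITH"},
	}
	fmt.Println(neighbours(rels, 1, "KNOWS")) // [2 3]
}
```

In a real graph database the traversal follows stored pointers rather than scanning a slice, which is what keeps multi-hop queries fast at scale.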

A new integration has been created between Neo4j Enterprise Edition and Thales Vormetric Transparent Encryption with the aim of providing ‘data-at-rest’ encryption.

What is data-at-rest?

In basic terms, data-at-rest is data that is stored physically in any digital form in a database, data lake, spreadsheet, tape, disk or any other form of storage media or repository — it is, of course, the opposite of data-in-transit.

The firms say that the new integration provides ‘industrial-strength’ encryption-at-rest for the Neo4j graph database and helps Neo4j users meet more stringent security and compliance requirements.

Magical analyst house Gartner states, “The application of graph processing and graph databases will grow at 100% annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science.”

Neo4j explains that its challenge comes from the real-time nature and extreme performance requirements of many of its mission-critical enterprise deployments, where sacrificing performance for security isn’t an option. 

VP of products at Neo4j Philip Rathle points to security-sensitive industries such as financial services, insurance and healthcare, where this kind of encryption will be needed most.

The Neo4j and Thales integration is meant to ensure enterprise policy and regulatory compliance for Neo4j instances including data-at-rest encryption with centralised key management, privileged user access control and security intelligence to meet compliance reporting requirements. 

Developers & graph databases

The solution is deployed without any changes to infrastructure so developers and data engineers working in security teams can implement encryption with minimal disruption.

The integration protects data wherever it resides: on-premises, across multiple clouds or within big data and container environments. 

