Open Source Insider

June 21, 2017  12:40 PM

An ablution revolution, MongoDB on why software needs a clean & open backend

Adrian Bridgwater

Software needs to go to the bathroom and make sure it has a clean and open backend.

This is the mandate and call to arms now being laid down by open source document-oriented data model database company MongoDB.

As already reported here on Computer Weekly, MongoDB has this month staged its annual developer-centric conference and exhibition in the grand city of Chicago, Illinois.

Clean & open backend

This notion of a clean and open backend refers, of course, to our wider understanding of Backend-as-a-Service (BaaS): the work that goes on below the front-end logic of the application (i.e. on the device and around the user interface), deep down in the backend… now very often in the cloud datacentre.

Given that devices now run applications that rely on so much backend engine room work (often referred to as boilerplate coding), MongoDB insists that this backend must be both clean and open, so that the application itself can safely rely on the required degree of backend plumbing. That plumbing typically includes tasks connected to the management, storage and retrieval of data.

Backend servicing

Other backend business will include application servicing elements such as data privacy controls and tasks related to integrating and composing various application services.

The firm’s latest product release is called MongoDB Stitch, which (claim the developers at MongoDB) is a software ablution revolution that will help orchestrate commonly used application services, including authentication and service areas such as payments and messaging.

MongoDB Stitch is indeed a Backend-as-a-Service (BaaS) that provides a single API to the database and the services that modern applications depend on. The firm says that this helps developers focus on delivering the user experience rather than focusing on operations or writing boilerplate/backend code.

As MongoDB co-founder and CTO Eliot Horowitz explains, things have massively improved for software application developers on one front: application services now exist as ‘commodity components’ that can be engineered into modern applications far more quickly.

It’s a dirty job

The difficulty is that the very existence of these services means that there is still a ‘drudgery’ involved in terms of tying those services together, implementing tricky but essential access control and indeed managing the data that those services interact with.

It’s a dirty job but someone has to do it says MongoDB… and this is what Stitch sets out to do with a promise to allow coders to focus on creating higher-value differentiated applications (with more features) more quickly.

“MongoDB Stitch retains the full power and scalability of MongoDB – whether using your existing database or starting from scratch. Initially available with new or existing MongoDB Atlas clusters, MongoDB Stitch will expand to support any MongoDB database,” said the firm, in a press statement.

This new offering also claims to simplify security and privacy through configurable data access controls and integration with authentication providers. It presents a single API to the MongoDB database and the other services that modern applications depend on, without locking developers into services from a single vendor.

Other features include:

  • Simplified operations – manages backend infrastructure and the lower layers of the application stack, with the MongoDB Atlas database service and the other service integrations handled by MongoDB Stitch.
  • Share existing MongoDB data with new apps – Safely exposes selected fields from an existing database to a new application. New apps can add additional data to the same database.
  • Scalable – Supports an application’s capacity and performance needs as requirements grow. Get started for free, and then pay only for what is needed as business demands grow.

A humongous shift?

Should we buy MongoDB’s mantra here then? Should we accept that modern software applications just have to depend on the NoSQL dynamic schema world of the MongoDB boys and girls? Is there really a humongous shift underway here?

Commentators suggest that yes, developers really do want things this way and that reliance on relational databases (which is obviously still prevalent and widespread) is often down to the existing skillset that developers (and the DevOps and operations teams they work in) have as of 2017.

“Developers today want platforms that maximise convenience and minimise back end coding, with managed services for functions like user-based access,” said James Governor, analyst and co-founder of RedMonk. “MongoDB Stitch is a backend as a service platform designed to make it easier than ever for developers to build rich and secure apps.”

Clean & sparkling

Yes this is just one side of the story and the relational converts will still have their say… but the days of the canonical (single source of truth) database could be numbered in a world of changing devices and changing polymorphic data forms.

Now, please wash your hands.


June 20, 2017  1:57 PM

MongoDB World 2017: keynote noteworthies

Adrian Bridgwater

MongoDB staged its 2017 user, developer, customer and partner showcase convention in the US city of Chicago this June.

Essentially open source at the backbone, MongoDB is a database company built upon a document-oriented data model.

As defined here on TechTarget, MongoDB is one of a number of post-millennial databases that fall under the NoSQL banner.

Collections & documents

Instead of using tables and rows as in relational databases, MongoDB is built on an architecture of collections and documents.

Documents comprise sets of key-value pairs and are the basic unit of data in MongoDB. Collections contain sets of documents and function as the equivalent of relational database tables.
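As a toy sketch of that architecture (plain Python, not MongoDB's own API, and the field names are illustrative), a document is just a set of key-value pairs and a collection is simply a group of such documents:

```python
# A MongoDB document is a set of key-value pairs; a collection groups documents.
# Field names here are illustrative only.
book = {                      # one document: the basic unit of data
    "title": "Moby-Dick",
    "author": "Herman Melville",
    "year": 1851,
    "tags": ["whaling", "classic"],   # values can be arrays or nested documents
}

library = [book]              # a collection: the equivalent of a relational table

# A relational design would spread the tags across a joined table;
# the document keeps them inline.
titles = [doc["title"] for doc in library]
print(titles)                 # ['Moby-Dick']
```

Where a relational row forces every record into the same columns, each document carries its own structure with it.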

Keynote noteworthies

Keynote day one presentations featured a session from the chief data officer for the city of Chicago, Tom Schenk Jr. Speaking during this year’s MongoDB World keynote, Schenk explained that his team has built a city management software system with MongoDB on the backend to track civil amenities that may require updates, maintenance and other attention.

The WindyGrid project has been open sourced and the city is actively asking for developers to get involved as they work to (hopefully) improve the software so that it can be shared with other cities around the world.

Dev Ittycheria, president and CEO of MongoDB, took the stage next to talk big picture and give the audience a little background on how the firm has grown over the last decade.

Ittycheria was soon joined by MongoDB CTO and co-founder Eliot Horowitz. Talking about the need for visualisation tools in business intelligence, Horowitz referenced an example in which four wildly different charts can show the same averages while the underlying data itself has a very different shape. This gave him the chance to mention the need for a richer data fabric that also holds what we can call polymorphic data.

As noted on the above link on StackOverflow, “Document-oriented databases are schema-less. This means that the database doesn’t care about the schema of the data, but each document has its own schema / structure. Polymorphic data means that in one collection you could have many versions of a document schema (e.g. different field types, fields that occur only in some documents etc.)”

Beauty of polymorphic data

“Every time you take data out of MongoDB and expose that data in a tabular view you lose information. The beauty of MongoDB is that you can keep polymorphic data in your document,” said Horowitz.
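By way of illustration (plain Python again, nothing MongoDB-specific), ‘polymorphic’ here simply means that one collection can hold documents with different shapes side by side:

```python
# One collection, two versions of the document schema living side by side.
products = [
    {"name": "widget", "price": 9.99},                          # schema v1
    {"name": "gadget", "price": 19.99, "colours": ["red"]},     # schema v2 adds a field
]

# A tabular view must either drop 'colours' or pad the v1 rows with NULLs;
# in the document model each record keeps its own structure intact.
with_colours = [p["name"] for p in products if "colours" in p]
print(with_colours)           # ['gadget']
```

This is the information loss Horowitz is pointing at: flattening those two shapes into one table throws structure away.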

Talking about the changes coming in version 3.6 of the core database, Horowitz turned to the not-so-secret sauce that is the document model MongoDB relies upon.

Horowitz talked about how MongoDB had at first been referred to as a schema-less database (but of course that’s silly, because even a dynamic schema is a schema). In the next version the software will support JSON Schema. In terms of function, JSON Schema is a vocabulary that allows you to annotate and validate JSON documents. The 3.6 version will be available in November, but many of the new features are already available on GitHub on a ‘try it at your own risk’ basis.
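To make ‘annotate and validate’ concrete, here is a deliberately tiny pure-Python sketch of the idea of schema validation: required fields plus type constraints. This is a toy model, not MongoDB’s validation engine and not a JSON Schema implementation:

```python
# A minimal JSON Schema-style check: required fields plus type constraints.
# Toy sketch of the concept only; the keys and layout are ours.
schema = {
    "required": ["name", "year"],
    "properties": {"name": str, "year": int},
}

def validate(doc, schema):
    """Return True if doc has the required fields with the declared types."""
    for field in schema["required"]:
        if field not in doc:
            return False
    for field, expected in schema["properties"].items():
        if field in doc and not isinstance(doc[field], expected):
            return False
    return True

print(validate({"name": "MongoDB", "year": 2007}, schema))    # True
print(validate({"name": "MongoDB", "year": "2007"}, schema))  # False: wrong type
```

The point is that a dynamic schema and validation are not in tension: documents can stay flexible while still being checked against declared constraints.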

Changes since 2007

  1. Now the web is a first class UI system.
  2. Now mobile first (and indeed mobile only is here) works.
  3. The IoT is having a significant impact.
  4. Services are a key reality now and there is a service for almost everything you need; most applications today rely on a bunch of services.

So these (above) changes mean that a ‘modern app’ needs an API to the CRUD. It needs security, access control logic and validation — and it needs a way to stitch all these services together.

NOTE: CRUD stands for Create, Read, Update and Delete.
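As a minimal illustration of those four operations (an in-memory store in plain Python, nothing to do with Stitch’s actual API):

```python
# The four CRUD operations against a simple in-memory key-value store.
store = {}

def create(key, value):
    store[key] = value

def read(key):
    return store.get(key)          # None if the key does not exist

def update(key, value):
    if key in store:
        store[key] = value

def delete(key):
    store.pop(key, None)

create("user:1", {"name": "Ada"})
update("user:1", {"name": "Ada Lovelace"})
print(read("user:1"))              # {'name': 'Ada Lovelace'}
delete("user:1")
print(read("user:1"))              # None
```

Every data-backed application repeats some version of this cycle, which is exactly the boilerplate a BaaS aims to take off the developer’s plate.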

As developers need to do a lot of this boiler room (sometimes called boilerplate) work to make everything function on the backend, it has driven the popularity of the so-called Backend-as-a-Service (BaaS) and indeed the Mobile Backend-as-a-Service (MBaaS). So enter MongoDB Stitch…

As application and user interface logic continues to move into the front-end, the remaining backend is often dominated by ‘plumbing’ to handle storing and retrieving data, security, data privacy, or integrating and composing various services.

MongoDB Stitch provides developers with a simple way to handle all these routine tasks including working with their data, controlling data access, and orchestrating commonly used services – from authentication, through payments, messaging etc.

“In recent years we’ve gotten a tantalizing glimpse into a world where developers can spend all their time writing the code that distinguishes their applications,” said Horowitz. “In this world, services become the building blocks of innovation, and the need to reimplement the same commodity components disappears completely. But the mere existence of services isn’t enough, because there is still the drudgery of tying those services together, implementing tricky but essential access control, and of course managing data.”

Stitch was demo’d at MongoDB World 2017.

Analyst and co-founder at RedMonk James Governor agrees that developers today want platforms that maximise convenience and minimise back end coding, with managed services for functions like user-based access. Governor thinks that MongoDB Stitch is a backend as a service platform designed to make it easier than ever for developers to build rich and secure apps.

MongoDB Stitch is initially available in public beta release with the cloud database service MongoDB Atlas.

For an additional analysis of Stitch read this report.


June 15, 2017  11:33 AM

Acquia CTO: killing the headless CMS horseman

Adrian Bridgwater

Dries Buytaert is the founder and project lead of Drupal, an open source platform for building websites that is used by 2% of the world’s websites and has 35,000 active contributors.

Buytaert is also co-founder and chief technology officer of Acquia, a firm that commercially sells access to a platform to build and operate application services using Drupal.

Headless CMSes

Commenting on the fact that more and more developers are choosing Content-as-a-Service solutions (known as headless CMSs), Buytaert explains that these content repositories offer no-frills editorial interfaces and expose content APIs for consumption by an expanding array of applications.

Unconvinced by the headless CMS, Buytaert notes that these systems are designed to separate the concerns of structure and presentation so that front-end teams and back-end teams can work independently of each other.

The claim from Buytaert and team is that Drupal can work as a good CMS for editors who need control over the presentation of their content and a rich headless CMS for developers building out large content ecosystems in a single package.

In-context admin

Where headless CMSs (arguably) fall flat is in the areas of in-context administration and in-place editing of content.

“Our outside-in efforts, in contrast, aim to allow an editor to administer content and page structure in an interface alongside a live preview rather than in an interface that is completely separate from the end user experience. Some examples of this paradigm include dragging blocks directly into regions or reordering menu items and then seeing both of these changes apply live,” blogged Buytaert.

He further states that by their very nature, headless CMSs fail to provide a fully-fledged editorial experience integrated into the front ends to which they serve content.

“Unless they expose a content editing interface tied to each front end, in-context administration and in-place editing are impossible. In other words, to provide an editorial experience on the front end, that front end must be aware of that content editing interface — hence the necessity of coupling,” added Buytaert.

API-first is key

The Drupal team is adopting what it calls an API-first approach to make using Drupal as a content service easier (and more optimal) for developers.

In an API-first approach, Drupal can serve content to other digital experiences that it can’t explicitly support (those that aren’t web-based).

This could be a new way of expressing inherent openness and always keeping multiple deployment and use case options open — yes, we’re interested.

June 15, 2017  10:58 AM

Microsoft joins Cloud Foundry Foundation

Adrian Bridgwater

Microsoft has joined the Cloud Foundry Foundation (CFF).

The firm comes in with gold member status confirmed from the start.

Cloud Foundry’s mantra is ‘ubiquitous and flexible’ cloud computing.

Cloud Foundry is an open source cloud Platform-as-a-Service (PaaS) for software application developers to use for building, deploying and running applications — being cloud, it also has direct application ‘scale’ functions.

Cloud Foundry is licensed under Apache 2.0 and supports Java, Node.js, Go, PHP, Python, Ruby, .NET Core and Staticfile. The open source PaaS is highly customizable, allowing developers to code in multiple languages and frameworks.

Extended (but cloud related) links

Microsoft thinks that the move to embrace Cloud Foundry will also help it forge extended (but cloud related) links with members including Pivotal (which plays a key parent role with the technology) and German softwarehaus SAP.

Microsoft is also extending Cloud Foundry integration with Azure. This includes back-end integration with Azure Database (PostgreSQL and MySQL) and cloud broker support for SQL Database, Service Bus and Cosmos DB.

According to director of compute for Azure Corey Sanders, “We even included the Cloud Foundry CLI in the tools available in the Cloud Shell for easy CF management in seconds. Here are some additional details on the integration offered between Azure and Cloud Foundry.”

Truly, madly, deeply involved

Sanders also notes that the Azure team has been “deeply involved” in enabling the Open Service Broker API ecosystem in Kubernetes and making it easier for developers to use Azure services through the Service Catalog as part of an effort that started with Deis.

Microsoft’s open playbook continues to extend itself, and the firm’s integrations, cross-cloud fertilisation and open platform partnerships roll forward once more. Did it have to do this or risk faltering? Well (very arguably) yes… but at least it is doing it with some force.

June 12, 2017  7:57 AM

The secret to DevOps, it’s all about ‘automation’ after all

Adrian Bridgwater

Open source (and enterprise commercial) software operations company Puppet has rolled out its 2017 ‘state of DevOps report’ resulting from a survey of some 3,200 technical professionals.

The ‘findings’ this year suggest that, as we have hinted more than once recently, the crucial bridge to new efficiencies in software application development shops comes from automation.

Puppet says that the highest performing organisations have automated as much as 72 percent of all configuration management processes.

Manual configuration sucks

Manual configuration, it appears, really does suck, mostly.

Low performers are spending almost half of their time (46 percent) on manual configuration.

“Splunk’s participation in Puppet’s 2017 State of DevOps Report reflects our close alliance with Puppet and our shared commitment to delivering real-time visibility across the continuous delivery pipeline through the Puppet Enterprise App for Splunk,” said Andi Mann, chief technology advocate, Splunk. “The report findings validate the importance and impact of automation across DevOps for faster delivery, higher quality and rapid recovery – all of which are fundamental to what Splunk and Puppet deliver to customers.”

The report also considered leadership types and how they impact performance. The five characteristic behaviours that benefit developers are:

  1. Vision
  2. Inspirational communication
  3. Intellectual stimulation
  4. Supportive leadership
  5. Personal recognition.

For the second year in a row, the report looked at lean product management practices to see if changes upstream in the product management process affect business outcomes downstream. Lean product management practices supposedly help teams ship features that customers want, more frequently. This faster delivery cycle lets teams experiment, creating a feedback loop with customers, benefiting the entire organisation.

“The results of the 2017 State of DevOps Report show that high-performing IT teams are deploying more frequently and recovering faster than ever before, yet the automation gap between high and low performing teams continues to grow. The report will help organisations understand how to identify their own inhibitors and embrace change on their DevOps journey,” said Nigel Kersten, chief technical strategist, Puppet.

Automation again

Electric Cloud CEO Steve Brodie also zones in on automation and says that the key takeaways here emphasise the power of automation and the critical importance of helping teams connect the dots between business metrics and the underlying tools and processes.

“At the heart of successful businesses are high-performing teams that evolve and incorporate modern, high-performing applications and processes to drive consistent growth,” said Ashish Kuthiala, senior director for DevOps and Agile and Portfolio Offerings, Hewlett Packard Enterprise.

The full 2017 State of DevOps Report is available for download here.

The report was written in partnership with DevOps Research and Assessment (DORA) and sponsored by Amazon Web Services (AWS), Atlassian, Deloitte, Electric-cloud, HPE, Splunk and Wavefront.

June 8, 2017  10:55 AM

Google hoists Spinnaker for continuous delivery

Adrian Bridgwater

Google knows (one would imagine) about the C-word.

C for Continuous Delivery, Continuous Integration, Continuous Deployment and Continuous Management.

The online world runs on a continuous always-on basis (obviously), so applications, data delivery channels, analytics engines and all other network-connected elements of total system operation must also adhere to the same continuous mantra.

The continuous mantra

With cloud and continuous continuity in mind then, search giant Google is now giving the Spinnaker 1.0 continuous delivery and cloud deployment tool to the open source community.

Spinnaker is a multi-cloud continuous delivery platform.

“With Spinnaker, you simply choose the deployment strategy you want to use for each environment e.g. red/black for staging, rolling red/black for production — and it orchestrates the dozens of steps necessary under-the-hood,” said Google Cloud Platform product manager Christopher Sanson.

Ease of use

Sanson explains that, using Spinnaker, developers don’t have to write their own deployment tool or maintain a complex web of Jenkins scripts to have enterprise-grade rollouts.

Spinnaker targets use cases that need multi-cloud continuous delivery where software changes happen with high velocity.

Originally created at Netflix, the Spinnaker team point out that this product has been ‘battle-tested’ in production by hundreds of teams over millions of deployments.

The software ships with support for Google Compute Engine, Google Container Engine, Google App Engine, AWS EC2, Microsoft Azure, Kubernetes and OpenStack.

June 2, 2017  7:06 AM

Parallel pleasure: deep-geek chip consortium opens test tool

Adrian Bridgwater

The HSA Foundation has made available to developers the HSA PRM (Programmer’s Reference Manual) conformance test suite as open source software.

HSA who?

Yes, sorry… the HSA (Heterogeneous System Architecture) Foundation is a non-profit consortium of SoC IP vendors, OEMs, academia, SoC vendors, OSVs and ISVs, whose goal is making programming for parallel computing easy and pervasive.

Parallel pleasure

The test suite is used to validate Heterogeneous System Architecture (HSA) implementations for both the HSA PRM Specification and HSA PSA (Platform System Architecture) specification.

But what is HSA?

HSA is a standardised platform design that unlocks the performance and power efficiency of the parallel computing engines found in most modern electronic devices.

It allows developers to apply the hardware resources—including CPUs, GPUs, DSPs, FPGAs, fabrics and fixed function accelerators—in today’s complex systems-on-chip (SoCs).

“The HSA Foundation has always been a strong proponent of open source development tools directly and through its member companies,” said HSA Foundation chairman Greg Stoner. “Open sourcing worldwide the PRM conformance test suite is yet another example of an expanding array of development tools freely available supporting HSA.”

The HSA Foundation through its member companies and universities has also released many additional projects which are all available on the Foundation’s GitHub site.

May 31, 2017  9:31 AM

Oracle delays Java 9, modularity issues blamed

Adrian Bridgwater

It was back in 2014 that we last reported a major step in Java with the launch of version 8.

Java 9 had been expected to drop by July of this year (2017).

Reports now suggest that Java 9 Standard Edition will be delayed until September.

With Oracle OpenWorld scheduled for Oct 1, it might be logical to hold on just a few more days.

The source of the delay is suggested to be an “ongoing controversy” relating to a planned (but later rejected) approach to modularity.

The issue has been confirmed by Georges Saab, vice president of software development in the Java platform group at Oracle and chairman of the OpenJDK governing board.

Writing on InfoWorld, Paul Krill explains the following, “The Java Platform Module System, a key capability of Java Development Kit 9 and the subject of Java Specification Request (JSR) 376, failed in a vote by the Java executive committee earlier this month. IBM, Red Hat, and Twitter, among others, voted against the plan, because they believed it would be too disruptive to developers and would fragment the Java community.”

More info on Java 9 can be read at Henn Idan’s excellent Java blog here.

May 29, 2017  10:47 AM

Massive Git: Perforce Helix4Git is Git at scale

Adrian Bridgwater

Longtime open source advocate Perforce has updated its version control software. The new Helix4Git is designed to accelerate build processes in large-scale Git environments.

How do build processes run faster?

Perforce says that Helix4Git has continuous build, integrate and test processes along with mirroring functionality.

This, in effect, creates what the firm calls ‘a single source of truth’ across multiple Git repositories.

Perforce chief product officer Tim Russell explains that other updates also include the arrival of Helix Core 2017.1, Perforce’s version control solution — along with Helix Swarm 2017.1, a large-scale collaboration and code review tool tightly integrated with Helix Core.

Large binary objects

“Helix Core has always been the preferred solution for large scale development environments due to its performance and scale supporting all digital assets, including large binary objects often found in visually-centric products. With version 2017.1 Helix Core has improved file transfer speeds up to sixteen times to improve collaboration in geographically distributed teams,” said Perforce’s Russell.

In addition, Helix Swarm 2017.1 includes new features to improve user efficiency with better management of notifications, reviews and changelist clutter.


Swarm lets users remove unnecessary notifications from reviews while automating task activity prompts within the Action Items dashboard. Users can also automate changelist clean up, including shelved file removals, once a change has been committed.

A Perforce changelist is a list of files, their revision numbers and operations to be performed on these files.
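That definition maps onto a simple data structure. As a hypothetical sketch (the field names here are ours for illustration, not Perforce’s own schema):

```python
from dataclasses import dataclass

# A changelist bundles files, their revision numbers and the pending operation.
# Field names are illustrative, not Perforce's own.
@dataclass
class FileChange:
    path: str
    revision: int
    operation: str      # e.g. 'edit', 'add', 'delete'

changelist = [
    FileChange("//depot/src/main.c", 4, "edit"),
    FileChange("//depot/docs/readme.md", 1, "add"),
]

edited = [fc.path for fc in changelist if fc.operation == "edit"]
print(edited)           # ['//depot/src/main.c']
```

Automated changelist clean-up of the kind Swarm offers amounts to walking a structure like this once the change has been committed.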

May 25, 2017  11:45 AM

Yahoo! open sources Daytona, one load test to rule them all

Adrian Bridgwater

Once loved, now (arguably) oft-maligned former darling of search Yahoo! (yes, we even left the exclamation point in to be nice) has open sourced Daytona, an application-agnostic framework for automated performance testing and analysis.

Yahoo! software engineers Sapan Panigrahi and Deepesh Mittal explain that the automation, intelligence and control aspects of Daytona that give it clout include:

  • Repeatable test execution
  • Standardised reporting
  • Built-in profiling support for integrated application performance testing

Performance metrics are aggregated and then presented in a unified user interface.

What differentiates this product?

Its differentiation lies in its ability to aggregate and present aspects of application, system and hardware performance metrics in a comprehensive interface.

Developers, architects and systems engineers can use the framework in an on-premises environment or any public cloud to test:

  1. Websites
  2. Databases
  3. Networks
  4. Workloads
  5. Defined services
  6. Whole applications
  7. Application components

The firm insists that Yahoo! is committed to being “a good open source citizen” — so Daytona comes on the heels of recent contributions of Screwdriver, Athenz and TensorFlowOnSpark.

“At Yahoo, Daytona has helped us make applications more robust under load, reduce the latency to serve end-user requests, and reduce capital expenditure on large-scale infrastructure,” detail Panigrahi & Mittal.

Heterogeneous headaches

Prior to Daytona, Panigrahi & Mittal explain, the teams created multiple, heterogeneous performance tools to meet the specific needs of various applications.

“This meant that we often stored test results inconsistently, making it harder to analyse performance in a comprehensive manner. We had a difficult time sharing results and analysing differences in test runs in a standard manner, which could lead to confusion,” note the pair.

With Daytona, Yahoo! is now able to integrate all its load testing tools under a single framework and aggregate test results in one common central repository.

“We are gaining insight into the performance characteristics of many of our applications on a continuous basis. These insights help us optimise our applications which results in better utilisation of our hardware resources and helps improve user experience by reducing the latency to serve end-user requests,” write Panigrahi & Mittal.

“Ultimately, Daytona helps us reduce capital expenditure on our large-scale infrastructure and makes our applications more robust under load. Sharing performance results in a common format encourages the use of common optimisation techniques that can be used across many different applications,” the pair conclude.
