Splunk is having a busy week at its annual .conf user event. Deciding which news announcements to table during a once-a-year symposium is always difficult, so this year the operational intelligence company has mixed in a little philanthropy with the platform product news.
The Splunk Pledge is a new philanthropic programme that exists as part of the wider Splunk4Good initiative.
The new Pledge programme commits to donating a minimum of $100 million (GBP 77 million at the time of writing) over a 10-year period in software licenses, training, support, education and ‘volunteerism’ to nonprofit organisations and educational institutions in order to support academic research and generate social impact.
Big data brings big goodness
“Splunk is deeply passionate in our belief that big data can bring societal good. That is the driving force behind Splunk Pledge,” said Doug Merritt, President and CEO, Splunk. “At nonprofits, IT budgets typically average one percent – making it challenging to fully leverage technology to accomplish their mission. By committing to help nonprofits and educational institutions with resources readily available, like free licenses and support, free education, and volunteerism by our staff, we can make a difference in the world.”
Nonprofit organisations of course often rely on donations from commercial bodies to make their budgets go further. In this instance, Splunk Pledge is argued to do the following:
- reduce operating costs,
- improve cybersecurity posture,
- streamline IT operations,
- perform research,
- analyse diverse data sources for visibility into infrastructure.
“Splunk will also offer complimentary training and support for organisations receiving technology donations, ensuring each beneficiary can use the donation to its full potential,” said the firm, in a press statement.
Splunk is also announcing the global expansion of its successful Splunk Academic Program — the initiative currently has a nationwide reach of 339 institutions and more than five million students through Splunk partner Internet2 (a research and education network).
“Facilitating the research and education objectives of our member universities is Internet2’s core mission and we are pleased Splunk is making resources and certification available to our community and its global partners at no charge,” said Shel Waggener, senior vice president, Internet2. “We look forward to working with Splunk to enhance and streamline access to these valuable course offerings in the coming years.”
NOTE: Splunk employees receive paid time off to volunteer at the nonprofit organisation of their choice through Splunk Pledge.
There’s a reason why open source is surfacing with so much prevalence at the enterprise layer… and that reason is data.
This is the opinion of Michael Grinich, CEO and co-founder of Nylas — the firm produces Nylas N1, an extensible open source mail app for Mac, Windows & Linux.
Grinich argues that the value of modern enterprise software lies not in code, but in open-source ‘variety integrations’ that enrich data.
The post-cloud era cometh
“There’s a big shift happening right now in enterprise software. I think we’re moving into the post-cloud era. That means our platforms will no longer be defined by where the apps run, but instead defined by something else. That something else is data. The next-generation enterprise software platforms will be all about data and services. Intelligent systems are only as smart as their data,” writes Grinich.
Grinich contends that when data becomes more important than the codebase, you see companies shift their priorities.
“More and more companies are open sourcing the code for their products. The value of these products is no longer just the codebase. It’s no coincidence that the hardest software components — operating systems, drivers, firmware — they’re all open source. It’s easy to imagine a future where healthcare, banking, energy, government and other large sectors must adopt an open source philosophy in order to adapt and grow,” he said.
Grinich concludes as follows — “If you want a hint to the future of software platforms, don’t think about the app. Think about data.”
Why you will know Nylas soon
Its makers insist that Nylas is much more than just a startup trying to improve email. It’s a company that, in several years, will look like a data platform, not an end-user application.
The team says it is diving into how data is interconnected, how it’s shared across people within an organisation and the intelligence behind it.
In a world where open source at the FOSS-hobbyist level is becoming about as rare as all-proprietary Microsoft infrastructure technology, the race to provide certified enterprise level versions of every technology stack that exists continues apace.
In this regard then, CloudBees has announced CloudBees Jenkins Enterprise as a new distribution of the Jenkins Continuous Integration (CI) server aimed, obviously, directly at enterprises.
Curated community extensions
Using what may well become a de facto term meaning “stuff we allow from outside” … CloudBees says that Jenkins Enterprise provides a stable and certified distribution of Jenkins, along with what are known as “curated (and certified) extensions from the community and third parties” no less.
According to CloudBees, “Key to all successful open source platforms is their ability to add features and integrate with third-party tools and solutions, through extensions. As the use of those platforms grow, the increasing number of extensions from a wide number of sources places a significant maintenance challenge on enterprises.”
Combinatorial version explosion
While individual extensions may be stable, the large number of extensions bundled in combination with each other can cause a combinatorial explosion of possible versions and issues. This challenge gets even more complicated as users have to move from one version of their entire setup, to the next.
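The scale of this combinatorial explosion is easy to sketch. The toy calculation below uses hypothetical plugin names and version counts purely to show how quickly the space of possible configurations grows:

```python
# Toy illustration of the combinatorial version explosion described
# above. Plugin names and version counts here are hypothetical.
from itertools import product

plugin_versions = {
    "git-plugin": ["1.0", "1.1", "2.0"],
    "docker-plugin": ["0.9", "1.0"],
    "slack-plugin": ["1.2", "1.3", "1.4"],
}

# Every combination of plugin versions is, in principle, a distinct
# configuration that could need testing.
combinations = list(product(*plugin_versions.values()))
print(len(combinations))  # 3 * 2 * 3 = 18 combinations for just 3 plugins

# With, say, 50 plugins averaging 3 versions each, the space is
# astronomically larger -- far beyond what any one enterprise can test.
print(3 ** 50)
```

Even this trivial example makes the point: the testing burden grows multiplicatively with every extension added, which is the gap a curated, pre-tested distribution is meant to close.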
The firm says that such a challenge requires support and testing that goes well beyond what any enterprise can and wants to do on their own – and so (if we follow CloudBees’ argument), a vendor-provided distribution combining a curated list of fully-tested and interoperable extensions is the way to go.
The enterprise route (says CloudBees) also provides a stable runtime environment as well as a safe migration path to future versions of that distribution.
Examples of enterprise-level openness
Examples include Red Hat (RHEL/Fedora and JBoss/Wildfly), Acquia (Drupal), Mulesoft (Mule ESB) and Hortonworks (Hadoop). CloudBees’ implementation of the same methodology, applied to open source components, is now being made available as the CloudBees Jenkins Enterprise distribution.
“One of the biggest strengths of Jenkins is its extensibility. Community contributors easily extend Jenkins with features and enable it to connect to virtually any tool used throughout the continuous delivery pipeline. However, until now, the community extensions, individually and especially in combination with each other, have not been tested and verified with the same level of rigor that the CloudBees team provides for the Jenkins core,” said the company, in a press statement.
In line with this news we also hear that in order to establish its new CloudBees Assurance Programme, CloudBees has invested in engineering, QA and machine resources dedicated to verifying the stability, security, inter-compatibility and upgradability of the Jenkins core along with a set of the most popular open source Jenkins extensions.
CloudBees Jenkins Enterprise is available as part of the CloudBees Jenkins Platform 2 and CloudBees Jenkins Platform – Private SaaS Edition 1.1 releases.
Twilio (pronounced TWILL-e-o) will now acquire the WebRTC media processing technologies built by the team behind the Kurento open source project. Kurento is a web real-time communications media server (and a set of client APIs) for the development of video applications for Internet-based and smartphone platforms.
Essentially, this technology is focused on the development of advanced video in web and mobile applications.
What kind of technologies go into advanced video? Media server for ‘large group’ communications, transcoding, recording and advanced media processing — that’s what kind. All of which will be integrated into Twilio Programmable Video.
“These new capabilities will enable developers to address the more advanced needs of enterprise and large-scale consumer video applications as well as next-generation video applications such as those involved in augmented reality, computer vision, robotics, and the Internet of Things,” said the firm, in a press statement.
The firm claims that to date, the adoption of video communication has been largely limited to conferencing systems and face-to-face applications for consumers. This is because advanced uses of video that require real-time media processing have been out of reach for mobile and web developers.
While the popular WebRTC standard equips developers with client-side technology for adding video, the requisite media server infrastructure is expensive and requires specific technical expertise to implement. The addition of advanced WebRTC media server technology to the Twilio Video platform aims to change this by enabling API access to real-time media processing.
Developers plugging in here are promised to get the ability to analyse, transform, augment, and store audio and video streams to power video applications.
Luis Lopez, CEO and co-founder of the Kurento project, argues that Twilio has one of the best sets of APIs and that joining forces enables his team to complete its vision and bring its work to Twilio’s million-plus registered developer accounts.
“Twilio and the team behind Kurento share a common vision of enabling developers through powerful platforms and straight-forward APIs,” said Jeff Lawson, Twilio CEO and co-founder. “As Twilio takes another step on our mission to fuel the future of communications by enabling developers, we’re excited to join forces with the builders of Kurento to extend the uses of our video platform.”
Tikal Technologies, S.L., who originally developed Kurento, will maintain the Kurento open source project, and be responsible for managing contributions from the Kurento community.
Cloud native computing as championed, advocated and evangelised by the Cloud Native Computing Foundation (CNCF) itself is an approach that uses an open source software stack to deploy applications as microservices.
Each microservice is packaged into its own container… and those same containers are then ‘dynamically orchestrated’ in order to ‘optimise resource utilisation’ — as one would expect given the controllable, flexible and composable nature of the cloud model itself.
The CNCF works to host critical components of the software stacks in use here including Kubernetes and Prometheus – overall the organisation insists that it exists as a neutral home for collaboration as part of The Linux Foundation.
Central orchestration processing
The term you will now want to get used to in the context of these events is ‘central orchestration processing’ as these composable elements of cloud are now intelligently connected.
PrometheusDay will feature technical talks from major Prometheus adopters – the technology itself is an open source system for monitoring (and providing alerts for) a wide range of enterprise IT events across containers and microservices.
“Cloud native computing can be a very fragmented process, as the architecture departs from traditional enterprise application design,” said Dan Kohn, executive director of the Cloud Native Computing Foundation. “Our new flagship event CloudNativeCon will be able to build on the great following and momentum KubeCon has already established. The event will squarely focus on helping developers better understand how to easily and quickly assemble these moving parts by driving alignment among different cloud native technologies and platforms.”
Companies showing their support for cloud native computing include Apprenda, Cisco, CoreOS, Google, IBM, Intel and Red Hat.
Content delivery firm Varnish Software has announced its Varnish Plus Cloud product — essentially, a full version of the Varnish Plus software suite that can be accessed via the AWS (Amazon Web Services) Marketplace.
The software itself has been developed specifically for SMBs that wish to forgo the hardware expense required to deploy Varnish Plus on site.
Built on top of flexible web accelerator (the open source Varnish Cache) Varnish Plus Cloud is supposed to give advanced users access to a special set of modules and expert support.
Cache in hand
According to Clive Longbottom, all-round technology enrichment maestro and founder and service director of the analyst firm Quocirca, the modern usage of cloud platforms of course introduces the concept of ‘elastic resources’, i.e. where extra storage, compute or networking resources can be applied to a workload in real time to meet spikes in that workload.
“However, relying purely on the provision of such resources from the back-end platform can result in major costs being incurred. The use of a data cache can enable traffic spikes to be dealt with without the need for major additional resources being brought to bear, so maintaining the correct customer experience while keeping costs under control,” said Longbottom.
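The caching idea Longbottom describes can be sketched in a few lines: repeated requests for the same content are answered from the cache, so a traffic spike does not become a spike in back-end load. This is an illustrative toy only; Varnish itself is a far more sophisticated HTTP accelerator:

```python
# Minimal sketch of a TTL cache absorbing a traffic spike.
# (Illustrative only -- not how Varnish is implemented.)
import time

backend_hits = 0
cache = {}          # url -> (response, expiry time)
TTL = 60            # entries considered fresh for 60 seconds

def fetch_from_backend(url):
    global backend_hits
    backend_hits += 1                      # the expensive origin request
    return f"<html>content of {url}</html>"

def get(url):
    now = time.time()
    if url in cache and cache[url][1] > now:
        return cache[url][0]               # cache hit: no back-end work
    response = fetch_from_backend(url)
    cache[url] = (response, now + TTL)
    return response

# A "spike" of 10,000 requests for one page costs a single origin fetch.
for _ in range(10_000):
    get("/home")
print(backend_hits)  # -> 1
```

The back-end is touched once; the other 9,999 requests are served from memory, which is exactly the cost-control argument being made.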
Varnish insists that Plus Cloud customers will benefit from faster implementation (there’s no need to work through internal permissions or to wait for hardware); control, via the ability to configure using the flexible VCL (Varnish Configuration Language); and flexibility, with options to spin virtual servers up or down depending upon web traffic trends.
“By making Varnish Plus available via the AWS Marketplace, Varnish Software is extending the reach of its core product to more organisations, in more places at a more flexible price point,” said Per Buer, founder and CTO of Varnish Software.
Via the Varnish Plus Cloud support subscription, organisations receive access to Varnish experts and software developers.
Veritas and Red Hat have announced a collaboration aimed at supporting business critical enterprise applications on OpenStack. Essentially, the work here is focused on providing predictable quality of service to OpenStack applications and workloads… and all of that regardless of scale, obviously (because this is enterprise real world deployment, right?).
Red Hat OpenStack Platform is an Infrastructure-as-a-Service (IaaS) designed to enhance OpenStack with advanced features needed for cloud environments.
It is a co-engineered solution that integrates Red Hat Enterprise Linux with Red Hat’s OpenStack technology. Veritas has selected Red Hat OpenStack Platform to build OpenStack solutions that provide a predictable quality of service with direct-attached storage (DAS) using Veritas storage management technologies.
Through the collaboration, Veritas says it aims to bring the ability to execute data protection tasks through integration with backup software without impacting production operations to Red Hat OpenStack Platform environments.
“Red Hat OpenStack Platform is a leading production-ready OpenStack distribution for enterprises for their private cloud infrastructure. We are working with Red Hat so that organizations can confidently adopt OpenStack for their most demanding enterprise workloads,” added Mike Palmer, senior vice president, solutions for data insight and orchestration, Veritas.
OpenStack has gained traction — the 2016 OpenStack User Survey indicates that production use of OpenStack is at 65%, up from 33% two years ago.
Yet (says Veritas) organisations can still face challenges when it comes to executing their traditional, Mode 1 enterprise workloads on OpenStack due to the high performance and reliability requirements. The company asserts that effective storage management that offers the necessary quality of service is a key part of successfully adopting OpenStack for these enterprise production workloads.
“Red Hat OpenStack Platform is in production across hundreds of customers, spanning multiple verticals. We are delighted to collaborate with Veritas to bring enterprise customers more choice and draw on their long legacy of enterprise storage management, resiliency and data protection to help our customers address the performance and reliability requirements of traditional tier 1 workloads running on Red Hat OpenStack Platform,” commented Radhesh Balakrishnan, general manager for OpenStack, Red Hat.
ESG analyst Scott Sinclair has underlined this story by saying that the quality of service or ‘noisy neighbor’ challenge is one of the key issues enterprises face as they look to deploy workloads on OpenStack. This collaboration may provide some advancement for toughened up enterprise workloads going forward if all factors play out as the companies here intend.
Embattled bygone era search firm Yahoo (exclamation point not included) has open sourced Pulsar, a scalable low latency ‘pub-sub’ messaging system. The technology provides simple pub-sub messaging semantics over topics, guaranteed at-least-once delivery of messages, automatic cursor management for subscribers and cross-datacenter replication.
What is pub-sub messaging?
Pub-sub messaging is a very common design pattern that is increasingly found in distributed systems powering Internet applications. These applications provide real-time services and need publish-latencies of 5 milliseconds (on average) and no more than 15ms at the 99th percentile. At Internet scale, these applications require a messaging system with ordering, strong durability and delivery guarantees. In order to handle the “five 9’s” durability requirements of a production environment, the messages have to be committed on multiple disks or nodes.
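The basic shape of the pattern fits in a short sketch: publishers send messages to a named topic on a broker, and the broker fans them out to every subscriber of that topic, so the two sides never reference each other directly. Real systems such as Pulsar add durability, ordering guarantees, cursor management and cross-datacenter replication on top of this skeleton:

```python
# Minimal in-process sketch of the pub-sub messaging pattern.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of the topic; the publisher never
        # knows who (or how many) the subscribers are.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("weather/updates", received.append)
broker.subscribe("weather/updates", lambda m: print("alert:", m))

broker.publish("weather/updates", "storm incoming")
print(received)  # -> ['storm incoming']
```

Everything hard about a production messaging system lives in what this sketch leaves out: surviving process crashes, guaranteeing at-least-once delivery, and doing it all at millisecond latencies across datacenters.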
Yahoo engineers explain that they could not find any existing open-source messaging solution that could provide the scale, performance and features Yahoo required to provide messaging as a hosted service, supporting a million topics.
“So we set out to build Pulsar as a general messaging solution, that also addresses these specific requirements,” say Joe Francis and Matteo Merli of Yahoo Platforms.
Using Pulsar, one can set up a centrally-managed cluster to provide pub-sub messaging as a service; applications can be onboarded as tenants.
Pulsar is also horizontally scalable; the number of topics, messages processed, throughput and storage capacity can be expanded by adding servers to the pool.
“Pulsar has a robust set of APIs to manage the service, namely, account management activities like provisioning users, allocating capacity, accounting usage, and monitoring the service. Tenants can administer, manage, and monitor their own domains via APIs. Pulsar also provides security via a pluggable authentication scheme, and access control features that let tenants manage access to their data,” said Francis and Merli.
Pulsar includes a client library that encapsulates the messaging protocol; complex functions like service discovery, as well as connection establishment and recovery, are handled internally by the library.
Stepsize is a UK startup focused on developer tools. The firm is aiming to put a degree of Artificial Intelligence (AI) into DevOps. Stepsize Layer is a desktop application for developers that automatically adds context to code bases. It does this by hooking up tools used to develop software, structuring historical data and attaching this to the piece of code.
Users can select the code, view a timeline and run smart logic to determine the true author. Fully integrated with Slack, developers can contact the author and find the relevant context.
Co-founder & CEO of Stepsize Alex Omeyer spoke to the Computer Weekly Open Source Insider blog to explain how his firm’s technology has evolved.
CW: Where did the genesis for this idea come from?
Omeyer: While taking an online course on deep learning for natural language processing, [we] started wondering about whether we could apply similar techniques to programming languages which have more structure and less ambiguity than English or French.
Being self-taught developers, we started bouncing ideas on how to apply machine learning to the software development process and Stepsize was born.
After playing around with neural networks using publicly available data from Stack Overflow and GitHub, we concluded that the right approach was to gather and generate the right data that would allow us to iteratively streamline and automate various parts of the software development process. With this approach, programming languages would eventually be abstracted away.
So we set out to figure out how to help developers with their daily work while performing this data collection exercise. Speaking to many developers working in teams, we came to the realisation that every day they have to contribute to codebases they’re largely unfamiliar with, but no tool specialises in helping them understand the past – the who, why and when of any given piece of code. We started building Stepsize Layer to contextualise code and allow developers to work on shared codebases perfectly informed.
CW: How does it actually work?
Omeyer: As detailed above, Layer hooks up with all the tools devs use to build software, structuring the historical data contained in these tools and attaching it to the code.
Users can simply select a snippet of code, hit a shortcut and our open source editor plugin will send the code selected to the app, which will then run some logic to surface the information relevant to this code. It uses information contained in Git and other tools to identify the author of a piece of code, and provide a full history of the evolution of that code.
This includes displaying all the commits, pull requests and associated user stories / issues on a timeline. We plan on adding information from continuous integration, code review and many other tools, as well as analyse test coverage and link it back to the code. Our aim is for all the information relevant to a piece of code to be directly tied to it so developers never have to dig through a bunch of tools to understand it.
Layer also integrates with Slack, a tool that many developers use to communicate and discuss all aspects of software development. Users can send a Slack message to the author of the piece of code they’re investigating directly from the app. Layer uses the Slack API to push a message in the selected channel and embed all the context necessary for dev teams to have a productive conversation about the code.
CW: How will this change the way developers work?
Omeyer: By providing developers with all the context they need to understand a piece of code at all times, Layer allows devs to make better judgement calls, make fewer mistakes, collaborate with their colleagues and overall be more efficient.
They no longer have to solely rely on static software documentation, disturb a colleague (who might no longer work at the business or work in a different timezone), or spend months getting familiar with the system to understand the codebase. Instead of checking the interfaces of the numerous tools they use before contributing to the code – a task so daunting that devs would rather take a stab at the job without the necessary information and risk introducing technical debt or breaking something – devs can open Layer without leaving their workflow to shine the light on the code.
Layer allows us to collect rich metadata describing code – the intent behind the code in the form of a ticket and commit message, the corresponding design mockup, how the code is tested to ensure it works as expected, how it was reviewed by peers, technical issues in production and how they were subsequently fixed, etc. As we build out this dataset of contextualised code, we’ll be able to develop and train machine learning algorithms that leverage this data to make devs’ and their businesses’ lives easier.
Think automatically scoping out features, assisting with sprint planning, assigning tasks in the most efficient way, estimating delivery dates, templating code for common logic and features, truly automatic and accurate code review and estimates of risk for pull requests and much more. We’ll keep pushing this concept further until we’ve simplified the development process to such an extent that more people are able to build software without having to go through years of trial and error.
CW: What’s the feedback from software engineers so far?
Omeyer: Early versions of the tool reminded many of our users of a ‘souped-up Git blame’ (a Git command that allows devs to identify the Git username of the last person to modify a line of code so that they can then contact him or her with any questions). Unlike git blame, Layer runs some smart logic to identify the person truly responsible for several lines of code and surface the rich history of this piece of code. This in itself was enough to get some devs to use the app every day at work.
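For readers unfamiliar with the underlying command: `git blame --line-porcelain` emits per-line authorship metadata that tools like Layer can build on. A rough sketch of parsing that format follows; the sample output is fabricated for illustration (real output comes from running the command against a repository, and the full format records line numbers in each group header, which this simplified parser ignores):

```python
# Sketch of mapping each line of a file to its last author by parsing
# `git blame --line-porcelain` output. The sample below is a fabricated
# two-line excerpt, not real repository data.
sample = """\
a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2 1 1 1
author Alice Example
author-mail <alice@example.com>
summary Add greeting
\tprint("hello")
f6e5d4c3b2a1f6e5d4c3b2a1f6e5d4c3b2a1f6e5 2 2 1
author Bob Example
author-mail <bob@example.com>
summary Tweak greeting
\tprint("world")
"""

def authors_by_line(porcelain: str):
    authors, current = {}, None
    line_no = 0
    for line in porcelain.splitlines():
        if line.startswith("author "):       # "author-mail" won't match
            current = line[len("author "):]
        elif line.startswith("\t"):          # the source line itself
            line_no += 1
            authors[line_no] = current
    return authors

print(authors_by_line(sample))  # -> {1: 'Alice Example', 2: 'Bob Example'}
```

Layer's "smart logic" presumably goes well beyond this, attributing whole regions of code rather than single lines, but the raw material is the same.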
Our users engage with us on a regular basis to request more integrations with other dev tools that contain info relevant to their code, and smarter features (e.g. adding “sticky notes” to their code, matching a piece of code with the relevant tests, automatically matching a ticket with a commit etc.).
CW: How will AI change software development?
Omeyer: With the current rate of technological change and the sheer size of some of the software projects out there, we can’t expect developers to process all the information necessary to make optimal decisions during the development process. In the medium term, AI will reduce the cognitive load on developers by making the information they need available when they need it (think Google Now for devs), as well as assisting them with their daily tasks. They will become much more efficient and effective.
This in itself would already be a huge boost to innovation, but we think AI can do even more for software development. Stepsize is working towards a future where software development is available as a service thanks to AI. Literal software as a service. Code will be abstracted away completely and anyone will be able to collaborate with an intelligent agent to bring their ideas to life.
This will have an unprecedented impact, not only on software development, but on the world. Think about what roughly 20 million developers have been able to accomplish in such a short period of time. They represent less than a percent of the global population, and yet, they are central to modern day human progress. Imagine the impact on the world if it contained as many people capable of developing software as there are people capable of reading and writing today.
You can visit Stepsize here.
Facebook has open sourced its Zstandard compression algorithm. The technology itself is said to outperform ‘zlib’, which has previously been considered to be the reigning standard in this field.
What is a compression algorithm?
A compression algorithm works to reduce the size of data being handled — lossless and lossy compression are terms that describe whether or not, in the compression of a file, all original data can be recovered when the file is uncompressed.
In terms of usage, lossless compression would suit (for example) text or spreadsheet files and lossy compression is better suited for (for example) video and sound, where a certain amount of information loss will not be detected by most users.
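A lossless round trip is easy to demonstrate with Python's bundled zlib, the incumbent that Zstandard is benchmarked against: the original bytes come back exactly after compression and decompression.

```python
# Lossless compression in action with Python's standard zlib module:
# the original data is recovered exactly after a round trip.
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 100

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))   # compressed is much smaller
print(restored == original)             # -> True: nothing was lost
```

A lossy codec such as JPEG or MP3, by contrast, deliberately discards detail to compress further, which is why the lossless/lossy split tracks the text-versus-media split described above.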
According to facebook.github.io/zstd/, “Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression / speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression and can create dictionaries from any sample set. Zstandard library is provided as open source software using a BSD license.”
In this case, Zstandard compression is a lossless compression technology.
Zstandard in action
Zstandard is engineered to be as ‘branchless’ as is physically and mathematically possible. In doing so, it reduces the number of ‘pipeline flushes’ that can occur (during decompression) as a result of incorrect branch predictions.
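Branchlessness really only matters at the machine-instruction level, where a mispredicted branch forces the CPU to discard in-flight work, but the idea behind the transformation can be illustrated with a toy example: replace data-dependent if-statements with expressions that execute the same way regardless of the input.

```python
# Toy illustration of eliminating branches. Real branchless code is
# written at the machine level (Python itself gains nothing here); the
# point is only the shape of the transformation.

def clamp_branchy(x, lo, hi):
    # Two data-dependent branches: a CPU must predict each one, and a
    # misprediction flushes the pipeline.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def clamp_branchless(x, lo, hi):
    # Same result with no if-statements: control flow no longer depends
    # on the value of x.
    return min(max(x, lo), hi)

for x in (-5, 3, 99):
    assert clamp_branchy(x, 0, 10) == clamp_branchless(x, 0, 10)
print("results agree")
```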