Once-loved, now (arguably) oft-maligned former darling of search Yahoo! (yes, we even left the exclamation point in to be nice) has open sourced Daytona, an application-agnostic framework for automated performance testing and analysis.
Yahoo! software engineers Sapan Panigrahi and Deepesh Mittal explain that the automation, intelligence and control aspects of Daytona that give it clout include:
- Repeatable test execution
- Standardised reporting
- Built-in profiling support for integrated application performance testing
Performance metrics are aggregated and then presented in a unified user interface.
What differentiates this product?
Its differentiation lies in its ability to aggregate and present application, system and hardware performance metrics in a single comprehensive interface.
Developers, architects and systems engineers can use the framework in an on-premises environment or any public cloud to test:
- Defined services
- Whole applications
- Application components
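As a sketch of what 'repeatable test execution' with aggregated metrics might look like in practice, here is a minimal, hypothetical harness in Python. This is not Daytona's actual API; the function and report fields are invented for illustration.

```python
import statistics
import time

def run_perf_test(target, runs=3, requests_per_run=100):
    """Execute `target` repeatedly and aggregate latency metrics into a
    single report, in the spirit of repeatable runs plus standardised output."""
    latencies_ms = []
    for _ in range(runs):
        for _ in range(requests_per_run):
            start = time.perf_counter()
            target()
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
    latencies_ms.sort()
    return {
        "samples": len(latencies_ms),
        "mean_ms": statistics.mean(latencies_ms),
        "p95_ms": latencies_ms[int(0.95 * (len(latencies_ms) - 1))],
        "max_ms": latencies_ms[-1],
    }

# Benchmark a trivial in-process stand-in for a service call
report = run_perf_test(lambda: sum(range(1000)))
print(report["samples"])  # 300
```

The point of the fixed run/request counts is repeatability: two runs of the same harness produce directly comparable reports, which is the property Daytona's unified interface builds on.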
“At Yahoo, Daytona has helped us make applications more robust under load, reduce the latency to serve end-user requests, and reduce capital expenditure on large-scale infrastructure,” detail Panigrahi & Mittal.
Panigrahi & Mittal explain that, prior to Daytona, teams created multiple heterogeneous performance tools to meet the specific needs of various applications.
“This meant that we often stored test results inconsistently, making it harder to analyse performance in a comprehensive manner. We had a difficult time sharing results and analysing differences in test runs in a standard manner, which could lead to confusion,” note the pair.
With Daytona, Yahoo! is now able to integrate all its load testing tools under a single framework and aggregate test results in one common central repository.
“We are gaining insight into the performance characteristics of many of our applications on a continuous basis. These insights help us optimise our applications which results in better utilisation of our hardware resources and helps improve user experience by reducing the latency to serve end-user requests,” write Panigrahi & Mittal.
“Ultimately, Daytona helps us reduce capital expenditure on our large-scale infrastructure and makes our applications more robust under load. Sharing performance results in a common format encourages the use of common optimisation techniques that can be used across many different applications,” the pair add.
Open source Continuous Automation firm Chef has used its ChefConf 2017 event to announce new capabilities focused on the transition process to cloud-native and container-first environments with consistent automation and DevOps practices.
With automation's benefits for developers now a weekly discussion point, Chef points to the need for ‘consistent automation’ across hybrid infrastructure and application portfolios.
Chef Automate, the company’s Continuous Automation Platform, is being extended with capabilities for:
- Compliance Automation – Chef Automate now integrates directly with InSpec to provide workflows and practices for validating security requirements and compliance controls. Defining compliance as code enables security requirements to ‘shift left’ into DevOps processes.
- Application Automation – Chef Automate’s integration with Habitat (see below) will extend to enterprises the application supervisor capabilities required for deploying and managing everything from legacy monoliths to container-based, cloud-native microservices.
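The ‘defining compliance as code’ idea can be sketched generically. Note that InSpec itself expresses controls in a Ruby-based DSL; the Python below, with invented control names and checks, merely illustrates the pattern of security requirements written as executable tests.

```python
# Hypothetical sketch of 'compliance as code': security requirements
# expressed as executable checks. The control names and config keys here
# are illustrative, not real InSpec controls.

def check_no_root_login(config):
    return config.get("PermitRootLogin") == "no"

def check_password_auth_disabled(config):
    return config.get("PasswordAuthentication") == "no"

CONTROLS = {
    "sshd-01: root login disabled": check_no_root_login,
    "sshd-02: password auth disabled": check_password_auth_disabled,
}

def run_controls(config):
    """Evaluate every control against a parsed config, returning pass/fail."""
    return {name: check(config) for name, check in CONTROLS.items()}

sshd_config = {"PermitRootLogin": "no", "PasswordAuthentication": "yes"}
results = run_controls(sshd_config)
print(results["sshd-01: root login disabled"])    # True
print(results["sshd-02: password auth disabled"])  # False
```

Because the controls are just code, they can run in a CI pipeline on every change, which is what ‘shifting security left’ into DevOps processes amounts to.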
“It’s natural that we move to an application-centric operating model. When we introduced Habitat a year ago, we started on the path of delivering that model to everyone. With the initial release of our build service, we’re showing how close we are to a world where a new security vulnerability triggers a fully automated response: we can rebuild your applications, we can patch your infrastructure, and we can validate that those changes can be safely and securely applied in production,” said Adam Jacob, co-founder and CTO, Chef.
Both of the product areas above have just been updated.
Looking at Habitat specifically, this technology now enjoys a new Builder service for packaging, managing and running apps.
No self-respecting data management firm operates today without a healthy dose of machine learning at the heart of its technology stack. Data search, logging, security and analytics shop Elastic clearly resonates with this new de facto reality as it now adds machine learning into its core arsenal of capabilities.
Into the Elastic 5.4 release then… as a result of the recent acquisition of data anomaly detection business Prelert, Elastic’s machine learning features will work on any time series data set, automatically applying some machine brain intelligence.
Which functions make use of machine learning?
That’s an easy question to answer, i.e. functions such as:
- identifying anomalies
- streamlining root cause analysis
- reducing false positives within real-time apps
The concept behind this technology is that it should be used when trying to spot infrastructure problems, cyber attacks or business issues in real time.
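To make the anomaly-spotting idea concrete, here is a deliberately crude sketch in Python: flag any point that strays too far from a trailing window's mean. This is not Elastic's (Prelert-derived) algorithm, which builds statistical models unsupervised; it just illustrates the class of problem.

```python
import statistics

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points deviating from a trailing window's mean by more than
    `threshold` standard deviations. Real data would need a noise floor;
    this is only a crude stand-in for proper time-series modelling."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent)
        deviation = abs(series[i] - mean)
        if deviation > threshold * stdev and deviation > 0:
            flagged.append(i)
    return flagged

# A flat request-rate series with a single spike at index 30
data = [100.0] * 30 + [500.0, 100.0]
print(detect_anomalies(data))  # [30]
```

The hard part, per Elastic's pitch below, is doing this well: hand-tuned thresholds like the one above are exactly the brittle, false-positive-prone rules the unsupervised approach is meant to replace.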
“Our vision is to take complexity out and make it simple for our users to deploy machine learning within the Elastic Stack for use cases like logging, security and metrics,” said Shay Banon, Elastic Founder and CEO. “I’m excited that our new unsupervised machine learning capabilities will give our users an out-of-the-box experience, at scale to find anomalies in their time series data — and in a way that is a natural extension of search and analytics.”
The Elastic Stack is used by developers for collecting, enriching and analysing log files, security data, metrics and text documents.
Why machine learning is tough
The firm says that machine learning is tough to bring online. Why is this?
Because the biggest challenge lies in developing real-time operational systems for existing workstreams and use cases.
“Scarce and expensive data science skills are needed to figure out the correct statistical models for different, diverse data sets, and hand-crafted rules are brittle and often generate many false positives,” says Elastic.
Elastic’s new machine learning capabilities use a familiar Kibana UI. The software installs into Elasticsearch and Kibana with a single command as part of X-Pack.
If you want 99.999 per cent system availability, uptime and performance, then you probably want ‘several nines’ in your database delivery stats gauge.
In the quest for five nines and beyond then, open source database management software company Severalnines has said that it has now integrated the open source database load balancing technology ProxySQL with its own database management system, ClusterControl.
What does ProxySQL do?
ProxySQL enables MySQL and MariaDB database systems to manage intense high-traffic database applications without losing availability.
ClusterControl is claimed to be the first and only database management system to automate the deployment and management of this new technology.
High-traffic database applications field an enormous number of queries daily, obviously. Keeping them working gives DBAs and sysadmins headaches as they try to scale automatically to handle all those connections.
Improper load balancing leads to downtime, an effect estimated to cost companies an average of $300K per hour according to magical analyst research organisation Gartner.
Changing database topology
According to Severalnines, load balancers are an essential component in database high availability; especially when attempting to make database topology changes transparent to applications and implementing read-write split functionality. This is especially true with high-traffic websites where potentially hundreds of thousands of concurrent connections are constantly trying to access your data.
“ProxySQL has an advanced multi-core architecture to handle that large number of connections, multiplexed to potentially hundreds of backend servers squeezing every drop of performance out of your database cluster; all with zero downtime,” said the firm, in a press statement.
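To make the read-write split concrete, here is a heavily simplified, hypothetical sketch of the routing decision a proxy makes. Real ProxySQL uses configurable query rules, hostgroups and connection multiplexing, not this toy logic; the backend names are invented.

```python
# Hypothetical sketch of a read-write split: writes go to the primary,
# reads are spread round-robin across replicas.
import itertools

class QueryRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin reads

    def route(self, sql):
        """Return the backend a statement should be sent to."""
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("SELECT", "SHOW"):
            return next(self._replicas)
        return self.primary  # INSERT/UPDATE/DELETE/DDL hit the primary

router = QueryRouter("db-primary:3306",
                     ["db-replica-1:3306", "db-replica-2:3306"])
print(router.route("SELECT * FROM orders"))  # db-replica-1:3306
print(router.route("UPDATE orders SET paid = 1"))  # db-primary:3306
print(router.route("SELECT 1"))  # db-replica-2:3306
```

The payoff of doing this in a proxy layer is that the application never needs to know the topology changed, which is exactly the transparency Severalnines describes.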
But a highly available load balancer designed for high traffic isn’t the be-all and end-all of delivering high availability.
You also need a way to deploy and manage the database cluster, handle upgrades, run backups, ensure failover and recovery, and scale the data nodes to accommodate ever-growing data.
Vinay Joosery, Severalnines CEO, says that for this you will need ClusterControl, an all-inclusive open source database management system that removes the need for multiple management tools.
ClusterControl now offers ProxySQL in its commercial version.
The Linux Foundation has used its news channels to unveil the OpenChain Specification 1.1 and an accompanying Online Self-Certification service.
The technology is positioned as a means for organisations to ensure consistent compliance management processes in what is being called the open source software supply chain.
Fantastic (first) four
The OpenChain Project has welcomed Siemens, Qualcomm, Pelagicore and Wind River as the first four organisations to self-certify to the OpenChain Specification 1.1.
According to the Linux Foundation, the OpenChain Project is a community effort to establish best practices for effective management of open source software compliance.
The project aims to help reduce costs, avoid duplication of effort and ease friction points in the software supply chain.
The OpenChain Project aims to build trust in open source by making things simpler, more efficient and more consistent.
“The OpenChain Project is about open source compliance across the many entities in the modern IT supply chain,” said Kate Stewart, senior director of strategic programs, The Linux Foundation.
Trusted package, to trusted chain
Stewart explains that the long-established SPDX Project addresses the question of ‘how do you trust the contents of a software package?’
But now… the OpenChain Project addresses the question of ‘how do you trust companies in a supply chain?’
“Organisations can only build trust in other entities when they have the opportunity to demonstrate the way they are handling open source software meets the criteria of a good compliance process,” said Dr. Miriam Ballhausen, OpenChain Conformance Work Team Lead.
The latest version of the specification represents the work of more than a hundred contributors.
Social media management firm Hootsuite has hooted and tooted this month about its eponymously named Hootsuite Integration Fund.
Lots of loot
The fund is a US$5 million initiative established to support software application development professionals (and rookies) who are specifically targeting what has been called ‘enterprise-strength integrations with and for’ the Hootsuite platform.
The new fund comes at the same time as a new partner portal.
Did someone say digital transformation, again?
Hootsuite’s adoption is due in large part to the contributions of developers to its ecosystem, claims Matt Switzer, senior vice president of strategy and corporate development at Hootsuite.
“We’re investing in this integration fund to encourage [developers] to continue to develop applications that enable our customers to connect social to marketing, analytics and other business solutions they rely on every day,” said Switzer.
Enormously engorged ecosystem
Hootsuite claims to offer the “largest ecosystem of any social media management platform”.
This ecosystem is made up of hundreds of applications in the App Directory and more than 2.5 million app installs.
Developers can learn more about the Integration Fund and apply through the developer application process available at www.hootsuite.com/developers/fund.
The small print
The decision for the use of funds towards a particular integration will be based on customer needs and alignment with the Hootsuite platform strategy.
Funded applications will also receive go-to-market support and opportunities to participate in co-marketing initiatives.
In addition to the Integration Fund, Hootsuite has launched a new and expansive developer portal to provide full access to Hootsuite’s SDKs and APIs, a developer blog and technical support.
With its openly stated operational remit of ‘aggressive acquisitions’ (albeit positively aggressive), Oracle is (very) arguably a firm known for buying, swallowing, acquiring those companies it decides to consume.
With Java still very much in existence under Oracle’s stewardship, not every report detailing the state of the language and platform suggests happy days among the developer community now being served from the Ellison mothership.
Sun may no longer be there for Java, but various historical touchpoints to the firm’s years can still be seen, one of which is OmniOS, an Oracle-free open source variant of Solaris.
However, as reported by Gavin Clarke on The Register, the suggestion is now that OmniOS will be killed off (or at least, active development will be stopped) after five years of work.
The OS itself is in fact a distribution of Illumos, which is derived from the OpenSolaris open source operating system.
Quite a bit still alive
As detailed here, “OmniTI chief executive Robert Treat promised his firm would still run ‘quite a bit’ of the OmniOS project’s infrastructure and that some staff may continue to contribute.”
Further details have yet to surface on the OmniOS site itself.
Stack Overflow’s latest analysis features a ranking of what are estimated to be the UK and Ireland’s fastest-growing technologies of Q1 2017.
It’s worth noting that Stack Overflow enjoys 50 million monthly visitors and is widely regarded as (arguably) a serious and trusted resource by many programmers due to the peer-sharing nature of the content exchanged on the site itself.
All very well, and as boring, inconsequential and intangible as any other tech vendor survey? Well… perhaps not intangible: these rankings actually come out of the developer platform’s ability to measure the number of views, questions, answers and ‘upvotes’ on queries and their associated language tags to generate this ranking.
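The mechanics of deriving a ranking from per-tag engagement counts could be sketched roughly as below. The weighting and figures are invented for illustration; Stack Overflow has not published its exact methodology, and a growth ranking would also compare counts across time periods.

```python
# Illustrative sketch (invented numbers and weights) of ranking tags
# by aggregated engagement such as views and upvotes.
from collections import Counter

# (tag, views, upvotes) tuples -- figures are made up for illustration
activity = [
    ("python", 1200, 90),
    ("javascript", 1500, 70),
    ("go", 400, 55),
]

def rank_tags(rows, view_weight=1, upvote_weight=10):
    scores = Counter()
    for tag, views, upvotes in rows:
        scores[tag] = views * view_weight + upvotes * upvote_weight
    return [tag for tag, _ in scores.most_common()]

print(rank_tags(activity))  # ['javascript', 'python', 'go']
```

Weighting upvotes more heavily than raw views is one plausible way to let peer endorsement, rather than mere traffic, drive the ordering.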
For developers, ‘upvotes’ matter
Other technologies enjoying key growth stats include C#, HTML, PHP, jQuery and Android.
According to the firm, “In most cases, the popularity of these languages closely mirrors the most in-demand skills from employers in the UK and Ireland.”
However, the report exposes a potential issue with supply and demand, as AWS (Amazon Web Services) has become the fifth most sought-after technology by employers in the region.
Kevin Troy, director of insights at Stack Overflow has said that throughout 2017, Stack Overflow will release four new reports that take a close look at the developer ecosystem in the UK and Ireland.
The reports are created using a combination of proprietary data (captured from Stack Overflow’s machine learning platform, that tracks IP address and user behaviour over time) and qualitative data from Stack Overflow’s annual user survey.
Find the Developer Ecosystem Report: Tech Hiring Edition here.
The GitHub Developer Program (programme, if we’re using Her Majesty’s English) has been around for some three years now.
Essentially, this initiative exists to encourage developers to test out application builds that integrate with GitHub.
As many readers will know, GitHub is a web-based version control repository and Internet hosting service… it is also a software development platform in the context of its wider use.
Now, open to all
GitHub now says it is opening the programme up to all developers, even those who don’t have paid GitHub accounts.
According to the team, “We’re also introducing participation levels that come with existing program perks from us and our partners, like development licenses for GitHub Enterprise and a new category of benefits that help you build and scale even faster. [A total of] 17,000 developers around the world are already aboard—if you’re kicking around ideas for applications that integrate with GitHub, now’s the time to get started.”
How it works
Depending on the size of a developer’s user base, they will be placed into one of three levels.
Each group level gets a specific set of benefits, resources and tools available to help advance to the next stage of development.
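The placement logic amounts to bucketing by user-base size. GitHub has not published the exact thresholds, so the cut-offs and level names below are entirely invented, just to show the shape of the scheme.

```python
# Hypothetical tier placement by user-base size; the thresholds here
# are invented for illustration, not GitHub's actual cut-offs.
def program_level(user_count):
    if user_count < 100:
        return "level 1"
    if user_count < 10_000:
        return "level 2"
    return "level 3"

print(program_level(50))      # level 1
print(program_level(25_000))  # level 3
```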
Microsoft says it is now scheduling roughly monthly releases of ReactXP, to run approximately in line with React Native releases.
Thin & lightweight
ReactXP itself is best described as a thin and lightweight cross-platform abstraction layer built on top of React and React Native.
Question: what does thin and lightweight mean in this sense?
Answer: The core software components and APIs are limited to the functionality required for almost all applications.
As detailed by mspoweruser, “[ReactXP] implements foundational components that can be used to build more complex components. It also implements a collection of API namespaces that are required by most applications. It supports the following platforms: web (React JS), iOS (React Native), Android (React Native) and Windows UWP (React Native).”
React on React
ReactXP is designed with cross-platform development in mind. In general, it exposes APIs, components, props, styles and animation parameters that are implemented in a consistent way across React JS (HTML) and React Native for iOS and Android.
According to the ReactXP developer portal (Microsoft GitHub), the authors of React use the phrase ‘learn once, write anywhere’.
“With React and React Native, your web app can share most of its logic with your iOS and Android apps, but the view layer needs to be implemented separately for each platform. We have taken this a step further and developed a thin cross-platform layer we call ReactXP,” says the team.
If developers write an app to this abstraction, they can share view definitions, styles and animations across multiple target platforms — and, still provide platform-specific UI variants selectively where desired.
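The layering idea is language-agnostic, so it can be sketched outside the React world too. ReactXP itself is TypeScript/JavaScript; the Python below, with invented names, only illustrates the pattern of shared logic written against one abstraction, with the platform-specific view layer swapped in underneath.

```python
# Language-agnostic sketch of a thin cross-platform view abstraction:
# shared app logic on top, per-platform rendering backends underneath.
# Names and output formats are invented for illustration.

class View:
    """Platform-agnostic component, analogous to writing against ReactXP."""
    def __init__(self, renderer):
        self._renderer = renderer  # platform-specific implementation

    def render(self, text):
        return self._renderer(text)

# Per-platform 'backends' -- stand-ins for React JS / React Native targets
def web_renderer(text):
    return f"<div>{text}</div>"

def native_renderer(text):
    return f"Text('{text}')"

# The shared logic is written once against the abstraction...
def greeting_view(renderer):
    return View(renderer).render("Hello")

# ...and targeted at each platform by swapping the renderer.
print(greeting_view(web_renderer))     # <div>Hello</div>
print(greeting_view(native_renderer))  # Text('Hello')
```

The platform-specific variants ReactXP allows correspond to supplying a different renderer for one view while everything else stays shared.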