Oracle is committed to Java and has a very well defined proposal for the next version of the Java EE specification.
This was the official confirmation from Oracle spokesperson Mike Moeller in his role as veep for marketing communications and global PR.
Moeller made his proclamation in a statement printed by ArsTechnica and said in full, “Oracle is committed to Java and has a very well defined proposal for the next version of the Java EE specification — Java EE 8 — that will support developers as they seek to build new applications that are designed using microservices on large-scale distributed computing and container-based environments on the cloud. Oracle is working closely with key partners in the Java community to finalise the proposal and will share the full details with the broader Java community at JavaOne in September.”
This news comes at a tough time for people trying to ‘still believe’ in Oracle’s plans for the Java language and platform.
Sidelining the JCP?
Rumours of layoffs, a sidelining of the JCP (Java Community Process), lack of investment and a general attitude of abandonment have been circulating for months.
The last time we focused heavily on any Java news was during the launch of Java 8 in March 2014.
All this being said, Oracle has publicly said before now that the company is slimming down Java EE (Enterprise Edition)… but, at the same time, it doesn’t want anyone else to work on Java or Java EE.
Java still enjoys a wide level of penetration and support… don’t forget that Apache Hadoop is a Java framework, it has played huge roles in Twitter development and Minecraft, plus it has massive potential for the Internet of Things.
Say a little Java prayer, please.
Black Duck exists to make open source code usage safer. The firm’s software is built to help deal with the fact that many firms today use a mix of custom and open source code. It detects, prioritises and fixes known open source vulnerabilities.
HP has now integrated Black Duck’s core ‘hub’ solution into its HPE Security Fortify Software Security Center (SSC).
“Use of open source has increased dramatically in the last five years — it can comprise 50 percent or more of a large organisation’s code base,” said Lou Shipley, Black Duck CEO.
In response, HPE security veep Jason Schmitt presented an appropriately banal corporate platitude without detailing real application vulnerabilities and open source implementation concerns.
What the software actually does
Actual features in the Black Duck Hub and HPE Security Fortify integration include a deep discovery function for rapid scanning and identification of open source libraries, versions, licences and community activity.
This intelligence is powered by the Black Duck KnowledgeBase, an open source database with information on more than 1.5 million open source projects and 76,000+ known open source vulnerabilities.
Black Duck also brings forward tools to create an inventory of all open source in use and map it to known security vulnerabilities, plus open source vulnerability remediation prioritisation, mitigation guidance and automated policy management.
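The inventory-and-map workflow described above boils down to matching (component, version) pairs against a database of known vulnerabilities and ranking the hits by severity. A minimal sketch of that idea follows; the component names, versions and CVE identifiers are made up for illustration and are not real Black Duck Hub data or API calls.

```python
# Illustrative sketch of vulnerability matching: map an inventory of
# open source components to known vulnerabilities and rank by severity.
# All names, versions and CVE identifiers below are hypothetical.

KNOWN_VULNERABILITIES = {
    ("examplelib", "1.2.0"): [("CVE-0000-0001", 9.8)],
    ("otherlib", "2.4.1"): [("CVE-0000-0002", 5.3)],
}

def prioritise(inventory):
    """Return (component, CVE id, severity) tuples, highest severity first."""
    findings = []
    for name, version in inventory:
        for cve, score in KNOWN_VULNERABILITIES.get((name, version), []):
            findings.append((name, cve, score))
    return sorted(findings, key=lambda f: f[2], reverse=True)

inventory = [("examplelib", "1.2.0"), ("otherlib", "2.4.1"), ("safelib", "3.0")]
report = prioritise(inventory)
print(report[0])  # the highest-severity finding comes first
```

In a real product the lookup would hit a curated database (such as the Black Duck KnowledgeBase) rather than a hard-coded dictionary, but the prioritisation step is the same shape.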
Mirantis OpenStack 9.0 arrives this week as a newly updated set of software designed to simplify lifecycle management of OpenStack.
The release is based on the OpenStack Mitaka release, and Mirantis co-founder Boris Renski explains that the current ‘improvements’ brought forward are based on real-world production deployments of his firm’s technology in firms such as AT&T and Volkswagen.
The changes here are mainly in the area of so-called ‘post-deployment operations’.
OpenStack Mitaka is the 13th OpenStack release. It was built by a community of 2,336 developers, operators and users from 345 organisations.
How cloud operators change clouds
Cloud operators can use Fuel to scale the cloud up or down, selectively make changes to their configuration and deploy new functionality to an existing cloud, such as Murano, a self-service application orchestration and catalogue.
Additionally, operators of large-scale infrastructure can now export Fuel configuration values into 3rd party configuration management tools.
Mirantis had 327 Mitaka committers (ranked No. 1), 87 core contributors (ranked No. 1), resolved 3,700+ bugs (ranked No. 1), contributed 1.37 million lines of code (ranked No. 1) and conducted 52,000+ reviews (ranked No. 1).
This is a special guest piece written for the Computer Weekly Open Source Inside blog by Romanian technology journalist Andrada Fiscutean. As part of various roles in her homeland, Fiscutean works as news editor for PRO FM radio in Bucharest and covers Southeast Europe for a selection of other media channels. She has 17 years of experience working as a radio journalist and eight as a print & online reporter/editor covering science and technology.
Bill Gates in Romania
When Bill Gates came to Romania, in 2007, the country’s president at that time told him that illegal copies of Windows had helped the nation become a software powerhouse. “Piracy (…) set off the development of the tech industry,” former Romanian president Traian Basescu told Bill Gates. The Microsoft chairman was speechless.
Back then, 70 percent of the country’s computers had at least one piece of illegal software. Now, the piracy rate is estimated at 60 percent, one of the highest in the European Union. The commercial value of illegal software accounted for €145m ($161m) last year. Pirated copies are used not only at home, but in the business and public sectors as well.
In the past two decades, the Romanian open source community has had quite a few initiatives to promote Linux throughout companies and the administration, open source enthusiast Petru Ratiu has said openly. “Inertia is so strong, so things are moving slowly,” he said.
Linux powers 3.66 percent of the desktop computers in Romania, according to StatCounter’s June 2016 data. Worldwide, this OS only has a 1.46 percent market share.
Romania, the second-poorest country in the European Union, could benefit even more from an operating system that’s free to use and reliable. The government could choose to adopt such software, instead of paying for it, said Ovidiu-Florin Bogdan, Kubuntu council member.
Free as in speech, not as in Romanian beer
Open source enthusiast Petru Ratiu stressed that although Linux might be cost-effective, it’s not completely free, as it implies payments like the ones associated with support and training. As for the administration, he emphasised the need for open data and open formats.
Both the public and the business sectors could capitalise on the open source talent pool Romania has. “There’s something outstanding published every week,” Ovidiu-Florin Bogdan, Kubuntu council member, said.
Romanian techies have been involved in diverse projects over the years, ranging from an upgradable and removable girlfriend written in Ruby, to more hardcore Linux stuff such as GNU interactive tools, Mozilla’s console.mihai();, a PvPGN Server, and BBStatus, a traffic monitoring solution for network equipment using ICMP or SNMP.
There are also local distros, such as TFM Linux, a custom solution that powers the broadcasts of PRO FM, one of the main radio networks in the country. The same distro was used, until two years ago, by top TV network PRO TV for video streaming.
A diffused community
The Linux enthusiasts are, however, a diffused community, and most of their contributions to open source remain unknown. Some of the work is posted on GitHub, and has been gathered by Ionica Bizau, the Romanian with the highest rank on the repository hosting service.
“We’re the most anarchical community,” Petru Ratiu said. “Nobody wants to assume leadership and be the voice. The broad picture brings us closer, while the details pull us apart.”
The Linux guys meet at events such as Make Open Source Software or the annual Perl Conference YAPC::Europe, which will take place in Cluj-Napoca this summer. Next in line is, however, an informal weekend in Bran, where Dracula’s castle is located, scheduled between the 15th and 17th of July. There, they’ll grab a pint and talk about how Linux can make a stronger statement in Romania.
Almost all the get-togethers are spiced up with stories from the old Linux days: how they installed Slackware from floppy disks, or how pirates drilled holes in CDs and wrote code to instruct the optical unit to avoid those sectors, in order to hinder others from further distributing the illegal software they sold.
During those times, Romania’s software community was even more anarchical than today. But that helped, they argue. “In the late 1990s, self-taught enthusiasts created LANs across neighbourhoods based on Linux servers, and pushed forward the development of the Internet infrastructure in Romania, one of the best in Europe,” Petru Ratiu said.
Back then, every techie wanted to try their hand at Red Hat or Slackware and later Ubuntu. “It was incredibly easy to find people hooked on Linux. Every apartment building had a sys admin,” Petru Ratiu said.
Romanian open source adoption, implementation and enterprise-level deployment is clearly a work in progress… but at least the work has started.
Open source software penetration is making great headway in governmental circles both here in the United Kingdom and throughout the rest of Europe… and the world.
So much is this the case in some territories that official acts are being passed. This month sees the Bulgarian Parliament pass amendments to its Electronic Governance Act stating that all software should be developed in a public repository under an open source license.
Article 58a of the Act states the following:
- where the subject of the contract includes the development of computer programs:
- a) computer programs must meet the criteria for open source software;
- b) all copyright and related rights on the relevant computer programs, their source code, the design of interfaces and databases whose design is subject to the order should arise for the principal in full, without limitations in the use, modification and distribution;
- c) development should use a repository and revision control system maintained by the Agency in accordance with Art. 7c pt. 18;
The fabulously named Bulgarian deputy prime minister’s advisor Bozhidar Bozhanov has said that whatever custom software the government procures will be visible and accessible to everyone.
In response to the above news, Rami Sass, CEO and co-founder of real-time open source analysis company WhiteSource has said that government software is funded by the public, belongs to the public and should be open for their use.
“The main excuse government agencies give for not open sourcing their code, is the security of their software. But software security cannot be dependent on being secretive, it should be based on being secure. Even if specific parts of software code are deemed confidential (such as for national security reasons) it is always just a part of the code and not the entire software,” said Sass.
WhiteSource’s Sass concludes by saying that government software tends to be at the cutting edge due to the vast databases and the sensitive data it processes.
“This is a further reason to open it for the public good. Even more so, knowing that the software code is going to be open and subject to public scrutiny and checks, will drive developers to create the best, most secure code they possibly can, driving up standards in the industry,” he added.
Basho Technologies has announced Intellicore’s adoption of its Riak TS to power its Sports Data Management Platform, used by the FIA Formula E Championship to provide real time race analysis to its customers.
The race boffins aggregate live raw data from the racing cars themselves through telemetry, then normalise the data into something consumable/visual and feed this back to Formula E, broadcasters and Formula E customers using the second-screen app — and no, not the drivers.
Data shown could be a car’s real time position on the track, lap completion, top speed, g-force etc.
During a FIA Formula E Championship race, Intellicore collects raw data from the pioneering electric racing cars, providing spectators with live statistics and analysis as the race takes place.
Formula E racing stats
During a typical 50-minute Formula E race, the cars of the nine teams generate approximately 1Gb of telemetry data per driver. Data is received from the track at a rate of 400 packets per second, each packet containing up to hundreds of telemetry events; over 1.2m packets are received during a race. The Intellicore platform needs to aggregate, normalise and redistribute that data in real time to Formula E’s worldwide audience via the official mobile and tablet Formula E app and other downstream consumer apps. The platform relies on Riak TS and is set up to handle up to 40,000 transactions per second to store, process and serve Intellicore’s data needs.
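The figures quoted above are internally consistent, which a quick back-of-the-envelope check confirms:

```python
# Sanity check of the telemetry figures quoted above: 400 packets per
# second, sustained over a 50-minute race.
race_minutes = 50
packets_per_second = 400

total_packets = race_minutes * 60 * packets_per_second
print(total_packets)  # 1200000 — the "over 1.2m packets" figure
```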
Riak TS aggregates all data points and ensures an accurate high speed live transmission of second screen information.
“We were hit by the challenges of volume, latency and cost when it came to our data,” said Christian Trotobas, CEO, Intellicore. “We needed a solution that was able to scale rapidly, and one which wouldn’t fail whilst dealing with mission critical workloads. Coupled with this we were looking for something with operational simplicity. With Riak TS we got all of this, and where usually you need someone in the team that is dedicated to maintaining and optimising the servers, our development team is able to cover this independently which has really streamlined our progress, and saved resources to put towards innovation.”
Formula E spectators will have access to a specially designed app that uses Intellicore’s solution to aggregate and visualise data such as a car’s top speed, its position on a live map, the video feed of drivers’ cockpit and the percentage of lap completion.
H2O is water, obviously. H2O is also the name of an open source machine learning platform and big data analytics software package. So then, H2O.ai is the Silicon Valley based startup firm that produces H2O, the software, not the chemical.
The loveable nerds at H2O claim to be devoting their existence on the planet to ‘operationalizing data science’ (Z left in, for effect) by developing and deploying algorithms and models for R, Python and the Sparkling Water API for the Apache Spark open source cluster computing framework.
Some of H2O’s mission critical applications include predictive maintenance, operational intelligence, security, fraud, auditing, churn, credit scoring, user based insurance and (in the medical arena) areas like predicting sepsis and ICU monitoring.
So contextual preambles out of the way, what’s happening?
H2O.ai has announced the availability of Sparkling Water 2.0 and that means new features such as the ability to interface with Apache Spark, Scala and MLlib via H2O.ai’s Flow UI, build ensembles using algorithms from both H2O and MLlib and give Spark users H2O’s visual intelligence capabilities.
“Sparkling Water was designed to allow users to get the best of Apache Spark – its elegant APIs, RDDs and multi-tenant Context – along with H2O’s speed, columnar-compression and fully-featured machine learning algorithms. Sparkling Water also allows for greater flexibility when it comes to finding the best algorithm for a given use case. Apache Spark’s MLlib offers a library of efficient implementations of popular algorithms directly built using Spark. Sparkling Water empowers enterprise customers to use H2O algorithms in conjunction with, or instead of, MLlib algorithms on Apache Spark,” said H2O.ai CEO Sri Ambati.
Sparkling Water 2.0 also delivers a new visualisation component to MLlib. H2O.ai has recently built out a team of data visualisation experts whose sole focus is to make AI and machine learning algorithms more easily consumable.
Ambati says that his firm is totally committed to the open source movement — everything the firm does will be contributed back upstream to the community codebase.
What if we could take the total amount of power in any cloud computing datacentre and provide a means of defining that as one total abstracted compute resource? This notion has given birth to DC/OS, a technology base built on Apache Mesos to abstract a datacentre into a single computer, pooling distributed workloads and (allegedly) simplifying both rollout and operations.
DC/OS is described as having a “mature battle-tested architecture” and is engineered to also provide monitoring and troubleshooting tools with best practices built-in, obviously.
DC/OS abstracts a full datacentre into a single logical computer
Product manager at Mesosphere Keith Chambers is quoted on Linux.com putting forward an explanation and validation for exactly how he thinks we are able to call DC/OS an operating system, as such.
“We call DC/OS an operating system for a number of reasons, including how users go about installing the things they want to run on it. DC/OS abstracts a datacenter full of servers into a single logical computer (i.e., 1,000 dual-core servers become 1 computer with 2,000 cores), which means developers and operators don’t need to worry about individual servers or VMs and can simply tell DC/OS about their task’s or service’s resource requirements,” says Chambers.
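Chambers’ “1,000 dual-core servers become 1 computer with 2,000 cores” idea can be sketched as a resource pool that places tasks by requirement rather than by server. The toy first-fit placement below is illustrative only and is not DC/OS’s actual scheduler:

```python
# Toy resource pool in the spirit of Chambers' description: 1,000
# dual-core servers presented as one logical 2,000-core computer.
# The first-fit placement is illustrative, not DC/OS's real scheduler.

servers = [{"id": i, "free_cores": 2} for i in range(1000)]

def total_free_cores():
    """The pool's capacity, seen as one logical computer."""
    return sum(s["free_cores"] for s in servers)

def place(task_cores):
    """First-fit: run the task on any server with enough free cores."""
    for s in servers:
        if s["free_cores"] >= task_cores:
            s["free_cores"] -= task_cores
            return s["id"]
    raise RuntimeError("insufficient capacity")

print(total_free_cores())  # 2000 — the single logical computer
place(2)                   # a 2-core task lands somewhere in the pool
print(total_free_cores())  # 1998
```

The point is the one Chambers makes: the developer states a resource requirement and never names a server or VM.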
DC/OS runs on bare metal and on any private or public cloud. It allows for the development of advanced big data, analytics and machine learning pipelines with Spark, Kafka and Cassandra inside of software application development streams.
DC/OS services ecosystem
The promise here is a route to being able to roll out the latest datacentre technologies being developed along with automation and so-called ‘operational best practices’ from the DC/OS partner ecosystem.
Microsoft joined the DC/OS project at its inception in April 2016 and has published a full statement and explanation of the technologies presented. The docs.mesosphere documentation is also invaluable over and above this brief.
“Easily integrate with release automation and CI/CD tools to accelerate your software release lifecycle from development to production,” reads the project’s website.
Infrastructure technologies for distributed systems and cluster management solutions (and also for large scale individual application frameworks) are becoming ever more prevalent across the computing landscape — ensuring that we bake in controls to provide portability of workloads across these systems will be key.
As a piece of software, Chef is best described as a configuration management tool for handling ‘machine setup’ on physical servers and virtual machines, typically (but not exclusively) in cloud computing environments.
Walls writes as follows:
So you want to get started with DevOps, yay!
Instead of starting a potentially very long, conceptual conversation about what DevOps means, it’s more effective to identify a small but non-trivial project or area of your business that would benefit from being able to develop and deploy software faster, at scale… and more easily.
This means getting teams and people from across the IT stack together and getting them working on that one project or area, because the results and benefits become rapidly apparent.
Which all sounds marvellous, but what then, are the key metrics for DevOps success?
It’s time to ask ourselves some key questions here.
Q – What is your ultimate DevOps goal?
Consider this statement, “A lot of peoples’ ultimate goal is Continuous Delivery (CD). So measuring how long it takes to create and deploy software is of critical importance. This ‘Time to Delivery’ metric shows how long it takes to get new product from code check-in to production.”
Ideally you want to see your Time to Delivery drop from months, to weeks, to days… and then literally to hours and minutes.
For many enterprises who are just getting started on their journey to the holy grail of Continuous Delivery, the truth is that Time to Delivery can still be months. While this isn’t necessarily a cause for panic, it’s certainly true that any business focused on delivering value through software, should be looking at how to significantly increase velocity.
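Once check-in and deployment timestamps are being recorded, Time to Delivery is a straightforward subtraction. A minimal sketch, using hypothetical example timestamps:

```python
# Computing the 'Time to Delivery' metric described above: elapsed time
# from code check-in to production deployment. The timestamps here are
# hypothetical examples.
from datetime import datetime

checked_in = datetime(2016, 7, 1, 9, 30)   # code checked in
deployed = datetime(2016, 7, 8, 14, 0)     # reached production

time_to_delivery = deployed - checked_in
print(time_to_delivery)       # 7 days, 4:30:00
print(time_to_delivery.days)  # days rather than months: a healthy sign
```

Tracking this figure per release, rather than once, is what shows whether the drop from months to weeks to days is actually happening.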
Q – Is DevOps working, is the Q-for ‘quality’ there?
Alongside our measure of Time to Delivery, Quality of Release is another key metric. It shows how many bugs and errors are appearing in each iteration of the product. Businesses should be looking to rectify these, and ideally, develop a pre-production environment and workflow that uses automated testing. To really improve Quality of Release, managing it has to be ingrained in the workflow.
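Quality of Release is equally easy to track once bug counts per iteration are collected; normalising by the size of each release makes iterations comparable. A small sketch with made-up release names and counts:

```python
# 'Quality of Release' sketch: bugs found per release iteration, with a
# simple defect rate so releases of different sizes are comparable.
# Release names and counts are made-up examples.

releases = [
    {"name": "1.0", "changes": 120, "bugs": 18},
    {"name": "1.1", "changes": 95,  "bugs": 9},
    {"name": "1.2", "changes": 140, "bugs": 7},
]

def defect_rate(release):
    """Bugs per change shipped in the release."""
    return release["bugs"] / release["changes"]

trend = [round(defect_rate(r), 3) for r in releases]
print(trend)  # [0.15, 0.095, 0.05] — quality improving release on release
```

In practice the bug counts would come from the issue tracker and automated test results rather than a hand-built list, but the trend line is the metric.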
Q – How do we get to the guts in DevOps Metrics?
Over the past five years, we’ve seen cost savings, as a key metric, be talked about less and less. This is because cutting costs or headcount, is rarely a transformative approach or way of thinking. As the saying goes, “You can’t cut your way to the top”. This means the best metrics are those which help you measure the extent to which you are creating and providing greater value.
The format for such metrics will vary according to your individual business and industry.
Arguably, DevOps metrics should always, ultimately, connect to a measurable improvement in the customer or staff experience, or at least the effects of it. This could be greater engagement, improved Net Promoter Score, or increased customer utility, such as online billing and account management.
In any industry, when getting started with DevOps and deciding on your metrics, it’s essential to begin with an accurate, current or near-future understanding of what matters most to the customer. The same applies if your customers are internal.
Why are DevOps metrics so taboo?
Operational metrics such as Time to Delivery, and Quality of Release are starting to be more widely understood and discussed beyond hardcore DevOps audiences. However, even in DevOps circles and broader IT and company leadership, there’s a lot more fuzziness when it comes to having well defined business goals and working towards them.
This is getting better; there’s been a marked improvement over the past three years in particular. I’d say about 45 per cent of the clients we speak to don’t have a direct measurement for business success in mind when we begin speaking to them. So there’s still a lot of opportunity for progress.
Key business metrics for getting going with DevOps?
To build on my previous point that the best DevOps metrics are those which measure the extent to which you are creating value… It’s worth considering this question in context of the wider shift towards seeing IT as a source of competitive advantage, rather than as a cost centre.
If you can keep this goal in mind, and choose metrics that will act as supporting evidence for this much larger organisational mind-shift, you will likely start, and stay, on the right track.
How about key project metrics?
“For any project or software product, defect tracking is an essential metric and an issue that’s easy to hammer out with automated testing. This is really a branch of Quality of Release, so together with Time to Delivery, these are highly salient metrics for virtually all projects.”
How will metrics differ, depending on business size?
It’s important to remember that until you are properly collecting data from your business in the first place, it’s hard to use it to inform the correct choice of metrics. You can’t know what’s missing until you start looking.
Even once you have the data, you can’t choose your metrics until you know what business goals you’re driving towards.
For larger businesses, there’s often so much information stored inside legacy systems, where the data goes in but never out – setting up the mechanisms to collect and review it is the first building block of success.
Once this has been done, some of the most valuable metrics are emergent, in that they arise from the metadata showing how the organisation is interacting with itself (as in, between teams and departments) and the customer.
Again – if you can pull ‘goal X’ from ‘data set Y’, and share it within the company – you’re taking a big step towards raising awareness of the power of IT to act as a business value multiplier.
Help! How do I start on transformational goals?
For smaller organisations, it can be hard to begin with transformational goals and metrics, or set up company-wide dashboards and monitoring, because the IT department has even less capacity than in bigger businesses.
So it’s usually best to carry out some automation projects first, to pull out the manual work and free up IT staffs’ time. This effectively creates the capacity you need to drive larger change projects.
There now… aren’t you glad we got into the guts by now and didn’t try to provide you with a dictionary definition of what DevOps is?
AsteroidOS is a new open source operating system specifically designed to serve software application development on smartwatches. The project is gaining traction and is reported to now be looking for developer and community contributions.
AsteroidOS is built with Qt 5.5 & QML
Programmers interested in AsteroidOS can port the operating system to new smartwatches. There is also the option to create an Asteroid app by using an SDK generated by OpenEmbedded, a build framework for embedded Linux.
Its makers say that AsteroidOS has been created from the ground-up with modularity and freedom in mind.
According to the community behind the software, “If you want to help AsteroidOS, you should start by joining #asteroid on Freenode where you’ll be able to discuss with other members of the community. If you’re not a developer, you can help by translating AsteroidOS, designing UI/UX, testing, reporting bugs, etc.”
AsteroidOS needs developers to create end-user applications using the Qt5 framework. Checking the beginner tutorial is a good way to start. Then you can take a look at the TODO List to find interesting apps you could create.
AsteroidOS uses OpenEmbedded as its build system. That’s why it needs people to maintain the meta-asteroid layer, the OpenEmbedded layer that provides the basis of AsteroidOS.
Extending AsteroidOS’s hardware support by creating a BSP for smartwatches that aren’t already supported is also a great way to help.