The Linux operating system (OS) will turn 30 in the year 2021.
We know that Linus Torvalds first typed out his plans for what turned out to be Linux in a Usenet posting as follows:
“I’m doing a (free) operating system (just a hobby, won’t be big and professional like GNU) for 386(486) AT clones,” wrote Torvalds.
No brief history of Linux is needed here; there are plenty of write-ups detailing the origins of UNIX, MINIX, the birth of GNU and Richard Stallman’s creation of the GNU General Public License.
Rather than nostalgia then, standing as we do in 2018 between the 25th and 30th anniversaries, let’s look at how far the major distributions (and the smaller but still mighty ones too) have come over the last year and what they might be able to bring forward between now and the 30th celebrations.
The thousand yard stare
Okay then, let’s just have one little piece of history before we look forward.
There’s a fabulous moment at the very start of the ‘Revolution OS’ movie when open source advocate, software programmer and technical author Eric Raymond describes meeting a Microsoft executive in an elevator.
They exchange pleasantries, but Raymond – wearing scruffy hacker clothes, as he was – suggests that the be-suited Redmond exec looks down on him somewhat as he asks: “So what do you do?”
“I just looked at him and gave him the thousand yard stare and said ‘I’m your worst nightmare’,” said Raymond.
We’ve come a long way since the time of thousand yard stares, obviously.
In an era when even Microsoft ‘loves’ Linux, even the Redmond team appear to have largely exonerated themselves from any accusations of mere openwashing and much of the proprietary past – even if the firm’s new-religion open strategy does align all roads to the Azure cloud, ideally.
The big three (& the others)
So to the state of the Linux nation then.
Today in 2018 we know that the ‘big three’ open source (but commercially licensed, maintained and served with appropriate support options) Linux distributions are:
- Red Hat Enterprise Linux (RHEL) – by Red Hat.
- Ubuntu for enterprise – by Canonical.
- SUSE Linux – by SUSE, owned by Micro Focus.
The big three are joined by a second group of in some cases quite well-known other operating systems. Never ever referred to as also-rans, lower-tier or lesser in any real sense, the ‘other’ open source operating systems that come to mind include names like Debian, Linux Mint, Arch Linux, CentOS and the list goes on.
Enough intros to the distros already… let’s dive in and note the most interesting recent developments and ask what to expect on the road ahead.
To be continued in other stories…
Core level API-centric software quality specialist SmartBear has now come forward with Swagger Inspector, a free cloud-based API testing and documentation tool.
The product is used to validate and test an Application Programming Interface (API) and generate its OpenAPI documentation.
As the so-called API economy now comes into being — and exists as a defined elemental ‘thing’ inside the wider software application development universe — there is (very arguably) additional need for tools that can quantify, qualify and indeed validate and test how software developers will integrate with APIs and get them to function as intended.
A key point is enabling programmers with the ability to make sure any given set of APIs is accurately documented without adding unnecessary overhead into their existing workflow.
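What ‘validating an API against its documentation’ means in practice can be sketched in miniature. The schema fragment and field names below are hypothetical, and this is a toy illustration of the general idea, not Swagger Inspector’s internals: check a JSON response against the types an OpenAPI-style specification declares.

```python
# Hypothetical OpenAPI-style fragment for a GET /users/{id} response
SPEC = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
    },
    "required": ["id", "name"],
}

# Map OpenAPI scalar type names to Python types
PY_TYPES = {"integer": int, "string": str}

def validate(response_json, spec):
    """Return a list of problems; an empty list means the response conforms."""
    problems = []
    for field in spec["required"]:
        if field not in response_json:
            problems.append("missing required field: " + field)
    for field, rules in spec["properties"].items():
        if field in response_json and not isinstance(
                response_json[field], PY_TYPES[rules["type"]]):
            problems.append(field + ": expected " + rules["type"])
    return problems

print(validate({"id": 7, "name": "ada"}, SPEC))  # → []
print(validate({"id": "7"}, SPEC))               # two problems reported
```

A real tool does the same kind of conformance check against a full OpenAPI document, across every path and response code, which is exactly the overhead a hosted checker takes off the developer’s plate.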
“As APIs are increasingly playing a pivotal role in digital transformation, it becomes imperative to deliver quality, consumable APIs at a faster pace,” said Christian Wright, EVP and GM, API Business at SmartBear.
Any API U like
Swagger Inspector will work to check any API, be it REST, SOAP, GraphQL or other.
The product can create the OpenAPI documentation for any API and host the documentation on SwaggerHub, the design and documentation platform.
SwaggerHub is (according to SmartBear) used by over 100,000 architects and developers.
Visual analytics company Tableau Software has launched a new data engine technology called Hyper. The software is included in the Tableau 10.5 version release.
Hyper is designed to ‘slice and dice’ massive volumes of data in a short period of time.
The pitch is 5X faster query speeds and up to 3X faster extract creation – so organisations will (theoretically) be able to scale their analysis to more people.
Also included in the release is Tableau Server on Linux and the ability to embed multiple visualisations in a single view with Viz in Tooltip.
Hyper, along with the rest of the Tableau 10.5 capabilities – including drag-and-drop power trend lines, a new Box connector and Tableau Mobile updates – is now available.
Fast data ingest
According to Francois Ajenstat, chief product officer at Tableau, Hyper is built for fast data ingest and analytical query processing on large or complex data sets.
“With enhanced extract creation and refresh performance and support for even larger datasets, customers can choose to extract their data based on the needs of the business, without concern for scheduling limitations. Furthermore, to keep customers in the flow of their analysis, Hyper can complete queries on large data sets in seconds. With fast query performance, complex dashboards open faster, filters are snappier, and adding new fields to visualisations is almost instantaneous,” said Ajenstat.
Hyper also helps scale extracts for broad usage by leveraging the latest multi-core processor advancements and employing novel workload parallelisation techniques.
This is an in-memory system designed for both transactional and analytical workloads, using query optimisation techniques and a single columnar storage format for all workloads.
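Why columnar storage helps analytics can be shown in a toy form (a sketch of the general technique, not Hyper’s implementation): an aggregate over one column touches one contiguous array instead of pulling every field of every record.

```python
# Row-oriented layout: each record is stored together
rows = [
    {"region": "EMEA", "sales": 120},
    {"region": "APAC", "sales": 95},
    {"region": "EMEA", "sales": 40},
]

# Column-oriented layout: each field is stored as its own array
columns = {
    "region": ["EMEA", "APAC", "EMEA"],
    "sales": [120, 95, 40],
}

# An analytical query like SUM(sales)...
row_total = sum(r["sales"] for r in rows)  # must walk whole records
col_total = sum(columns["sales"])          # walks one packed array

print(row_total, col_total)  # → 255 255
```

Both layouts give the same answer; the columnar one simply reads far less data per analytical query, which (along with multi-core parallelisation) is the kind of advantage engines like Hyper lean on.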
Customers can use Tableau’s hybrid architecture with live and extract options, as well as its portfolio of more than 65 connectors to more than 75 data sources.
Tableau Server on Linux
Tableau 10.5 also introduces Tableau Server on Linux so that users can combine Tableau’s analytics platform with Linux’s enterprise capabilities.
With identical end user functionality to Tableau on Windows, customers already using Linux in their IT environments can integrate Tableau Server into their processes and workflows.
With this new deployment option, customers who prefer Linux no longer need to maintain both Windows and Linux environments to support Tableau. Additionally, for customers who wish to run Tableau in the public cloud, Tableau Server on Linux also works.
Tableau Server on Linux includes support for CentOS, Ubuntu, Red Hat Enterprise Linux and Oracle Linux distributions. IT teams can also control user authentication through LDAP, Active Directory, or local authentication. Making the migration to Tableau Server on Linux is done via a backup and restore.
As the ‘company behind’ Ubuntu, Canonical has brought forward the first iteration of Slack as a snap on its software platform.
Slack is a cloud-based set of proprietary team collaboration tools and services that go some way beyond core ‘messaging’ functionality into areas including project management.
In terms of positioning, Slack falls into the same collaboration/communication category as Microsoft Teams, Sharepoint, Yammer, HipChat, Jive, iApple and Salesforce Chatter.
A snap is a containerised application delivery mechanism that packages an application together with the dependencies (typically code and data libraries) it needs to install and subsequently run.
Expanding Slack universe
In terms of deployment, snaps ensure that an application package is available for installation and deployment throughout a number of Linux environments. As such, this move will open the door to Slack on ArchLinux, KDE Neon, Linux Mint, Manjaro, Debian, Fedora, OpenSUSE, Solus and – obviously – Ubuntu.
The Slack snap has automatic updates and rollback features giving developers greater control in the delivery of each offering.
“Slack is helping to transform the modern workplace and we’re thrilled to welcome them to the snaps ecosystem”, said Jamie Bennett, VP of engineering, devices & IoT at Canonical. “Today’s announcement is yet another example of putting the Linux user first – Slack’s developers will now be able to push out the latest features straight to the user. By prioritising usability and with the popularity of open source continuing to grow, the number of snaps is only set to rise in 2018.”
Thousands of snaps have been launched since the first in 2016, all with snaps’ automated updates and rollback features – the latter giving applications the option to revert to the previous working version in the event of a bug.
Reports of Java’s death have been greatly exaggerated — said, well, pretty much every Java engineer that there is.
The Java language and platform may have been (in some people’s view) somewhat unceremoniously shunted into a side alley by the self-proclaimed aggressive corporate acquisition strategists (their words, not ours) at Oracle… but Java still enjoys widespread adoption and, in some strains, growing use and development.
So where next for Java?
IBM distinguished engineer and Java grandmaster John Duimovich has been working with Java pretty much since it appeared in 1995. His views on the key growth spaces are (arguably, potentially) worthy of some consideration.
Duimovich insists that 2018 will actually be the year of Eclipse.
As many readers will know, Eclipse is a free, Java-based development platform known for its plug-ins that allow developers to develop and test code written in other programming languages.
Throughout 2018 we can look to key projects like EE4J (an open source initiative to create standard APIs) and MicroProfile (an open forum to optimise Enterprise Java for a microservices architecture across multiple implementations and collaborating on common areas of interest with a goal of standardisation).
Both of the above projects now come under the stewardship of the Eclipse Foundation.
Convergence with containers
“As part of the broader effort to simplify development and management, containers and runtimes like Java will become more tightly coupled. They’ll be optimised together to enable seamless management and configuration of Java applications. Consistent memory management and easier wiring between Java constructs and containers will take hold so developers can leverage the benefits of containers and Java runtimes, which are essentially another form of containers,” said Duimovich.
Throwing us a perhaps unexpected curveball, Duimovich also says that Kotlin will become the next hot language.
He thinks that Kotlin’s concise coding syntax and interoperability with Java have already made it popular for many developers. But now, it has first-class support on Android, which is bound to boost its use for mobile.
Other positives include the new six-month release interval for Java for more frequent changes and the faster introduction of features.
Serverless reshaping of Java
Finally here, Duimovich points to growing demand for serverless platforms.
“Initially driven as a consumption model but now expanding from simple, event programming models to composite flow-based systems. This innovation will continue as cloud developers want to shift their focus on the application and not worry about servers,” said Duimovich.
This means Java runtimes will need to be optimised and re-architected for a serverless world where fast start-up and smaller footprints matter even more.
Java is still hot, go and get a fresh brew.
There has been an undeniable popularisation of so-called ‘low-code’ programming platforms.
This is a strain of technology designed to provide automated blocks of functionality that can be brought together by non-technical staff to perform specific compute and analysis tasks to serve their own business objectives.
If not quite always completely ‘drag-and-drop’ in terms of the way they operate, low-code interfaces are presented in the most intuitive way possible i.e. they rarely resemble the command line.
Among the newer names bubbling in this space is Bonitasoft.
Bonita (pretty, cute) and soft (like, software, yeah) – get it?
As an essentially open source Business Process Management (BPM) company, Bonitasoft is now partnering with Amazon Web Services (AWS) in an attempt to broaden its appeal and reach.
“When you choose an application development platform with low code features, you gain greater control and visibility over critical business workflows and processes, increasing efficiency and throughput,” claims the firm.
The newly released Bonita 7.6 has been opened to market in line with the AWS deal. It is presented in a fully open source iteration and as a commercially supported licensed version with support and advanced features.
The new Bonita Continuous Delivery add-on is intended to reduce deployment time by offering automatic provisioning capabilities.
Bonita Continuous Delivery (once added in as an add-on) can connect with Ansible and Docker to perform said provisioning.
“With cloud-based access through AWS, digital process applications on the Bonita platform are easily highly distributed and secure,” said Miguel Valdes Faura, CEO and founder of Bonitasoft. “We are pleased to have AWS as part of our partner ecosystem.”
Bonita applications are compatible with on-premises and AWS clouds.
Every company is now a software company, this much we know to be true.
If we accept this new truism, then a camera company must be a software company… or at least it must be a digital camera company.
The firm you previously [probably] identified as a camera company, Ricoh, is actually a Japanese multinational imaging and electronics company with divisions spanning management solutions, IT services, commercial and industrial printing and industrial systems – plus, it also makes a few cameras.
Ricoh is now championing open source by launching an initiative that includes an online marketplace where third party developers can upload and share Android-based plug-ins for the firm’s Ricoh Theta V consumer-grade 360-degree camera.
The Ricoh Theta V Partner Program launches in full in Spring 2018.
“This program demonstrates that the Theta V is more than a camera…it is truly a software-defined IoT device,” claims Ricoh.
So what kind of customisations are we talking about here? Developers can create apps and software that extend and enhance the capabilities of 360-degree imaging – plus also new features and functionality for the camera itself.
These can include customised capabilities that enhance the Ricoh Theta V’s use for specialty applications and in vertical markets.
As part of the program, Ricoh is making available the Ricoh Theta V Application Program Interface (API) and Software Development Kit (SDK) and will provide tools and guidance to support plug-in development — as noted by Wataru Ohtani, corporate associate vice president and general manager of the Smart Vision business group at Ricoh Company Ltd.
The camera is being used in consumer and business applications ranging from documenting vacation memories to photojournalism, law enforcement, real estate listings and virtual tours.
Giovanni Vigna is responsible for creating angr, a Python framework for analysing binaries that is used by the US Department of Defence to scan IoT devices before it introduces them to its networks.
In slightly more depth, angr combines both static and dynamic symbolic (‘concolic’) analysis, making it applicable to a variety of tasks.
NOTE: Concolic testing (a portmanteau of concrete and symbolic) is a hybrid software verification technique that performs symbolic execution, a classical technique that treats program variables as symbolic variables.
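The idea behind symbolic execution can be shown in miniature. The snippet below is a deliberately tiny toy (an illustration of the concept, not angr itself, and the ‘program’ and brute-force ‘solver’ are stand-ins): walk a program’s branch tree collecting path constraints, then solve each constraint set to find an input that reaches each path.

```python
def sym_exec(node, constraints=()):
    """Yield (leaf, constraints) for every path through a nested-if tree.
    A node is either a string leaf, or a tuple of
    (description, predicate, true_branch, false_branch)."""
    if isinstance(node, str):
        yield node, constraints
        return
    desc, pred, t_branch, f_branch = node
    yield from sym_exec(t_branch, constraints + ((desc, pred, True),))
    yield from sym_exec(f_branch, constraints + ((desc, pred, False),))

def solve(constraints, domain=range(-100, 100)):
    """Stand-in for an SMT solver: brute-force an x satisfying every
    recorded branch constraint along one path."""
    for x in domain:
        if all(pred(x) is want for _desc, pred, want in constraints):
            return x
    return None

# Stand-in for a binary's control flow, with one hard-to-reach "bug" path
prog = ("x > 10", lambda x: x > 10,
        ("x * 2 == 42", lambda x: x * 2 == 42, "BUG", "big"),
        "small")

for leaf, cons in sym_exec(prog):
    print(leaf, "reachable with x =", solve(cons))
```

A real engine like angr does this over machine code lifted to an intermediate representation, with a proper constraint solver instead of brute force, and the concolic twist is seeding that search with concrete runs – but the path-enumeration shape is the same.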
The Computer Weekly Developer Network gained access to Vigna to discuss the mechanics of angr and find out more about how software application development professionals should regard this technology.
CWDN: What is angr & who is it for?
Vigna: Well, angr is a highly modular Python framework that performs binary analysis using VEX as an intermediate representation. The name ‘angr’ is a pun on VEX, since when something is vexing, it makes you angry. It is made of many interlocking parts to provide useful abstractions for analysis. Under the hood, pretty much every primitive operation that angr does is a call into SimuVEX to execute some code.
All IoT firmware is binary and only vendors have the source code. But often, IoT vendors don’t share source code, so security teams are left to find their own way to analyse the binary code. That means that, if you want to analyse IoT devices for vulnerabilities, then you need good binary analysis tools.
Binary analysis goals: program verification; program testing; vulnerability excavation; vulnerability signature generation; reverse engineering; exploit generation.
CWDN: What can we do with angr?
Vigna: In short, analyse a lot of binaries. More specifically, we can perform: symbolic execution; built-in analyses: CFG, BinDiff, Disassembly, Backward-Slice, Data-Flow Analysis; value-set analysis, etc; binary rewriting; type inference; symbolically-assisted fuzzing (driller); automatic exploit generation.
CWDN: Why did you create angr?
Vigna: The researchers at the University of California Santa Barbara Security Lab (which I am a part of) were interested in finding bugs in software, in publishing papers about finding bugs in software and wanted there to be a reasonable system for performing static analysis and symbolic execution on binary code.
On a more practical level, for organisations buying connected devices, security has risen to the top of the agenda. With the creation of angr, those buying pieces of firmware/software can now independently analyse it first without getting source code (as mentioned above, vendors don’t traditionally hand that over). This can go a long way to avoid another Mirai-botnet scenario.
CWDN: What is different about angr?
Vigna: There are other binary analysis tools, including the Binary Analysis Platform (BAP), the Reverse Engineering Intermediate Language (REIL), VEX and TCG (TinyCode Generator), that do elements of what angr does, but they don’t consolidate it all in one place and are not as widely or as easily used.
The proof is in the pudding – Cisco, Huawei, universities, researchers and even government research labs are using it. As a more specific example, the DoD uses it to analyse the hardware that it buys.
CWDN: Who can use (& get) angr?
Vigna: Thanks for asking, angr is an open source solution and can be found at angr.io. In over 20 years of researching and developing security technology, it has become clear to me that for research to have the most real-world impact, it must be given away, with no strings attached. This helps the technology to drive innovation, and means that there is less resistance to adopting it. Ultimately, I think it helps to make software better. Plus, as it is university-owned property, it doesn’t need to make money.
It’s time for a little open source history.
The ‘open source’ label itself was created at a strategy session held by members of the group that we now call the Open Source Initiative (OSI) on February 3rd, 1998 in Palo Alto, California USA.
The term was proposed by Christine Peterson, co-founder and past president of Foresight Institute, a nanotech public interest group.
That same month, the OSI was founded as a general educational and advocacy organisation with the aim of raising awareness and adoption for open development processes, methodologies, working practices and all manner of open goodness.
20 years on
Today, 20 years on in 2018 (we’re presuming that you can add 20+1998 and get 2018) we can see the group making some defining statements about where open source is and how the open approach to design, development, team workflows and more has helped us get to where we are today with a technology that has ‘even’ been embraced by Microsoft.
The OSI says that open source has become ubiquitous – well, perhaps not quite, but it has indeed become recognised across industries as a fundamental component of infrastructure, as well as a key factor that can (in many cases) help create innovation.
To commemorate the brace of decades being noted here, the Open Source Initiative is launching the OpenSource.Net portal, which will serve both as a community of practice and a mentorship programme.
No openwashing, thanks
With so many vendors claiming to have ‘got the open religion’ but in fact doing nothing more than openwashing a few ‘less than key’ elements of their total technology stacks, the OSI says its next goal is to promote open source’s viability and value, and to look for areas where it can promote and champion implementation and what it calls ‘authentic participation’.
Not that most will need a reminder (and the above should have been enough of a clue anyway), but we at the Computer Weekly Open Source Insider blog define openwashing as the act of offering a certain amount of source code as open but:
a) keeping the cash cow code of the vendor’s projects proprietary and closed
b) exploiting all commercial elements of project business for all they are worth and putting no discernible or definable hours into any so-called open projects
c) offering no substantial ‘code commits’ to any project that is opened and no substantial ‘code commits’ to any related projects from other groups
… so indeed, openwashing is an unpleasant neologism and the OSI seeks to avoid it through work with all its members.
Future OSI goals
The OSI summarises some of its goals going forward as follows:
- Development: To examine how open source has benefited code development at different companies in terms of costs, quality, customisation, security, support, and interoperability — the OSI will also look at how each firm manages open source development/contributions.
- Business: To examine what business practices align best with open source and look at how each firm collaborates with others to enhance products and services.
- Community Building: To look into how open source has helped firms connect with developers, businesses, non-profits, government and/or educational institutions.
- Talent Nurturing: To examine how participation in the open source community helped companies attract and retain the best talent.
- Leadership: To look at the future of open source and ask how open source will shape each industry vertical going forward.
Named after the South American waterfall, but with the ‘creative’ use of a lower case first letter, iguazio is a data platform company specialising in continuous analytics and event driven-applications.
The firm has recently introduced nuclio, an open source multi-cloud serverless platform.
Note: when we talk about the notion of serverless, we do of course still mean computing that relies upon servers – the difference is that the architecture is meant to embrace the notion of compute functions being carried out in ephemeral containers.
Serverless computing does not eliminate servers, but instead seeks to emphasise the idea that computing resource considerations can be moved into the background during the design process.
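What that background shift looks like from the developer’s side can be sketched with a function-as-a-service-style handler. The `handler(context, event)` shape mirrors what Python FaaS runtimes such as nuclio expect, but the `Event` and `Context` classes here are stubs invented so the sketch runs on its own, without any platform:

```python
class Event:
    """Stand-in for the platform's incoming event object."""
    def __init__(self, body):
        self.body = body

class Context:
    """Stand-in for the platform's context (logging, helpers)."""
    def __init__(self):
        self.log = []

    def logger_info(self, msg):  # hypothetical stub, not a real platform API
        self.log.append(msg)

def handler(context, event):
    # The function body is all the developer writes; the platform owns
    # packaging, scaling, and wiring this to an event source (HTTP,
    # message queue, IoT sensor stream, and so on).
    context.logger_info("got event")
    return {"echo": event.body.upper()}

print(handler(Context(), Event("sensor-reading")))  # → {'echo': 'SENSOR-READING'}
```

The point of the serverless model is that everything outside `handler` – provisioning, concurrency, restarts – is the platform’s problem, which is exactly the ‘servers moved into the background’ idea described above.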
Taking serverless forward here then, iguazio claims that nuclio functions are faster than bare-metal alternatives, processing up to 400,000 events per second with a single process.
Cloud lock-in freedom
Continuing the serverless freedom train of thought, the concept here is that developers can look to create applications with minimal operational overhead that are free from lock-ins to cloud-specific APIs or services.
According to nuclio, “[We say] iguazio completes a full-blown cloud experience of data services, AI and serverless – all delivered in one integrated and self-managed offering, at the edge or in a hosted cloud. [Essentially, it’s] a set of on-prem services that enable modern application delivery with a true cloud experience close to data sources.”
Developers will be interested in this platform’s automated function deployment, scaling and operation – it also supports a variety of open or cloud-specific event and data sources.
Further, there is a set of debugging, logging, monitoring and out of the box CI/CD features.
“[Today] nuclio addresses common serverless development and operational challenges and can be used either as an open source FaaS project or as a managed service on iguazio’s platform. It accelerates data ingestion, data enrichment, analysis and AI so that developers can run faster real-time actions like for IoT sensor data or image classification, simplifying developer and operational usability,” said iguazio’s CTO, Yaron Haviv.
In summary, nuclio is open source with an open architecture intended to enable portability. It works on low-power IoT devices, developer IDEs, Docker, Kubernetes or cloud platforms.