How does The Apache Software Foundation work out who the best people are?
Let’s remember that the foundation itself is a non-profit corporation established to support Apache software projects, including the Apache HTTP Server… so anybody joining its ranks has to be of a certain standard, obviously.
The name ‘Apache’ was chosen out of respect for the Native American Apache tribe, well known for their superior skills in warfare strategy and their inexhaustible endurance. It also makes a cute pun on “a patchy web server” (a server made from a series of patches), but this was not its origin.
So again we ask… how do the members of the Apache Foundation work out who the best developers are?
Sally Khudairi, founder/CMO of OptDyn and vice president at The Apache Software Foundation, says that we need to go beyond the ‘binary’ issue of (just) testing for skills requirements.
She argues that this needs to be augmented/replaced by ‘can this candidate think critically/creatively?’ sorts of questions that demonstrate intellectual abilities.
“[In the recruitment process] both our Founder/CEO Alex Karasulu and CTO Niclas Hedhman would review the candidate’s code (on GitHub or wherever it was contributed) as well as their interactions on mailing lists to get a clearer idea on how the individual performs technically as well as socially. There is something to be said about the transparency of open source,” said Khudairi.
Alex Karasulu says he usually breaks candidates into two categories.
Karasulu is founder/CEO of OptDyn and member/project mentor at The Apache Software Foundation.
“The first category are the unknowns. They need to be low-balled and put into a 60-90 day trial period. The other category are those who have activity in open source; for those I would check what they’ve done code-wise and their interactions with others, like on mailing lists. You can also check how well they’ve organised their code and how persistent they have been on their projects, which shows passion. Passion is a key ingredient: you don’t want to work with people without the passion to innovate. Of course, this is if I have time and am involved,” said Karasulu.
Niclas Hedhman, CTO of OptDyn and member/project mentor at The Apache Software Foundation, agrees with Karasulu.
Hedhman also says that both Google and Microsoft have researched this topic and have found that they are incapable of predicting post-hire performance from any test, any questions or any previous field/experience, a reality which he finds quite discouraging.
“When I hired for myself, there were two ways that worked for me: find the person I want by looking at open source work performed, and personal recommendation where the recommender had something to lose (reputation). Everything else was flip-a-coin accuracy, with mostly negative (don’t meet expectations) outcomes. I agree that big firms end up asking strange questions on details,” said Hedhman.
Hedhman recounts an experience at JP Morgan where he was given a one-hour coding task. Candidates were given the test cases to pass for an event receiver limiter in front of a high-frequency trading engine. They had 60 minutes to complete the task and anything over time went straight into the dustbin.
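For readers curious what such a task might involve: the sketch below is purely illustrative and is not the actual JP Morgan exercise (the class name, parameters and the token-bucket approach are all assumptions), but it shows one common way to limit the rate of incoming events in front of a downstream engine.

```python
import time

class EventLimiter:
    """Token-bucket limiter: allows at most `rate` events per second,
    with short bursts of up to `burst` events."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate           # tokens replenished per second
        self.capacity = burst      # maximum bucket size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Replenish tokens in proportion to the time elapsed since last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # event dropped: the downstream engine is protected

limiter = EventLimiter(rate=100, burst=10)
# Fire a burst of 50 events at once; only the burst allowance gets through
accepted = sum(limiter.allow() for _ in range(50))
```

The point of the anecdote stands either way: writing and debugging something like this against unseen test cases, under a hard 60-minute cut-off, is a test of speed as much as of skill.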
In his opinion, this is a mad way to hire.
Lars Bøgild Thomsen is director of infrastructure at OptDyn. Thomsen explains that some years ago, he saw a list of questions used by Google to assess system admins. He was ‘absolutely baffled’ by how much it focused on remembering details rather than on understanding how to figure out the details.
“I would assume a lot of developer assessments fall into the same trap – assessing how much the applicants remember rather than how good and fast they are at figuring it out. Hence the ‘broom’ approach – something I really (seriously) would like to see in practice one day: the idea being, when new staff walk in the door, give them a broom and tell them to wipe the floors UNTIL they themselves find something more important to do,” said Thomsen.
The concept above is this: if they are too proud to sweep, kick them out right away; if they are still wiping floors after a week, kick them out too.
Existing staff have to be in on it and obviously share stuff if the sweepers walk up and ask.
“It’s so bizarre it is funny but it came up based on a discussion that a lot of people seem extremely unwilling or unable to figure out where their effort is most needed. Or where they themselves could make a difference. It wouldn’t work with everybody but I honestly do believe that whoever passed that test would be valuable assets in just about any organisation,” he said.
Techie recruitment continues to be a hard nut to crack, so we have to thank the team at Apache and OptDyn for sharing so openly.
Multi-cloud automation company Mesosphere has now come forward with Mesosphere Kubernetes Engine (MKE), Mesosphere DC/OS 1.12 and the public beta of Mesosphere Jupyter Service (MJS).
Mesosphere Kubernetes Engine is Kubernetes-as-a-Service on multi-cloud and edge… and (as readers will likely know) Kubernetes is an open source container-orchestration system for automating the deployment, scaling and management of containerised applications.
This provides what is known as ‘high-density resource pooling’ for cloud apps: a type of load balancing that provides cloud applications with the processing, storage, networking and analytics resources they need in environments where data throughput is extreme (high-density)… and it does all this without the need for virtualisation.
MJS is intended to simplify delivery of Jupyter Notebooks, popular with data scientists, to streamline how they build and deliver AI-enabled services. Readers will also note that Project Jupyter exists to develop open source software, open standards and services for interactive computing across dozens of programming languages.
“Companies need to move fast to stay relevant in today’s competitive landscape. To do this, IT teams are leveraging leading tools such as Kubernetes, Jupyter Notebooks, advanced security and software registries to drive software innovation,” said Florian Leibert, Mesosphere CEO.
Leibert insists that by natively integrating Kubernetes and Jupyter into DC/OS, his team is able to deliver fast deployment and centralised management — but still enabling experimentation and providing developer choice.
Mesosphere DC/OS 1.12 is all about giving cloud developers edge and multi-cloud infrastructure from a single control plane. With Mesosphere Kubernetes Engine (MKE), enterprise IT can centralise scattered Kubernetes clusters on multiple cloud providers managed from a single platform.
Where is all this leading us?
We know that many data sets are too large to fit on laptops or individual workstations. This forces data scientists and engineers working with Jupyter Notebooks to repeatedly work with smaller data sets, constraining progress and increasing the risk of data leaks.
It leads us towards what we could call ‘on-demand data science’ – that is, data scientists get instant access to the Jupyter Notebooks computing environment, preconfigured with a good deal of the tools they need.
The move to compartmentalise, package and automate many of these functions is (arguably) very much ‘where cloud-native application development is at’ right now… but it’s complex stuff and we need to move carefully.
It’s easy to knock open source; there are frailties and fragilities in all code and open source libraries have been variously castigated by a litany of security software vendors attempting to ply their wares.
But open source is having a harder time penetrating big-scale enterprise — this we know to be true.
IBM’s Red Hat acquisition notwithstanding, open source penetration in the enterprise, through tangible software application development interaction, is not as high as it could be.
Developer cloud provider company Digital Ocean says it has spoken to over 4,300 developers around the world to compile an open source barometer report.
Key findings from this study suggest that, twenty years in, just over half of developers (55 percent) are contributing to open source projects.
That’s less than what companies expect from their employees though: three out of four respondents said their companies expect them to use open source software as part of their day-to-day development work.
Entry point frustration
Developers say they want an easier entry point into the open source community.
Respondents listed not knowing where to start as the top obstacle to participating in open source projects (45%). Other reasons include doubting they have the right skills (44%), and companies not giving their employees time to contribute (30%).
There’s also a disconnect between companies’ encouragement of open source within their organisations and their actual investment. Only 18 percent of respondents said their company is a member of an open source-related organisation, and 75 percent said their company invests US$1k or less every year.
The ‘Currents’ report is Digital Ocean’s fifth annual study of this kind and this year it has been solely focused on open source.
Inspiration for the nation
What’s inspiring people to participate in open source?
According to the report, “The top motivation is [the opportunity to] improve coding skills — developers in the UK especially cited this (78 percent vs. 69 percent overall). A close second was being part of a community — even though developers tend to work independently, they still look for ways to connect with other coders and learn new technology. Thirty seven percent of developers said they would contribute more if their companies gave them additional time to do so.”
Of the 4,349 survey respondents, 58 percent self-identified as developers, 22 percent as students and 10 percent as systems administrators — the rest identified as managers, technical support or ‘other’.
As we approach the end of year ‘silly (survey) season’, this report stands out as comparatively open and informative and a whole lot better than most of the contrived ‘survey analysis studies’ simply designed to push a self-serving pre-defined agenda with loaded questions.
Unless you were hiding under a rock holding a sign reading ‘don’t tell me about any big corporate technology news’ all week, you’ll know that IBM agreed to buy Red Hat for a total enterprise value of approximately $34 (£27) billion.
Yay, it’s all good then — enterprise open source will get an equal (or perhaps even bigger) injection of effort and investment than seen in Microsoft’s acquisition of GitHub.
Even better, IBM says that Red Hat will now join IBM’s Hybrid Cloud team as a distinct unit, but that there is a focus on ‘preserving the independence and neutrality’ of Red Hat’s open source development heritage.
Actually, this is happening everywhere: alongside Microsoft’s $7.5 billion purchase of GitHub and Salesforce’s $6.5 billion acquisition of MuleSoft, we also saw the $5.2 billion merger between Cloudera and Hortonworks, and you can convert those numbers into £GBP if you really feel you must.
Is everything happy days then?
Tyler Jewell, CEO of OSS integration company WSO2 and former CEO of Codenvy, suggests that not everything is as rosy in the Red Hat deal as one might imagine.
He claims that during his own short tenure at Red Hat, the possibility of the company being acquired was broadly discussed, with Google, Microsoft and IBM as likely acquirers.
“While [this is] a fantastic validation of open source software, the acquisition is potentially damaging for customers if IBM moves Red Hat toward its traditional closed and proprietary model,” notes Jewell, in a blog post. “IBM will kill the Linux spirit that lives within Red Hat — potentially opening the door for distributions other than CentOS and Fedora to gain wider acceptance.”
Jewell’s naysaying negativity continues as he suggests that IBM’s commercial- and patent-first culture will erode Red Hat’s open source and innovation advantages.
Further, he claims that this acquisition is a disaster for the Kubernetes and Docker communities, because these important competitors with differing views will be forced to rationalise their offerings.
Cheer up Tyler, it might not be that bad, we’re living through a Brexit nightmare after all.
Karthik Ramasamy isn’t that happy either.
A software engineer by trade, Ramasamy co-created Twitter’s real-time engine, which the company then open sourced as Apache Heron. Ramasamy was engineering manager and technical lead for real-time analytics at Twitter before co-founding Silicon Valley startup Streamlio.
Ramasamy states that this acquisition is yet another data point validating the critical role that open source plays in modern IT infrastructure and IT business. He is positive and says that the particular ‘size of this deal’ has the potential to breathe new life into the open source ecosystem.
“However, at the same time it does add a note of caution — to date, IBM has been largely a huge contributor to open source without entering the business of open source in a significant way. This acquisition represents a new phase for IBM, in which the open source ecosystem will tread carefully to see whether IBM continues to be a broad-based contributor to open source or narrows its focus to open source projects directly tied to its own offerings,” said Ramasamy.
Tu amigo de Amido
Richard Slater is principal consultant and DevOps leader at cloud applications company Amido.
Slater says that IBM bought ALL of Red Hat (because them’s the rules and you can’t perform a half-baked acquisition at this scale), but really… IBM bought Red Hat for OpenShift, which Red Hat has been building as a comprehensive, self-contained enterprise Kubernetes solution.
“On the basis that IBM has been lagging in the cloud market and is desperately in need of having a presence in a microservices world then this is a pretty astute acquisition,” said Slater, in a blog.
Slater further claims that there is some fear in the industry that IBM will be the end of Red Hat.
“[This could represent] the end of products it currently offers such as RHEL and OpenShift, which will be consigned to the scrapheap along with IBM’s mainframe and physical server divisions as part of the great ‘Big Blue’ falling rapidly from a great height,” laments Slater.
Well now Richard, you need to cheer up a bit too.
It’s important to remember that it’s very easy to jump on the negative press bandwagon and that none of the spokespeople quoted here work for companies as expansive, successful, historically innovative, philanthropic or capable of reinvention as IBM.
Time will tell… Red, Blue or some combination of the two.
You don’t normally expect a Jim Whitehurst LinkedIn update to hit you over a cuppa at 7am on a Monday morning.
Whitehurst is CEO of Red Hat, the company behind the commercially supported open source enterprise Linux operating system Red Hat Enterprise Linux (RHEL).
The Red Hat CEO’s update stated that IBM has now announced the acquisition of Red Hat for US$190.00 per share in cash, representing a total enterprise value of approximately $34 billion.
Did IBM need more open source then?
Well, Red Hat obviously has open source in oodles — the company’s RHEL is joined by other significant products including Ansible, JBoss Enterprise Application Platform (or JBoss EAP), Fuse and Red Hat OpenShift Container Platform… to name a few highlights.
So is this the light that drew IBM to the fire? Answer: no… and a bit yes.
IBM has a wide variety of open source projects and initiatives, has been a key player in helping to ‘save’ Linux over the years and is the originator of the Eclipse Integrated Development Environment (IDE) used extensively for Java programming.
It’s more to do with IBM’s desire to become more than just the ‘stuffy old company that does a lot of mainframe stuff’ and be a more expansive player in the hybrid cloud market, a space that Red Hat excels in with a product set that (once combined with IBM’s) makes the pair the planet’s biggest hybrid multi-cloud provider.
Both Red Hat and IBM are traditionally active (some would say strong) in significant hybrid cloud technologies such as Linux, containers, Kubernetes and multi-cloud management; they also both work openly (some would say vibrantly and successfully) in the cloud management and automation space.
Will it be a case of hands off or hands on?
IBM says that Red Hat will now join IBM’s Hybrid Cloud team as a distinct unit, but that there is a focus on ‘preserving the independence and neutrality’ of Red Hat’s open source development heritage.
Computer Weekly author Tim Anderson says that, “My own instinct is that we will see more IBM influence on Red Hat, than Microsoft influence on GitHub, to take another recent example of an established tech giant acquiring a company with an open source culture.”
IBM CEO Ginni Rometty thinks the deal is a game-changer, well, she would, wouldn’t she?
“It changes everything about the cloud market. IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses,” said Rometty.
Whitehurst meanwhile says that open source is the default choice for modern IT solutions, well, he would, wouldn’t he?
“Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation,” said Whitehurst.
Red Hat will now also get access to IBM’s professional (some would say expansive) sales network, so there is an argument to say that we could see (even) wider proliferation of open source technologies across world enterprises. So IBM did have an open source eye open, but not quite as much as it has a hybrid cloud focus in mind. Either way, bring it on.
Applications have features, so, logically, feature management (and delivery) is now a thing.
The very logically named Rollout is a company that describes itself as a feature delivery and management specialist that accelerates software development and release.
The firm has now announced feature management platform functionality to offer Configuration-as-Code (CaC) and an integration with GitHub.
The intention is that developers should be able to treat ‘feature flags’ as they treat any other mission-critical infrastructure.
A feature flag (sometimes also called a feature flipper, feature toggle or feature switch) is a programming technique that provides an alternative to shouldering the burden of maintaining multiple source code branches (also known as feature branches): it offers a means of testing new features prior to their deployment, or indeed their full release.
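In its simplest form the idea looks something like the sketch below (the flag names and flag store are invented for illustration and are nothing to do with Rollout’s actual product): unfinished code ships to production but stays switched off, so there is no long-lived feature branch to maintain.

```python
# Minimal feature-flag sketch: flags live in configuration rather than in
# separate source branches, so a new feature can ship 'dark' and be
# switched on later without a redeploy of new code.
FLAGS = {
    "new_checkout_flow": False,   # still in testing: off for everyone
    "dark_mode": True,            # fully rolled out
}

def is_enabled(flag: str) -> bool:
    # Unknown flags default to off, so a typo cannot expose unfinished code
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    if is_enabled("new_checkout_flow"):
        return f"new flow: {len(cart)} items"    # code path behind the flag
    return f"legacy flow: {len(cart)} items"     # stable path, the default
```

Flipping `FLAGS["new_checkout_flow"]` to `True` routes traffic through the new path; flipping it back is an instant rollback, which is exactly the property that makes flags attractive for testing in production.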
Rollout says it also plans to open source its feature management platform.
“Rollout’s CaC approach, the integration with GitHub and the decision to open source its feature flag technology means that developers now enjoy the benefit of both worlds — faster development and deployment of new features without the risk of breaking things,” said Kyle Daigle, director of ecosystem engineering for GitHub.
Feature flags modelled with CaC mean that software developers can design, implement and deploy application features using the same processes used to manage code changes: full change management, the ability to roll back, tracking the ownership of changes and so on.
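A hedged sketch of the configuration-as-code idea (the file name, JSON schema and team name here are all invented, not Rollout’s format): the flag state lives in a file committed to the repository, so every change to a flag is a reviewed, owned and revertible commit rather than a click in an opaque dashboard.

```python
import json

# A hypothetical flags.json as it might be checked into the repository;
# changing a flag means changing this file via the normal pull-request flow.
FLAGS_FILE_CONTENT = """
{
  "new_checkout_flow": {"enabled": false, "owner": "payments-team"}
}
"""

def load_flags(raw: str) -> dict:
    # In a real pipeline this would read the checked-out file from Git
    return json.loads(raw)

flags = load_flags(FLAGS_FILE_CONTENT)
flag = flags["new_checkout_flow"]
enabled, owner = flag["enabled"], flag["owner"]
```

Because the file is just code to the version control system, rolling back a bad flag change is a `git revert`, and `git blame` answers the ownership question for free.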
“Until now, control of feature flags has circumvented existing software development and deployment processes or known checks and balances for software development and production changes,” said Erez Rusovsky, co-founder and CEO of Rollout.io. “The decision to integrate our configuration as code with Git & GitHub and open source our platform means that product and development teams have consistent and reliable control of every feature in production, resulting in applications that are easier to develop and maintain.”
Rollout’s feature management solution with CaC will initially leverage GitHub as an integration point with support for Atlassian, Gitlab and other hosting services coming soon.
The company says it designed its management system with feature flags and controls for gradually releasing features in development and production, with the ability to roll back features when needed.
Open source software configuration management company Puppet used its colourfully named Puppetize Live event in San Francisco this month to detail a new product designed to help teams measure their software delivery performance and benchmark progress.
Puppet Insights (currently in private beta) provides what the company calls ‘DevOps performance metrics’.
It consists of dashboards and reports that claim to provide a bird’s-eye view of the software delivery cycle.
Puppet CEO Sanjay Mirchandani also highlights the Puppet Discovery product, a tool devoted, logically enough, to continuously discovering IT resources.
“New features available in the latest version of Puppet Discovery include OpenStack support; sophisticated filtering of hosts and packages; additional tasks for package management and running ad hoc commands,” noted Mirchandani and team.
Additionally then, Puppet has a Vulnerability Remediation beta, to automate the process of detecting and remediating security vulnerabilities.
Vulnerability Remediation integrates with security providers — Qualys, Tenable and Rapid 7 — and provides a prioritised list of recommendations based on severity, and a simple workflow to install new packages across target hosts.
The company also updated Puppet’s flagship product.
“Combining a remote, agentless offering with an ongoing agent-based solution, Puppet Enterprise 2019 offers the advantage of automating anything from anywhere and at any time… and can extend changes across a team’s infrastructure at scale,” noted the company, in a press statement.
Finally from the newswires, the company detailed Puppet Bolt 1.0 — an agentless multi-platform automation tool.
The firm says that teams new to automation can quickly get started with no prerequisites or Puppet knowledge. In the latest release, users can apply existing blocks of Puppet code to remote nodes directly from a workstation. This enables users to take advantage of the more than 5,700 modules available on the Puppet Forge.
There’s open source and there’s open source.
There’s genuine free and open source software (FOSS) and then there’s the largely locked-down, proprietary, non-dynamic-library kind of open source, generally supplied as a commercially supported version of an open source kernel base that doesn’t see a whole lot of real-world code commits (and, no, there’s no acronym for that).
Then there are other ways of evidencing real openness, such as non-technical contributions (language translation/localisation and so on)… and then there are plain old contributions.
Scale-out Postgres database technology company Citus Data is donating 1 percent of its equity to non-profit PostgreSQL organisations in the US and Europe.
The United States PostgreSQL Association says it has received the stock grant and will work with the PostgreSQL Europe organisation to support the growth, education, and future innovation of the open source Postgres database in both the US and in Europe.
To coincide with Citus Data’s equity donation, the company is joining the Pledge 1% movement, alongside technology organisations such as Atlassian, Twilio and Box.
“When people think about contributing to open source and building sustainable open source communities, there are different approaches,” said Citus Data CEO Umur Cubukcu. “You can open source software you’ve created, you can maintain certain features and projects, and you can contribute to events with speakers and sponsorships — all of which our team spends a lot of time on. We are excited to create a new way to contribute to open source, by donating 1 percent of our equity to the non-profit PostgreSQL organisations.”
Founded in 2011, Citus Data set out to bring the performance and economics of scale-out systems to the field of relational databases.
To give applications the memory, compute, and disk resources of a distributed database cluster, the team at Citus Data created an extension to Postgres that transforms PostgreSQL into a distributed database — something that was previously not possible with any other relational database, whether proprietary or open source.
We used to say: if you’re a developer, then you’re a full stack software application developer.
Then we moved on to say: all developers are essentially mobile developers, because all applications today need some form of mobile deployment and (in many cases) mobile optimisation features as the rise of mobile-native apps took hold.
It’s (obviously) the same story for cloud — hence the rise and popularisation of the term cloud-native.
What’s next then? Container-native is here… so should API-native be next?
Perhaps we should step back and just say open source native (well, look at what’s been happening at Microsoft after all).
German open source Linux-based product company SUSE (say: sou-suh) certainly hopes that open source native will be the way of things to come.
The company has now announced the growth of its SUSE Academic Programme, an initiative designed to prepare student developers for all industries with open source knowledge, training materials and low-cost education buying programme options.
The programme spans more than 400 universities, schools, libraries and other academic institutions in the UK, North America and Europe and was founded in May of 2017.
Sander Huyts, SUSE vice president and academic programme lead, suggests that demand for open source skills is at an all-time high and increasing every year.
According to the Linux Foundation’s 2018 Open Source Jobs Report, hiring open source talent is a priority for 83 percent of hiring managers, an increase from 76 percent in 2017.
Phillip Chee, computer science technologist and professor at Fleming College in Peterborough, Ontario, said, “The materials provided in the SUSE Academic Programme are very impressive. I am using the programme to develop a lab for the students to install a small cloud and incorporating SUSE OpenStack Cloud into our operating system theory class.”
The SUSE programme has had an impact at the University of Oxford, the University of Cambridge, Czech Technical University, San Diego State University, New York City College of Technology and the University of British Columbia.
San Francisco headquartered software analytics company New Relic has acquired Belgian container and microservices monitoring firm CoScale.
Neither firm is essentially open source in its core approach, but the technologies being interplayed here essentially are.
CoScale’s expertise is in monitoring container and microservices environments, with a special focus on Kubernetes — the open source container orchestration system for automating deployment, scaling and management of containerized applications originally designed by Google.
New Relic notes that Kubernetes has become the de facto standard for orchestrating containerized applications, which it indeed has.
The company claims that CoScale has been a leader in providing container-native (not a term we have used much up until now) monitoring for Docker, Kubernetes and OpenShift, always with the aim of providing full-stack container visibility in production environments.
Key CoScale team members will join New Relic and relocate to its European Development Center in Barcelona, Spain.
As of now, a Google search for “what is container native” automatically extends and auto-completes to “what is container native storage”, but that may be because Red Hat directly brands a product in the space as Container Native Storage (CNS).
We could perhaps quite reasonably suggest that, soon, this may change to “what is container native development” as we now look to use this increasingly de facto form to govern the way we use cloud resources in live production software application development environments.