Open Source Insider


February 6, 2019  11:19 AM

Intel Nauta: for Deep Learning on Kubernetes

Adrian Bridgwater

Enterprises are still exploring use cases to augment their business models with Artificial Intelligence (AI)… this is a market that is still very much nascent.

Magical analyst house Gartner has conjured up figures to suggest that the business value of real-world AI deployments could reach nearly $4 trillion by 2022… and Deep Learning (DL) is key to that growth.

But, while DL in the enterprise is palpable, it is still a complex, risky and time-consuming proposition because it is tough to integrate, validate and optimise DL software solutions.

In an attempt to answer these challenges, we can look to Nauta as a new open source platform for distributed DL using Kubernetes.

What is Nauta?

Nauta (from Intel) provides a multi-user, distributed computing environment for running DL model training experiments on Intel Xeon Scalable processor-based systems.

Results can be viewed and monitored using a command line interface, web UI and/or TensorBoard.

Developers can use existing data sets, proprietary data, or downloaded data from online sources and create public or private folders to make collaboration among teams easier.

For scalability and management, Nauta uses components from the Kubernetes orchestration system, using Kubeflow and Docker for containerized machine learning at scale.

DL model templates are available (and customisable) on the platform — for model testing, Nauta also supports both batch and streaming inference.

Intel has said that it created Nauta with the workflow of developers and data scientists in mind.

“Nauta is an enterprise-grade stack for teams who need to run DL workloads to train models that will be deployed in production. With Nauta, users can define and schedule containerised deep learning experiments using Kubernetes on single or multiple worker nodes… and check the status and results of those experiments to further adjust and run additional experiments, or prepare the trained model for deployment,” said Intel, in a press statement.

The promise here is that Nauta gives users the ability to use shared best practices from seasoned machine learning developers and operators.

At every level of abstraction, developers still have the opportunity to fall back to Kubernetes and use primitives directly.

Essentially, Nauta gives newcomers to Kubernetes the ability to experiment – while maintaining guard rails.

 

 

February 5, 2019  9:41 AM

Service Mesh: what is it, where are we… and where are we going?

Adrian Bridgwater

This is a contributed post for the Computer Weekly Developer Network written by Ranga Rajagopalan in his capacity as CTO and co-founder of Avi Networks.

Avi Networks is known for its Intelligent Web Application Firewall (iWAF) technology.

The firm offers a software-only product that operates as a centrally managed fabric across datacentres, private clouds and public clouds.

Rajagopalan writes…

Cloud-native applications – shorthand these days for containers and microservices – have a lot going for them, but most [or at least many] of the benefits come from their ability to accelerate dev and test processes, reducing the time it takes to bring applications online, fix bugs, add new features and so on.

Move those applications out of the dev & test sandbox and into the wider production world, however, and cloud-native applications [often] add new issues in terms of scalability, security and management… so potentially wiping out those benefits altogether.

The solution is a service mesh [A service mesh is a configurable infrastructure layer for a microservices application that can work alongside an orchestration tool such as Kubernetes], but it’s not a simple product you can just bolt on to make cloud-native applications production ready.

It’s more a framework, which can be used to connect cloud-native components to the services they need… and one which can be delivered in a variety of ways.

A matter of scale

Scalability is absolutely fundamental to the problems posed by cloud-native technologies, which work by breaking applications down into much smaller parts (microservices) each wrapped in its own lightweight and very portable virtual environment (container).

So, whereas a conventional web application might span a handful of virtual machines, a cloud-native app can comprise a collection of hundreds or even thousands of microservices, each in its own container running anywhere across a hybrid cloud infrastructure.

On the plus side, containers can be turned on and off, patched, updated and moved around very rapidly and without impacting on the availability of the application as a whole.

Each, however, also needs to find and communicate both with its companions and shared load balancing, management, security and other application services. And that’s far from straightforward given the sheer number of containers involved and their potentially high turnover rates.

This need to communicate adds too much weight to cloud-native apps and would be a nightmare to manage at scale through traditional means. Hence the development of service mesh, a dedicated infrastructure layer for handling service-to-service requests, effectively joining up the dots for cloud-native apps by providing a centrally managed service ecosystem ready for containers to plug into and do their work.
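To make the 'joining up the dots' part concrete, below is a toy sketch of the sidecar idea at the heart of most service meshes: every request to a microservice passes through a local proxy, which is where the mesh layer would add mutual TLS, retries, telemetry and routing policy. It is an illustration only (real meshes such as Istio deploy Envoy proxies, not anything this simple) and the addresses and port are assumptions.

```go
// A toy 'sidecar' proxy: all traffic addressed to a microservice flows
// through this local process, the natural place for a mesh to hang
// security, telemetry and routing policy. Illustrative only.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The application container this sidecar fronts (assumed address).
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// Cross-cutting concerns live here, not in the application code;
	// this sketch only logs, where a real mesh would add mTLS and metrics.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Printf("%s %s from %s", r.Method, r.URL.Path, r.RemoteAddr)
		proxy.ServeHTTP(w, r)
	})

	// Peers talk to the sidecar's port, never to the app directly
	// (15001 echoes Envoy's convention, but any port would do).
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```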

Project Istio, open source

Despite the relative immaturity of this market, there’s a lot going on to put this all into practice, both by vendors in the services space (particularly application delivery, traffic and security management solutions) and the big-name cloud providers. This has led to the development of a number of proprietary service mesh ‘products’.

But, of [perhaps] greater interest is Istio, an open source initiative, originally led by Google, IBM and Lyft, but now with an ever-growing list of other well-known names contributing to and supporting its development, including Cisco, Pivotal, Red Hat and VMware.

Istio is now almost synonymous with service mesh, just as Kubernetes is with container orchestration. Not surprisingly, Istio’s initial implementations are very much bound to Kubernetes and cloud native application architecture. The promise of service mesh is alive today within a Kubernetes cluster, but the value will grow exponentially when a service mesh can be applied to all applications across clusters and clouds.  

Where next for service mesh?

As mentioned above, more and more companies are joining the service mesh conversation under the Istio banner. This is the type of support that helped Kubernetes evolve from a project to the de facto container orchestration solution in just a few short years.

The rich combination of established technology giants and [perhaps more] innovative startups will continue to foster and develop the Istio service mesh to include more features and support more applications. By extending Istio with other proven technologies, you can easily apply cross-cluster and even cross-cloud communication — opening the door to apply the value of service mesh to existing applications in the datacentre.

The promise of granular application services delivered close to the application is an idea that is readily applicable to traditional applications running on virtual machines or bare metal infrastructure. With the disaggregation of application components made possible by the containerised microservices architecture, this mode of service delivery is a necessity and will eventually become ubiquitous across all application types, beyond containers alone.

Companies looking to adopt cloud-native technologies will, almost certainly, need a service mesh of some description and the smart money is on Istio being part of that solution. Whatever format it takes, however, the chosen solution needs to deliver value that fits with new and existing applications.

Avi CTO Rajagopalan: big fan of clouds, all kinds.


January 30, 2019  7:05 PM

Nginx: managing monolithic app traffic is an API game

Adrian Bridgwater

Nginx is the company that likes to be called NGINX, except it’s not… because it’s not an acronym, it’s supposed to say “engine-X”, which is a cool snazzy name, right?

Actually, Nginx would only ever rank as Nginx, because almost all reputable press outlets only allow acronyms up to a maximum of three letters.

There’s always an exception that proves the rule and SuSE might be the fly in the ointment. Or could it be TIBCo (who would prefer we say TIBCO, for The Information Bus Company) that makes this an imperfect rule?

Either way, it’s tough to read NGINX news without automatically self-editing yourself back to Nginx, which might be a shame… because the firm’s application delivery platform has just been augmented by the availability of its API Management Module for Nginx Controller.

Nginx Controller manages application and web server performance. The API Management Module is special (Nginx would say ‘unique’) in that it is capable of separating runtime (day-to-day) traffic from management traffic, which is (very arguably) rather neat given how heavily web-connected firms now rely on API management and the pressure to improve API response times.

“The Nginx API management solution enables infrastructure and operations (I&O) teams to define and publish APIs, manage traffic to and secure backend applications, monitor performance issues and analyse API usage,” notes the company, in a press statement.

The Nginx API Management Module is built on an architecture that separates control-plane functionality (in Nginx Controller) from the data plane, with Nginx Plus acting as the API gateway.

The firm reminds us that Nginx is also a component in many traditional API management solutions, providing the underlying gateway for Axway, IBM DataPower, Kong, MuleSoft, Red Hat 3Scale, and others.

This technology provides what is said to be a ‘simple interface’ to define APIs, manage upstream groups and backend servers, route resources to upstreams, and publish the resulting API definitions to Nginx Plus gateways.
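To put ‘routing resources to upstreams’ into something runnable, here is a minimal, hedged sketch in Go of the gateway pattern being described: path prefixes mapped to named upstream groups, with round-robin across each group’s backend servers. The routes and addresses are invented for illustration; this is not Nginx’s configuration model.

```go
// gateway.go: a miniature API gateway in which API resources (path
// prefixes) are routed to upstream groups, each a round-robin set of
// backend servers.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"sync/atomic"
)

// upstream is a named group of backend servers.
type upstream struct {
	backends []*url.URL
	next     uint64
}

// pick selects the next backend in round-robin order.
func (u *upstream) pick() *url.URL {
	n := atomic.AddUint64(&u.next, 1)
	return u.backends[n%uint64(len(u.backends))]
}

func mustParse(raw string) *url.URL {
	p, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return p
}

func main() {
	// Illustrative routes: resource prefix -> upstream group.
	routes := map[string]*upstream{
		"/api/orders/": {backends: []*url.URL{
			mustParse("http://10.0.0.1:8080"),
			mustParse("http://10.0.0.2:8080"),
		}},
		"/api/catalog/": {backends: []*url.URL{
			mustParse("http://10.0.1.1:8080"),
		}},
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		for prefix, up := range routes {
			if strings.HasPrefix(r.URL.Path, prefix) {
				httputil.NewSingleHostReverseProxy(up.pick()).ServeHTTP(w, r)
				return
			}
		}
		http.NotFound(w, r)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```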

Both Nginx Controller and Nginx Plus are flexible and can be deployed in any environment due to their small footprint – bare metal, VMs, containers, and public, private and hybrid clouds.

 


January 29, 2019  2:59 PM

Alfresco dines out on framework extensions

Adrian Bridgwater

Open source process automation, content management and information governance software company Alfresco has hit the version 3.0 iteration of its ADF (Application Development Framework).

Company CTO John Newton claims that his firm has added some major extensibility features to Alfresco ADF to simplify building web apps.

The ADF software itself uses the firm’s own Alfresco Digital Business Platform — it provides a set of reusable Angular-based UI (user interface) components and services, command-line tooling and client APIs (Application Programming Interfaces) that surface Alfresco process, content and governance services.

“We believe a digital business needs more than a content repository, it needs a Digital Business Platform that developers can extend and customise. Enterprises today need a single source of truth for information across their end-user apps and back-end systems. That’s the power of process-led, cloud-native content management – a single platform to manage, secure, and collaborate on content,” said Alfresco CTO Newton.

Developers will now be able to extend the main ADF components with their own code. This new extensibility mechanism helps maintain code customisations while remaining up-to-date with the latest ADF versions.

Future-proof

By including this new extension framework, developers can isolate their custom work and upgrade (“future proof”) to later versions of ADF without losing their original code.

Alfresco has rewritten the JavaScript API in TypeScript so that packaging of application source code is optimised for better performance. TypeScript, designed for large-scale app development, is also the primary language of Angular.

This is said to ensure performance improvements of ADF apps when deployed in production.

Alfresco’s ADF 3.0 also now supports the newest version of the cloud-native, open source BPM project, Activiti 7.0.

With the support for Angular 7.0, developers will gain access to its performance improvements and several new design features, including virtual scrolling and drag and drop.


January 29, 2019  2:38 PM

Dynatrace goes for Go (Golang)

Adrian Bridgwater

Software applications need management, monitoring, testing and continual levels of deep tissue massage to ensure they run as intended and deliver to the user requirements for which they were initially built.

This is the space in which Application Performance Monitoring (APM) specialist Dynatrace has been working to position itself since the company was first formed in Austria in 2005.

The company’s application monitoring (and testing) tools are available as cloud-based SaaS services or as on-premises software… and this month we see the firm extend its work to provide automatic code level insights into applications that rely on the popular Go programming language.

Go Golang

Sometimes referred to as Golang, Go was initially developed at Google as an open source, general-purpose programming language.

It is statically typed (meaning variable types are fixed and checked at compile time) and explicit (code must be manually structured to execute specific tasks).
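A few lines of Go make the point about static, explicit typing:

```go
package main

import "fmt"

func main() {
	// Types are fixed at compile time: count is an int, label a string.
	var count int = 3
	label := "retries" // the type (string) is inferred, but still static

	// count = "three" would not compile: cannot assign a string to an int.
	fmt.Printf("%s: %d\n", label, count)
}
```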

Looking at the latest news from Dynatrace, the company insists that [when alterations need to be made to an application] there is no need to inject code into thousands of microservices or change the code of a Go application… instead, Dynatrace automatically discovers and monitors Go components.

With this feature, Dynatrace extends its scope of AI capabilities for cloud native platforms including Cloud Foundry, Kubernetes and Red Hat OpenShift.

Why go for Go?

So why the focus on Go?

Because (based on 2018 GitHub usage) Dynatrace says it has recognised how fast Go is growing in popularity – it is estimated to be used by nearly two million developers today.

“Go is lightweight, suited to microservices architectures and fast becoming the programming language of the cloud. Yet most monitoring tools are blind to Go which has left organisations having to do manual development and configuration to get any sort of insight,” said Steve Tack, SVP of product management at Dynatrace. “The Dynatrace platform is built for the cloud and now automatically picks up and shines a light into Go components, ensuring that Go isn’t creating an enterprise cloud blind spot.”

Looking inside a cloud

This new capability is said to be particularly important to organisations that are using high-profile cloud platforms built using Go.

They now have visibility into the performance of the source code of these platforms which means they can utilise the AI from Dynatrace to automatically surface the root cause of performance problems… and this, in turn, enables DevOps teams to focus on optimisation rather than troubleshooting.

These same benefits also apply to those that are developing using Go, with the additional benefit of being able to get more accurate performance insights.

 


January 21, 2019  12:08 PM

Onymos Fabric 2: promises ‘high-functioning’ mobile apps

Adrian Bridgwater

Onymos has come forward with what it calls a mobile application development platform for developers to create feature-rich ‘no compromise’ mobile apps based on open standards.

The latest version includes support for Augmented Reality, Deep Linking and enhanced bio-authentication.

NOTE: In the context of the World Wide Web, deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website, rather than the website’s home page.

Onymos Fabric apps can incorporate hardware and software technologies of both the Android and iOS device platforms.

Shuffle off shortcomings

The company claims that Onymos addresses the shortcomings of the two popular approaches used by thousands of mobile app developers:

  1. Assembling and stitching together a set of tools and services, which is time-consuming and complicated.
  2. Using no-code/low-code solutions that lock them into a proprietary, cookie-cutter approach and do not offer precise control of their final code.

Rather than limit developers to pre-defined functionality or building blocks that are isolated to a single layer of the development stack, Onymos Fabric is supposed to provide fully customisable core components that give developers complete and integrated access to the full technology stack of the user interface, business logic and backend-as-a-service.

“With Onymos Fabric, enterprises no longer have to worry about mobile OS updates, new devices and feature enhancement… and building and maintaining core mobile functions. Onymos Fabric handles all these potentially time-consuming development tasks for you,” said the company, in a press statement.

Onymos Fabric has also been used to create a new app that helps eye doctors to diagnose eye problems associated with diabetes. There is a causal link between high blood pressure and retinal damage that leads to vision problems. Dr. Einar Stefansson, a physician in the field of diabetic eye disease and diabetic screening, saw the need to provide healthcare professionals with an accurate and convenient tool to evaluate individual risk for sight-threatening retinopathy based on a patient’s clinical state.

Backend widgets

Onymos Fabric provides a palette of customisable core components that includes user interface widgets, logic, services and backend data stores.

The software is architected on open standards such as HTML5, CSS3 and JavaScript to provide access to thousands of open source components… and it offers continuous updates to keep the core components updated with the latest OS features, new device offerings and other technology updates.

NOTE: This story was written on a Gemini PDA by Planet on a Virgin aircraft.

 


January 17, 2019  9:33 AM

SUSE teams with Intel & SAP on persistent memory in the datacentre

Adrian Bridgwater

SUSE has announced support for Intel Optane DC persistent memory with SAP HANA.

Persistent memory is typically defined as any method or apparatus for storing data structures such that they can continue to be accessed using memory instructions or memory APIs even after the end of the process that created or last modified them – and that often means ‘when the power is off’.
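Applications typically reach persistent memory through memory-mapped files (Intel’s PMDK libraries, for instance, build on DAX-mapped files). Purely as a rough illustration of that ‘memory API that outlives the process’ idea, and emphatically not SAP HANA’s implementation, the Go sketch below persists a counter through ordinary memory operations on a file-backed mapping (Linux-only; the file path is an assumption).

```go
// A sketch of persistent-memory-style access: a value is read and
// written with plain memory operations on a mapped region, and it
// survives the process because the region is file-backed. Real Optane
// DC persistent memory is exposed similarly via DAX-mapped files,
// without the page cache in between.
package main

import (
	"fmt"
	"log"
	"os"
	"syscall"
)

func main() {
	// On a real system this might be a file on a DAX-mounted pmem
	// device, e.g. /mnt/pmem0/counter (illustrative path).
	f, err := os.OpenFile("counter.dat", os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := f.Truncate(8); err != nil {
		log.Fatal(err)
	}

	// Map the file so it can be treated as memory.
	mem, err := syscall.Mmap(int(f.Fd()), 0, 8,
		syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Munmap(mem)

	// A memory instruction, yet the value outlives the process:
	// run the program twice and the counter keeps climbing.
	mem[0]++
	fmt.Println("counter is now", mem[0])
}
```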

Running on SUSE Linux Enterprise Server for SAP Applications, SAP HANA users can now use Intel Optane DC persistent memory in the data centre.

Users can optimise their workloads by moving and maintaining larger amounts of data closer to the processor and minimising the higher latency of fetching data from system storage during maintenance.

Support for Intel Optane DC persistent memory, currently available in beta from multiple cloud service providers and hardware vendors, is another way SUSE is helping customers transform their IT infrastructures to reduce costs, deliver higher performance and compete more efficiently.

“Persistent memory technology will spark new applications for data access and storage,” said Thomas Di Giacomo, SUSE CTO. “By offering a fully supported solution built on Intel Optane DC persistent memory, businesses can take greater advantage of the performance of SAP HANA. SUSE continues to partner with companies like SAP and Intel to serve customers worldwide who are looking to fuel growth by transforming their IT infrastructure. It is their needs that drive the direction of our innovation.”

Intel VP & GM for ‘non-volatile’ memory and storage Alper Ilkbahar claims that Intel Optane DC persistent memory represents a new class of memory and storage technology architected specifically for datacentre usage.

This new memory class is designed to enable cost-effective, large-capacity in-memory database solutions, help provide greater system uptime and faster recovery after power cycles, and deliver higher-performance cloud-scale applications.

Martin Heisig, SAP HANA technology innovation network, said, “The ability to deliver persistent memory for SAP HANA is a significant milestone in our ongoing relationship with SUSE and Intel. The SAP Digital Core is built on the concept of simplifying the infrastructure for increased productivity and real-time insights.”

Support for Intel Optane DC persistent memory with SAP HANA workloads running on SUSE Linux Enterprise Server for SAP Applications is included in SUSE Linux Enterprise 12 Service Pack 4, which is now available worldwide.

 


January 16, 2019  10:02 AM

Puppet on DevOps: practitioners (not managers) are the new champions

Adrian Bridgwater

Software delivery (and operations… and change) automation company Puppet has staged its fifth annual DevOps Salary Report.

With a foundation in open source, Puppet is championing a world of what it calls ‘unconstrained software change’… presumably an even more intense version of Continuous Integration (CI) and Continuous Delivery (CD).

Puppet claims to have gathered over 3000 responses for its State of DevOps survey, the summary findings of which suggest that DevOps salaries at the practitioner level are closing in on the manager-level.

Could businesses actually be putting hands-on DevOps experience and skill sets at the top of their priorities?

In the UK, 26 percent of IT practitioners’ salaries now fall in the $75,000 (£58,000) to $100,000 (£77,000) bracket, up from 17 percent last year.

The French, a little slower

Even though the highest salaries have been recorded in the United States, the UK is seen to be paying more across Europe, with France lagging behind on the lower ranges and Germany paying more for higher positions.

“Companies are increasingly growing their DevOps practices and the way they deliver IT services and software across the globe, which means businesses are in desperate need of the right talent who can adapt to this shift, raise the bar with software delivery and play an integral role in innovation,” said Nick Smyth, VP Engineering at Puppet.

The report also suggests that large organisations with more complex technology infrastructures have more high-paying positions than smaller ones, with the need for more experience and a more diverse skill set at manager level as well.

Enterprises are further along in their automation journey and therefore require fewer lower-skilled personnel to sustain IT activities.

Puppet director of product marketing Alanna Brown says that to get ahead of the competition and stay relevant to clients, large organisations need sophisticated DevOps and automation technologies, so (she says) it comes as ‘little surprise’ to see them paying more for highly-skilled practitioners and managers in order to sustain their complex technology infrastructure.

Other findings include the observation that retail appears to be a lucrative sector for IT practitioners, with an increased focus on digital commerce and omnichannel engagement.

Also, in Europe at least, there is more parity at top salary levels between men and women.


January 15, 2019  6:45 AM

Kubernetes flaw shows API security is no ‘set & forget’ deal

Adrian Bridgwater

When a report surfaced last month detailing a ‘severe vulnerability’ in Kubernetes, the popular, open-source software for managing Linux applications deployed within containers, many of us will have wondered what the deeper implications of this alleged flaw could mean.

Although the flaw was quickly patched, it allowed any user to escalate their privileges to access administrative controls through the Kubernetes API server.

As the above-linked report explains, with this, they could create requests authenticated by Kubernetes’ own Transport Layer Security (TLS) credentials and mess with any container running in the same pod.

Senior principal consultant at Synopsys Andrew van der Stock spoke to the Computer Weekly Open Source Insider blog to explain that although APIs greatly reduce the friction of doing business, securing APIs should be the focus of every organisation that uses them.

“APIs can be difficult to test by traditional security testing tools and approaches — and to a certain extent, the security industry has not kept up, primarily because most are not developers themselves,” claimed van der Stock.

Shift left

He recommends that the security industry shift left, adopt the same tooling as developers… and write unit and integration tests that fully exercise APIs, particularly those that have the potential to alter the state of an application or extract bulk personal information.
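What might such a test look like? Below is a minimal sketch using Go’s standard httptest package (the /admin/users endpoint and its handler are entirely illustrative), exercising exactly the property the Kubernetes flaw violated: a state-altering API must reject unauthenticated callers.

```go
// api_test.go: an integration test that exercises an API the way a
// client (or attacker) would, including the unauthenticated path.
package api

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// newTestServer stands in for the application under test (illustrative).
func newTestServer() *httptest.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/admin/users", func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	return httptest.NewServer(mux)
}

// A state-altering endpoint must turn away anonymous callers.
func TestAdminEndpointRejectsAnonymous(t *testing.T) {
	srv := newTestServer()
	defer srv.Close()

	resp, err := http.Post(srv.URL+"/admin/users", "application/json", nil)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusUnauthorized {
		t.Fatalf("want 401 for anonymous caller, got %d", resp.StatusCode)
	}
}
```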

“Organisations publishing APIs for public consumption should carefully select design and technical controls to protect against known threats, including anti-automation, and far better monitoring to detect breaches. APIs are designed to be called after all, and when they function without errors, monitoring cannot just be of failed attempts, but also include threshold breaches around extensive and sustained access to sensitive records or changes to configuration,” said van der Stock.

No ‘set-&-forget’

The fact is, breaches such as these can be deterred and detected by well-configured API gateways, but these are not a ‘set-&-forget’ security defence; they have to be carefully and continuously monitored.

The Synopsys consultant recommends the upcoming OWASP Application Security Verification Standard 4.0, OWASP Serverless Top 10, API cheat sheets and other API specific projects.

API monitoring is the entire point of Open Web Application Security Project (OWASP) Top 10 A10:2017 – Insufficient Logging & Monitoring.

Synopsys’ van der Stock: API gateways must be carefully and continuously monitored.


January 10, 2019  9:24 AM

Red Hat feathers nested workflows

Adrian Bridgwater

Red Hat inside IBM continues to look a lot like Red Hat… but just inside IBM.

The [commercial] open source champions at Red Hat have clearly pressed on with ALL the firm’s various roadmap rollouts, the most recent of which is the version 3.4 release of Red Hat Ansible Tower.

But what is it?

This is a software framework for automating [data & application processes] across IT operations including infrastructure, networks, cloud and security [layers].

New in 3.4 are workflow enhancements including ‘nested workflows’ and workflow convergence, designed to simplify challenges inherent in managing complex hybrid cloud infrastructure.

What is a nested workflow?

A workflow is a collection of steps that are routed. Every workflow defines a business process. Each step has certain ‘performers’ and actions associated with it — so, then, a nested workflow occurs when you have a workflow that has a small subset of steps and is then connected to another workflow.

Guess where the above definition comes from (it was the top Google hit for nested workflows) then? IBM, obviously.
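As a toy model of that definition (an illustration of the concept only, nothing to do with how Ansible Tower is actually built), a nested workflow falls out naturally once a workflow can itself be a step:

```go
package main

import "fmt"

// Step is anything a workflow can run.
type Step interface {
	Run()
}

// Action is a single step with a named performer.
type Action struct {
	Name, Performer string
}

func (a Action) Run() { fmt.Printf("%s runs %q\n", a.Performer, a.Name) }

// Workflow is a routed collection of steps; because Workflow satisfies
// Step itself, one workflow can nest inside another.
type Workflow struct {
	Name  string
	Steps []Step
}

func (w Workflow) Run() {
	fmt.Println("workflow:", w.Name)
	for _, s := range w.Steps {
		s.Run()
	}
}

func main() {
	cloud := Workflow{Name: "provision-cloud", Steps: []Step{
		Action{"create network", "cloud team"},
		Action{"boot instances", "cloud team"},
	}}
	// The master workflow nests the cloud workflow as one of its steps.
	master := Workflow{Name: "deploy-everywhere", Steps: []Step{
		Action{"patch on-prem hosts", "ops team"},
		cloud,
	}}
	master.Run()
}
```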

Red Hat says that a common reality for enterprises can be that separate IT teams may manage on-premises IT vs. cloud services, each with their own sets of Ansible Playbooks.

To help maximise the benefits of automation across a distributed infrastructure, Red Hat suggests that organisations can build an automation Center of Excellence (CoE) to help provide ‘consistent automation’ across the enterprise, that is — sharing common solutions and accepted strategies as automation is introduced into new areas of IT internally.

“We have seen enterprises look to build automation centres of excellence to accelerate automation across a broader set of domains, including compute, network and storage. With the new features available in Red Hat Ansible Tower 3.4 organisations are able to increase the scale and scope of their automation activities together with increased control and visibility,” said Joe Fitzgerald, vice president, management, Red Hat.

With Red Hat Ansible Tower 3.4, users can now define one master workflow that ties different areas of IT together, so it is designed to cover a hybrid infrastructure without being stopped at specific technology silos.

Advanced workflows

With new workflow enhancements, users can reuse automation workflows based on different environments and scenarios to better manage their hybrid cloud infrastructure. Workflow enhancements available in Red Hat Ansible Tower 3.4 include:

  • Nested workflows enable users to create reusable, modular components to automate more complex operations using Red Hat Ansible Tower with the same ease as a simple playbook.
  • Workflow convergence enables users to have workflow jobs depend on the completion of multiple other workflow jobs before continuing, allowing for a coordination point among different steps.
  • Workflow always job templates enable execution regardless of the success or failure of a job. If a dependent service needs to be running regardless of the exit status of a workflow, a workflow always job template is designed to help keep business running.
  • Workflow level inventory helps enable users to apply a workflow to inventory that they have access to, allowing for the reuse of deployment workflows across datacentres, environments and teams.

Job slicing

Also worth noting here is ‘job slicing’: users can take a single large job designed for thousands of machines and split it into a number of smaller jobs for distribution across a Tower cluster environment. This allows jobs to run more reliably and complete faster, so users can better scale their automation.
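In miniature, and purely as an illustration of the idea rather than Tower’s implementation, job slicing amounts to dealing one large inventory out into smaller, independently runnable jobs:

```go
package main

import "fmt"

// sliceInventory splits one large inventory into n smaller slices
// that could run as independent jobs across a cluster.
func sliceInventory(hosts []string, n int) [][]string {
	slices := make([][]string, n)
	for i, h := range hosts {
		slices[i%n] = append(slices[i%n], h) // deal hosts out round-robin
	}
	return slices
}

func main() {
	hosts := []string{"web01", "web02", "web03", "db01", "db02"}
	for i, job := range sliceInventory(hosts, 2) {
		fmt.Printf("job slice %d -> %v\n", i+1, job)
	}
}
```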

Additionally, Red Hat Ansible Tower can now run on Red Hat Enterprise Linux in FIPS-compliant mode. Federal Information Processing Standard (FIPS) 140-2, certified by the National Institute of Standards and Technology (NIST), is a computer security standard that specifies the requirements for cryptographic modules — including both hardware and software components — used within a security system to protect sensitive but unclassified information (definition via Wikipedia).


