CW Developer Network


April 9, 2018  6:20 AM

Google: don’t ‘just’ turn cloud on

Adrian Bridgwater
Data Center, Google

Google has attempted to shine a light on Application Performance Management (APM) technologies built with what the company calls ‘a developer-first mindset’ to monitor and tune the performance of applications.

The end-game suggestion here is that we don’t ‘just’ turn cloud on, we also need to tune and monitor what happens inside live applications.

The foundation of Google’s APM tooling lies in two products: Stackdriver Trace and Debugger.

Stackdriver Trace is a distributed tracing system that collects latency data from applications and displays it in the Google Cloud Platform Console.

Stackdriver Debugger is a feature of the Google Cloud Platform that lets developers inspect the state of a running application in real time without stopping it or slowing it down.

There’s also Stackdriver Profiler as a new addition to the Google APM toolkit. This tool allows developers to profile and explore how code actually executes in production, to optimise performance and reduce the cost of computation.
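For context, the Profiler works by attaching a lightweight agent library to the running service. Below is a minimal sketch of starting the agent in a Python service; the service name and version are hypothetical placeholders and the exact options may differ between agent versions.

  # Minimal sketch: starting the Stackdriver (Cloud) Profiler agent in a
  # Python service. Service name and version are hypothetical placeholders.
  import googlecloudprofiler

  try:
      googlecloudprofiler.start(
          service='checkout-service',   # hypothetical service name
          service_version='1.0.0',      # groups profiles by release
          verbose=1,                    # 0 = errors only; higher = chattier logs
      )
  except (ValueError, NotImplementedError) as exc:
      # The agent refuses to start outside supported environments;
      # the application itself should carry on regardless.
      print(f'Profiler not started: {exc}')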

Google product manager Morgan McLean notes that the company is also announcing integrations between Stackdriver Debugger and GitHub Enterprise and GitLab.

“All of these tools work with code and applications that run on any cloud or even on-premises infrastructure, so no matter where you run your application, you now have a consistent, accessible APM toolkit to monitor and manage the performance of your applications,” said McLean.

Unexpectedly resource-intensive

When is an app not an app? When it’s unexpectedly resource-intensive, says McLean.

He points to the use of production profiling and says that this allows developers to gauge the impact of any function or line of code on an application’s overall performance. If we don’t analyse code execution in production, unexpectedly resource-intensive functions can increase the latency and cost of web services.

Stackdriver Profiler collects data via sampling-based instrumentation that runs across all of an application’s instances. It then displays this data on a flame chart to present the selected metric (CPU time, wall time**, RAM used, contention, etc.) for each function on the horizontal axis, with the function call hierarchy on the vertical axis.

NOTE**: Wall time refers to real world elapsed time as determined by a chronometer such as a wristwatch or wall clock. Wall time differs from time as measured by counting microprocessor clock pulses or cycles.
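To make the distinction concrete, here is a small standard-library-only snippet showing how wall time and CPU time diverge when a function spends most of its life waiting rather than computing.

  # Illustration of the wall time vs CPU time distinction described above.
  import time

  def mostly_waiting():
      time.sleep(1)            # waiting: consumes wall time, almost no CPU time
      sum(range(1_000_000))    # computing: consumes both

  wall_start, cpu_start = time.perf_counter(), time.process_time()
  mostly_waiting()
  wall_elapsed = time.perf_counter() - wall_start
  cpu_elapsed = time.process_time() - cpu_start

  print(f'wall time: {wall_elapsed:.2f}s, CPU time: {cpu_elapsed:.2f}s')
  # Typically prints something like: wall time: 1.02s, CPU time: 0.02s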

Don’t ‘just’ turn cloud on

Not (arguably) always known as the most altruistic, philanthropic and benevolent source of corporate muscle in the world, Google here appears keen to ‘give back’ to the developer community with a set of tooling designed to really look inside large and complex batch processes and see where different data sets and client-specific configurations do indeed cause cloud applications to run in a less-than-optimal state.

You don’t ‘just’ turn cloud on and expect it to work perfectly – well, somebody had to say it.

Image: Google

April 4, 2018  9:54 AM

Cybric CTO: What is infrastructure as code & how do we build it?

Adrian Bridgwater

This is a short but punchy guest post written for the Computer Weekly Developer Network by Mike Kail in his capacity as CTO of Cybric.

Described as a continuous application security platform, Cybric claims to be able to continuously integrate security provisioning, management and controls into the Continuous Integration (CI) / Continuous Deployment (CD) loop and lifecycle.

Given that we are moving to a world where software-defined everything becomes an inherent part of the DNA in all codestreams, we are now at the point of describing infrastructure as code — but beyond our notions of what Infrastructure-as-a-Service (IaaS) means in cloud computing spheres, what does infrastructure as code really mean?

Kail writes as follows…

By now, I’m sure most, if not all, have at least heard the term “Infrastructure as Code” (IaC).

Below I succinctly define it and then provide some guidance on how to start evolving infrastructure and application deployments to leverage its benefits.

IaC is also a key practice in a DevOps culture, so if that evolution is part of your overall plan, this will be of use to you.

Infrastructure as Code replaces manual tasks and processes for deploying IT infrastructure: the infrastructure is instead deployed and managed through code, an approach also known as ‘programmable infrastructure’.

3 components of IaC

The three components of IaC are:

  1. Images – create a ‘golden master’ base image using a tool such as Packer.
  2. Blueprint – define the infrastructure using a DSL (Domain Specific Language).
  3. Automation – leverage APIs to query/update infrastructure.

These components can be viewed as the initial logical steps in transitioning to IaC, but none of them should ever be considered “done”.

The image definition files will need to be updated as updates to the components of an image are released, the infrastructure blueprint will evolve as the solution scales and features/services are added, and there will certainly always be areas to automate further.
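On that last point about automation, the third component boils down to treating the provider’s API as the interface to your estate. A minimal sketch follows, assuming AWS and the boto3 library purely for illustration; the tag used to filter instances is a hypothetical convention.

  # Minimal sketch of 'Automation - leverage APIs to query infrastructure'.
  # Assumes AWS credentials are configured and boto3 is installed; the
  # Environment tag used for filtering is a hypothetical convention.
  import boto3

  ec2 = boto3.client('ec2', region_name='eu-west-1')

  response = ec2.describe_instances(
      Filters=[{'Name': 'tag:Environment', 'Values': ['production']}]
  )

  for reservation in response['Reservations']:
      for instance in reservation['Instances']:
          # The AMI in use should match the current 'golden master' image.
          print(instance['InstanceId'], instance['State']['Name'],
                instance['ImageId'])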

One thing to keep an eye on is making sure that no one bypasses the IaC pipeline and makes changes out-of-band, as that will result in what is known as ‘configuration drift’, where portions of the infrastructure don’t match the rest – which often results in strange errors that are difficult to debug.
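Drift detection itself can start very simply: compare the state the blueprint declares with the state the infrastructure API reports, and flag any mismatch. A toy sketch, with invented keys and values, purely to illustrate the idea:

  # Toy illustration of configuration drift: compare declared (blueprint)
  # state with the actual state reported by the infrastructure API.
  declared = {'instance_type': 't3.medium', 'image': 'golden-master-v42',
              'min_instances': 3}
  actual   = {'instance_type': 't3.large',  'image': 'golden-master-v42',
              'min_instances': 3}   # someone resized an instance out-of-band

  drift = {key: (declared[key], actual.get(key))
           for key in declared if declared[key] != actual.get(key)}

  if drift:
      print('Configuration drift detected:')
      for key, (want, have) in drift.items():
          print(f'  {key}: blueprint says {want!r}, infrastructure has {have!r}')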

In closing, I’d also suggest that one of the core tenets of the DevOps culture, measurement, be used so that teams can track improvements in deployment efficiency, infrastructure availability and other KPIs.

Prior to Cybric, Mike Kail was Yahoo’s CIO and SVP of Infrastructure, where he led the IT and datacentre functions for the company. He has more than 25 years of IT operations experience with a focus on highly scalable architectures.


April 2, 2018  9:19 AM

Cloud complexity: why it’s good to be a DMaaS

Adrian Bridgwater

Cloud computing is great news, apart from one thing.

The option to now build complex heterogeneous cloud environments gives us a massively expanded choice of deployment options to bring service-centric datacentre-driven virtualised data processing, analytics and storage options to bear upon contemporary IT workload burdens — which is great news, apart from one thing.

The clue is in the name and it’s the c-word at the start: complex heterogeneous cloud environments are, well, pretty complex.

The issue is that when cloud data exists in various places, it creates a wider worry factor.

To explain… when elements of cloud data have a footprint in SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service)… then all that data needs to be ‘managed through its lifecycle’ – by which we mean that data needs to be monitored so that we can assess it against enterprise Service Level Agreements (SLAs) and look to achieve consistency of service regardless of where the data is ultimately stored.

Be a DMaaS

This is the pain point that Data Management-as-a-Service (DMaaS) company Druva has aimed to address with its Druva Cloud Platform – the technology unifies data protection, management and intelligence capabilities for data.

Druva says that challenges in cloud arise due to what it calls the ‘patchwork of disparate systems’ and the need to administer them.

According to Druva, “Different clouds have different data management needs — IaaS, PaaS and SaaS have different protection and data management requirements that range from simple resiliency needs like backup and disaster recovery to more complex governance such as compliance, search and legal data handling.”

Dave Packer, VP of product and alliances marketing at Druva, insists that cloud means IT teams must deal with growing data lifecycle complexity, including managing data over time for long-term retention and archiving.

“If not done properly, lack of management can equate to high costs due to collecting too much dark data,” said Packer. “If a company’s data management is a mess while it exists in-house, then exporting it to the cloud can introduce even more data management challenges, and the increased cost to fix these can offset any anticipated savings.”

Druva Cloud Platform aims to provide a single point of data management and protection for workloads in the cloud.

The product comes with an integrated console/dashboard to be used for data management and protection, including analytics, governance and visibility into data across heterogeneous environments.


March 28, 2018  12:51 PM

As lonely as a (complex) cloud (code script)

Adrian Bridgwater

We appear to be learning, on a daily basis, that you don’t just ‘throw applications over the wall’ into deployment, into the hands of operations.

Yes, we’ve had the DevOps ‘revolution’ (spoiler alert: application framework structures have been struggling to provide lifecycle controls of this kind for at least a couple of decades) and all the malarkey that has followed it… so what happened to the science and specialism of application delivery next?

Load balancers and Application Delivery Controllers (ADC) have done an admirable job of serving the previously more on-premises (admittedly more monolithic) age of pre-cloud computing application delivery.

Running blind

Avi Networks insists that legacy ADC appliances are “running blind” because this technology fails to track mobile users logging in from a variety of devices across different networks using applications that themselves are essentially distributed across different cloud resource datacenters.

This reality (if we accept the Avi Networks line of reasoning) means that application delivery is poor at scaling to user demand or application scale.

Avi proposes a distributed, microservices-based architecture and a centralised policy-based controller to balance application traffic from multiple locations – but with the customer still able to view and manage the total workload as one single entity.

The firm’s application delivery platform has this month launched new features to automate the deployment and scaling of applications in hybrid and multi-cloud environments.

No complex logic

Enhancements to Avi’s integration with its chosen datacentre management platforms (which in this case are Ansible and Terraform), paired with Avi’s machine learning capabilities, are supposed to let IT teams provision infrastructure resources and application services without the need to code complex logic.

“Automating application deployment and provisioning is essential for enterprises today. However, organisations need to code and maintain complex scripts to deploy applications in each respective environment, and for each potential scenario. The move towards hybrid and multi-cloud only makes this experience more painful,” said Gaurav Rastogi, automation and analytics architect at Avi Networks. “We’ve eliminated that hassle. You don’t have to think about inputs or code anymore. Simply declare the desired outcome and let Avi’s intent-based system do the rest.”

As demand increases for an application, Avi can automatically spin up additional servers, cloud infrastructure, network resources, and application services for the application.

This is — although it’s something of a mouthful — elastic application networking services delivered through a single infrastructure-agnostic platform to provide multi-cloud automation orchestration with zero code.

From here, one wonders if (complex) cloud (code scripts) will wander lonely.

Wikimedia Commons


March 27, 2018  12:22 PM

Synopsys on source code security sensitivities

Adrian Bridgwater

Software application development professionals often have to fix problems with software code – source code in particular; it’s a simple fact of life.

Much is made of the balance between Agile rapid development and the option to iteratively fix issues along the way compared to methodologies that advocate getting it right in the first place.

The ‘suggested findings’ of a 2016 Forrester Research study commissioned by Synopsys call to mind an ancient proverb: a stitch in time saves nine.

Or, in the case of software development, fixing defects early in the lifecycle could reduce remediation costs by a factor of anywhere from five to 15, or so it is claimed.

Senior security strategist at Synopsys Taylor Armerding further suggests that the study set a baseline example of five hours of work to fix a defect in the coding/development stage.

But, he reminds us, finding and fixing that same defect in the final testing phase would take five to seven times longer.

“Waiting until after the product was on the market to discover and fix the same defect would take even longer and cost 10–15 times more. That doesn’t include the potential cost of damages from a bad guy discovering the defect first and exploiting it to attack users,” he said.

Source code sensitivities

So what’s going on out there in the real world?

Synopsys’ Armerding points to a Reuters report saying that major tech companies – SAP, Symantec, Micro Focus and McAfee – have allowed Russian authorities to inspect the source code of their software.

That same software is used by at least a dozen US government departments including Defense, State, NASA, the FBI and other intelligence agencies.

“Several security experts and government officials said it was tantamount to handing tools to the Russians to spy on the US. A Dec. 7 letter from the Pentagon to Sen. Jeanne Shaheen (D-NH) said that allowing governments to review the code ‘may aid such countries in discovering vulnerabilities in those products’,” said Armerding.

But according to other security experts, it wasn’t that big of a deal.

Armerding points out that commentators have said that when it comes to defects that can be exploited for cyberattacks or espionage, access to the source code is no more dangerous – likely less so – than access to the binary code, which is created from the source code and is sold as part of the commercial product that results.

The mantra is: you sell customers the binary, which means all customers can inspect it for exploitable defects at their leisure.

“But the major risks [in scenarios like this] appear to be to [developers of the operating system such as Apple or Microsoft] themselves, since the source code is their proprietary IP and access to it might make it easier to jailbreak the OS – something these companies try ferociously to prevent,” said Armerding.

The question of when to fix source code problems, and how seriously we should regard them, may be open to debate for some, but Armerding insists that during development, source code can be (and should be) reviewed by a static analysis program — a bug found in source code at that stage is easier to fix, so fix it.
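In practice, ‘find it in development’ usually means wiring a static analyser into the build so the pipeline fails fast. A hedged sketch follows, using the open-source Bandit analyser for Python purely as an illustration; any static analysis tool (Synopsys’ own Coverity included) slots into the same pattern.

  # Sketch of a CI gate that runs a static analyser over the source tree and
  # fails the build if it reports findings. Bandit is used only as an
  # illustrative open-source example; the 'src/' path is a placeholder.
  import subprocess
  import sys

  result = subprocess.run(
      ['bandit', '-r', 'src/', '-ll'],   # -ll: report medium severity and above
      capture_output=True, text=True
  )

  print(result.stdout)

  if result.returncode != 0:
      # Bandit exits non-zero when it finds issues: fail the pipeline here,
      # while the defect is still cheap to fix.
      sys.exit('Static analysis found issues - fix them before merging.')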

 


March 21, 2018  5:48 PM

IBM analytics GM: data is the new cargo

Adrian Bridgwater

There are many magic rings in this world, Frodo, just as there are many job titles spanning the ever-growing realm of software application development, programming and all related disciplines that fall under the wider banner of software engineering and data management.

Of all the new job titles that the industry now wants to coin (think DevSecOpsDBA and so on), it is perhaps the coming together of the programmer/developer and the data analytics engineer that is drawing more interest than others – this new role is the data developer, so how does their world look?

General manager for IBM Analytics Rob Thomas says that it’s all about containerisation – the shipping kind and the software kind.

Thomas cites the invention and introduction of the world’s first intermodal shipping containers and the simplicity these offered in terms of load and unload with no unique assembly required.

This is a form of standardisation for efficiency that the software world has also sought to emulate. We can point to advances in TCP and TCP/IP, Linux and the new era of Kubernetes.

“The benefit of standardisation in this realm is flexibility and portability: engineers build on a standard and in the case of Kubernetes, their work is fully flexible, with no need to understand the underlying infrastructure,” wrote IBM’s Thomas, in a recent blog post.

Data is cargo

Continuing the IT to shipping containerisation analogy, we can suggest that data is the cargo… and the data developer is the stevedore of the system.

NOTE: A stevedore, longshoreman, or dockworker is a waterfront manual labourer who is involved in loading and unloading ships, trucks, trains or airplanes.

But there’s a problem: stevedoring is tough, gritty, backbreaking work – surely, in this day and age, we can automate and computerise a good deal of the old grunt work (in real stevedoring and in data development).

For IBM’s Thomas there is an issue to highlight here. He says that most of the advances in IT over the past few years have been focused on making it easy for application developers.

But, no one has unleashed the data developer.

Every enterprise is on the road to AI says Thomas. But, AI requires machine learning, which requires analytics, which requires the right data/information architecture.

He asks, “When enterprise intelligence is enhanced, productivity increases and standards can emerge. The only drawback is the assembly required: all systems need to talk to each other and data architecture must be normalised. What if an organisation could establish the building blocks of AI, with no assembly required?”

IBM claims to have some form of answer in the shape of its new IBM Cloud Private for Data engineering product.

Now, please wash your hands

The product is supposed to be aligned for working with data science, data engineering and application building, with no assembly (and no need to get your hands dirty).

“As an aspiring data scientist, anyone can find relevant data, do ad-hoc analysis, build models and deploy them into production, within a single integrated experience. Cloud Private for Data provides: access to data across on-premises and all clouds; a cloud-native data architecture, behind the firewall; and data ingestion rates of up to 250 billion events per day,” said Thomas.

The big claim from Big Blue here is that what Kubernetes solved for application developers (dependency management, portability, etc), IBM Cloud Private for Data will solve for data developers attempting to build AI products without them needing to get their hands completely greasy down at the docks.


March 21, 2018  2:22 PM

Apple devs get IBM AI: Watson Services for Core ML

Adrian Bridgwater

Apple and IBM have been working together for a few years.

Thought to be an unlikely couple by some, the two firms established a partnership in 2014 to work on integration projects.

Fast forward to 2018 and the two tech giants are pairing up again, but at a granular developer focused level designed to help inject IBM’s Watson Artificial Intelligence (AI) platform and services directly into Apple iOS mobile applications.

The product/service launches are IBM Watson Services for Core ML and the IBM Cloud Developer Console for Apple.

In terms of what’s actually happening here, Core ML is Apple’s software framework for running machine learning models on Apple iOS devices – the focus is very much on iPhone smartphones in the first instance, iPad second… and then onto Apple technologies for smart watches and television.

Core ML, first released in iOS 11, is an Apple framework for running machine learning models locally on iOS-enabled devices, meaning it runs even when the device is offline. Existing models written in Caffe, Keras, Scikit-learn and others can be converted to Core ML.

Due to this agreement, developers using Core ML can now gain access to IBM Watson AI functions inside the apps they are building.

According to Apple, “A trained model is the result of applying a machine learning algorithm to a set of training data. The model makes predictions based on new input data. For example, a model that’s been trained on a region’s historical house prices may be able to predict a house’s price when given the number of bedrooms and bathrooms.”
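Apple’s house-price example maps neatly onto the conversion workflow mentioned above. The following is a minimal sketch of how such a model might be trained in scikit-learn and converted with the coremltools package; the training data and feature names are invented for illustration, and the coremltools API can differ between versions.

  # Sketch: train a toy house-price model and convert it to a Core ML model.
  # Data and feature names are illustrative placeholders.
  from sklearn.linear_model import LinearRegression
  import coremltools

  # bedrooms, bathrooms -> price (toy training data)
  X = [[1, 1], [2, 1], [3, 2], [4, 3]]
  y = [150_000, 200_000, 290_000, 410_000]

  model = LinearRegression().fit(X, y)

  coreml_model = coremltools.converters.sklearn.convert(
      model, ['bedrooms', 'bathrooms'], 'price')
  coreml_model.save('HousePricer.mlmodel')   # add the .mlmodel to an Xcode project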

Continuous learning

Developers can now build AI-powered apps that securely connect to enterprise data, are optimised to run offline and on cloud… and that continuously learn, adapt and improve through each user interaction.

“IBM Watson is available as a set of cloud-based services for developers to build AI applications, and Core ML delivers advanced machine learning to apps on Apple devices. Together, these technologies can deliver faster, smarter insights with continuous learning capabilities, transforming AI for enterprise mobility,” said Mahmoud Nagshineh, general manager, IBM partnerships & alliances.

The Coca-Cola Company is said to be working with this technology to transform in-field capabilities for field service engineers. Initial functionalities being analysed are custom visual recognition problem identification, cognitive diagnosis and augmented reality repair.

Watson Visual Recognition is a service on IBM Cloud that enables you to quickly and accurately tag, classify, and train visual content using machine learning. It has built-in classifiers for objects such as faces and food.

The new IBM Cloud Developer Console for Apple provides tools, including pre-configured starter kits which aim to simplify Swift development, along with AI, data and mobile services optimized for Swift.

“For example, we’re introducing IBM Cloud Hyper Protect Starter Kit to enable iOS developers to safeguard credentials, services and data using the IBM Cloud Hyper Protect services,” said Nagshineh.

IBM’s focus here in the smartphone arena has been confirmed to extend to Apple-only devices at this time, i.e. Android does not appear to be in the picture. IBM is a fan of Apple’s ability to have integrated its hardware and software platforms and says that this provides the level of enterprise-grade security and control the firm was looking for in order to make this agreement happen.


March 20, 2018  1:44 PM

IBM Think 5 in 5: TED-style opener, five ‘paradigms to watch’

Adrian Bridgwater

Director at IBM Research Arvind Krishna opened IBM’s Think 2018 conference in Las Vegas this week by introducing the firm’s ‘5 in 5’ presentation.

Delivered as an almost TED Talk style opener, this opening session was pleasing for three key reasons:

  • Female speakers/scientists outnumbered the males.
  • No up-front IBM product pitches were delivered.
  • Absolutely nobody said ‘awesome’, or ‘super excited’ – not once.

Polymer chemist Jamie Garcia kicked off proceedings as master of ceremonies. A celebrated scientist in her own field, Garcia introduced the speakers one by one as they delivered their five-minute ‘pitch-explanations’.

The notes below are presented in paraphrased quote form, in each speaker’s own voice – all the content represents quoted material, so quote marks have not been used.

#1 Blockchain crypto-anchors

Speaker: Andreas Kind, a specialist in crypto-anchors working at IBM Research.

From cinnamon to boiled eggs, we know that everything in this world gets copied these days. In some parts of the world, as many as 40% of parts in the car aftermarket are fake. Drugs get recycled after bad storage procedures, etc. The root of the problem is that global supply chains have become very complex because products are made in more than one country, assembled in others… and then sold in others still.

Blockchain can help us build a global provenance supply chain, but…

… anchors are needed to link cryptographic entries in blockchains to the physical objects [and services, presumably] in the real world.

We can use crypto-anchors to validate everything from car parts to medicine. Crucially, the DNA of every object [i.e. the physical shape and attributes] can be used to provide the information to track the provenance of everything tracked in blockchain.

#2 Quantum encryption techniques

Cecilia Boschini, lattice-based cryptography specialist

The appeal of mathematics is based upon the fact that once you learn the formula, you can work out any problem. Cryptography is the art of designing protocols to protect your data. When you have to send your credit card number to a store, it is encrypted before it is sent. Today we use the most scientific approach to cryptosystems, based upon logic that makes breaking the system a very long and time-consuming process requiring access to massive computing power.

So will quantum computing provide that power… and should we panic?

Well, we would need a quantum computer that has thousands of qubits… and in the meantime we can make problems that grow with us. To prepare for the post-quantum era we need to produce quantum-resistant protocols.

This is where lattices come in… a two-dimensional lattice is a grid where the key is to find a specific point on the grid. In more dimensions with many layers, things start to get extremely complex.
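For the mathematically inclined, the object Boschini is describing can be stated compactly. The following is a standard textbook formulation rather than anything specific to IBM’s protocols:

  % A lattice generated by basis vectors b_1, ..., b_n, and the closest
  % vector problem (CVP) whose hardness underpins lattice-based cryptography.
  \[
    \mathcal{L}(b_1,\dots,b_n) = \Bigl\{ \textstyle\sum_{i=1}^{n} x_i b_i : x_i \in \mathbb{Z} \Bigr\}
  \]
  \[
    \mathrm{CVP}: \quad \text{given a target } t,\ \text{find } v \in \mathcal{L}\ \text{minimising } \lVert t - v \rVert
  \]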

#3 Robot powered AI microscopes

Tom Zimmerman, IBM research scientist – human/machine devices and paradigm scientist

By 2025 over half of the world’s population will be living in what we can call ‘water-stressed’ areas. Plankton produce two thirds of the oxygen that we breathe; they also sequester more carbon than anything else on the planet, consuming it for us when we produce it as a waste product.

Scientists have traditionally studied plankton by collecting them, treating them with chemicals and studying them while they live. We can combine remote sensing with AI to study plankton in their living environment.

#4 Bias in AI datasets

Francesca Rossi, IBM Researcher and university professor, global leader for AI ethics at IBM.

I think back to 30 years ago, when I went to my first AI conference; we looked at making machines smarter, with very little discussion of the impact of those machines on our society.

Today I work with lawyers, economists, accountants and many other professionals to develop a multi-disciplinary view of what AI will do for us. Key concerns for AI are bias, explainability and value alignment.

Once a system learns something from a certain set of data, it will then try to generalise and apply that knowledge to many different areas. But if that set of data is not diverse enough, then the AI system will not have enough information.

But not all forms of bias are bad… some bias in domain-specific skills (such as a doctor’s skills in medicine) is a type of bias that we do not want to eradicate. We predict that in the next five years we will eradicate bias and that only the systems which demonstrate a clear lack of bias will get used.

AI needs to be multi-disciplinary, multi-gender and multi-stakeholder.

#5 Quantum Computing

Dr Thalia Gershon, senior manager for AI challenges and quantum experiences at IBM.

Simulating the bonding of large molecules is tough because you have to simulate the relationship of every electron with every other electron. Quantum computing gives us the power to do this kind of thing, as it encodes information into quantum states.

Many of our challenges today in computing are interdisciplinary problems – so we are now building systems with quantum properties to experiment with these issues.

Quantum machines need to be cooled down to around -460 degrees Fahrenheit, so the task here is a big one.

Linear classical logical thinking does not help us in quantum exploration… we need to be able to think differently.

Computing classes will start to offer a quantum track within five years. Plus, we will also need to teach students in all disciplines what qubits are. Three different quantum computers are made available by IBM on the IBM Q Experience, where there are three core programming options. AI and QC are not fully independent; we need to join these two worlds.

NOTE: We can (arguably) take these five areas (and others) as key pointers for software application development growth over the next five years.


March 17, 2018  2:13 PM

Making progressive (web apps) rock for DevOps

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Eran Kinsbruner, lead technical evangelist at Perfecto.

Kinsbruner wants to uncover the mechanics behind an application stream being called Progressive Web Apps (PWA) and examine why they could be the next big thing.

Firmly of the opinion that PWAs are hailed as a means of pushing the mobile web forward, Kinsbruner reminds us that they can potentially bring parity to web and native apps, while also providing mobile-specific capabilities to web users.

Perhaps unsurprisingly, Google came up with the term itself and PWAs are seen alongside Accelerated Mobile Pages (AMP) and Responsive Web Design (RWD) as key weapons in the fight for a slick mobile user experience.

Kinsbruner writes as follows…

Progressive Web Apps have one common source code base to develop for all platforms: web, Android and iOS – making them easy to maintain and fix. With Google behind the development, it’s perhaps no surprise that PWAs are relatively easy to adopt. So, developers don’t need to gain new skills, but rather learn new APIs and see how they can be leveraged by their websites.

PWA apps exhibit two main architectural features for developers to use: Service Workers (which give developers the ability to manually manage the caching of assets and control the experience when there is no network connectivity) and the Web App Manifest (the file within the PWA that describes the app and provides metadata specific to the app, such as icons, splash screens and more) – and these present significant new opportunities for developers.

For testers, PWAs are still JavaScript-based apps, so tools like Selenium and Appium will continue to work effectively. However, cross-browser testing on desktop and mobile platforms is getting harder and PWA introduces a greater level of complexity than RWD. As with any development of this ilk, new tests (manual and automated) need to be developed, executed and fitted into the overall pipeline.
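As a concrete illustration of the ‘existing tools still work’ point, here is a minimal Selenium sketch in Python that checks two PWA basics in a desktop browser: that the page links a web app manifest and that a service worker ends up registered. The URL is a placeholder and ChromeDriver is assumed to be installed.

  # Minimal Selenium sketch: check that a page links a manifest and registers
  # a service worker. The URL is a placeholder.
  from selenium import webdriver
  from selenium.webdriver.common.by import By

  driver = webdriver.Chrome()
  driver.get('https://example.com/')   # hypothetical PWA under test

  # 1. The page should declare a web app manifest.
  manifest_links = driver.find_elements(By.CSS_SELECTOR, 'link[rel="manifest"]')
  assert manifest_links, 'No <link rel="manifest"> found'

  # 2. A service worker should be (or become) registered for this origin.
  registrations = driver.execute_async_script("""
      const done = arguments[arguments.length - 1];
      navigator.serviceWorker.getRegistrations().then(regs => done(regs.length));
  """)
  assert registrations > 0, 'No service worker registered'

  driver.quit()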

With RWD, the primary challenge was the visual changes driven by form factor.

PWA introduces additional complexities due to more unique mobile-specific capabilities, such as no-network operation, sensor-based functionality (location, camera for AR/VR, and more) and cross-device functionality, as well as dependency on different test frameworks such as Selenium and Appium.

There may also be a need to instrument the mobile side of the PWA to better interact with the UI components of the app on the devices. Testers must be aware of what PWAs can access and keep quality assurance at the top of their priority list.

A checklist for a PWA test plan:

Step 1: Perform complete validation of the manifest.json file and check that it loads through the supported browsers (Opera, Firefox, Chrome and, soon, Safari).
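Part of Step 1 can be automated by fetching the manifest and asserting on the fields that matter. A sketch using the requests library follows; the URL is a placeholder and the required-field list is a reasonable minimum rather than an official checklist.

  # Sketch for Step 1: fetch the web app manifest and check a minimal set of
  # fields. URL and required-field list are illustrative.
  import requests

  manifest = requests.get('https://example.com/manifest.json', timeout=10).json()

  required = ['name', 'short_name', 'start_url', 'display', 'icons']
  missing = [field for field in required if field not in manifest]

  assert not missing, f'manifest.json is missing fields: {missing}'
  assert manifest['icons'], 'manifest.json declares no icons'
  print('manifest.json looks structurally valid')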

Step 2: Test and debug your supported service workers. This can be done through the Chrome internals page[1]. In this section, enable and disable each service worker and ensure it behaves as expected – common service workers enable offline mode and register the app for push notifications. Perform these validations across mobile and web.

Step 3: PWAs add support for camera (e.g. scanning QR codes), voice injection (recording a memo), push notifications and other mobile-specific capabilities. Make sure to test the use of these functions in your app across mobile devices – these will be an additional set of functional tests relevant only to mobile devices.

Step 4: As PWA is a superset of RWD, all the RWD testing pillars also apply here, which means you need to consider:

  • UI and visual/layout testing across multiple form factors
  • Performance and rendering of the content across platforms (load the proper code and images to the right platform – mobile vs. web)
  • Network-related testing – in addition to the offline mode that is covered through service workers, make sure to cover the app behaviour under various network conditions (packet loss, flight mode, latency, 3G, 4G, 5G etc.)
  • Functionality of the entire page user flows across platforms and screen sizes and resolutions

Step 5: Handle test automation code and the object repository. PWAs are JavaScript apps that add a .apk file to the device application repository once installed – especially on an Android device (Apple has limited support so far for such apps). When aiming to test the Android .apk on the device, the developer and tester will need a proper object spy that can identify both the app shell objects and the WebView objects within the app. From initial attempts with a subset of PWAs, the Appium object spy for Android will not work, leaving the user able to get only the DOM objects from the website. This is currently a technological challenge in automating the mobile parts of a PWA.

Step 6: Be sure to cover Google’s recommended checklist for PWAs step by step, since it includes a lot of the core functionality of progressive apps and, when followed, can assure not just good functionality but also great SEO and a high ranking in Google search.

Forward looking developers will look to overcome the challenges this kind of innovation brings and use PWAs as an opportunity to deliver a better user experience.

If you’re a developer just starting to move from a .com or a .mob site to a cross-platform web environment, then PWA is a compelling option. Web developers should base any plan for change around an appropriate product or business milestone, such as the next big website release or a complete rebrand, making sure that a move to PWA makes sense and isn’t just a jump on the latest and greatest bandwagon.


March 16, 2018  10:59 AM

Data pipelines need love too

Adrian Bridgwater
Data Analytics

Hitachi has aligned its data analytics divisions and fused them with its Pentaho acquisition to call the new entity Hitachi Vantara. So… Vantara… kind of sounds like ‘advantage’ with a bit of the Italian ‘avanti’ in there for added good measure, right?

Branding shenanigans aside, Hitachi Vantara (more usually pronounced in an American accent as Hitachi Ven-tera) has continued to roll out products aligned for that role which we can now quite comfortably define as the ‘data developer’.

These data developers (call them data management and data science focused software engineering professionals with an appreciation of the need to apply analytics and machine learning technologies to database and application structures… if you must – but it’s not as catchy) use machine learning (ML) functions, obviously.

Orchestration situation

As ML becomes the order of the day, these same data devs will also (arguably) need an increasing degree of orchestration functions with which to corral and manage the models they seek to build, execute and apply – and this is what Hitachi Ven-tera (sorry, Vantara) is now rolling out.

The company now offers machine learning orchestration to help data professionals monitor, test, retrain and redeploy supervised models in production.

Emanating from Hitachi Vantara Labs, these ‘machine learning model management’ tools can be used in a data pipeline built in Pentaho.

Once an algorithm-rich ML model is in production, it must be monitored, tested and retrained continually in response to changing conditions, then redeployed. This work involves manual effort and, consequently, is often done infrequently. When this happens, prediction accuracy will deteriorate and impact the profitability of data-driven businesses.

Pipeline wear & tear

Hitachi Vantara explains that once a machine learning model is in production, its accuracy typically degrades as new production data runs through it. To avoid this, the company provides a new range of evaluation statistics that helps to identify degraded models.
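The evaluation loop being described boils down to recomputing a quality metric on fresh, labelled production data and retraining when it drops below an agreed floor. A schematic sketch follows using scikit-learn’s accuracy metric; the threshold, model and retraining hook are placeholders rather than anything Pentaho-specific.

  # Schematic sketch of monitoring a model in production: score recent
  # labelled data and flag the model for retraining if accuracy degrades.
  # The threshold and the retrain() hook are illustrative placeholders.
  from sklearn.metrics import accuracy_score

  ACCURACY_FLOOR = 0.90   # agreed minimum acceptable accuracy

  def evaluate_and_maybe_retrain(model, recent_features, recent_labels, retrain):
      predictions = model.predict(recent_features)
      accuracy = accuracy_score(recent_labels, predictions)
      print(f'accuracy on latest production window: {accuracy:.3f}')

      if accuracy < ACCURACY_FLOOR:
          # The model has drifted: kick off retraining and redeployment.
          retrain(recent_features, recent_labels)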

More organisations are demanding visibility into how algorithms make decisions. Lack of transparency often leads to poor collaboration in groups deploying and maintaining models including operations teams, data scientists, data engineers, developers and application architects.

“These new capabilities from Hitachi Vantara promote collaboration, providing data lineage of model steps, and visibility of data sources and features that feed the model,” said the company, in a press statement.

Love your pipeline

Building out the ML-enriched ‘data pipeline’ appears to be a surprisingly non-sequential process, i.e. we can build our pipe and lay it down, but we will need to go back and look for leaks and other areas of weakness where the structure of the pipe itself may have become compromised as a result of the content we put through it.

