CW Developer Network


April 18, 2018  9:14 AM

Cloudistics: you had me at programmatically extensible

Adrian Bridgwater

What the cloud computing industry needs is an increasing number of platforms, right?

Okay, perhaps not: platform plays are already many and multifarious in nature, and the formally defined cloud layer of Platform-as-a-Service (PaaS) already has its place firmly etched into the global taxonomy that we now use to describe the workings of the industry.

That all being said then, when the Computer Weekly Developer Network hears the newswires crackling with a new cloud software platform company launch, the hackles do tend to go up.

Cloudistics this week launches its presence in the EMEA region in response to what the firm calls demand for its private cloud with a ‘premium experience’, no less.

Premium cloud, USDA approved

So premium cloud, what could that be and why would software application developers be interested in this gambit?

Is premium cloud extra lean, USDA approved organic grass fed Kobe beef cloud?

It could be, but in truth it’s more a question of Cloudistics claiming to provide a private cloud deployment with some of the swagger and weight of the public cloud, but behind the firewall.

To be specific, these are the type of public cloud functions associated with actions such as initial cloud implementation, deployment, operations and maintenance – that is, stuff that is arguably tougher to make consistent, predictable, repeatable and secure in private cloud environments.

Programmatically extensible pleasure

Cloudistics Ignite is engineered with composable, software-defined, clusterless scaling. The resultant cloud operates free of any hardware-specific dependencies and is programmatically extensible with automation, orchestration, performance and security built in.

This, for software application development concerns, may be the special sauce here: programmatic extensibility in cloud environments can be tough, particularly as we look to tasks such as the ‘repatriation’ of previously terrestrial applications on their route to a new cloud-based existence.

Why? Because handling tasks like resource provisioning and performance management, while federating and abstracting all physical hardware into a cloud service, is a complex and big ask – and this, in real terms, is what Cloudistics does.

“We are immensely excited about the prospects in EMEA, which is frankly ripe for what Cloudistics has to offer. The emphasis has shifted from cloud itself to cloud functionality as an enabler for digital transformation… and this is where Cloudistics shines,” commented Chris Hurst, VP EMEA, Cloudistics.

Integrated with its Ignite product is the Cloudistics Application Marketplace. This features pre-built, ready-to-run and reusable application virtual machine templates that can be deployed instantly.

While Cloudistics may not be the private-public programmatic provisioning panacea for all instance requirements, the ability to encapsulate and package a good degree of public cloud functionality and controls as a consumable service in a private cloud platform play is what, arguably, makes this company interesting.

April 17, 2018  9:28 AM

Qualys ups security automation with a bit of Swagger

Adrian Bridgwater

Cloud security firm Qualys, like every vendor today, is pushing the automation mantra.

The company’s Web Application Scanning (WAS) 6.0 now supports Swagger version 2.0 to allow developers to streamline [security] assessments of REST APIs and get visibility of the security posture of mobile application backends and Internet of Things (IoT) services.

NOTE: Swagger is an open source software framework backed by a considerable ecosystem of tools that helps developers design, build, document and consume RESTful web services.

As noted here, RESTful web services are built to work best on the web.

Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, that if applied to a web service induce desirable properties, such as performance, scalability and modifiability, that enable services to work best on the web.
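
For the uninitiated, a Swagger 2.0 document is simply a machine-readable description of such a RESTful service – the kind of artefact a scanner like Qualys WAS can consume to know which operations to assess. Below is a minimal, purely illustrative sketch expressed as a Python dictionary and serialised to JSON; the host, path and schema here are assumptions for the sake of example, not anything from the Qualys announcement.

```python
import json

# A minimal, illustrative Swagger 2.0 definition for one REST endpoint.
# Host, path and schema are hypothetical; a scanner such as Qualys WAS
# would consume a document like this to discover which API operations
# to assess.
swagger_doc = {
    "swagger": "2.0",
    "info": {"title": "Device Telemetry API", "version": "1.0.0"},
    "host": "api.example.com",
    "basePath": "/v1",
    "schemes": ["https"],
    "paths": {
        "/devices/{deviceId}/readings": {
            "get": {
                "summary": "List sensor readings for an IoT device",
                "parameters": [
                    {"name": "deviceId", "in": "path",
                     "required": True, "type": "string"}
                ],
                "responses": {
                    "200": {"description": "A list of readings"}
                },
            }
        }
    },
}

if __name__ == "__main__":
    # Serialise to JSON, the form in which the spec is usually published.
    print(json.dumps(swagger_doc, indent=2))
```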

Additionally (in terms of the Qualys news), a new native plugin for Jenkins delivers automated vulnerability scanning of web applications for teams using this Continuous Integration/Continuous Delivery (CI/CD) tool.

“As companies move their internal apps to the cloud and embrace new technologies, web app security must be integrated into the DevOps process to safeguard data and prevent breaches,” said Philippe Courtot, chairman and CEO, Qualys, Inc. “Qualys is helping customers streamline and automate their DevSecOps through continuous visibility of security and compliance across their applications and REST APIs. With the latest WAS features, customers now can make web application security an integral part of their DevOps processes, avoiding costly security issues in production.”

In tandem with all of the above, developers (and their DevOps compatriots) can now leverage Qualys Browser Recorder, a free Google Chrome browser extension, to review scripts for navigating through complex authentication and business workflows in web applications.

Qualys also launched a new free tool – CertView – to make it easier for developers to create and manage an inventory for their certificates.


April 16, 2018  6:32 AM

Is a Hive crash a ‘stale’ middleware malfunction?

Adrian Bridgwater

Hive Connected Home products are great and the support is outstanding, but they will crash… or at least ours did and here’s what we hope is an interesting analysis in terms of trying to work out where the faults could crop up in the new world of the Internet of Things (IoT).

Consider the facts and follow the logic, if you will.

There are three pieces of hardware here and one piece of software:

  • The Hive Hub – the unit that sits in your boiler or immersion room area to drive the ON/OFF functions of heating and hot water.
  • The router extension – the unit that takes a direct feed out of your home Internet box to provide wireless connectivity to Hive devices.
  • The thermostat – it is what it is: it’s a thermostat, it’s digital and attractive, and it can be used to turn heat up or down, programme a schedule and put the hot water ON/OFF.
  • The Hive app, on your smartphone or tablet.

So consider the scenario in our use case.

The Hive thermostat was working correctly and talking to the router extension and onward to the rest of the system; the heating could be controlled perfectly.

But, the app failed to synchronise with the home devices despite several reboots, reset passwords and re-installations.

Foolishly (it turns out) a factory reset on the thermostat was executed.

This is easy to do: the user simply holds down all three bottom buttons – in fact just the two on the left (menu and back) will do it. This leaves the thermostat unable to connect to the router in any form, and the only means of turning the heating and hot water ON/OFF is the Hive Hub in your immersion cupboard – no ‘control’ of temperature is then possible, simply ON/OFF.

The solution

The solution is Maureen from Kenya in the Glasgow call centre, who (bless her heart) works on a Sunday and knows the system back to front.

She can also reboot remotely and make everything right again – all you need to do is provide your name and postcode regardless of your energy supplier.

So here’s the question: if the system is working internally, so to speak, without a web app connection, but the core ‘app’ application fails to synchronise, then surely the app and the higher-level systems engineering have outstripped the middleware on the devices, which have been left with the ability to talk to each other but stripped of the ability to connect to the outside world.

Surely the firmware or middleware on the devices themselves has become stale and outdated.

Lessons learned… do not necessarily perform a factory reset on any IoT device if there’s a good call centre option and do check all wire connections and battery performance in remote unit devices first… and when all that fails, call Maureen, she’s lovely.



April 13, 2018  2:35 PM

Why workflow automation matters (to developers)

Adrian Bridgwater

Developers need to know more about workflow automation, it appears.

As analyst ‘thumb in the air’ pontificating predictions have ‘suggested’, the workflow automation market is expected to reach nearly US$17 billion (£12 billion) by 2023, up from $4.7 billion in 2017.

But what is workflow automation?

Related to business process automation, IT process orchestration and content automation, workflow automation is perhaps best described as a means of managing and streamlining manual and paper-based processes – typically made up of repetitive and often essentially unstructured tasks – that go towards making up a higher-level, total digital business process.

Good workflow automation also reduces employee churn: workers hate managers who fail to communicate with them, but with this technology they can have an automated communications backbone with accountability pointing to who should do what, and when.

But, as we apply automation, we need to be careful and remember what Bill (Gates) said – below.

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Big data’s role

Why the workflow automation 101 and clarification then?

Because as companies across industries accelerate their digital initiatives and bring more business functions online, IT systems are becoming increasingly distributed. With this shift, business processes are undergoing their own transformation and becoming more complex and interconnected, driving demand for workflow automation and visibility technology…

… and it’s developers who are building it.

One player in this market is Camunda. The company has just hit the version 2.0 release of its Optimize product, a software tool designed to combine big data and workflow automation to provide a holistic view of business process performance.

The product provides stakeholders with what has been glamorously called ‘instantaneous insight’ into revenue-generating business processes as they are automated.

“With Optimize 2.0, business leaders can identify potential problems and fix weak spots in core processes before delivery of products and services to customers is disrupted,” said the company.

Camunda claims that many traditional workflow systems can’t manage the dramatic increase of transactions processed online, in addition to the growth of distributed systems as companies adopt cloud, containers and microservices.

Because of this, Camunda claims to be the only vendor addressing the big workflow impact all these developments have with software that provides cohesive end-to-end intelligence into how well core business workflow processes are working.

“Optimize 2.0 combines the power of big data and workflow automation to enable customers to understand what’s going on with the most important aspects of their business: their revenue-generating processes,” said Jakob Freund, co-founder and CEO of Camunda. “With its flow charts, reports and analytics, Optimize customers can see precisely what steps have been executed, the status of orders for customers and detailed information if and why they are stuck in the process.”

Optimize 2.0 provides a report builder, dashboards, alerts and correlation analytics to give customers visibility into how their business workflows and processes are performing.

With the Camunda REST API, Optimize imports the historical data, storing it in an Elasticsearch big data repository that allows for queries. After the initial import, Optimize will pull new data from Camunda at configurable intervals, providing users with up-to-date information in soft real-time.
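
To make the import flow a little more concrete, here is a rough sketch (in Python) of the kind of interval-based polling described above – not Camunda’s actual Optimize code. The endpoint and parameter names follow the publicly documented Camunda REST API, but the local URL and the polling interval are assumptions.

```python
import time
import requests

# Illustrative polling loop against a Camunda engine REST API.
# The endpoint URL and interval are assumptions for a default local setup;
# Optimize's real import pipeline stores the results in Elasticsearch.
CAMUNDA_API = "http://localhost:8080/engine-rest"
POLL_INTERVAL_SECONDS = 30

def fetch_finished_instances(finished_after):
    """Return historic process instances finished after the given timestamp."""
    response = requests.get(
        f"{CAMUNDA_API}/history/process-instance",
        params={"finishedAfter": finished_after,
                "sortBy": "endTime", "sortOrder": "asc"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    last_seen = "2018-01-01T00:00:00.000+0000"
    while True:
        for instance in fetch_finished_instances(last_seen):
            print(instance["id"], instance.get("endTime"))
            last_seen = instance.get("endTime", last_seen)
        time.sleep(POLL_INTERVAL_SECONDS)
```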

NOTE: Soft real-time systems are typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations — like software that maintains and updates flight plans for commercial airlines: the flight plans must be kept reasonably current, but can operate with a latency of a few seconds.


April 12, 2018  7:21 AM

British Army goes forth into API zone

Adrian Bridgwater

The British Army marches on its stomach, fights for king, queen, god and country, upholds itself as a bastion of exemplary standards and is never afraid to say ‘stop it, that’s silly’ if things get out of hand.

Like any ‘organisation’, the British Army is also focused on the process of so-called digital transformation.

The force’s technology interest today (like that of any organisation) is of course focused on data – in this case, environments where that data is operational and intelligence information of life-changing importance. This isn’t just mission critical, it’s military mission critical.

A major multi-vendor open technology consortium is now working to improve the Army’s understanding of the readiness of its troops and equipment.

The consortium’s lead partner is German softwarehaus Software AG.

Software AG reminds us that, like many organisations, the British Army is saddled with a number of overlapping legacy technologies, with information siloed in many different systems and databases contracted to different system integrators.  

Don’t mention the monoliths

The Army has now been on a journey to decompose large monolithic internally developed applications to loosely coupled services, designed to be externalised.  

To do so, it required an API (Application Programming Interface) management platform to enable governance, monitoring, securing and support of the APIs — this is what it gets from Software AG, along with software to integrate with large defence applications to expose reference data and services.
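
By way of a purely hypothetical illustration (the Army’s actual services and endpoints are, unsurprisingly, not public), consuming a governed, gateway-managed API from a script looks something like the sketch below: the consumer authenticates with an API key and the gateway handles policy, monitoring and throttling in front of the backend systems. Every name here is an assumption.

```python
import os
import requests

# Hypothetical client for an API exposed through an API management gateway.
# The host, path and header name are illustrative assumptions; the point is
# that consumers call a governed endpoint rather than the backend system.
GATEWAY = "https://api-gateway.example.mod.uk"
API_KEY = os.environ.get("READINESS_API_KEY", "demo-key")

def get_equipment_readiness(unit_id: str) -> dict:
    """Fetch an illustrative 'equipment readiness' summary for a unit."""
    response = requests.get(
        f"{GATEWAY}/readiness/v1/units/{unit_id}/equipment",
        headers={"x-api-key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_equipment_readiness("unit-042"))
```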

Software AG’s webMethods integration platform and its CentraSite (an API catalogue and services registry) are the two technology bases in use here.

“Without effective data and services, it’s very difficult for planners to understand our forces’ state of readiness, or to create much-needed services for our soldiers; this is where the Software AG API suite is adding real value,” said Lt Col Dorian Seabrook, head of operations at Army Software House.

The use of Software AG’s API Management capability is said to enable a range of services across boundaries, drawing information from numerous systems to support functions spanning HR, equipment availability, operational readiness and payment of reserves, as well as trialling remote processing automation on legacy systems.

Clive Freedman, head of UK public sector and alliances for Software AG, has said that budget cuts mean the Armed Forces, like many public-sector organisations, are being asked to do more with less.

“The British Army has looked at how businesses in the private sector are using technologies such as Master Data Management (MDM) and APIs to improve data sharing and visibility, save money and boost effectiveness,” said Freedman.

Truth be told then, the British Army is now following an API-first strategy for interoperability with data and services that has thus far not been possible — and that’s not silly.



April 10, 2018  7:45 AM

Database DevOps, it’s now ‘a thing’

Adrian Bridgwater

There’s DevOps and there’s DevSecOps — heck, come to think of it there’s even DevLoBDOps (Developer Line of Business Database Operations).

Of all the DevOps subsets, Microsoft SQL Server tools vendor Redgate Software is firmly in the Database DevOps camp — can we call that DataDevOps?

Either way, what would database-centric developer-&-operations tools actually do?

This is proactive monitoring technology to examine a database (in this case, a SQL Server estate) to provide performance diagnostics.

Of some news value then, Redgate’s SQL Monitor now integrates with the deployment tools in the company’s wider portfolio.

Why is this important?

Because the increased frequency of releases that DevOps enables typically means that development shops are shifting from occasional big deployments to constant smaller ones.

Redgate’s Database DevOps tools customer Skyscanner says that it moved to releasing database changes 95 times a day, rather than once every six weeks. That’s a fast releasing database… hence, the rise of Database DevOps.

The tough part of course is monitoring all those deployments.

If a ‘breaking change’ (a change that causes a system break) hits production, the cause has to be pinpointed quickly and precisely.

According to Redgate, “By highlighting which tool was used for a specific deployment, when it occurred and to which database, SQL Monitor lets users instantly drill down to the context and details they need to resolve any problems that do occur.”

The integration with the different database deployment tools also allows users to choose the method which best suits their workflow, as Jamie Wallis, Redgate product marketing manager, explains.

“Some of our users like the way Redgate ReadyRoll integrates with Visual Studio and generates numerically ordered migration scripts,” said Wallis. “Others prefer SQL Compare, which is the industry standard tool for comparing and deploying SQL Server databases, or DLM Automation which plugs into the same build and deployment tools they use for their applications. We want to give them the freedom to stay with the tool they prefer, as well as reassure them that if there is an issue, SQL Monitor will help them track down the reason in seconds.”

In terms of performance, the deadlock capability of SQL Monitor has been extended with graphs that show when deadlocks occur, and now includes historical data so that users can interpret what happened.
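
SQL Monitor’s deadlock graphs are a product feature, but for a feel of the raw signal such monitoring works from, here is a rough manual sketch that polls SQL Server for currently blocked sessions using pyodbc. The connection string is an assumption, and this shows live blocking and contention rather than Redgate’s historical deadlock analysis.

```python
import pyodbc

# Rough manual approximation of contention monitoring: list sessions that
# are currently blocked and what they are waiting on. Connection details
# are assumptions; this is not how SQL Monitor itself works internally.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
)

QUERY = """
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
"""

if __name__ == "__main__":
    with pyodbc.connect(CONN_STR) as conn:
        for row in conn.cursor().execute(QUERY):
            print(f"session {row.session_id} blocked by "
                  f"{row.blocking_session_id} ({row.wait_type}, {row.wait_time} ms)")
```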

The development team behind SQL Monitor are now looking into improvements to configuring and filtering alerts, so that over time users can train SQL Monitor about which things are and aren’t important to them.


April 9, 2018  6:20 AM

Google: don’t ‘just’ turn cloud on

Adrian Bridgwater

Google has attempted to shine a light on Application Performance Management (APM) technologies built in what the company calls ‘a developer-first mindset’ to monitor and tune the performance of applications.

The end-game suggestion here is that we don’t ‘just’ turn cloud on, we also need to tune and monitor what happens inside live applications.

The foundation of Google’s APM tooling lies in two products: Stackdriver Trace and Debugger.

Stackdriver Trace is a distributed tracing system that collects latency data from applications and displays it in the Google Cloud Platform Console.

Stackdriver Debugger is a feature of the Google Cloud Platform that lets developers inspect the state of a running application in real time without stopping it or slowing it down.

There’s also Stackdriver Profiler, a new addition to the Google APM toolkit. This tool allows developers to profile and explore how code actually executes in production, to optimise performance and reduce the cost of computation.
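
Getting profile data flowing is largely a matter of starting the language agent inside the application. Below is a minimal sketch for the Python agent along the lines Google documents; the service name and version are placeholders, and running outside Google Cloud would also require a project ID and credentials.

```python
import googlecloudprofiler

def main():
    # Start the Stackdriver Profiler agent; it samples the running
    # application in the background and uploads profiles for viewing
    # in the Cloud Console.
    try:
        googlecloudprofiler.start(
            service="my-web-service",      # placeholder service name
            service_version="1.0.0",       # placeholder version
            verbose=3,                     # log at info level while testing
        )
    except (ValueError, NotImplementedError) as exc:
        print(f"Profiler agent failed to start: {exc}")

    # ... the rest of the application runs as normal ...

if __name__ == "__main__":
    main()
```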

Google product manager Morgan McLean notes that the company is also announcing integrations between Stackdriver Debugger and GitHub Enterprise and GitLab.

“All of these tools work with code and applications that run on any cloud or even on-premises infrastructure, so no matter where you run your application, you now have a consistent, accessible APM toolkit to monitor and manage the performance of your applications,” said McLean.

Unexpectedly resource-intensive

When is an app not an app? When it’s unexpectedly resource-intensive, says McLean.

He points to the use of production profiling and says that this allows developers to gauge the impact of any function or line of code on an application’s overall performance. If we don’t analyse code execution in production, unexpectedly resource-intensive functions can increase the latency and cost of web services.

Stackdriver Profiler collects data via sampling-based instrumentation that runs across all of an application’s instances. It then displays this data on a flame chart to present the selected metric (CPU time, wall time**, RAM used, contention, etc.) for each function on the horizontal axis, with the function call hierarchy on the vertical axis.

NOTE**: Wall time refers to real world elapsed time as determined by a chronometer such as a wristwatch or wall clock. Wall time differs from time as measured by counting microprocessor clock pulses or cycles.
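
The distinction is easy to demonstrate: a function that spends most of its time sleeping (or waiting on I/O) burns wall time but almost no CPU time – exactly the kind of gap a profiler’s metric selection exposes. A quick sketch:

```python
import time

def sleepy_work():
    """Mostly waiting: heavy on wall time, light on CPU time."""
    time.sleep(1.0)
    return sum(i * i for i in range(100_000))

wall_start = time.perf_counter()      # wall-clock time
cpu_start = time.process_time()       # CPU time for this process

sleepy_work()

print(f"wall time: {time.perf_counter() - wall_start:.2f}s")
print(f"CPU time:  {time.process_time() - cpu_start:.2f}s")
```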

Don’t ‘just’ turn cloud on

Not (arguably) always known to be the most altruistic, philanthropic and benevolent source of corporate muscle in the world, Google here appears keen to ‘give back’ to the developer community with a set of tooling designed to really look inside large and complex batch processes to see where different data sets and client-specific configurations do indeed cause cloud applications to run in a less-than-optimal state.

You don’t ‘just’ turn cloud on and expect it to work perfectly – well, somebody had to say it.



April 4, 2018  9:54 AM

Cybric CTO: What is infrastructure as code & how do we build it?

Adrian Bridgwater

This is a short but punchy guest post written for the Computer Weekly Developer Network by Mike Kail in his capacity as CTO of Cybric.

Described as a continuous application security platform, Cybric claims to be able to continuously integrate security provisioning, management and controls into the Continuous Integration (CI) and Continuous Deployment (CD) loop and lifecycle.

Given that we are moving to a world where software-defined everything becomes an inherent part of the DNA used in all codestreams, we are now at the point of describing infrastructure as code — but beyond our notions of what Infrastructure-as-a-Service (IaaS) means in cloud computing spheres, what does infrastructure as code really mean?

Kail writes as follows…

By now, I’m sure most, if not all, have at least heard the term “Infrastructure as Code” (IaC).

Below I succinctly define it and then provide some guidance on how to start evolving infrastructure and application deployments to leverage its benefits.

IaC is also a key practice in a DevOps culture, so if that evolution is part of your overall plan, this will be of use to you.

Infrastructure as Code replaces manual tasks and processes for deploying IT infrastructure: instead, infrastructure is deployed and managed through code, which is why it is also known as ‘programmable infrastructure’.

3 components of IaC

The three components of IaC are:

  1. Images – create a ‘golden master’ base image using a tool such as Packer.
  2. Blueprint – define the infrastructure using DSL (Domain Specific Language).
  3. Automation – leverage APIs to query/update infrastructure.

These components can be viewed as the initial logical steps in transitioning to IaC, but none of them should ever be considered “done”.

The image definition files will need to be updated as updates to the components of an image are released, the infrastructure blueprint will evolve as the solution scales and features/services are added, and there will certainly always be areas to automate further.
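
On the third component, automation, ‘leveraging APIs to query/update infrastructure’ in practice means scripting against a provider’s SDK rather than clicking through a console. A minimal sketch using AWS’s boto3 as one example provider – the region and the tag filter are assumptions:

```python
import boto3

# Query infrastructure programmatically: list running EC2 instances that
# carry an illustrative "managed-by: iac" tag. Region and tag values are
# assumptions for the sake of the example.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:managed-by", "Values": ["iac"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["InstanceType"])
```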

One thing to keep an eye on is making sure that no one bypasses the IaC pipeline and makes changes out-of-band, as that will result in what is known as ‘configuration drift’, where portions of the infrastructure don’t match the rest – something that often results in strange errors that are difficult to debug.

In closing, I’d also suggest one of the core tenets of the DevOps culture, measurement, be used so that teams can track improvements in deployment efficiency, infrastructure availability, and other KPIs.

Prior to Cybric, Mike Kail was Yahoo’s CIO and SVP of Infrastructure, where he led the IT and datacentre functions for the company. He has more than 25 years of IT Operations experience with a focus on highly-scalable architectures.


April 2, 2018  9:19 AM

Cloud complexity: why it’s good to be a DMaaS

Adrian Bridgwater

Cloud computing is great news, apart from one thing.

The ability to build complex heterogeneous cloud environments gives us a massively expanded choice of deployment options, bringing service-centric, datacentre-driven, virtualised data processing, analytics and storage to bear upon contemporary IT workload burdens — which is great news, apart from one thing.

The clue is in the name and it’s the c-word at the start: complex heterogeneous cloud environments are, well, pretty complex.

The issue is that when cloud data exists in various places, it creates a wider worry factor.

To explain… when elements of cloud data have a footprint in SaaS (Software-as-a-Service), PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service)… then all that data needs to be ‘managed through its lifecycle’ – by which we mean that data needs to be monitored so that we can assess enterprise Service Level Agreements (SLAs) and look to achieve consistency of service regardless of where data is ultimately stored.

Be a DMaaS

This is the pain point that Data Management-as-a-Service (DMaaS) company Druva has aimed to address with its Druva Cloud Platform – the technology unifies data protection, management and intelligence capabilities for data.

Druva says that challenges in cloud arise due to what it calls the ‘patchwork of disparate systems’ and the need to administer them.

According to Druva, “Different clouds have different data management needs — IaaS, PaaS and SaaS have different protection and data management requirements that range from simple resiliency needs like backup and disaster recovery to more complex governance such as compliance, search and legal data handling.”

Veep of product and alliances marketing at Druva is Dave Packer. He insists that cloud means that IT teams must deal with growing data lifecycle complexity, including managing data over time for long term retention and archiving.

“If not done properly, lack of management can equate to high costs due to collecting too much dark data,” said Packer. “If a company’s data management is a mess while it exists in-house, then exporting it to the cloud can introduce even more data management challenges, and the increased cost to fix these can offset any anticipated savings.”

Druva Cloud Platform aims to provide a single point of data management and protection for workloads in the cloud.

The product comes with an integrated console/dashboard to be used for data management and protection, including analytics, governance and visibility into data across heterogeneous environments.


March 28, 2018  12:51 PM

As lonely as a (complex) cloud (code script)

Adrian Bridgwater

We appear to be learning, on a daily basis, that you don’t just ‘throw applications over the wall’ into deployment, into the hands of operations.

Yes, we’ve had the DevOps ‘revolution’ (spoiler alert: application framework structures have been struggling to provide lifecycle controls of this kind for at least a couple of decades) and all the malarkey that has followed it… so what happened to the science and specialism of application delivery next?

Load balancers and Application Delivery Controllers (ADC) have done an admirable job of serving the previously more on-premises (admittedly more monolithic) age of pre-cloud computing application delivery.

Running blind

Avi Networks insists that legacy ADC appliances are “running blind” because this technology fails to track mobile users logging in from a variety of devices across different networks using applications that themselves are essentially distributed across different cloud resource datacenters.

This reality (if we accept the Avi Networks line of reasoning) means that application delivery is poor at scaling to user demand or application scale.

Avi proposes a distributed, microservices-based architecture and a centralised policy-based controller to balance application traffic from multiple locations – but with the customer still able to view and manage the total workload as a single entity.

The firm’s application delivery platform has this month launched new features to automate the deployment and scaling of applications in hybrid and multi-cloud environments.

No complex logic

Enhancements to Avi’s integration with its chosen datacentre management platforms (which in this case are Ansible and Terraform), paired with Avi’s machine learning capabilities, are supposed to let IT teams provision infrastructure resources and application services without the need to code complex logic.

“Automating application deployment and provisioning is essential for enterprises today. However, organisations need to code and maintain complex scripts to deploy applications in each respective environment, and for each potential scenario. The move towards hybrid and multi-cloud only makes this experience more painful,” said Gaurav Rastogi, automation and analytics architect at Avi Networks. “We’ve eliminated that hassle. You don’t have to think about inputs or code anymore. Simply declare the desired outcome and let Avi’s intent-based system do the rest.”
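
To give a flavour of what ‘declare the desired outcome’ means in practice, the sketch below posts a declarative virtual service definition to a controller’s REST API and lets the controller reconcile the infrastructure. The controller address, payload fields and authentication are assumptions for illustration, not Avi’s documented schema.

```python
import requests

# Illustrative declarative provisioning call: describe the desired virtual
# service and let the controller reconcile the infrastructure. The URL,
# payload fields and token auth are assumptions, not Avi's documented API.
CONTROLLER = "https://controller.example.com"
TOKEN = "example-session-token"

desired_state = {
    "name": "web-frontend",
    "services": [{"port": 443, "enable_ssl": True}],
    "pool": {
        "servers": [{"ip": "10.0.1.10"}, {"ip": "10.0.1.11"}],
        "autoscale": {"min_servers": 2, "max_servers": 10},
    },
}

response = requests.post(
    f"{CONTROLLER}/api/virtualservice",
    json=desired_state,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
    verify=False,  # demo only: a real deployment should verify TLS certs
)
response.raise_for_status()
print("desired state accepted:", response.json().get("name", "web-frontend"))
```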

As demand increases for an application, Avi can automatically spin up additional servers, cloud infrastructure, network resources, and application services for the application.

This is — although it’s something of a mouthful — elastic application networking services delivered through a single infrastructure-agnostic platform to provide multi-cloud automation orchestration with zero code.

From here, one wonders if (complex) cloud (code scripts) will wander lonely.



