CW Developer Network


April 24, 2018  2:37 PM

Appian’s 7 pillars of low-code

Adrian Bridgwater

Low-code business process company Appian has detailed its current updates and future road ahead as part of its annual Appian World user and customer conference.

Essentially a mostly graphical, visual drag-and-drop software tool designed to help build business ‘case management’ applications that track behaviour and task status, Appian composes apps from elements of the business that can be pinned down to be consistent, measurable, enforceable and auditable.

Appian was recently named a leader in the Forrester Wave: Cloud-Based Dynamic Case Management, Q1 2018 report. That report is, arguably, comparatively narrowly focused on cloud platforms that work in the case management arena. Other firms in this space include TIBCO, IBM, Hyland, Pegasystems, Newgen and bpm’online.

Appian integrates systems, data and digitised processes to create what the company calls unified omni-channel management across customer touchpoints, drawing on a mix of integrated cross-channel, cross-product and cross-service data.

So this is all fine, but what really defines low-code? What characterises this type of software application construction and what makes these applications functionally productive?

7 pillars of low-code

1. Visual modelling tools

Appian says that at the heart of a low-code application we find visual modelling tools. These are the controls that the user (often the citizen developer) will touch to build, launch and change enterprise applications.

2. CRUD functions

The core of a low-code platform will allow the user to perform CRUD functions: Create, Read, Update, Delete. These are the four basic actions we wish to execute upon data held in ‘persistent storage’, i.e. the layer of data storage in any device or machine which still exists when the machine is powered down.
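
To make the acronym concrete, here is a minimal, generic CRUD sketch in Python against SQLite. It is purely illustrative: the ‘cases’ table and column names are invented for the example and have nothing to do with Appian’s own data layer.

    import sqlite3

    # Purely illustrative CRUD example; the 'cases' table is invented for this sketch.
    conn = sqlite3.connect("cases.db")  # persistent storage: survives power-down
    conn.execute("CREATE TABLE IF NOT EXISTS cases (id INTEGER PRIMARY KEY, status TEXT)")

    # Create
    conn.execute("INSERT INTO cases (status) VALUES (?)", ("open",))

    # Read
    rows = conn.execute("SELECT id, status FROM cases").fetchall()

    # Update
    conn.execute("UPDATE cases SET status = ? WHERE id = ?", ("closed", rows[0][0]))

    # Delete
    conn.execute("DELETE FROM cases WHERE id = ?", (rows[0][0],))

    conn.commit()
    conn.close()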

3. Drag-and-droppyness

Low-code platforms should typically offer drag-and-drop interfaces so that users can reduce the amount of scripting they have to do manually. Ultimately this should move from low-code to no-code options where all the artefacts, data orchestration and integration functions that an application needs are encapsulated in pre-defined functionality that exists in the platform.

4. Agile iteration

Low-code development means you can iterate apps and release them as soon as functionality is built – so this is Agile (with a CAPS A) iteration. Since change is fast with low-code development, Agile transformation is made easier, well – in theory at least.

5. Instant mobility

Build once, deploy everywhere. With the explosion of mobile devices like smartphones and tablets, applications must have cross-platform functionality as standard in their design. With true low-code development, it should all happen behind the scenes automatically, with no extra effort, coding or resources.

6. Declarative tools

With low-code software, declarative tools are implemented through visual models and business rules. Appian says that removing the need to write custom code for these mitigates the difficulty of future changes or additions and speeds development times.
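
As a loose illustration of the declarative idea – not Appian’s actual rule engine, and with a rule format invented purely for this sketch – a business rule can be expressed as data and evaluated by a small generic engine, rather than hand-coded each time:

    # Illustrative only: business rules expressed declaratively as data,
    # evaluated by a small generic engine. The rule format is invented for this sketch.
    RULES = [
        {"field": "order_value", "op": ">", "value": 10000, "action": "require_manager_approval"},
        {"field": "customer_tier", "op": "==", "value": "gold", "action": "fast_track"},
    ]

    OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

    def evaluate(record):
        """Return the actions triggered for a given record."""
        return [r["action"] for r in RULES if OPS[r["op"]](record[r["field"]], r["value"])]

    print(evaluate({"order_value": 25000, "customer_tier": "gold"}))
    # ['require_manager_approval', 'fast_track']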

7. Security and Scalability

Appian admits that low-code development has had its knocks… mainly when it comes to security and scalability. While low-code development was initially focused on smaller, departmental and less critical capabilities, today’s low-code should be enterprise-grade. The right platform has all the necessary security certifications in place, as well as proven experience with large-scale initiatives.

April 24, 2018  1:38 PM

Appian CEO Calkins details low-code road ahead

Adrian Bridgwater

Low-code software application development company Appian held its annual user conference and exhibition this week in Will Smith’s favourite holiday hangout, Miami Beach, Florida.

Appian founder and CEO Matt Calkins hosted an opening keynote by referencing his firm’s year in review, which included an IPO and a variety of functionality updates to the core Appian platform which now runs (what Calkins claimed to be) ‘thousands’ of applications.

“It’s our goal to keep getting faster and our commitment rests on getting better at this aspect of our platform. We aren’t focused on the competition, we’re just following the star that leads to what the customers want to do with our product. We’re not trying to open up a lead, we’re trying to unlock an entire market,” said Calkins, in his keynote address.

Calkins says that Appian is focused on making it much easier for more types of users to create applications. He thinks that if we do this, we can create a new renaissance in terms of the way software is applied to our everyday lives — and, crucially, it will enable organisations to operate differently.

“Every year we want to cut in half the amount of work it takes to build an application, it’s our take on a kind of Moore’s Law objective,” said Calkins.

In terms of how it cuts down application construction time, Calkins explained that Appian is focused on the aspects of application build that require the most scripting. A key part of this will be making GUI construction much faster.

Other areas for the future will include integration tasks, which may still involve more scripting than Appian would ultimately like to deliver. The objective Calkins has is to be able to offload certain aspects of these kinds of tasks so that they can be handled by a Robotic Process Automation (RPA) bot or a software algorithm.

AI ‘serious’ new logo

There will also be a big chunk of new Artificial Intelligence for sentiment analysis.

“Our tool will look at any text string in real time and be able to render to the user (in real time) how happy a customer is. It will also tell you which customer representative is doing the best job and this will all be bundled into Appian now at no extra charge,” said Calkins.
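
As a loose illustration of the general concept only – this is not Appian’s model, and the word lists and scoring below are invented – a naive, lexicon-based sentiment scorer might look like this:

    # Naive lexicon-based sentiment scoring, purely to illustrate the concept.
    # The word lists and weighting are invented for this sketch.
    POSITIVE = {"great", "happy", "thanks", "excellent", "love"}
    NEGATIVE = {"angry", "terrible", "refund", "cancel", "disappointed"}

    def sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return score  # > 0 broadly happy, < 0 broadly unhappy

    print(sentiment("thanks the support was excellent"))    # 2
    print(sentiment("I am disappointed and want a refund"))  # -2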

“We have even developed a new logo to show that AI is part of Appian — so hey, we’ve developed a logo, that way you know we’re serious,” joked Calkins.

RPA itself will also be part of the road ahead. But in this case it is focused on orchestration tasks. This is related to what Calkins has noted as ‘bi-directional passage of complex data types’ — technology that can be used to build RPA into more diverse teams that feature both humans and robot functions.

“We’re trying to raise the profile of low code so that people realise it’s all about building applications that need less code. If more attention does come to this term, then I want people to understand more about what it really means — this does not mean that the ‘thing’ you are creating is in any way less sophisticated,” said Calkins. “Low code should be used ABOVE ALL to create more sophisticated software because that’s the software that is taking the longest to build.”

What Appian wants to do with low code is for its apps to be hosted anywhere, to run anywhere and to work with data anywhere. Calkins went a little more corporate at this point and spent some time talking about SLAs and reliability, so let’s take the enterprise backslapping platitudes as read.

Looking more closely at architectural developments, Appian explains that it is working to facilitate access to information that may reside in multiple places.

What’s it like listening to the Appian CEO? That’s a tough question to answer — this is a technical guy, talking a mixture of comparatively technical algorithm-related content mixed with corporate positioning, working to build a (comparatively) non-technical technical software platform.

But that’s Appian in a nutshell isn’t it? It’s exposure to low level software engine technology in as high level (as in human centric) a format as possible.


April 23, 2018  12:50 PM

Luxoft puts blockchain adapter into low-code Appian toolset

Adrian Bridgwater

Low-code software application development company Appian holds its annual user conference and exhibition this week in Will Smith’s favourite holiday hangout, Miami Beach, Florida.

First to the table for a portion of news at Appian World 2018 was IT services firm Luxoft.

But in this day and age, what, really, does any company mean by calling itself an ‘IT services’ firm, when all businesses are digital and all digital businesses run on software and all software is based on services?

It is possibly more accurate to call Luxoft a custom/bespoke software development services and engineering firm with a built-in strategic consulting function.

The firm’s key sectors of work are financial services, automotive, communications, healthcare & life sciences.

Blockchain adapter

With a healthy breadth of enterprise software lifting under its belt, Luxoft has logically had a good deal of exposure to blockchain as this immutable distributed ledger technology now comes to the fore in terms of its application across software systems looking to use its record-based advantages. As such, Luxoft has built a blockchain adapter for Appian’s low-code Rapid Application Development (RAD) platform.

This makes sense, surely?

Appian is known for its low-code platform designed to help firms build software front ends and Graphical User Interface (GUI) elements, but also deeper software tools that help create and control Business Process Management (BPM) functions and their associated logic.

But Appian is not, necessarily, known for its work or competency with blockchain and its secure (but, let’s remember, not completely reverse-engineer-able and pure in every single sense) digital environment that facilitates data sharing.

Businesses using Appian’s Business Process Management (BPM) tool can now integrate a blockchain network into their day-to-day business processes.

No rip and replace

“The launch of this adapter is about helping businesses realise the potential of blockchain by making it easier to use,” said Vasiliy Suvorov, vice president of technology strategy at Luxoft. “Problems integrating blockchain into existing in-house systems are often the biggest obstacles to its adoption. Now, by integrating blockchain into a BPM [system], a business can [gain] the benefits of a decentralised model [taking blockchain technology from outside the central BPM stack] whilst retaining its existing IT architecture. This means business don’t have to rip out their old IT systems to use blockchain.”

Luxoft claims that, in healthcare, the blockchain adapter can reduce claims processing errors and inaccurate medical bills.

Blockchain creates an auditable way for medical and pharmacy systems to share and update real-time accumulators, meaning medical insurers, healthcare providers and pharmacies using the Appian platform instantly have access to the same claim data.
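
As a toy illustration of why a shared, append-only record helps here – this is generic, and nothing to do with Luxoft’s adapter or any real DLT stack – each accumulator update can be chained to the previous one by hash, so every party holding a copy can verify the same history:

    import hashlib, json, time

    # Toy append-only, hash-chained ledger of claim accumulator updates.
    # Generic illustration only; real DLT platforms are far more involved.
    ledger = []

    def append_update(party, member_id, amount):
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        entry = {"party": party, "member": member_id, "amount": amount,
                 "ts": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        ledger.append(entry)

    def verify():
        """Every party holding a copy can check the chain has not been altered."""
        for i, e in enumerate(ledger):
            expected_prev = ledger[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != expected_prev or e["hash"] != recomputed:
                return False
        return True

    append_update("pharmacy", "member-123", 40.0)
    append_update("insurer", "member-123", 120.0)
    print(verify())  # True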

Sam Mantle, managing director of Digital Enterprise at Luxoft says this is important because the disparate systems used by pharmacies, healthcare providers and insurers to manage medical information are extremely complex.

“They are not designed for the smooth exchange of data,” said Mantle.

Distributed Ledger Technologies (DLT)

Luxoft built this adapter on Appian’s platform. It is used not just in healthcare but also in financial services and other verticals in areas such as data management. The move is part of a push to commercialise Distributed Ledger Technologies (DLT) and accelerate their deployment in established business processes.



April 20, 2018  10:14 AM

Twilio’s developer-first approach to programmable cellular communications

Adrian Bridgwater

Cloud communications software company Twilio has announced Twilio Programmable Wireless.

This is programmable cellular communications software that offers APIs to power web-connected Internet of Things (IoT) products.

To be clear, Twilio is making the cellular connectivity element itself programmable… the technology proposition is that Twilio handles the complexity of dealing with carrier business models.

IoT devices need a connectivity provider, obviously.

But, says Chetan Chaudhary, general manager of IoT at Twilio, existing connectivity providers have antiquated ordering, billing and reporting, non-existent documentation and often require developers to wait weeks to get started, deterring developers and slowing adoption.

“With Twilio Programmable Wireless, developers now benefit from a platform that is self-service, pay-as-you-go, has extensive documentation and enables them to begin building IoT solutions immediately,” says Chaudhary and team.

Connecting a device to the Internet requires a subscriber identification module (SIM) that is standard in every mobile phone today. Twilio provides developers with SIM cards that give them access to global connectivity in more than 120 countries.

Twilio Programmable Wireless enables developers to see where and when SIMs are connected and to monitor data consumption in real time with individual SIM reporting. They can also minimise data costs by using Twilio’s Commands API to send and receive machine-to-machine (M2M) commands using the SMS transport protocol.
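
A minimal sketch of sending such a command, assuming the Twilio Python helper library’s Programmable Wireless Commands resource; the credentials and SIM identifier are placeholders, and parameter names should be checked against the current Twilio documentation:

    # Sketch only: assumes the Twilio Python helper library's Programmable Wireless
    # Commands resource. Credentials and the SIM identifier below are placeholders.
    from twilio.rest import Client

    client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

    command = client.wireless.commands.create(
        sim="SIM_SID_OR_UNIQUE_NAME",  # placeholder for the target SIM
        command="status?"              # payload delivered to the device over the SMS transport
    )
    print(command.sid)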

The firm insists that cellular IoT connectivity enables developers to build applications that require a persistent, always-on connection and to integrate systems for a whole suite of use cases.

In private beta, Twilio customers have built new services in healthcare, transportation and hospitality using this software.


April 18, 2018  9:14 AM

Cloudistics: you had me at programmatically extensible

Adrian Bridgwater

What the cloud computing industry needs is an increasing number of platforms, right?

Okay, perhaps not: platform plays are already many and multifarious in their nature, and the formally defined cloud layer of Platform-as-a-Service (PaaS) already has its place firmly etched into the global taxonomy that we now use to describe the workings of the industry.

That all being said then, when the Computer Weekly Developer Network hears the newswires crackling with a new cloud software platform company launch, the hackles do tend to go up.

Cloudistics this week launches its presence in the EMEA region in response to what the firm calls demand for its private cloud with a ‘premium experience’, no less.

Premium cloud, USDA approved

So premium cloud, what could that be and why would software application developers be interested in this gambit?

Is premium cloud extra lean, USDA approved organic grass fed Kobe beef cloud?

It could be, but in truth it’s more a question of Cloudistics claiming to provide a private cloud deployment with some of the swagger and weight of the public cloud, but behind the firewall.

To be specific, these are the type of public cloud functions associated with actions such as initial cloud implementation, deployment, operations and maintenance – that is, stuff that is arguably tougher to make consistent, predictable, repeatable and secure in private cloud environments.

Programmatically extensible pleasure

Cloudistics Ignite is engineered with composable, software-defined, clusterless scaling. The resultant cloud operates free of any hardware-specific dependencies and is programmatically extensible with automation, orchestration, performance and security built in.

This, for software application development concerns, may be the special sauce here: programmatic extensibility in cloud environments can be tough as we look to tasks such as the ‘repatriation’ of previously terrestrial applications on their route to a new cloud-based existence.

Why? Because tasks like resource provisioning and performance management, carried out while federating and abstracting all physical hardware into a cloud service, are a complex, big ask – and this, in real terms, is what Cloudistics does.

“We are immensely excited about the prospects in EMEA, which is frankly ripe for what Cloudistics has to offer. The emphasis has shifted from cloud itself to cloud functionality as an enabler for digital transformation… and this is where Cloudistics shines,” commented Chris Hurst, VP EMEA, Cloudistics.

Integrated with its Ignite product is the Cloudistics Application Marketplace. This features pre-built, ready-to-run and reusable application virtual machine templates that can be deployed instantly.

While Cloudistics may not be the private-public programmatic provisioning panacea for all instance requirements, the ability to encapsulate and package a good degree of public cloud functionality controls as a consumable service in a private cloud platform play is what, arguably, makes this company interesting.


April 17, 2018  9:28 AM

Qualys ups security automation with a bit of Swagger

Adrian Bridgwater

Cloud security firm Qualys, like every vendor today, is pushing the automation mantra.

The company’s Web Application Scanning (WAS) 6.0 now supports Swagger version 2.0 to allow developers to streamline [security] assessments of REST APIs and get visibility of the security posture of mobile application backends and Internet of Things (IoT) services.

NOTE: Swagger is an open source software framework backed by a considerable ecosystem of tools that helps developers design, build, document and consume RESTful web services.

As noted here, RESTful web services are built to work best on the web.

Representational State Transfer (REST) is an architectural style that specifies constraints, such as the uniform interface, that if applied to a web service induce desirable properties, such as performance, scalability and modifiability, that enable services to work best on the web.
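
For a flavour of what a Swagger 2.0 description looks like, here is a minimal, invented example expressed as a Python dictionary (not one of Qualys’ own API definitions); the point is that the specification is just structured metadata about the REST endpoints a scanner such as WAS can walk:

    import json

    # A minimal, invented Swagger 2.0 definition expressed as a Python dict,
    # showing the shape of the metadata a REST API security scanner can consume.
    swagger_doc = {
        "swagger": "2.0",
        "info": {"title": "Example IoT backend", "version": "1.0.0"},
        "host": "api.example.com",
        "basePath": "/v1",
        "paths": {
            "/devices/{deviceId}": {
                "get": {
                    "parameters": [
                        {"name": "deviceId", "in": "path", "required": True, "type": "string"}
                    ],
                    "responses": {"200": {"description": "Device details"}},
                }
            }
        },
    }

    print(json.dumps(swagger_doc, indent=2))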

Additionally (in terms of the Qualys news), a new native plugin for Jenkins delivers automated vulnerability scanning of web applications for teams using this Continuous Integration/Continuous Delivery (CI/CD) tool.

“As companies move their internal apps to the cloud and embrace new technologies, web app security must be integrated into the DevOps process to safeguard data and prevent breaches,” said Philippe Courtot, chairman and CEO, Qualys, Inc. “Qualys is helping customers streamline and automate their DevSecOps through continuous visibility of security and compliance across their applications and REST APIs. With the latest WAS features, customers now can make web application security an integral part of their DevOps processes, avoiding costly security issues in production.”

In tandem with all of the above, developers (and their DevOps compatriots) can now leverage Qualys Browser Recorder, a free Google Chrome browser extension, to review scripts for navigating through complex authentication and business workflows in web applications.

Qualys also launched a new free tool – CertView – to make it easier for developers to create and manage an inventory of their certificates.


April 16, 2018  6:32 AM

Is a Hive crash a ‘stale’ middleware malfunction?

Adrian Bridgwater

Hive Connected Home products are great and the support is outstanding, but they will crash… or at least ours did and here’s what we hope is an interesting analysis in terms of trying to work out where the faults could crop up in the new world of the Internet of Things (IoT).

Consider the facts and follow the logic if you will please.

There are three pieces of hardware here and one piece of software:

  • The Hive Hub – the unit that sits in your boiler or immersion room area to drive the ON/OFF functions of heating and hot water.
  • The router extension – the unit that takes a direct feed out of your home Internet box to provide wireless connectivity to Hive devices.
  • The thermostat – it is what it is, it’s a thermostat and it’s digital and attractive and it can be used to turn heat up or down, programme a schedule and put the hot water ON/OFF.
  • The Hive app, on your smartphone or tablet.

So consider the scenario in our use case.

The Hive thermostat was working correctly, talking to the router extension and onward to the rest of the system; the heating could be controlled perfectly.

But the app failed to synchronise with the home devices despite several reboots, password resets and re-installations.

Foolishly (it turns out) a factory reset on the thermostat was executed.

This is easy to do: the user simply holds down all three bottom buttons (in fact just the two on the left – menu and back – will do it). This leaves the thermostat unable to connect with the router in any form and the only means of turning the heating and hot water ON/OFF is via the Hive Hub in your immersion cupboard – no ‘control’ of temperature is then possible, simply ON/OFF.

The solution

The solution is Maureen from Kenya in the Glasgow call centre, who (bless her heart) works on a Sunday and knows the system back to front.

She can also reboot remotely and make everything right again – all you need to do is provide your name and postcode regardless of your energy supplier.

So here’s the question: if the system is working internally, so to speak, without a web app connection, but the core ‘app’ application fails to synchronise, then surely the app and the higher-level systems engineering have outstripped the middleware on the devices, which have been left with the ability to talk to each other but stripped of the ability to connect to the outside world?

Surely the firmware or middleware on the devices themselves has become stale and outdated.

Lessons learned… do not necessarily perform a factory reset on any IoT device if there’s a good call centre option and do check all wire connections and battery performance in remote unit devices first… and when all that fails, call Maureen, she’s lovely.



April 13, 2018  2:35 PM

Why workflow automation matters (to developers)

Adrian Bridgwater

Developers need to know more about workflow automation, it appears.

As analyst ‘thumb in the air’ pontificating predictions have ‘suggested’, the workflow automation market is expected to reach nearly US$17 billion (£12 billion) by 2023, up from $4.7 billion in 2017.

But what is workflow automation?

Related to business process automation, IT process orchestration and content automation, workflow automation is perhaps best described as a means of managing and streamlining manual and paper-based processes, made up of typically repetitive (and often essentially unstructured) tasks that feed into a higher-level, total digital business process.

Good workflow automation reduces staff turnover because workers hate managers who fail to communicate with them – but with this technology, they can have an automated communications backbone with accountability pointing to who should do what, and when.
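
As a rough sketch of that ‘who should do what, and when’ backbone – the task format and data are invented for illustration and no particular product’s API is implied:

    from datetime import date

    # Invented workflow format, purely to illustrate 'who does what, and when'.
    WORKFLOW = [
        {"task": "collect_invoice", "owner": "alice", "due": date(2018, 4, 20), "done": True},
        {"task": "approve_invoice", "owner": "bob",   "due": date(2018, 4, 22), "done": False},
        {"task": "issue_payment",   "owner": "carol", "due": date(2018, 4, 25), "done": False},
    ]

    def next_action(workflow):
        """Return the first incomplete task, i.e. who is accountable right now."""
        for step in workflow:
            if not step["done"]:
                return step
        return None

    step = next_action(WORKFLOW)
    print(f"{step['owner']} should do '{step['task']}' by {step['due']}")
    # bob should do 'approve_invoice' by 2018-04-22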

But, as we apply automation, we need to be careful and remember what Bill (Gates) said – below.

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.”

Big data’s role

Why the workflow automation 101 and clarification then?

Because as companies across industries accelerate their digital initiatives and bring more business functions online, IT systems are becoming increasingly distributed. With this shift, business processes are undergoing their own transformation and becoming more complex and interconnected, driving demand for workflow automation and visibility technology…

… and it’s developers who are building it.

One player in this market is Camunda. The company has just hit the version 2.0 release of its Optimize product, a software tool designed to combine big data and workflow automation to provide a holistic view of business process performance.

The product provides stakeholders with what has been glamorously called ‘instantaneous insight’ into revenue-generating business processes as they are automated.

“With Optimize 2.0, business leaders can identify potential problems and fix weak spots in core processes before delivery of products and services to customers is disrupted,” said the company.

Camunda claims that many traditional workflow systems can’t manage the dramatic increase of transactions processed online, in addition to the growth of distributed systems as companies adopt cloud, containers and microservices.

Because of this, Camunda claims to be the only vendor addressing the big workflow impact of all these developments, with software that provides cohesive end-to-end intelligence into how well core business workflow processes are working.

“Optimize 2.0 combines the power of big data and workflow automation to enable customers to understand what’s going on with the most important aspects of their business: their revenue-generating processes,” said Jakob Freund, co-founder and CEO of Camunda. “With its flow charts, reports and analytics, Optimize customers can see precisely what steps have been executed, the status of orders for customers and detailed information if and why they are stuck in the process.”

Optimize 2.0 provides a report builder, dashboards, alerts and correlation analytics to give customers visibility into how their business workflows and processes are performing.

With the Camunda REST API, Optimize imports the historical data, storing it in an Elasticsearch big data repository that allows for queries. After the initial import, Optimize pulls new data from Camunda at configurable intervals, providing users with up-to-date information in soft real-time.

NOTE: Soft real-time systems are typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations — like software that maintains and updates flight plans for commercial airlines: the flight plans must be kept reasonably current, but they can operate with a latency of a few seconds.
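
A minimal sketch of this kind of periodic import loop, assuming the Camunda engine’s REST endpoint for historic process instances and a local Elasticsearch node – the endpoint path, index name, interval and client calls are assumptions for illustration, not a description of how Optimize itself is built:

    import time
    import requests
    from elasticsearch import Elasticsearch

    # Assumed endpoint, index and interval for illustration only; not how Optimize is built.
    CAMUNDA_HISTORY_URL = "http://localhost:8080/engine-rest/history/process-instance"
    es = Elasticsearch("http://localhost:9200")  # client call signatures vary by version

    def import_once():
        for instance in requests.get(CAMUNDA_HISTORY_URL, timeout=10).json():
            es.index(index="process-instances", id=instance["id"], document=instance)

    while True:
        import_once()
        time.sleep(30)  # configurable interval: 'soft real-time', seconds of latency are fine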


April 12, 2018  7:21 AM

British Army goes forth into API zone

Adrian Bridgwater

The British Army marches on its stomach, fights for king, queen, god and country, upholds itself as a bastion of exemplary standards and is never afraid to say ‘stop it, that’s silly’ if things get out of hand.

Like any ‘organisation’, the British Army is also focused on the process of so-called digital transformation.

Our forces’ technology interest (like that of any organisation) today is of course focused on data, in environments where that data is operational and intelligence information of life-changing importance. This isn’t just mission critical, it’s military mission critical.

A major multi-vendor open technology consortium is now working to improve the Army’s understanding of the readiness of its troops and equipment.

The consortium’s lead partner is German softwarehaus Software AG.

Software AG reminds us that, like many organisations, the British Army is saddled with a number of overlapping legacy technologies, with information siloed in many different systems and databases contracted to different system integrators.  

Don’t mention the monoliths

The Army has now been on a journey to decompose large, monolithic, internally developed applications into loosely coupled services, designed to be externalised.

To do so, it required an API (Application Programming Interface) management platform to enable governance, monitoring, securing and support of the APIs — this is what it gets from Software AG, along with software to integrate with large defence applications to expose reference data and services.

Software AG’s webMethods integration platform and its CentraSite (an API catalogue and services registry) are the two technology bases in use here.
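
For a sense of what ‘exposing reference data as a loosely coupled service’ can look like in practice, here is a generic sketch using Flask – it is not a description of webMethods or CentraSite, and the route and data are invented:

    # Generic illustration of exposing reference data behind a small REST service.
    # Uses Flask for brevity; this is not how webMethods or CentraSite work internally.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Invented reference data that might otherwise sit inside a monolithic system.
    EQUIPMENT_STATUS = {
        "vehicle-001": {"type": "truck", "ready": True},
        "vehicle-002": {"type": "truck", "ready": False},
    }

    @app.route("/api/equipment/<asset_id>", methods=["GET"])
    def equipment(asset_id):
        record = EQUIPMENT_STATUS.get(asset_id)
        return (jsonify(record), 200) if record else (jsonify({"error": "not found"}), 404)

    if __name__ == "__main__":
        app.run(port=5000)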

“Without effective data and services, it’s very difficult for planners to understand our forces’ state of readiness, or to create much-needed services for our soldiers; this is where the Software AG API suite is adding real value,” said Lt Col Dorian Seabrook, head of operations at Army Software House.

The use of Software AG’s API Management capability is said to enable a range of services across boundaries, drawing information from numerous systems to support functions spanning HR, equipment availability, operational readiness and payment of reserves, as well as trialling remote processing automation on legacy systems.

Head of UK public sector and alliances for Software AG, Clive Freedman, has said that, like many public-sector organisations, the Armed Forces face budget cuts and are being asked to do more with less.

“The British Army has looked at how businesses in the private sector are using technologies such as Master Data Management (MDM) and APIs to improve data sharing and visibility, save money and boost effectiveness,” said Freedman.

Truth be told then, the British Army is now following an API-first strategy for interoperability with data and services in a way that has thus far not been possible — and that’s not silly.



April 10, 2018  7:45 AM

Database DevOps, it’s now ‘a thing’

Adrian Bridgwater

There’s DevOps and there’s DevSecOps — heck, come to think of it there’s even DevLoBDOps (Developer Line of Business Database Operations).

Of all the DevOps subsets, Microsoft SQL Server tools vendor Redgate Software is firmly in the Database DevOps camp — can we call that DataDevOps?

Either way, what would database-centric developer-&-operations tools actually do?

This is proactive monitoring technology to examine a database (in this case, a SQL Server estate) to provide performance diagnostics.

Of some news value then, Redgate’s SQL Monitor now integrates with the deployment tools in the company’s wider portfolio.

Why is this important?

Because the increased frequency of releases that DevOps enables typically means that development shops are shifting from occasional big deployments to constant smaller ones.

Redgate’s Database DevOps tools customer Skyscanner says that it moved to releasing database changes 95 times a day, rather than once every six weeks. That’s a fast releasing database… hence, the rise of Database DevOps.

The tough part of course is monitoring all those deployments.

If a ‘breaking change’ (a change that causes a system break) hits production, the cause has to be pinpointed quickly and precisely.

According to Redgate, “By highlighting which tool was used for a specific deployment, when it occurred and to which database, SQL Monitor lets users instantly drill down to the context and details they need to resolve any problems that do occur.”

The integration with the different database deployment tools also allows users to choose the method which best suits their workflow, as Jamie Wallis, Redgate product marketing manager, explains.

“Some of our users like the way Redgate ReadyRoll integrates with Visual Studio and generates numerically ordered migration scripts,” said Wallis. “Others prefer SQL Compare, which is the industry standard tool for comparing and deploying SQL Server databases, or DLM Automation which plugs into the same build and deployment tools they use for their applications. We want to give them the freedom to stay with the tool they prefer, as well as reassure them that if there is an issue, SQL Monitor will help them track down the reason in seconds.”
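
As a rough illustration of the ‘numerically ordered migration scripts’ idea in general – not ReadyRoll specifically; the file naming, tracking table and SQLite target are invented for the sketch – a deployment step applies each pending script in order and records what has already run:

    import sqlite3
    from pathlib import Path

    # Generic illustration of numerically ordered migration scripts; not ReadyRoll itself.
    # Scripts such as 0001_create_orders.sql, 0002_add_index.sql live in ./migrations.
    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")

    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}

    for script in sorted(Path("migrations").glob("*.sql")):  # numeric prefix gives the order
        if script.name not in applied:
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (script.name,))
            print(f"applied {script.name}")

    conn.commit()
    conn.close()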

In terms of performance, the deadlock capability of SQL Monitor has been extended with graphs that show when deadlocks occur and include historical data, so that users can interpret what happened.

The development team behind SQL Monitor are now looking into improvements to configuring and filtering alerts, so that over time users can train SQL Monitor about which things are and aren’t important to them.


