CW Developer Network


October 29, 2019  1:02 PM

Pegasystems lays down the do’s & don’ts of RPA

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Francis Carden in his capacity as vice president for digital automation and robotics at Pegasystems, a company known for its work in customer engagement software.

Robotic Process Automation (RPA) is already helping businesses reduce inefficiencies and deliver cost savings, but it’s not perfect.

IT teams must consider in equal measure what RPA should and shouldn’t be used for.

DO use RPA

DO use RPA…to automate simple tasks.

RPA typically automates simple, mundane work. Years of business process optimisation have left much of the remaining work not simple and repetitive but complex and fragmented, which limits how far RPA can scale. Companies switching from one unattended RPA vendor to another eventually realise it is the complexity of the processes and applications that prevents scale, not RPA itself.

Businesses should use attended RPA to automate simpler tasks within complex processes and optimise every worker rather than automate every worker. 50% automation across 1,000 people is better than 100% across 20.

Also use RPA as a means to digital transformation, not the end game.

The goal of digital transformation is to transform the business. RPA has a place in optimising parts of the enterprise, but to digitally transform, many applications will ultimately need RPA to be replaced with deeper intelligent automation technologies. Automating bad processes on old, costly-to-manage legacy apps still leaves you with bad processes on those applications.

No set-and-forget

In addition, RPA is not scalable for APIs and is too breakable for IT to treat as set-and-forget. According to a recent Pega survey of 509 global decision makers on RPA, 87% of respondents experience some level of bot failure, and 41% said that ongoing bot management is taking more time and resources than expected. RPA automations can be exposed as APIs when redesigning new digital interfaces, but these RPA connections will be temporary.

There are far better technologies to automate intelligently for serious transformation than RPA and these include:

  • Low-code designs that support centralised, governed rules rather than siloed macros, eliminating cumbersome spreadsheets.
  • Centralised email management with natural language processing to read and respond to emails, rather than RPA on Outlook.
  • Orchestration and low-code together to replace old, tired processes and accelerate ending the life of many legacy apps that encourage poor processes.

DON’T use RPA…

Don’t use RPA to automate complex processes.

The core of RPA is automating screens through the user interface (UI): reading something off the screen, doing something with the data and possibly sending data back into it. RPA doesn’t need much intelligence for this, because all RPA is limited by the operating system and by its ability to support the multitude of complexly compiled applications that run on it. Some intelligence is applied via on-screen optical character recognition, but in those cases application automation is even slower and more brittle than the object-based automation found in some RPA products.
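To make that point concrete, here is a deliberately minimal, illustrative sketch of the screen-level pattern, using the open source pyautogui and pytesseract libraries rather than any particular RPA product; the reference image, coordinates and field names are hypothetical.

```python
# Illustrative only: surface-level automation in the spirit of RPA, using
# pyautogui (screen control) and pytesseract (on-screen OCR). The reference
# image and target coordinates are hypothetical placeholders.
import pyautogui
import pytesseract


def copy_invoice_total_between_apps():
    # Find a field in the source application by matching a saved screenshot
    # of it; this image matching is what makes UI automation brittle, since
    # any change to theme, resolution or layout breaks it.
    try:
        field = pyautogui.locateOnScreen("invoice_total_field.png")
    except pyautogui.ImageNotFoundException:
        field = None
    if field is None:
        raise RuntimeError("Screen layout changed; automation is broken")

    # Read the value off the screen with OCR: slower and more error-prone
    # than object-based automation, as noted above.
    value = pytesseract.image_to_string(pyautogui.screenshot(region=field)).strip()

    # Push the value into the target application by clicking a hard-coded
    # position and typing it in, which is equally fragile.
    pyautogui.click(x=960, y=420)
    pyautogui.write(value, interval=0.05)


if __name__ == "__main__":
    copy_invoice_total_between_apps()
```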

Complex processes embedded with complex applications and GUIs are a no-go – they are fragile, expensive to support and break frequently.

Also, don’t use RPA for long-term enterprise application integration (EAI) projects.

Using RPA – UI automation – for long-term EAI projects is not architecturally sound. A simple UI change down the RPA chain could seriously impact EAI, which is disastrous when the objective of enterprise architects is to use robust, scalable APIs connected using proven, secure technology. However, those APIs aren’t always available in time to fulfil short-term optimisation or digital transformation goals. In that case it is fine to at least consider RPA as a stop-gap or bridge so the value can be delivered.

It will be interesting to see how RPA develops. RPA’s lack of scale is at least drawing attention to bad processes and eventually, maybe, people will start to plan to transform them instead.

October 29, 2019  12:05 PM

Nice on RPA for EAI: inside out & beyond

Adrian Bridgwater

Itay Reiner, product director for advanced process automation solutions at Nice, spoke to the Computer Weekly Developer Network to discuss how Robotic Process Automation (RPA) is used to aid Enterprise Application Integration (EAI).

Reiner says that RPA provides out-of-the-box connectivity that enables integration with ‘any’ kind of application within the enterprise.

As such, RPA does not require any additional work on the IT side in order to integrate with enterprise applications. By utilising the connectivity technologies available, RPA is used to create integrations between different applications within the enterprise.

The advantage is that it is done quickly with minimal development effort and without the need to expose additional APIs on the application side.

Reiner writes as follows…

RPA is known for its low-code style of development. By utilising different application connectivity technologies, such as object-based connectivity and AI-driven surface-based connectivity, it (in most cases) eliminates the need to use APIs.

So how is RPA being used both a) internally, to streamline IT operations, and b) externally, to automate manual tasks and help organisations on their digital transformation journey?

Internal use case

A large Internet Services Provider (ISP) implemented a self-service mechanism for customers to run network testing immediately instead of waiting for human customer service. By selecting the relevant options from a series of digital self-service menus, RPA robots are automatically triggered to match the customer’s inputs and authorisation details to the equipment profiles, in order to perform a network test in real-time.

By putting the power into the hands of the customer and further enabled by robots, significant time savings were achieved for both the ISP and the end customer. As well as delivering greater customer satisfaction the ISP was able to deliver additional services without the need to educate their employees in additional network testing scenarios.

External use case

A global provider of managed services found the manual processing of documents to be one of its most labour-intensive, error-prone and expensive administrative processes. By utilising advanced Optical Character Recognition (OCR) capabilities, the RPA bots could read and extract data from a high volume of scanned documents. The software robots then quickly and efficiently transferred the data to various enterprise applications including SAP, Oracle and salesforce.com, as well as the end customers’ own bespoke systems.

Average processing time was greatly improved and service consistency was achieved across thousands of document types and permutations. There was 100% compliance with processing accuracy, leading to cost reductions and greater employee satisfaction and engagement as a result of being freed from the mundane manual processing of documents.

RPA can be utilised on many different levels. For instance, humans can be directly involved and interacting with the robots or, on a more functional level, RPA can be used to build integration between pure machine interfaces e.g. database queries, different APIs etc. The generic connectivity capabilities of RPA enable integration with any machine interface and fulfill any type of automation logic between two machines.

Unattended RPA gets smarter

Unattended automation technology has become commoditised as many enterprises have already successfully adopted it within their business operations. The next step towards smart automation lies in desktop automation technology and its more intelligent personification in the form of employee virtual attendants.

Robotics will become more focused on enabling employees to perform their desktop tasks more effectively and efficiently with real time process optimisation and guidance.

Furthermore, the integration of RPA with more cognitive and AI-driven tools will also make RPA smarter, as the technology learns from human input and over time starts to more closely mimic and resemble human actions.

RPA engines only ever have a fixed level of logic that they can bring to the table… so how do we know when we’re at the limit of what any one single engine can do, and what happens next? This largely depends on the specific use case. Most RPA platforms have very extensive logic engines that can support any type of use case or logic. In cases where more complicated logic is needed and cannot be implemented natively, RPA can integrate with external third-party technology to close any relevant gaps.

Where does RPA go next?

RPA will become more and more intelligent when integrated with cognitive tools that provide more human-like capabilities, such as exception handling, auto-repair, decisioning and built-in optimisation (to facilitate continuous improvement of an implemented process).

Cross platform connectivity is a given and is already being delivered by most RPA vendors. RPA is moving more towards the employee desktop (Desktop Automation) to better align humans and robots. A more diversified digital workplace will encompass humans and desktop robots working more collaboratively and we will start to see more robots moving from back end servers to the employee’s desktop, enabling employees to perform better.


October 29, 2019  11:25 AM

CI/CD series – Tibco: Continuous self-service intelligence with Kafka DataOps

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Mark Palmer, SVP of data, analytics and data science at TIBCO Software, and Steven Warwick, principal software engineer at TIBCO Software, a specialist in integration, API management and analytics.

Palmer & Warwick write…

Developers love Kafka because it makes it easy to integrate software systems with real-time data. But to business users, Kafka can seem like a cable TV station you haven’t subscribed to: insights are buried within Kafka messages because they aren’t easily accessible.

Recently, developments in analytics and Kafka DataOps have made Kafka a first-class citizen for Business Intelligence (BI) and Artificial Intelligence (AI) for the first time. Better yet, getting connected can now take just a few minutes and the insights are real time, so the information gathered is well suited to CI/CD implementations.

Here’s how DataOps for Kafka works and how continuous intelligence with Kafka can change how business users think about data.

Streaming Business Intelligence

Streaming BI is a new class of technology that connects Kafka streaming data to self-service business intelligence. For the first time, Kafka business analyst blindness can be cured with visual analytics. Here’s how it works: streaming BI supports native connectivity for Kafka topics and takes just a few minutes to set up; you open a Kafka topic as you would open an Excel spreadsheet. Once connected, a Kafka channel looks like any other database table, except it’s live and continuously updated in the BI tool. The business analyst selects the topics they want to explore, creates visualisations and links views.

Each visualisation creates queries that are submitted to the streaming BI engine to continuously evaluate messages on Kafka as conditions change. When any element in the result set changes, the incremental change is pushed to the BI tool.

Streaming BI looks like traditional BI, but all the data is alive! Graphs update in real-time. Alerts fire. Continuous insight on Kafka data is now at the fingertips of any business analyst, with no programming, no hacking KSQL, without weeks or months of development from IT.

Because streaming BI is built on an advanced streaming analytics engine, analysts can decide to create arbitrary sliding windows based on temporal constraints, join and merge streams with each other or even apply machine learning models in real-time.
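The mechanics above describe the product experience, but the underlying idea can be sketched in plain Python with the confluent-kafka client: subscribe to a topic, keep a sliding time window and push an update only when the result of the continuous query changes. The broker address, topic name, message schema and 60-second window below are assumptions for illustration, not Tibco's engine.

```python
# Illustrative sketch of a continuous sliding-window query over a Kafka topic,
# in the spirit of streaming BI rather than any vendor's actual engine.
# Broker address, topic name, message schema and window length are assumptions.
import json
import time
from collections import deque

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "streaming-bi-sketch",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["orders"])

WINDOW_SECONDS = 60
window = deque()          # (arrival_time, order_value) pairs inside the window
last_result = None

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue

    event = json.loads(msg.value())
    now = time.time()
    window.append((now, float(event["order_value"])))

    # Slide the window forward: drop anything older than WINDOW_SECONDS.
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()

    # Re-evaluate the continuous "query" and emit only incremental changes,
    # the way a streaming BI tool pushes updates to a live visualisation.
    result = round(sum(value for _, value in window), 2)
    if result != last_result:
        print(f"order value in the last {WINDOW_SECONDS}s: {result}")
        last_result = result
```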

Adaptive data science & Kafka

This same streaming technology can power adaptive data science too.

So what is adaptive data science? Traditional machine learning trains models on historical data. This predictive intelligence works well for applications where the world essentially stays the same, where the same patterns, anomalies and mechanisms observed in the past will recur in the future. So predictive analytics is really looking to the past rather than the future.

But in many real-world situations, conditions do change and, with the increasing ubiquity of sensors, Kafka is like an Enterprise Nervous System, where ‘sensory input’ from Kafka can be continuously used to score data science models. When you inject an AI model into the streaming BI server, predictions continually update as conditions change. Java, PMML, R, H2O, TensorFlow and Python models can all be executed against sliding windows of Kafka messages.
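As a hedged illustration of that scoring step, the sketch below extends the previous one: features are computed over the current sliding window and passed to a pre-trained model, so the prediction moves as the stream moves. The pickled model file and feature layout are hypothetical.

```python
# Illustrative continuation of the sketch above: compute features over the
# current sliding window and score them with a pre-trained model, so the
# prediction updates as stream conditions change. The pickled model file and
# its expected feature layout are hypothetical.
import pickle
import statistics

with open("churn_model.pkl", "rb") as f:   # hypothetical pre-trained model
    model = pickle.load(f)


def score_window(window):
    """Turn the sliding window into features and return a probability."""
    values = [value for _, value in window]
    if len(values) < 2:
        return None
    features = [[
        len(values),                 # number of events in the window
        statistics.mean(values),     # average value over the window
        statistics.pstdev(values),   # volatility within the window
    ]]
    # Written for a scikit-learn style classifier; other runtimes (PMML,
    # H2O, TensorFlow) would expose their own scoring call.
    return model.predict_proba(features)[0][1]
```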

Continuous Intelligence

Combined, streaming BI and AI provide continuous intelligence for Kafka. This style of analytics flips the traditional backwards-looking model of processing on its head: in effect, business analysts can now query and predict the future. This style of insight exploration brings BI, for the first time, to true real-time use cases in operational business applications that use Kafka.

Here’s how to think about using Kafka for continuous intelligence. First, imagine the data you might have on Kafka that changes frequently: sales leads, transactions, connected vehicles, mobile apps, wearable devices, social media updates, customer calls, robotic device state changes, kiosk interactions, social media activity, website hits, customer orders, chat messages, supply chain updates and file exchanges.

Next, think of questions about the topics on your Kafka streams that start with ‘tell me when’. These questions can contain mathematics, rules, or a machine learning model and can be answered millions or billions of times a day. When answered, your BI tool will call you; you don’t have to sit around and wait.

These are questions about the future: Tell me when a high-value customer walks in my store. Tell me when a piece of equipment shows signs of failure for more than 30 seconds. Tell me when a plane is about to land with a high-priority passenger aboard at risk of missing their connection.

Most companies don’t bother asking ‘tell me when’ questions because, well, their BI and AI tools couldn’t answer them.

Until now.
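One of those 'tell me when' questions, the equipment example above, could be sketched as a small stateful rule that fires only once the condition has persisted; the threshold, timings and notification hook below are assumptions for illustration.

```python
# Illustrative "tell me when" rule: alert when a piece of equipment has shown
# failure signs continuously for more than 30 seconds. The temperature
# threshold and the notification hook are assumptions for the sketch.
ALERT_AFTER_SECONDS = 30
TEMP_THRESHOLD = 90.0

failing_since = {}   # equipment_id -> time the failure signs first appeared


def tell_me_when(equipment_id, temperature, now, notify=print):
    """Evaluate one reading; fire the alert once the condition has persisted."""
    if temperature >= TEMP_THRESHOLD:
        started = failing_since.setdefault(equipment_id, now)
        if now - started > ALERT_AFTER_SECONDS:
            notify(f"{equipment_id}: failure signs for {now - started:.0f}s")
    else:
        # Condition cleared, so reset and allow the rule to fire again later.
        failing_since.pop(equipment_id, None)
```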

Continuous Kafka

Continuous Intelligence with Kafka enables entirely new use cases and applications that leverage real-time data.

Bank risk officers can detect anomalies within streams of Kafka messages that carry trades, orders, market data, client activity, account activity and risk metrics to identify suspicious trading activity, trading opportunities and assess compliance in real-time.

Supply chain, logistics and transportation firms can analyse streaming data on Kafka to monitor and map connected vehicles, containers and people in real time. Streaming BI helps analysts optimize deliveries by identifying the most impactful routing problems.

Smart City operational analysts can monitor Kafka data from GPS, traffic feeds, buses and trains to help them predict and act on dangerous conditions before they cause harm.

Energy companies can analyze sensor-enabled industrial equipment to avoid oil production problems before they happen and predict when to perform maintenance on equipment. ConocoPhillips says that these systems could lead to “billions and billions of dollars” in savings.

Why not use KSQL?

KSQL allows developers to query Kafka data, but it’s not real-time and continuous, BI tools don’t support it and it requires developers to implement a system with it. For example, BI tools like Power BI, Tableau and Looker don’t support it because they aren’t built for streaming BI: they do not accept continuous, push-based, incremental query processing.

So using KSQL for analytics is like buying a hammer and expecting to get a house as a result – there’s a lot more to it.

Streaming BI instantly makes Kafka useful by allowing users to connect to professional BI tools and data science models without KSQL coding. No need to build a house – you just move in! Connect Kafka to your BI tool with the click of a button and prepare to experience a continuously live, immersive BI experience.

The other approach is to simply stuff Kafka messages into a database and use SQL. This works if all you want to do is to look in the rear-view mirror at what has already happened.

Fusing Continuous Intelligence & History

There’s just one more thing.

For the first time, streaming BI helps you combine real-time insight with historical insight, which lets analysts compare real-time conditions to what has already happened. This is analytics nirvana.

For example, airline operations might use Kafka to capture data about frequent flyers as they check in for flights; using streaming BI, analysts can correlate that continuous awareness with customer history as they consider how to take action in real time.

Streaming BI and adaptive data science can turn how you think about using Kafka on its head. With continuous intelligence, business users can now have self-service access to Kafka events, apply AI in real-time and understand what’s happening right now. Even better, they can query the future and make more accurate and timely predictions and decisions.


October 25, 2019  2:32 PM

Abbyy Timeline: digging into process intelligence

Adrian Bridgwater

The Computer Weekly Developer Network team immersed itself in process intelligence at Abbyy Content IQ this season, at a session delivered by Ryan Raiker, senior product marketing manager, and Alex Dibeler, senior sales engineer, at ABBYY.

Do you really know how your operational processes work, in detail? This was the core question that the presenters aimed to answer.

Less than 1 in 5 organisations really understand their internal processes, claimed the presenters.

Work in the real world is often duplicated and performed ‘out of order’: this is the inefficiency of the non-automated, pre-digital-transformation world, claimed Abbyy’s Raiker.

NOTE: Why is Abbyy called ABBYY? Sources suggest that its founder simply wanted a company name starting with A and B, and decided to reinforce that by doubling the B and capitalising the whole thing.

Abbyy Timeline is a cloud-based technology designed to let businesses use the information contained within their systems to create a visual model of their processes, analyse them in real time to identify bottlenecks, and predict future outcomes to aid decision-making on technology investments.

The company actually purchased Timeline back in May of 2019.

The global process analytics market size is expected to grow to USD 1,421.7 million by 2023 according to Research and Markets.

The trouble with processes in the real world is that they don’t always have an easy-to-define (and so easy-to-manage) shape. Because of this, Abbyy Timeline’s four pillars of process intelligence are as follows:

  • Discovery
  • Analysis
  • Monitoring
  • Prediction

The Timeline analysis approach is designed to handle the full range of business process types, from highly structured to ad hoc.

“Organisations are focused on digital intelligence to impact process, patient, business and customer outcomes. Process intelligence is required to truly understand the operational effectiveness of business processes and how well they support a business strategy,” commented Ulf Persson, CEO at Abbyy, at the time of the Timeline acquisition.

Processes are everywhere, insists Abbyy; they’re vertical and they’re horizontal throughout any organisation, but we need a ‘process identifier’ to know when and where the events that define a process actually happen.

We know ETL (Extract, Transform and Load: the three database functions combined into one tool to pull data out of one database and place it into another), but Abbyy talks about ELT… because Abbyy extracts the data, loads it into Abbyy Timeline and THEN transforms it for process intelligence.
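As a rough illustration of that ELT ordering (and emphatically not Abbyy's actual pipeline), the sketch below loads raw events unchanged and only then transforms them into an ordered per-case timeline; the table and column names are assumptions.

```python
# Illustrative ELT ordering (extract, load, THEN transform), not Abbyy's
# actual pipeline; table and column names are assumptions.
import sqlite3

import pandas as pd

conn = sqlite3.connect("timeline_sketch.db")

# Extract: pull raw events from a source system and load them unchanged,
# with no cleanup or reshaping on the way in.
raw_events = pd.DataFrame([
    {"case_id": "C1", "event": "received", "ts": "2019-10-01 09:00"},
    {"case_id": "C1", "event": "approved", "ts": "2019-10-01 09:40"},
    {"case_id": "C2", "event": "received", "ts": "2019-10-01 10:05"},
])
raw_events.to_sql("raw_events", conn, if_exists="append", index=False)

# Transform only after loading: order each case's events into a timeline so
# process paths can be analysed inside the target store.
timeline = pd.read_sql_query(
    "SELECT case_id, event, ts FROM raw_events ORDER BY case_id, ts", conn
)
print(timeline)
```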

But the problem with real-world data is that it’s (typically) messy and out of shape, so building all that variability into the dataset that describes a process (after process mining) can lead to a messy process-path analysis.

Abbyy says that it knows this and can help to analyse the different ‘lanes’ that define the various paths down which a process will typically be executed.

According to Abbyy Timeline official documentation, a variety of pre-built analyses are ready to quantify process performance, identify a company’s process execution issues and perform root cause analysis.

“The ABBYY Timeline platform also supports operational monitoring through its continuous assessment of new event data to determine if any adverse conditions occur and can immediately notify you or other business operations personnel so you can act,” notes the company.

Overall, this kind of session is enlightening, mainly because we get to hear real engineers talk about how to encapsulate work processes into definable business segmentation and express that through algorithmic logic.

 


October 24, 2019  6:40 PM

Big Lemon: How developers are helping change student energy consumption

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Owen Richards, co-founder and head developer of Big Lemon.

Big Lemon is a software development agency based in South Wales. The company solves problems through digital solutions, designing and building software, apps and bots. 

Richards writes as follows…

Climate change is rightly gathering an increasing focus in today’s news cycle.

From eating less meat to cutting out plastic, people are looking to the things they can do to reduce their carbon impact.

One demographic that has historically been less efficient at managing its impact is students. The average student uses roughly £500 worth of energy and water a year, which is more than the average homeowner. When you consider they are only likely to be living in student accommodation for eight months of that year, you realise that’s a lot.

We know that students disproportionately use a lot of energy, but what isn’t really known is why. Some accommodation providers think it’s because contracts are often ‘all-in’ so the costs are hidden. Others think it’s because this is the first time students are out on their own and are still learning how to manage living by themselves.

We just don’t have the data.

So what do you do when you have a problem that needs to both gather information, but also offer a solution? You call in the developers…

Developing a solution

The Student Energy Project is an initiative run by the energy management company, Amber Energy. The project aims to collect habitual data from students, to learn where they are using the most energy, and then provide feedback and incentives (such as free pizza, students love pizza, right?) to reduce that energy consumption.

The project works by collecting data from accommodation meters, combining it with historical data such as weather, temperatures and expected usage, and then combining that with habitual data from the students – such as how often they boil the kettle or run water. In future it will go on to gamify the feedback, encouraging students to compete with each other on how efficient they can be, through league tables and rewards.

We joined the project to help build and develop the platform.

Initially, it was just a web-based project with a poor interface and, considering the amount of data it was going to need to manage, it lacked the future-proofing required to handle the backend processes.

First, we mapped out how the users would utilise the tool and quickly decided that whilst the web app was ok, it needed to be more efficient and the project needed its own native app.

So, we went about building the API and the backend to transfer all the data across. We also built the native app alongside the new web app, to make crossover functionality easier in the future.

We realised that we were going to need to build something that could handle live data, so we decided to use Meteor as the framework for the backend, which uses the Distributed Data Protocol (DDP) – something that was quite new to us at the time. I’d recommend that anyone looking to build real-time web applications check out Meteor. The documentation is really accessible and the fact that it works out of the box with React was a massive plus for us, as was Meteor’s own dedicated hosting, Galaxy.

The challenges

There were several challenges to the development process, but one of the biggest was the sheer amount of data points.

For example, some meters in student accommodation are per room, others are per site. Some sites don’t have gas, water and electric meters; some only have water and gas, and so on. So mapping out the different ways all the data connects was tricky and needed to be reflected in the UI. There’s no point asking a user for information they don’t have, and given the variance we had to make sure we were asking the right users the right things.
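The platform itself was built on Meteor and React, but the data-mapping problem is language-agnostic; the Python sketch below illustrates one way to ask only the questions a site's meters can support. The site structure, meter types and questions are hypothetical, not the project's actual schema.

```python
# Illustrative sketch of the data-mapping problem: only ask users about the
# meters their accommodation actually has. Site structure, meter types and
# questions are hypothetical; the real platform was built on Meteor and React.
from dataclasses import dataclass, field


@dataclass
class Site:
    name: str
    meters: set = field(default_factory=set)   # e.g. {"electric", "water"}
    per_room_metering: bool = False


HABIT_QUESTIONS = {
    "electric": ["How often do you boil the kettle?",
                 "Do you charge devices overnight?"],
    "gas": ["How many hours is the heating on each day?"],
    "water": ["How many showers do you take a week?"],
}


def questions_for(site: Site):
    """Return only the habit questions that make sense for this site."""
    return [question
            for meter in sorted(site.meters)
            for question in HABIT_QUESTIONS.get(meter, [])]


halls = Site("Maple Halls", meters={"electric", "water"}, per_room_metering=True)
print(questions_for(halls))
```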

How can this make a difference going forward?

The applications are now live with over 700 students signed up in the first couple of weeks. As the project goes forward and we gather more data, we are going to be able to build a clearer picture of student energy usage.

The app itself is also going to evolve as we gather more data. For example, when students are regularly submitting habitual data and we have a better understanding of what behaviour impacts consumption the most, we can offer the suggestions or incentives that are going to have the biggest impact.

Essentially the app will become more intelligent; it’s going to be able to learn that someone showers once every day or they always charge their phone overnight, and be able to give stats and feedback that are more personal and appropriate to each user.

There’s also scope to expand the development to other sectors, including the workplace. Whilst we can’t control government policy or conglomerates’ resources, as developers we can look at how our platforms, apps and tools shape behaviour. Tech for good is a growing ethos, and it will be interesting to see what the industry can develop to try and solve global problems.

To take a look at the app yourself, go to www.studentenergyproject.com

Richards: chief lemonhead


October 24, 2019  2:44 PM

Abbyy Content IQ: keynote notes, quotes & anecdotes

Adrian Bridgwater

Abbyy (or ABBYY if you follow the firm’s corporate brand guidelines) kicked off its Content IQ Summit in Nashville, Tennessee this week and the Computer Weekly Developer Network team were there to soak up the technical download.

CEO Ulf Persson started the session looking at the company’s ‘mission and vision’ and initially decided to focus on how his firm gravitates towards its partner network — and partner sponsors at this event included UiPath, Blue Prism, Ripcord and Nice.

Referring back to quotes from Winston Churchill, Persson noted that the UK prime minister’s words on change actually stemmed from an earlier quote.

“To live is to change and to be perfect is to have changed often,” wrote John Henry Newman.

Persson thinks that ‘ease of use’ is becoming a paradigm in and of itself. He wants to be able to ’empower customers through self-service’… and so now we (as customers) have a different level of expectations if we consider the transactions that we make over connected services every day.

As we have detailed before, Abbyy is known for its foundations in ‘document capture’ and management but now wants to establish itself more broadly as a provider of so-called ‘Digital IQ’ for the enterprise.

In terms of product, ABBYY Timeline is a neural network-based technology that digs into business processes and identifies which processes should be targeted for automation, a capability the company calls Process IQ.

“Digital IQ revolves around both process and content. Enterprises have LOADS of process and content and we know that automation is a key part of the transformation that companies are trying to achieve. But it’s tough to automate [any] process if you don’t understand what’s in them… and so even harder to make changes to those processes. But we do know that processes always [typically] include document, so we need a way to EXTRACT the information in those documents to be able to deliver on the future. Our keywords here are modularity, flexibility, extensibility and ease-of-use,” said Persson.

CEO Persson says that the journey to digitising the information needed for next-generation business is what drives his firm’s research and development as it now works to build out its product mission further.

With an established presence in North America and Europe, growth markets for Abbyy’s technology include Japan, China, Australia and New Zealand. The company has recently opened an office in Hong Kong to serve Asia… and also now runs a developer centre in Budapest, Hungary.

Invisible Robots

Taking over from CEO Persson, Forrester analyst Craig Le Clair continued his ‘I’ve written a book’ roadshow and delivered his well-toured ‘Invisible Robots in the Quiet of the Night – How AI and Automation Will Restructure the Workforce’ presentation.

Starting with his usual American football joke that leaves European attendees cluelessly befuddled, Le Clair, who clearly understands his subject, delivered his view on where bots and software automation are heading next.

“I like to put RPA in its place, what I mean is that it’s like a shiny new object,” said Le Clair. “We know that RPA is a fairly crude tool that has no native intelligence and has no learning ability, so if you keep the number of decisions [a bot takes] under five and keep its connection down to less than five applications — then the process can be executed more accurately.”

Integrating RPA with ‘conversational intelligence’ will be key to developing intelligent systems of the future, this (for Le Clair) is what will really take intelligent decision management forward.

Agreeing with the sentiment expressed by many spokespeople across the IT industry, Le Clair noted that ‘everyone will have a personal robot in the future’. But right now management doesn’t really get it and the development curve is relatively slow.

Bots today are slow, underutilised and have a high total cost of ownership… but the market has the potential to move towards more consumption-based pricing in the future.

Paying lip service to his Abbyy hosts, Le Clair noted that understanding what content and process really is and then pushing robot workers to help will be a fundamental step to making RPA really happen in the workplace.

As control moves from people to machines, the impact of automation will increase and the shape of automation will also change. Today, humans still make most of the decisions, but as the software starts to take more control, the cost of making decisions will move to zero (the same way that search costs zero today)…  Abbyy is certainly at an inflection point as it moves beyond being ‘just’ a document capture specialist and starts to play a part in the wider digital content and digital process market.

ABBYY CEO Persson: It’s tough to automate a process if your company doesn’t understand it.


October 22, 2019  6:06 PM

Splunk .conf keynote notes, quotes & anecdotes

Adrian Bridgwater

What’s not to love about log file management and trace analytics?

Nothing apparently.

Splunk managed to attract what was claimed to be somewhere over 11,000 attendees to Las Vegas for its .conf 2019 conference and exhibition… and the geek-cognoscenti were there in force to dig deep in all things logs and machine data.

CEO Doug Merritt explained that the event is now in its 10th year and the gathering has grown from what was just 300 attendees in its first year.

Merritt thinks that this is ‘just the beginning of the first data age’ and that the future has everything in it, from printed foods to flying cars to missions to Mars.

“In the [near] future, there will be those companies that seize the opportunity to do [productive] things with data… and those that simply don’t exist,” said Merritt.

But as positive as the drive to data is, CEO Merritt says that there is a real need to ‘liberate’ data, because so much of it is locked into systems, devices and machines.

The ‘shape’ of data

We know that some data is static and stable; that data sits in many different data sources and repositories; that data works on different time-scales, from milliseconds to months; that some data is structured, some unstructured and some even semi-structured; and, further, that some data is streaming while other data sits in a more orchestrated and federated state.

Merritt says that Splunk has been engineered to be able to deal with all those data sources and work to provide the right level of analytics.

So for all of this data, Splunk is aiming to differentiate its offerings for organizations who need to use data in lots of different ways. The company is also looking to provide new levels of infrastructure-based analytics and also offer rapid adoption packages based upon recognised industry use cases.

“As you solve increasingly complex [data] problems, you [the attendees] will be showing the rest of the business what is really possible with new insights in the data that underpins company operations,” said Merritt.

Splunk VP of customer success and professional services Toni Pavlovich took the stage to showcase a use case at Porsche. This section of the keynote featured a demonstration of Splunk Augmented Reality (AR), a technical development that helps engineers fit parts and equipment and features video inside the headset viewing experience.

The road to unbounded learning

Splunk CTO Tim Tully brought us out of the customer session (lots of high fives and people shouting ‘awesome!’ – you get the picture) to explain where Splunk is building, buying and investing in new functionalities and capabilities.

“In terms of what Splunk is building, the company is pushing for ‘massive scalability and real time capability’ in its platform… and in a form that is usable in mobile form. The Splunk Data Stream Processor is focused on creating data pipelines and learning use cases into live routing of data to Splunk or other public cloud connectors. We’ve seen customers use it as a data routing message bus, which was actually a surprise,” said Tully.

Many people are working out how Machine Learning really works and using old processes from raw data to feature engineering to model training to model deployment. Splunk promotes a new approach called ‘unbounded learning’ where the model learns continually from the point of deployment.

Tully also talks about what he calls ‘indulgent design’, an aspect of user interface creation that the company has used to create its new Mission Control product, which has a new colour dashboard presented in ‘dark mode’ with an additional ‘notable events’ screen to allow users to really ‘stare at it’ (Tully’s own words) for as long as they need to get data insights.

Font of (data) knowledge

Splunk Data Sans is the company’s own new font, which it has used to brand itself in a new way. The characters have an elongated bar and clear disambiguation throughout the character set, so that anyone looking at Splunk text will immediately be able to recognise it as Splunk, simply by look and feel.

Tully also explained how Splunk wants to extend its mobile capabilities to provide interactive data dashboards that allow users to address incidents more quickly. The company calls this the ability to ‘Splunk data’… and so uses Splunk in this case as a verb i.e. the ability to drill through and analyse data in a live format on a mobile unit… and there’s an integration with Slack to make that easier.

Overall, Splunk moved from broad CEO messaging to specific on-screen presentation-layer updates, with accompanying functionality changes, within an hour of the keynote, which is pretty deep… well, spelunking is all about going deep, after all.

 


October 22, 2019  3:42 PM

Splunk Mission Control acts on data ‘at machine speed’

Adrian Bridgwater

Splunk has built new functions into its Security Operations Suite to modernize and unify its Security Operations Center (SOC) product.

Anchored by the newly launched Splunk Mission Control, the Splunk Security Operations Suite is designed to help security analysts to turn ‘data into doing’ (as the marketing spin puts it) in real world operational systems.

The cloud-based Splunk Mission Control connects Splunk SIEM (Splunk Enterprise Security), SOAR (Splunk Phantom) and UEBA (Splunk UBA) products into a single data-developer data-analyst experience.

Combined, these products form the Splunk Security Operations Suite.

“With Splunk Mission Control, customers gain a new, unified SOC experience that supports investigation and search across multiple on-premise and cloud-based Splunk Enterprise and Splunk Enterprise Security instances, ChatOps collaboration, case management and automated response, all from a common work surface,” said Haiyan Song, senior vice president and general manager of security markets, Splunk.

Machine speed response

The company points out one core truth and says that as the volume of security-relevant data continues to grow, so will the importance of technologies that can automate and respond to that data in real-time.

So… the mission is: detection, defence and action on threats at machine speed.

New product announcements include Splunk Enterprise Security (ES) 6.0 as the latest version of Splunk’s flagship security offering. Splunk ES is a security information and event management (SIEM) platform that now benefits from improved asset and identity framework enhancements.

Splunk User Behavior Analytics (UBA) 5.0 is described as a product that enables security teams to build advanced, customized Machine Learning (ML) models for baselining and tracking deviations, based on their security environment and use cases.

Splunk Phantom 4.6 is the company’s security orchestration, automation and response (SOAR) product and it now comes to the mobile phone.

“Phantom on Splunk Mobile allows customers to automate repetitive, manual tasks from the palm of their hand, enabling analysts to focus on mission-critical security threats that fuel security operations. Splunk Phantom 4.6 also introduces new open source integration apps, giving developers easy access to Phantom’s source code to extend SOAR to the unique needs of every individual SOC,” said the company, in a press statement.

Splunk has also announced several new security apps and updates to Splunk ES Content Update, which delivers pre-packaged Security Content to Splunk ES customers. Updates include Splunk Analytics Story Preview, a new Splunkbase app; Cloud Infrastructure Security, new security content which analyses cloud infrastructure environments; and new open source content, including over 30 new open sourced apps for Splunk Phantom.


October 22, 2019  1:31 PM

CI/CD series: What drives continuum in software?

Adrian Bridgwater

Software used to shut down.

Users would boot up applications and wrangle about with their various functions until they had completed the tasks, computations or analyses that they wanted — and then they would turn off their machines and the applications would cease to operate.

Somewhere along the birth-line that drove the evolution of the modern web, that start-stop reality ceased to be.

Applications (and in many cases the ancillary database functions and other related systems that served them) became continuous, or always-on.

Where applications weren’t inherently always-on in the live sense, the connected nature of the mothership responsible for their being would be continuously updating and so whenever the user connected a ‘live pipe’ to the back-end, then the continuum would drive forward to deliver the updates, enhancements, security refreshes and other adornments that the app itself deserved.

This, we now know as Continuous Integration & Continuous Delivery (CI/CD).

CI/CD reality

The reality of CI/CD today is that it has become an initialism in and of itself, one that technologists don’t spell out in full when they speak out loud, like API, like GUI… or even like OMG or LOL, if you must.

But as simple as CI/CD sounds in its basic form, there are questions to be answered.

We know that CI/CD is responsible for pushing out a set of ‘isolated changes’ to an existing application, but how big can those changes be… and at what point do we know that the isolated code is going to integrate properly with the deployed live production application?

A core advantage gained through CI/CD environments is the ability to garner immediate feedback from the user base and then (in theory at least) be able to continuously develop an application’s functionality set ‘on-the-fly’, so CI/CD clearly has roots in Agile methodologies and the extreme programming paradigm.

But CI/CD isn’t just Agile iteration, so how are the differences highlighted?

Do firms embark upon CI/CD because they think it’s a good idea, but end up falling short because they fail to create a well managed continuous feedback system to understand what users think?

Does CI/CD automatically lead to fewer bugs? How frequent should frequency be in a productive CI/CD environment and how frequent is too frequent? Can you have CI/CD without DevOps? Is CI/CD more disruptive or less disruptive?

TechTarget reminds us that the GitLab repository supports CI/CD and can help run unit and integration tests on multiple machines, splitting builds across them to decrease project execution times… is this balancing act a key factor of effective CI/CD delivery?

CI/CD contrasts with continuous deployment (despite the proximity), which is a similar approach in which software is also produced in short cycles but through automated deployments rather than manual ones… do we need to spell this point out more clearly?

How do functional tests differ from unit tests in CI/CD environments… and, while we’re asking, if development teams use CI/CD tools to automate parts of the application build and construct a document trail, then what factors impact the wealth and worth of that document trail?

CWDN CI/CD series

The Computer Weekly Developer Network team now sets out on a mission to uncover the depths, breadths, highs, lows and in-betweens where CI/CD practices and methodologies are today.

In a series of posts following this one we will feature commentary from industry software engineers who have a coalface understanding of what CI/CD is and what factors are going to most prevalently impact its development going forward.

We look at how organisations are shifting to continuous integration and continuous deployment to deliver new software powered functionality to the business. What are the common tools being used? How do organisations get started? What are the pitfalls? How much is enough application to go-live and then continuously build upon?

These (and more) are the CI/CD questions we will be asking and we hope that you dear reader will come back again and again for updates… continuously.

Image: Wikipedia

 


October 22, 2019  1:28 PM

Can advanced fire modeling put out wildfires?

Adrian Bridgwater

Splunk has ‘closed funding’ in Zonehaven, a cloud-based analytics application designed to help communities improve evacuations and reduce wildfire risk with data.

The funding is the first investment from Splunk’s newly launched Splunk Ventures social impact fund, which champions data-driven approaches and programmes that have social impact.

In 2018, more than 57,000 wildfires burned 8.8 million acres of land in the United States – although Splunk acknowledges that the problem is a global risk.

Despite this reality, fire departments still rely heavily on word-of-mouth, emergency calls and static ‘paper playbooks’ to detect wildfires and evacuate people at risk.

Zonehaven provides situational awareness and decision support by using what have been called ‘intelligent evacuation zones’ as well as advanced fire modeling, real-time weather data and always-on fire sensing capabilities.

“The increased spread of wildfires is a global emergency that impacts public health and the planet. While technology alone won’t eliminate fires, Zonehaven’s software can help communities prepare for evacuation, provide advance warning to those in harm’s way, preserve natural and economic resources and ultimately save lives,” said Charlie Crocker, CEO, Zonehaven.

Common data platform

Zonehaven’s technology presents a ‘common data platform’ for coordination and response to wildfires. The technology helps identify ignition points, projects simulated fire spread and develops fire-specific intelligent evacuation zones.

Splunk also used its annual .conf user event to detail Splunk Partner+ Program updates.

The company says it can count over 2,200 individual partner attendees at the show itself in the form of distributors, system integrators, service providers, original equipment manufacturers, technology alliance partners and value-added resellers.

Many of the partners build connectors, apps and add-ons to Splunk itself.

Splunk’s Big Data Beard team recently equipped an RV (recreational vehicle) with IoT sensors, built an edge-to-cloud computing environment and drove over 3,700 miles with stops in 13 cities on their Road Trip to .conf19.

Big Data Beard used Splunk throughout the journey to analyse their location, road quality, comfort levels and health data. Big Data Beard’s dashboards use Splunk Augmented Reality.

Image: Zonehaven

