CW Developer Network

December 10, 2019  8:25 AM

CI/CD series – Confluent: Events provide an ‘in-built primitive’ for continuous coding

Profile: Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Neil Avery, lead technologist and member of the office of the CTO (OCTO) at Confluent — the company is known for its event streaming platform, powered by Apache Kafka, which helps companies harness high-volume, real-time data streams.

Avery writes…

Continuous deployment sets the goalposts for the style of application architecture. 

It means the system should never be turned off and there is no such thing as a big-bang release; instead, new functionality is incrementally developed and released, while old functionality is removed when no longer needed.

The application architecture is decoupled and evolvable. 

Event-driven architectures provide both of these qualities. To access new functionality, events are routed to new microservices. This routing of events also supports CI/CD techniques such as A/B testing, blue/green deployments (roll forward and rollback) and the use of feature flags.
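As a rough illustration of feature-flag routing (all names here are hypothetical, not part of Kafka or Confluent's APIs), a consumer might divert a fraction of events to a new service version like this:

```python
import random

# Hypothetical feature flag: fraction of traffic routed to the new service.
FLAGS = {"new_pricing_service": 0.25}

def handle_v1(event):
    return {"handler": "v1", "event": event}

def handle_v2(event):
    return {"handler": "v2", "event": event}

def route(event, flag="new_pricing_service", rng=random.random):
    """Send a flag-controlled fraction of events to the new handler."""
    if rng() < FLAGS.get(flag, 0.0):
        return handle_v2(event)
    return handle_v1(event)

# Deterministic demo: force both paths by stubbing the RNG.
assert route({"id": 1}, rng=lambda: 0.9)["handler"] == "v1"
assert route({"id": 1}, rng=lambda: 0.1)["handler"] == "v2"
```

Raising the flag's fraction towards 1.0 rolls the new service forward; dropping it to 0.0 rolls back, without redeploying either version.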

Many organisations get started with CI/CD by focusing on decoupling and event-driven microservices. The prolific adoption of Kafka not only makes it a good platform for eventing, but also means there is a wealth of industry expertise for building this style of application.

Event storage, replay & schematisation

This style of architecture relies on event storage, event replay and event schematisation. In production, Kafka stores all events and becomes the source of truth for understanding system behaviour.

You might say it acts like a black-box recorder for events that can be used to replay incidents or scenarios at a later time. For test purposes, events can be copied from production and made available in the CI environment (once desensitised). It also affords a level of regression testing that is difficult to achieve with non-event-driven systems.

So events provide an in-built primitive that makes it easier for organisations to get started with CI and CD: the inputs and outputs of different components are automatically recorded.
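To make the black-box recorder idea concrete, here is a toy sketch (plain Python, not Confluent tooling) of capturing a component's inputs and outputs and replaying them later as a regression check:

```python
# Toy "black-box recorder": capture a component's inputs and outputs,
# then replay the recorded inputs and check the outputs still match.

def component(event):
    # The business logic under test (illustrative).
    return {"total": event["price"] * event["qty"]}

recorded = []
for event in [{"price": 10, "qty": 2}, {"price": 5, "qty": 3}]:
    recorded.append((event, component(event)))  # store input and output

# Later (e.g. in CI): replay events copied from production and compare.
for event, expected in recorded:
    assert component(event) == expected
print("replay regression passed")
```

In a real Kafka deployment the "recording" is simply the topic itself: consumers replay it by resetting their offsets, rather than keeping a list in memory.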

The decision to build an event-driven system is significant. There are various pitfalls we commonly see, especially when developers are new to this approach: 

  • Slow builds
    A key challenge of the CI build is that test cycles take progressively longer as a project develops. Slow builds hurt team velocity and weigh against fast release cycles. To overcome this, build pipelines should be staged and parallelised.
  • Limited resources for CI pipeline
    As teams grow, the resources required to support the CI pipeline grow with them. The ideal solution is a cloud-based CI environment that scales according to demand. Recommended tools include Jenkins in the cloud, AWS CodeBuild/CodeDeploy and CloudBees.
  • Inability to reproduce production incidents
    Event-driven systems provide a unique advantage in that production events can be copied to non-production environments for reproduction. It is simple to build tooling to not only reproduce but also inspect incidents and characteristics that occur within specific time intervals.
  • Manual testing of business functionality
    It is common to see manual testing stages used to certify business requirements. However, manual testing must be replaced with automation, and APIs should be designed to support API-based automation tooling. Recommended tools include Apigee, JMeter and REST-Assured.
  • Insufficient regression testing
    It’s important that regression testing strategies are in place. Regression tests should be signed off by the business as new functionality is introduced.
  • Lack of knowledge about test tooling for event-driven systems
    There are many tools available for testing event-driven systems; we have compiled a comprehensive list at the end of this article.
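Returning to the first pitfall, staging and parallelising a pipeline can be sketched in a few lines; the stage functions below are hypothetical stand-ins for real build steps:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test stages; in a real pipeline each would shell out
# to a build tool rather than call a local function.
def unit_tests():
    return "unit: ok"

def contract_tests():
    return "contract: ok"

def integration_tests():
    return "integration: ok"

# Stage 1 runs the fast suites in parallel; the slower integration
# stage runs only once stage 1 has completed successfully.
with ThreadPoolExecutor() as pool:
    stage1 = [pool.submit(t) for t in (unit_tests, contract_tests)]
    results = [f.result() for f in stage1]

results.append(integration_tests())
assert all(r.endswith("ok") for r in results)
```

The payoff is that the wall-clock time of a stage is bounded by its slowest suite rather than the sum of all of them, which is what keeps release cycles fast as the test estate grows.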

Generally speaking, the ideal go-live system is based on a ‘straw-man’ architecture: one which contains all of the touchpoints mentioned above and provides an end-to-end system, from dev to release. It becomes very difficult (and costly) to ignore fundamentals and change them retrospectively, so it’s better to get it right from the outset.

Go-live considerations

From a deployment perspective, the go-live application should have a signature that meets infrastructure requirements, i.e. hybrid cloud, multi-datacentre awareness and SaaS tooling (managed Kafka – Confluent Cloud). All SaaS and PaaS infrastructure should be configured and integrated, and operational costs understood.

The go-live system is not just the production application, but the entire pipeline that supports the software development lifecycle all the way to continuous deployment; it’s the build pipeline that runs integration, scale, operational testing, and automation. Finally, it supports a runtime with the use of feature flags, auditing and security. 

Every application will have a unique set of constraints that can dictate infrastructure.

For event streaming applications delivered using CI/CD, the recommended tools and infrastructure would include:

  • Language runtime: Java via Knative + GraalVM
  • Kafka Clients: Java Client and Kafka Streams via Quarkus
  • Confluent Cloud: a managed Kafka service in the cloud (AWS, GCP, Azure), including Schema Registry and KSQL
  • Datadog: SaaS monitoring
  • GitHub/GitLab: SaaS source repo
  • CI environment: SaaS-based build pipeline that supports cloud autoscaling (Jenkins cloud, CloudBees, AWS CodeCommit/CodeBuild/CodePipeline/CodeDeploy)

Event-driven applications also have particular requirements that require special tools for testing and automation. 

Confluent’s Avery: Kafka events are the black box recorder route to CI/CD wins.

December 9, 2019  10:00 AM

Why we need an Internet of Blockchains

Profile: Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network written by Jack Zampolin in his capacity as director of product at Tendermint Inc.

Tendermint is the company behind the eponymously named consensus protocol Tendermint and the interoperability platform Cosmos.

Zampolin writes…

The first computers were individual machines.

Then, in a quest for more computing power, bigger and bigger machines were built, initially by linking these individual machines together, until, eventually, supercomputers emerged.

These huge entities took an army of people to maintain and after a while, the benefits of adding power to these large computers dwindled and the returns fell off.

Then, the Earth cooled… and we decided on a different approach.

We made a huge number of smaller computers and ended up scaling them; this is what we now call the Internet. So essentially, we went from computer to supercomputer, to network computers.

As with machines, so it is with blockchain

The development cycle has been roughly the same in the blockchain world.

We started out with Bitcoin — the first decentralised ledger technology, the first computer in this case.

Then Ethereum came along, like those original supercomputers.

Now, on Ethereum, it was possible to run a lot of different programs, but it was still relatively slow compared to the third approach.

This third way allows us to build an interconnected network of computers, in contrast to Ethereum’s supercomputer. When blockchains connect and network with each other, we get a development that will enable the true global scaling and growth of this technology.

Perhaps the greatest challenge for blockchains in enterprise – particularly public blockchains like Bitcoin and Ethereum – has been the ability to scale and to transact with different blockchain networks.

Crossing the blockchain chasm

If blockchains are to matter at all beyond cryptocurrencies… and if they are to be used for applications such as maintaining self-sovereign identities, delivering decentralised social media and in a variety of use cases throughout the supply chain, then they would profit greatly from being able to interact with one another.

A lack of interoperability leads to individual chain maximalism and tribalism, so for many the conflict is unavoidable. Sometimes this is even helpful, in a positive way, because it compels developers to enhance their projects’ code so that their blockchain might rise above all others.

More often than not though, because these disconnected chains think they will need to cover all use cases (and be a kind of Swiss Army Knife blockchain), they end up lacking specialisation… and so are not fit for many uses.

There’s quite a chasm between these views, one that comes down to how each approaches the question of trust. We should make these chains talk to each other in a permissionless, frictionless way.

Operable interoperability

Instead of participating in divisions between crypto factions, we need a way to offer a network of interoperable blockchains. Blockchains have been traditionally siloed and unable to communicate with each other. They have always been hard to build and could only handle a small number of transactions per second.

The industry is now looking for tools that allow any engineer or developer to build a brand-new, custom-designed, independent sovereign blockchain which can interoperate with an arbitrary number of others. Each individual chain should be able to choose and run its own independent governance, adding further flexibility and allowing developers to select the mechanisms which best suit their particular use cases.

Zampolin: A lack of interoperability leads to individual chain maximalism and tribalism.

December 6, 2019  10:38 AM

Sumo Logic cracks on with new bands of intelligence strapping

Profile: Adrian Bridgwater

Self-styled continuous intelligence company Sumo Logic has reached beta on two new analytics services that extend its Cloud Flex offering (a credit-based licensing strategy).

Interactive Intelligence Service and Archiving Intelligence Service work to provide monitoring, troubleshooting and threat detection for business applications.

This software collects and analyses all types of data for operational, security, business intelligence, IoT and various other use cases.

Crucially, it is now packaged at varying price points to suit diverse use cases and cost needs. 

Sumo Logic tells us that today’s legacy, siloed monitoring and analytics vendor licensing models force customers into a trade-off as their machine data grows: either pay runaway license costs or discard data to control costs, creating blind spots.

The proposition here is that organisations should be able to dynamically segment their data and tailor the analytics accordingly for the right level of insight: frequent or infrequent interactive searching, troubleshooting, and full data archiving.

The new Sumo Logic Interactive Intelligence Service enables customers to ingest any log or machine data they desire.

No re-preparation, re-ingestion, re-hydration

The data is securely stored in the Sumo Logic service and is available on-demand for interactive analysis without any additional data re-preparation, re-ingestion, or re-hydration. 

This service is designed for use cases where users need to quickly or periodically investigate issues, troubleshoot code or configuration problems, or address customer support cases, which often rely upon searching over high volumes of data for specific insights.

“Today’s data analytics pricing and licensing models are broken and simply don’t reflect the rapidly changing ways customers are using data,” said Suku Krishnaraj, chief marketing officer, Sumo Logic. “By introducing our new Interactive Intelligence Service and Archiving Intelligence Service, we are shifting the conversation from a volume-based, one-size-fits-all approach, to a flexible value based licensing model enabling customers to gain limitless value from their analytics solution at a price that makes sense for their varied use cases.” 

The Sumo Logic Archiving Intelligence Service is designed for use cases such as operational data stores, cloud data warehousing, or to potentially search during an unplanned security incident or business event. This new service will allow customers to send unlimited log or other machine data for free, without incurring any additional costs for using the platform to send data to their own AWS S3 bucket or cloud provider of their choice. 

December 5, 2019  9:55 AM

Sophos to developers: who you gonna call? (for cyberthreat APIs)

Profile: Adrian Bridgwater

Every company is now a software company, this much we already know.

But now, every company wants to be a developer company… well, that’s the emerging message from some of the industry’s newer and more established players.

Not content with being a ‘next-generation cybersecurity solutions firm’ (its words, not ours), Sophos is making a play for developer credibility.

SophosLabs Intelix is a cloud-based threat intelligence tool for software application development professionals to use when building applications. Secure ones, obviously. 

The product allows developers to make API calls to it for what is described as ‘turnkey cyberthreat expertise’ — which, one assumes, means that Sophos has compartmentalised chunks of functional code capable of performing security-related analysis tasks.

Those tasks include the ability to assess the risk of software artifacts such as files, URLs and IP addresses.

According to Joe Levy, CTO, Sophos, the platform continuously updates and collates petabytes of real-time and historical intelligence, including: telemetry from Sophos’ endpoint, network and mobile security solutions; data from honeypots and spam traps; 30 years of threat research; predictive insights from machine and deep learning models etc.

NOTE: A honeypot is a network-attached system set up as a decoy to lure cyberattackers and to detect, deflect or study attempts to gain unauthorised access to information systems.

Using RESTful APIs, developers can use this technology to submit files for static and dynamic analysis, and to query file hashes, URLs, IP addresses and Android applications (APKs), answering questions like:

  • Is this file safe? 
  • What happens if I open or execute it?
  • Is this link safe? 
  • What happens if I call this URL?
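Although Sophos documents the actual endpoints, a reputation lookup can be sketched in outline: hash the artifact locally, then query by digest (the URL in the comment below is a placeholder, not the real Intelix path):

```python
import hashlib

def file_sha256(data: bytes) -> str:
    """Hash the artifact locally; reputation services typically key on SHA-256."""
    return hashlib.sha256(data).hexdigest()

sample = b"MZ\x90\x00"  # illustrative first bytes of a Windows PE header
digest = file_sha256(sample)
assert len(digest) == 64  # SHA-256 renders as 64 hex characters

# The lookup itself would then be a REST call along the lines of
# (path hypothetical):  GET https://<intelix-endpoint>/lookup/files/v1/<digest>
# returning a reputation score: known good, known bad, or grey area.
print(digest[:8])
```

Hashing client-side means the file never leaves the developer's environment for a simple reputation check; full static or dynamic analysis requires submitting the file itself.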

SophosLabs Intelix is available through the AWS Marketplace and includes several free tier options.

Sophos CTO Levy describes this technology’s three key service features:

Real-time Lookups enable classification of artifacts with access to the SophosLabs intelligence by querying file hashes, URLs, IPs, or Android application thumbprints. Reputation scores identify known bad and known good files, as well as those in the grey area.

Static File Analysis uses multiple machine learning models, global reputation data and deep file scanning to classify a file in real time, without needing to execute it.

Dynamic File Analysis provides dynamic file analysis and classification capabilities through execution and instrumentation of submitted files in sandboxes, utilising the latest runtime detection techniques to reveal ‘true’ behaviours of potential threats.

Who you gonna call? Source: Wiki Commons.



December 4, 2019  8:31 AM

Dynatrace musters cross-cluster application fluster buster 

Profile: Adrian Bridgwater

Dynatrace is a company that specialises in the monitoring and control of enterprise-scale application performance, the underlying infrastructure layer and the experience of users.

It does this at scale, running what it calls a Dynatrace cluster as a coalesced instance of software intelligence designed to provide application control features such as tracing, analytics and management.

Question: So what’s better than a cluster? 

Answer: Whether it’s the chocolate-coated nutty kind or the application observability kind, it’s a double cluster.

Dynatrace has now doubled the capacity of a Dynatrace cluster, which scales to 50,000 hosts while maintaining system performance.

Cluster luck!

In addition, Dynatrace now supports the clustering of clusters, including cross-cluster distributed tracing, analytics and management to deliver AI-powered observability, automation and intelligence for customers operating large multi-cloud environments.

It’s true to say that web-scale environments were (comparatively speaking) a rarity a few years ago, but they are becoming commonplace as enterprises shift from static, on-premises datacentres to more dynamic multi-cloud architectures with highly distributed microservices workloads. 

Dynatrace suggests (because this is the market it operates in) that companies in industries from financial services and healthcare to eCommerce are growing their environments beyond their current monitoring systems’ ability to keep up.

In addition, growth in complexity is outpacing their teams’ ability to identify and understand anomalies and correct performance and availability issues in a timely fashion.

Scale-out architecture

Dynatrace claims to be the ‘only solution’ that has the automation, intelligence and scale-out architecture needed to deliver the observability and answers that today’s enterprise clouds require.

“Driven by the shift of their datacentres to the cloud and growing cloud-native workloads, it’s not hard to imagine hundreds, even thousands of web-scale enterprise clouds in the not too distant future,” said Steve Tack, SVP of product management at Dynatrace. “We reinvented our platform several years ago to provide high-fidelity observability, smart automation and real-time intelligence. We continue to push the boundaries on scalability and robustness.”

Features include automated discovery and instrumentation: a single agent automatically and continuously discovers all microservices, components and processes across the full cloud stack – networks, infrastructure, applications and users – and continuously maps dependencies in real time.

There is also high-fidelity distributed tracing and cross-cluster analytics through a single management dashboard, regardless of cluster location.

The Dynatrace explainable AI engine, Davis™, processes billions of dependencies in real time, delivering the ability to ‘go beyond’ (so says the marketing info) metrics, logs and traces to provide precise answers to issues at scale.

Finally, there is also role-based governance for global teams with the branded Management Zones tool. In this way, Dynatrace says it enables fine-grained access across applications and zones for secure, distributed management of shared cloud environments by multiple teams.

No chocolate clusters were hurt (or consumed) in the production of this story.

Source: Wikipedia


December 3, 2019  7:06 AM

Cloudinary: video won’t kill the web developer star (AI will save the day)

Profile: Adrian Bridgwater

We live in a world of data, this much we already know.

But in that world of data, some of it is structured well-ordered [database] information, some of it is semi-structured information that falls into some degree of order [identified, classified and parsed data relating to emails, voice and video data]… and some of it is messy unstructured data that may still be sitting in the so-called data lake.

It is this middle ground of semi-structured information that we’re concerned with now… and, in particular, video.

Web developers are now being tasked with video management to serve the pages that many of us access every day and Cloudinary insists that this task could benefit from an additional degree of Artificial Intelligence (AI).

The media management platform company offers a dedicated video management tool that now includes new AI capabilities to speed the video workflow process and improve user experience and engagement. 

Brands have long favoured video for its power and effectiveness, but why?

Nearly 90% of consumers rely on videos to help make purchasing decisions; landing pages with video achieve conversion rates of 80% or more. However (as a general reality) older platforms weren’t built to support the high volume of video currently being published, nor the range of browsers and platforms currently in use. 

In short: video can create massive headaches for web-centric developers.

Cloudinary capabilities

Cloudinary’s new video capabilities claim to allow web developers to automate format and quality selection at scale. The company’s software uses AI to shoulder the time-consuming work traditionally required to select the right codecs, formats and quality.

This video management platform is also capable of optimising video content for mobile and social. Deep learning capabilities dynamically reframe landscape-mode videos to any aspect ratio requested on the fly, and intelligent content-aware cropping uses machine learning to focus on the most engaging story in the frame.
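The reframing step can be illustrated with simple geometry. This is a naive centred crop (our sketch, not Cloudinary's algorithm), whereas content-aware cropping shifts the window toward the detected subject:

```python
def center_crop(width, height, target_ratio):
    """Largest centred crop of the requested aspect ratio (width/height).

    Illustrative only: real content-aware cropping moves the window
    toward the most engaging subject instead of pinning it to the centre.
    """
    if width / height > target_ratio:          # source too wide: trim the sides
        new_w, new_h = int(height * target_ratio), height
    else:                                      # source too tall: trim top/bottom
        new_w, new_h = width, int(width / target_ratio)
    x = (width - new_w) // 2
    y = (height - new_h) // 2
    return x, y, new_w, new_h

# Reframe a 1920x1080 landscape video to a 9:16 vertical story.
assert center_crop(1920, 1080, 9 / 16) == (656, 0, 607, 1080)
```

The interesting work in production systems is choosing `x` and `y` per frame (following faces, products or motion) while keeping the crop window stable enough to avoid jitter.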

“The exponential growth of online video content calls for an entirely new approach to video management. Video bandwidth on the Cloudinary platform has trebled over the last few months, underscoring the increased demand for video-rich experiences,” said Nadav Soferman, co-founder and chief product officer, Cloudinary. “Companies understand the power video holds, yet most continue to face obstacles when it comes to creating and delivering engaging video content at the pace and scale required to compete. Developers deserve a fresh approach. Our solution addresses today’s needs by utilising AI to automate tedious and time-consuming tasks so brands can more easily deliver the kinds of dynamic video experiences that convert.”

Other functions include the ability to add video to product pages — and, also, the option to simplify search, improve team workflows and optimise storage. 

AI-based auto-tagging, structured metadata and advanced search capabilities let users more easily manage, store and search video libraries.

Finally, Cloudinary provides a means of managing user-generated content.

Cloudinary says it has simplified the process of moderating, optimising and delivering thousands of user-generated videos at scale by automating functions like cropping, resizing, branding, overlays, quality etc.

Video killed the radio star, let’s hope video won’t kill the web development star.

Source: Cloudinary


December 2, 2019  9:46 AM

Exasol: the chief data officer is the future-proofer

Profile: Adrian Bridgwater

We live in a world of data analytics and software application development designed to function with ever-increasing levels of intelligence functionality based upon insight derived from analytics engines.

But, even with this boost available to us… not everybody is happy.

Research undertaken by YouGov on behalf of analytics database provider Exasol suggests that around three quarters (72%) of businesses ‘worry’ that their inability to generate insights through the analysis of data will have a negative impact on financial performance.

This is despite a similar number (77%) of respondents stating that data is now their organisation’s most valuable asset.

Exasol CEO Aaron Auld suggests that many organisations are still struggling with legacy data systems and have no clear data strategy in place.

“This is where the chief data officer (CDO) role has come into its own, harnessing and demystifying data to inform business decisions, improve differentiation and foster financial growth within an organisation,” said Auld.

The findings of the research, combined with additional desk research and the views from a number of industry commentators, are brought together in Exasol’s eBook: From CDO to CEO: why your data expertise could make you a future business leader.

Future-proofing power

The company thinks that a CDO’s ability to intrinsically understand the business and plan for its future will place these individuals as ideal candidates for future CEOs.

Caroline Carruthers, one of the UK’s first CDOs, contributor to the whitepaper and co-author of Data Driven Business Transformation thinks that currently, most businesses are ‘data hoarders’, wanting to get their hands on as much data as possible.

“However, without people with the skills to understand how to process and use that data, the questions needed to improve data use are not being asked within the organisation. The data is available, but those without experience in data handling don’t know what they don’t know, so they can’t use it to its full value,” said Carruthers.

While the value of data professionals isn’t in doubt, the path from CDO to CEO won’t necessarily be a smooth one.

Exasol CEO Auld: CDOs exist to ‘harness and demystify’ data sources to infinity & beyond!

November 25, 2019  8:08 AM

CI/CD series – Synopsys: Where does security fit into CI/CD?

Profile: Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Meera Rao in her capacity as a senior principal consultant at Synopsys — the company is known for its work in silicon chip design, verification, IP integration and application security.

CI/CD is all about speed, repetition and automation, but where does security fit into the equation? Let’s have a look at some of the application security testing tools that you need to consider in the CI/CD pipeline.

Static analysis (SAST) — SAST tools examine an application’s code or binary without executing the application. Lightweight, desktop options flag common vulnerabilities and offer remediation guidance in real time as developers write code. More in-depth assessments consider business logic and provide full path coverage, ensuring every line of code and every potential execution path is tested.
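As a toy illustration of the SAST idea, examining source without executing it, consider a single regex check for hardcoded credentials (real SAST engines build full data-flow models; this only shows the principle):

```python
import re

# Flag likely hardcoded credentials in source text without running it.
PATTERN = re.compile(
    r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)

def scan(source: str):
    """Return every suspicious assignment found in the source text."""
    return [m.group(0) for m in PATTERN.finditer(source)]

code = 'user = "bob"\npassword = "hunter2"\n'
findings = scan(code)
assert findings == ['password = "hunter2"']
```

Because the code is never executed, a check like this can run in the editor on every keystroke, which is exactly the lightweight desktop mode described above.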

Software composition analysis (SCA) — SCA tools provide a complete view of the software supply chain by analysing open source code, third-party application components and binaries.

Fuzz testing simulates real-life attack patterns used by hackers and automatically bombards a system with malformed inputs. These tools allow development teams to uncover misuse cases that trigger hidden, unknown vulnerabilities and failure modes.
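A minimal fuzzing loop, illustrative only, might bombard a small parser with random inputs and accept only its declared failure modes:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Parser under test: first byte is a length, the rest is the payload."""
    if not data:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return data[1:1 + n]

# Bombard the parser with random, often malformed, inputs; only the two
# declared ValueErrors count as controlled failures. Anything else (an
# IndexError, say) would be exactly the hidden failure mode fuzzing hunts.
rng = random.Random(42)  # seeded so the run is reproducible
for _ in range(1000):
    blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
    try:
        parse_length_prefixed(blob)
    except ValueError:
        pass  # expected, controlled failure
print("fuzz run complete")
```

Production fuzzers such as coverage-guided tools go further by mutating inputs that reach new code paths, but the contract is the same: random input in, only documented errors out.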

Let’s also mention Dynamic Analysis (DAST) — DAST tools use penetration testing techniques to identify security vulnerabilities while applications are running. We also have Interactive Application Security Testing (IAST) — IAST is an emerging technology that finds real security vulnerabilities in web applications and web services with a high level of accuracy.

CI/CD security best practices

Now that we’re all up to date on the types of tooling options available, let’s examine the best practices for implementing security in the CI/CD pipeline. Treat security issues with the same importance as quality issues: ignoring bugs won’t make them go away. Instead of waiting to fix bugs and vulnerabilities until after they wreak havoc on your applications, treat them like any other bug within your development process.

Facilitate collaborative change. Organisations seeking to increase developer productivity and reduce time to market often don’t realise that changes are required of everyone — the business included.

Just like continuous testing, there needs to be continuous collaboration across development, security and operations teams, with the business included. Reduce friction between development, operations and security teams.

Carefully choose tools for automation. The rapid shift to CI/CD and DevOps continues to drive the need for automation and continuous testing. Building automated tests is a complex process that requires expertise; it can’t be simplified by just acquiring the tools required for automation. Most types of testing can be automated, but automation does not eliminate the need for manual testing.

As for security practices to avoid with CI/CD, there are also some highly impactful not-to-do aspects to consider when infusing security into your pipeline. Automated tools aren’t a catch-all solution. Tools cannot interpret results for you, nor can they certify that the application is free of defects. You need an expert to determine whether the results are true positives. Additionally, an automated tool cannot find new bugs or vulnerabilities. On top of automation and continuous testing, you need to make sure intelligence is built into the CI/CD or DevOps pipeline to know when human intervention is required. Recognise also that tools need hand-holding and aren’t ever truly plug-and-play.

Don’t jump the tracks

A mature organisation fails the build if the security tests don’t pass. If security issues appear in the build (perhaps issues higher than medium severity) the build fails and the development team is notified. Thus, security is treated with the same level of importance as business requirements.
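Such a gate can be expressed in a few lines; this sketch (the severity names and finding format are illustrative, not any particular scanner's output) fails the build when any finding exceeds medium severity:

```python
# A minimal build gate: fail the pipeline if any security finding
# exceeds the configured severity threshold.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, threshold="medium"):
    """Return True (build passes) if no finding is above the threshold."""
    limit = SEVERITY[threshold]
    return all(SEVERITY[f["severity"]] <= limit for f in findings)

findings = [{"id": "XSS-1", "severity": "medium"},
            {"id": "SQLI-2", "severity": "high"}]
assert gate(findings) is False           # high > medium: the build fails
assert gate(findings, "critical") is True
```

In a real pipeline this function would consume a scanner's report and its boolean result would set the exit code of a pipeline stage, which is what actually stops the build.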

No tool is one-size-fits-all. Organisations use different languages, and different languages mean different build tools. Even when teams share one language, it is very common for them to use different versions of it.

Applications use different technologies. An enterprise application might use a different architecture, and an application might use several different frameworks. So using the same rules and not customising the tool is a recipe for failure.

Summing it up: there are seemingly endless tooling options that can be applied within the software development lifecycle and CI/CD processes. The most mature firms apply security mechanisms throughout the development process to ensure that the software that powers their business, the software they’re building to advance their business, and the software customers depend on is secure and continuously improving.

November 20, 2019  2:22 PM

Tanium updates core endpoint visibility platform

Profile: Adrian Bridgwater

Unified endpoint management and security platform company Tanium has come forward with a set of platform and portfolio enhancements that it says are focused on reinforcing fundamental needs in speed, simplicity and scale.

Where operational software silos grow, the ability to get visibility into every endpoint the company operates gets a lot tougher, obviously.

Tanium claims to help address those challenges by offering a single, unified endpoint management and security platform with the breadth to manage and secure endpoints on-premises or in the cloud.

“[The] majority of businesses struggle to gain end-to-end visibility of endpoints and the overall health of their IT systems. Without full visibility and control, IT teams leave themselves open to cyberattacks and other forms of disruption… and an overreliance on point product [solutions] only adds to the problem,” said Pete Constantine, Tanium chief product officer.

Core Platform 7.4

The company used its annual Converge user event to detail new features in Tanium Core Platform 7.4. The updates and enhancements focus on a number of user-experience updates intended to allow security and operations practitioners to make decisions based on the data provided by Tanium.

These security and Ops professionals will be able to use updated Role-Based Access Control (RBAC) and enhanced security to support off-network and cloud-hosted instances.

New enhancements that enable better management of cloud endpoints include visibility into unmanaged virtual machines (VMs) in cloud environments, enriched asset inventory and reporting on the health of cloud infrastructures.

There is also ‘immediate visibility’ into what virtual containers are running in public and private cloud deployments. Additionally, there is enhanced support to configure, report and enforce security and other configuration policies across a range of operating systems, including Windows and Mac OS X.

Endpoint 101

Tanium reminds us that an endpoint is any single device connected to an enterprise network – laptops, desktops, servers etc. – and, obviously, large organisations have incredibly complex computer environments, comprising hundreds of thousands of endpoints.

Tanium aims to give security and IT operations teams the ability to ask questions about the state of every endpoint across the enterprise in plain English, retrieve data on their current and historical state and execute change as necessary.

This system is designed to provide control over any ‘rogue systems’ and bring them under management. It works with a core linear chain architecture that allows machines to ‘talk’ with one another in order to get answers to IT questions from thousands of endpoints in seconds.
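The linear-chain idea, in which each machine answers a question and hands it along to its neighbour so results come back in a single traversal rather than thousands of separate server round-trips, can be sketched as follows. This is an illustrative simplification, not Tanium's actual protocol:

```python
# Illustrative linear-chain aggregation: a question passes along a
# chain of endpoints, each appending its own answer, so one pass
# over the chain gathers results from every machine.

def ask_chain(endpoints, question):
    """Pass `question` down the chain, collecting each endpoint's answer."""
    answers = []
    for endpoint in endpoints:  # each node hands off to the next in the chain
        answers.append(endpoint(question))
    return answers

# Hypothetical endpoints that report their operating system when asked.
fleet = [lambda q, os=os: os for os in ("Windows", "macOS", "Linux")]
results = ask_chain(fleet, "what OS are you running?")
# results == ['Windows', 'macOS', 'Linux']
```

The point of the design is that aggregation cost grows with chain length, not with the number of direct connections a central server must hold open.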

Tanium modules

On top of the Tanium platform are the modules that offer additional features. The company positions these module products as powerful enough to render other dedicated software solutions (such as an Application Performance Management (APM) layer, for example) redundant.

Currently, Tanium has modules within four primary IT management and security areas:

Threat Management 

  • Interact: Real-time visibility and control over endpoints.
  • Trends: Trend data to obtain insights.
  • Connect: Integrate and exchange data with third party tools.
  • Detect: Apply threat data and continuously monitor and alert on malicious activity.


  • Threat Response: Active processes, network connections, loaded files, in-memory artifacts.
  • Protect: Manage native operating system security controls at enterprise scale.


  • Discover: Discover unmanaged interfaces.
  • Patch: Install patches on endpoint devices.
  • Deploy: Rapidly install, update, or remove software across large organisations with minimal infrastructure requirements.
  • Asset: Complete and accurate report of assets quickly, with real-time data.
  • Map: Create on-demand, precise, comprehensive views — at scale and without manual work.
  • Performance: Track critical performance metrics related to hardware resource consumption, application health, and system health.


  • Comply: Address regulatory compliance needs using an agent-based scan and rapid remediation.
  • Integrity Monitor: File integrity monitoring to satisfy compliance requirements.
  • Reveal: Detect sensitive data at-rest on endpoint devices and define sensitive data patterns.

November 19, 2019  3:55 PM

CI/CD series – MuleSoft: An API-led approach to continuous integration

Adrian Bridgwater Adrian Bridgwater Profile: Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Paul Crerand in his capacity as senior director for solution engineering EMEA at MuleSoft – the company (now part of Salesforce) provides an integration platform to help businesses connect data, applications and devices across on-premises and cloud computing environments.

Crerand reminds us that a robust DevOps practice goes hand-in-hand with having continuous integration mechanisms in place.

He says that the goal of continuous integration is to automate and standardise the way development teams build, test and package applications — however, many of today’s applications have been developed with a variety of tools, in various languages, across multiple platforms.

So what to do?

Crerand writes as follows…

Because of this distributed approach to application development [that exists in the real world], IT teams need a way to consolidate workflows and test for duplicate code or bugs, while ensuring that ‘business as normal’ continues to operate.

Continuous integration has a number of benefits, from encouraging collaboration between teams to increasing overall visibility and transparency. Although DevOps and Agile methodologies — including continuous integration — are becoming the norm, some organisations are struggling to support these practices effectively, as it’s difficult to reorganise people and rethink the relationship between development and operations teams.

It is here where API-led connectivity can help organisations enhance their continuous integration practices by streamlining operations.

Discover, consume, reuse

API-led connectivity is a standardised approach to integrating applications, data sources and devices through reusable APIs. By building composable API ‘building blocks’ that can be easily managed, secured and reused for new projects, organisations avoid the hard-wired dependencies between systems that typically arise from custom integrations built on an as-needed basis. The reality is that custom integration can create more problems than it solves, as the close dependencies between systems and applications can make future changes expensive and time-consuming. It’s incredibly difficult to untangle and reconnect these rigid integrations as organisations add new systems and data sources to their technology stacks.

When organisations use APIs to connect disparate systems, an application network naturally emerges over time. New projects add more ‘nodes’ to the network, each marked by an API specification, so it is secure by design, easy to change, and readily discoverable. By building an application network, organisations can ensure the APIs they create offer enduring business value and help drive greater agility.

One company that has adopted an API-led approach to integration is global athletic and apparel company, Asics. The company had chosen Salesforce Commerce Cloud as its new e-commerce platform but needed a solution to integrate Commerce Cloud with pre-existing systems like order management, product inventory and shipping. Using MuleSoft’s Anypoint Platform, Asics connected these systems with standardised APIs to build a robust global e-commerce platform in less than six months.

Now, every piece of integration serves as a reusable asset across its growing portfolio of brands. For example, the ‘Asics Email API’, which was originally created to streamline communications to customers about order shipments and changes in product inventory, will be reused dozens of times as the e-commerce platform is rolled out globally, allowing the developer teams to complete projects 2.5x faster and eliminating multiple points of failure.

As a result, Asics does not need to build, test and deploy a new integration for every process that requires an email notification to be sent to a customer. Instead, they can reuse the same standardised email API and easily track changes, make improvements and control access in one place.
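The reuse pattern described here, one standardised notification API called by many business processes rather than a custom integration per process, might be sketched as below. The function and field names are hypothetical illustrations, not MuleSoft's or Asics's actual API:

```python
# One reusable email-notification API consumed by many processes,
# instead of a separate hard-wired email integration for each one.

def send_email(recipient: str, template: str, data: dict) -> dict:
    """Hypothetical standardised email API: a single place to change,
    secure and monitor, however many callers depend on it."""
    return {"to": recipient, "template": template, "body": data, "status": "queued"}

# Two different business processes reuse the same API.
shipment = send_email("customer@example.com", "order-shipped", {"order_id": "A123"})
restock = send_email("customer@example.com", "back-in-stock", {"sku": "SHOE-42"})
```

Because both callers go through the same interface, improvements, access control and change tracking happen once, in one place, rather than per integration.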

Holistic connections to streamline workflows 

Organisations looking to reap the benefits of continuous integration must move beyond custom integration, which creates data silos and results in a build-up of outdated, incomplete or duplicate code.

Instead, organisations should follow an API-led approach to integration, which allows them to holistically connect their applications, data sources and devices with governance and flexibility built into each connection.

With the combined power of continuous integration and API-led connectivity, development teams will be a step closer to producing collaborative, error-free and ready-to-deploy code.

Crerand: Go API – because custom integration can create more problems than it solves.
