CW Developer Network


December 5, 2019  9:55 AM

Sophos to developers: who you gonna call? (for cyberthreat APIs)

Adrian Bridgwater

Every company is now a software company, this much we already know.

But now, every company wants to be a developer company… well, that’s the emerging message from some of the industry’s newer and more established players.

Not content with being a ‘next-generation cybersecurity solutions firm’ (its words, not ours), Sophos is making a play for developer credibility.

SophosLabs Intelix is a cloud-based threat intelligence tool for software application development professionals to use when building applications. Secure ones, obviously. 

The product allows developers to make API calls to it for what is described as ‘turnkey cyberthreat expertise’ — which, one assumes, means that Sophos has compartmentalised chunks of functional code capable of performing security-related analysis tasks.

Those tasks include the ability to assess the risk of software artifacts such as files, URLs and IP addresses.

According to Joe Levy, CTO, Sophos, the platform continuously updates and collates petabytes of real-time and historical intelligence, including: telemetry from Sophos’ endpoint, network and mobile security solutions; data from honeypots and spam traps; 30 years of threat research; predictive insights from machine and deep learning models etc.

NOTE: A honeypot is a network-attached system set up as a decoy to lure cyberattackers and to detect, deflect or study attempts to gain unauthorised access to information systems.

Using RESTful APIs, developers can use the service to submit files for static and dynamic analysis and to run queries on file hashes, URLs, IP addresses and Android applications (APKs), answering questions like:

  • Is this file safe? 
  • What happens if I open or execute it?
  • Is this link safe? 
  • What happens if I call this URL?
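By way of illustration, an ‘is this file safe?’ lookup along these lines might look like the short Python sketch below. To be clear, the base URL, endpoint path, auth header and response shape here are assumptions for illustration, not the documented Intelix contract.

```python
# Hedged sketch: query a cloud threat-intelligence lookup endpoint with a
# file's SHA-256 hash to get its reputation. Paths and fields are assumed.
import hashlib
import requests

API_BASE = "https://api.labs.sophos.com"  # hypothetical base URL

def lookup_file_reputation(path, token):
    """Hash a local file and ask the lookup endpoint if it is known good or bad."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"{API_BASE}/lookup/files/v1/{sha256}",   # illustrative endpoint path
        headers={"Authorization": token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a reputation score plus detection metadata

# report = lookup_file_reputation("invoice.pdf", token)
```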

SophosLabs Intelix is available through the AWS Marketplace and includes several free tier options.

Sophos CTO Levy describes this technology’s three key service features:

Real-time Lookups enable classification of artifacts with access to the SophosLabs intelligence by querying file hashes, URLs, IPs, or Android application thumbprints. Reputation scores identify known bad and known good files, as well as those in the grey area.

Static File Analysis uses multiple machine learning models, global reputation data and deep file scanning to classify files in real time, without needing to execute them.

Dynamic File Analysis provides dynamic file analysis and classification capabilities through execution and instrumentation of submitted files in sandboxes, utilising the latest runtime detection techniques to reveal ‘true’ behaviours of potential threats.
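To sketch how a developer might drive that dynamic analysis programmatically, the snippet below submits a file for sandbox detonation and polls for the report. The endpoint paths and the jobId/report fields are illustrative assumptions, not the documented API.

```python
# Hedged sketch: submit a file for dynamic (sandbox) analysis, then poll
# until the behavioural report is ready. Paths and fields are assumed.
import time
import requests

API_BASE = "https://api.labs.sophos.com"  # hypothetical base URL

def dynamic_analysis(path, token, attempts=20):
    headers = {"Authorization": token}
    with open(path, "rb") as f:
        job = requests.post(
            f"{API_BASE}/analysis/file/dynamic/v1",  # illustrative path
            headers=headers, files={"file": f}, timeout=30,
        ).json()
    for _ in range(attempts):
        report = requests.get(
            f"{API_BASE}/analysis/file/dynamic/v1/reports/{job['jobId']}",
            headers=headers, timeout=10,
        )
        if report.status_code == 200:
            return report.json()  # 'true' behaviours of the detonated file
        time.sleep(15)            # sandbox execution takes a little while
    raise TimeoutError("analysis report not ready")
```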

Who you gonna call? Source: Wiki Commons.

 

 

December 4, 2019  8:31 AM

Dynatrace musters cross-cluster application fluster buster 

Adrian Bridgwater

Dynatrace is a company that specialises in the monitoring and control of enterprise-scale application performance, the underlying infrastructure layer and the experience of users.

It does this at scale, running what it calls a Dynatrace cluster as a coalesced instance of software intelligence designed to provide application control features such as tracing, analytics and management.

Question: So what’s better than a cluster? 

Answer: Whether it’s the chocolate-coated nutty kind or the application observability kind, it’s a double cluster.

Dynatrace has now doubled the capacity of a Dynatrace cluster, which now scales to 50k hosts while maintaining system performance.

Cluster luck!

In addition, Dynatrace now supports the clustering of clusters, including cross-cluster distributed tracing, analytics and management to deliver AI-powered observability, automation and intelligence for customers operating large multi-cloud environments.

It’s true to say that web-scale environments were (comparatively speaking) a rarity a few years ago, but they are becoming commonplace as enterprises shift from static, on-premises datacentres to more dynamic multi-cloud architectures with highly distributed microservices workloads. 

Dynatrace suggests (because this is the market it operates in) that companies in industries from financial services to healthcare and eCommerce are growing their environments beyond their current monitoring systems’ ability to keep up.

In addition, growth in complexity is outpacing their teams’ ability to identify and understand anomalies and correct performance and availability issues in a timely fashion.

Scale-out architecture

Dynatrace claims to be the ‘only solution’ that has the automation, intelligence and scale-out architecture needed to deliver the observability and answers that today’s enterprise clouds require.

“Driven by the shift of their datacentres to the cloud and growing cloud-native workloads, it’s not hard to imagine hundreds, even thousands of web-scale enterprise clouds in the not too distant future,” said Steve Tack, SVP of product management at Dynatrace. “We reinvented our platform several years ago to provide high-fidelity observability, smart automation and real-time intelligence. We continue to push the boundaries on scalability and robustness.”

Features include automated discovery and instrumentation through a single agent that automatically and continuously discovers all microservices, components and processes across the full cloud stack – networks, infrastructure, applications and users – and continuously maps dependencies in real time.

There is also high fidelity distributed tracing and cross-cluster analytics through a single management dashboard regardless of cluster location.

The Dynatrace explainable AI engine, Davis™, processes billions of dependencies in real time, delivering the ability to ‘go beyond’ (so says the marketing info) metrics, logs and traces to provide precise answers to issues at scale.

Finally, there is also role-based governance for global teams with its branded Management Zones tool. In this way, Dynatrace says it enables fine-grained access across applications and zones for secure, distributed management of shared cloud environments by multiple teams.

No chocolate clusters were hurt (or consumed) in the production of this story.

Source: Wikipedia

 


December 3, 2019  7:06 AM

Cloudinary: video won’t kill the web developer star (AI will save the day)

Adrian Bridgwater

We live in a world of data, this much we already know.

But in that world of data, some of it is structured well-ordered [database] information, some of it is semi-structured information that falls into some degree of order [identified, classified and parsed data relating to emails, voice and video data]… and some of it is messy unstructured data that may still be sitting in the so-called data lake.

It is this middle ground of semi-structured information that we’re concerned with now… and, in particular, video.

Web developers are now being tasked with video management to serve the pages that many of us access every day and Cloudinary insists that this task could benefit from an additional degree of Artificial Intelligence (AI).

The media management platform company offers a dedicated video management tool that now includes new AI capabilities to speed the video workflow process and improve user experience and engagement. 

Brands have long favoured video for its power and effectiveness, but why?

The company cites figures suggesting that nearly 90% of consumers rely on videos to help make purchasing decisions and that landing pages with video achieve conversion rates of 80% or more. However (as a general reality) older platforms weren’t built to support the high volume of video currently being published, nor the range of browsers and platforms currently in use.

In short: video can create massive headaches for web-centric developers.

Cloudinary capabilities

Cloudinary’s new video capabilities are claimed to allow web developers to automate format and quality selection at scale. The company’s software uses AI to shoulder the time-consuming work traditionally required to select the right codecs, formats and quality.

This video management platform is also capable of optimising video content for mobile and social. Deep learning capabilities dynamically re-frame landscape mode videos to any aspect ratio requested on-the-fly, and intelligent content-aware cropping uses machine learning to focus on the most engaging story in the frame.
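To give a flavour of what that automation looks like from the developer’s side, transformations of this kind are typically expressed as short parameters in the asset’s delivery URL. The sketch below builds such a URL using Cloudinary’s documented f_auto/q_auto/g_auto parameter conventions; the cloud name and asset id are placeholders.

```python
# Illustrative sketch: build a video delivery URL that delegates format,
# quality and content-aware cropping decisions to the platform's AI.
CLOUD = "demo"          # placeholder cloud name
ASSET = "product-tour"  # placeholder video public id

def delivery_url(aspect_ratio="9:16"):
    transforms = ",".join([
        "f_auto",              # let the platform pick the best codec/format
        "q_auto",              # automatically selected quality level
        f"ar_{aspect_ratio}",  # re-frame landscape video to the requested ratio
        "c_fill",              # crop to fill the new frame...
        "g_auto",              # ...keeping the most engaging subject in shot
    ])
    return f"https://res.cloudinary.com/{CLOUD}/video/upload/{transforms}/{ASSET}.mp4"

print(delivery_url())  # portrait rendition for mobile or social feeds
```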

“The exponential growth of online video content calls for an entirely new approach to video management. Video bandwidth on the Cloudinary platform has trebled over the last few months, underscoring the increased demand for video-rich experiences,” said Nadav Soferman, co-founder and chief product officer, Cloudinary. “Companies understand the power video holds, yet most continue to face obstacles when it comes to creating and delivering engaging video content at the pace and scale required to compete. Developers deserve a fresh approach. Our solution addresses today’s needs by utilising AI to automate tedious and time-consuming tasks so brands can more easily deliver the kinds of dynamic video experiences that convert.”

Other functions include the ability to add video to product pages — and, also, the option to simplify search, improve team workflows and optimise storage. 

AI-based auto-tagging, structured metadata and advanced search capabilities let users more easily manage, store and search video libraries.

Finally, Cloudinary provides a means of managing user-generated content.

Cloudinary says it has simplified the process of moderating, optimising and delivering thousands of user-generated videos at scale by automating functions like cropping, resizing, branding, overlays, quality etc.

Video killed the radio star, let’s hope video won’t kill the web development star.

Source: Cloudinary

 


December 2, 2019  9:46 AM

Exasol: the chief data officer is the future-proofer

Adrian Bridgwater

We live in a world of data analytics and software application development designed to function with ever-increasing levels of intelligence, based upon insight derived from analytics engines.

But, even with this boost available to us… not everybody is happy.

Research undertaken by YouGov on behalf of analytics database provider Exasol suggests that around three quarters (72%) of businesses ‘worry’ that their inability to generate insights through the analysis of data will have a negative impact on financial performance.

This is despite a similar number (77%) of respondents stating that data is now their organisation’s most valuable asset.

Exasol CEO Aaron Auld suggests that many organisations are still struggling with legacy data systems and have no clear data strategy in place.

“This is where the chief data officer (CDO) role has come into its own, harnessing and demystifying data to inform business decisions, improve differentiation and foster financial growth within an organisation,” said Auld.

The findings of the research, combined with additional desk research and the views from a number of industry commentators, are brought together in Exasol’s eBook: From CDO to CEO: why your data expertise could make you a future business leader.

Future-proofing power

The company thinks that a CDO’s ability to intrinsically understand the business and plan for its future makes these individuals ideal candidates to become future CEOs.

Caroline Carruthers, one of the UK’s first CDOs, contributor to the whitepaper and co-author of Data Driven Business Transformation thinks that currently, most businesses are ‘data hoarders’, wanting to get their hands on as much data as possible.

“However, without people with the skills to understand how to process and use that data, the questions needed to improve data use are not being asked within the organisation. The data is available, but those without experience in data handling don’t know what they don’t know, so they can’t use it to its full value,” said Carruthers.

While the value of data professionals isn’t in doubt, the path from CDO to CEO won’t necessarily be a smooth one.

Exasol CEO Auld: CDOs exist to ‘harness and demystify’ data sources to infinity & beyond!


November 25, 2019  8:08 AM

CI/CD series – Synopsys: Where does security fit into CI/CD?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Meera Rao in her capacity as a senior principal consultant at Synopsys — the company is known for its work in silicon chip design, verification, IP integration and application security.

CI/CD is all about speed, repetition and automation, but where does security fit into the equation? Let’s have a look at some of the application security testing tools that you need to consider in the CI/CD pipeline.

Static analysis (SAST) — SAST tools examine an application’s code or binary without executing the application. Lightweight, desktop options flag common vulnerabilities and offer remediation guidance in real time as developers write code. More in-depth assessments consider business logic and provide full path coverage, ensuring every line of code and every potential execution path is tested.

Software composition analysis (SCA) — SCA tools provide a complete view of the software supply chain by analysing open source code, third-party application components and binaries.

Fuzz testing simulates real-life attack patterns used by hackers and automatically bombards a system with malformed inputs. These tools allow development teams to uncover misuse cases that trigger hidden, unknown vulnerabilities and failure modes.
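As a toy illustration of the fuzzing idea (not any particular Synopsys tool), a naive fuzzer is simply a loop that hurls malformed inputs at the code under test and records anything that crashes:

```python
# Minimal fuzzing sketch: bombard a parser with random malformed inputs
# and collect the ones that trigger unhandled failure modes.
import random

def parse_record(data):
    # stand-in for the real code under test
    key, _, value = data.partition(b"=")
    return {key.decode(): value.decode()}

crashes = []
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
    try:
        parse_record(blob)
    except Exception as exc:   # any unhandled exception is a finding
        crashes.append((blob, exc))

print(f"{len(crashes)} crashing inputs found")
```

Real fuzzers are far smarter — coverage-guided mutation rather than blind randomness — but the attack pattern is the same.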

Let’s also mention dynamic application security testing (DAST) — DAST tools use penetration testing techniques to identify security vulnerabilities while applications are running. We also have interactive application security testing (IAST) — IAST is an emerging technology that finds real security vulnerabilities in web applications and web services with a high level of accuracy.

CI/CD security best practices

Now that we’re all up-to-date on the types of tooling options available, let’s examine the best practices for implementing security into the CI/CD pipeline. Treat security issues with the same importance as quality issues. Ignoring bugs won’t make them go away. Instead of waiting to fix bugs and vulnerabilities until after they wreak havoc on your applications, treat them like any other bug within your development process.

Facilitate collaborative change. Organisations seeking to increase developer productivity and accelerate time to market often don’t realise that change is required from everyone — the business included.

Just like continuous testing, there also needs to be continuous collaboration across development, security and operations teams — and the business too. Reduce friction between development, operations and security teams.

Carefully choose tools for automation. The rapid shift to CI/CD and DevOps continues to drive the need for automation and continuous testing. Building automated tests is a complex process that requires expertise; it can’t be simplified just by acquiring the tools required for automation. Most types of testing can be automated, but automation does not eliminate the need for manual testing.

In terms of security practices to avoid with CI/CD, there are some highly impactful not-to-dos to consider when infusing security into your pipeline. Automated tools aren’t a catch-all solution. Tools cannot interpret results for you, nor can they certify that the application is free of defects. You need an expert to determine whether the results are true positives. Additionally, an automated tool cannot find new bugs or vulnerabilities. On top of automation and continuous testing, you need to make sure intelligence is built into the CI/CD or DevOps pipeline to know when human intervention is required. Recognise also that tools need hand-holding and aren’t ever truly plug-and-play.

Don’t jump the tracks

A mature organisation fails the build if the security tests don’t pass. If security issues appear in the build (perhaps issues higher than medium severity) the build fails and the development team is notified. Thus, security is treated with the same level of importance as business requirements.
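A minimal sketch of such a gate — assuming a hypothetical JSON findings file produced by the SAST step — is a small script the pipeline runs after the scan, exiting non-zero on anything above medium severity:

```python
# Hedged sketch of a CI security gate. The report format is an assumed
# shape (a list of {"severity", "title"} objects), not a real tool's output.
import json
import sys

FAIL_ON = {"HIGH", "CRITICAL"}  # severities above medium fail the build

def gate(report_path):
    with open(report_path) as f:
        findings = json.load(f)
    blockers = [x for x in findings if x["severity"].upper() in FAIL_ON]
    for x in blockers:
        print(f"[security gate] {x['severity']}: {x['title']}")
    return 1 if blockers else 0  # a non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "sast-report.json"))
```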

No tool is one-size-fits-all. Organisations use different languages. Different languages mean different build tools. Even when teams standardise on one language, it is very common for them to use different versions of it.

Applications use different technologies. One enterprise application might use a different architecture; another might use several different frameworks. So, using the same rules and not customising the tool is a recipe for failure.

Summing it up: there are seemingly endless tooling options that can be applied within the software development life cycle and CI/CD processes. The most mature firms apply security mechanisms throughout the development process to ensure that the software that powers their business, the software they’re building to advance their business, and the software customers depend on is secure and continuously improving.


November 20, 2019  2:22 PM

Tanium updates core endpoint visibility platform

Adrian Bridgwater

Unified endpoint management and security platform company Tanium has come forward with a set of platform and portfolio enhancements that it says are focused on reinforcing fundamental needs in speed, simplicity and scale.

Where operational software silos grow, the ability to get visibility into every endpoint the company operates gets a lot tougher, obviously.

Tanium claims to help address those challenges by offering a single, unified endpoint management and security platform with the breadth to manage and secure endpoints on-premises or in the cloud.

“[The] majority of businesses struggle to gain end-to-end visibility of endpoints and the overall health of their IT systems. Without full visibility and control, IT teams leave themselves open to cyberattacks and other forms of disruption… and an overreliance on point product [solutions] only adds to the problem,” said Pete Constantine, Tanium chief product officer.

Core Platform 7.4

The company used its annual Converge user event to detail new features in Tanium Core Platform 7.4. The updates and enhancements focus on a number of user-experience updates intended to allow security and operations practitioners to make decisions based on the data provided by Tanium.

These security and Ops professionals will be able to use updated Role-Based Access Control (RBAC) and enhanced security to support off-network and cloud-hosted instances.

New enhancements that enable better management of cloud endpoints include visibility into unmanaged virtual machines (VMs) in cloud environments, enriched asset inventory and reporting on the health and wealth of cloud infrastructures.

There is also ‘immediate visibility’ into what virtual containers are running in public and private cloud deployments. Additionally, there is enhanced support to configure, report and enforce security and other configuration policies across a range of operating systems, including Windows and Mac OS X.

Endpoint 101

Tanium reminds us that an endpoint is any single device connected to an enterprise network – laptops, desktops, servers etc. – and, obviously, large organisations have incredibly complex computer environments, comprising hundreds of thousands of endpoints.

Tanium aims to give security and IT operations teams the ability to ask questions about the state of every endpoint across the enterprise in plain English, retrieve data on their current and historical state and execute change as necessary.

This system is designed to provide control over any ‘rogue systems’ and bring them under management. It works with a core linear chain architecture that allows machines to ‘talk’ with one another in order to get answers to IT questions from thousands of endpoints in seconds.
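In spirit, the chain works like a relay: each endpoint answers the question locally, folds its answer into a running aggregate and hands that aggregate to its neighbour, so the server receives one consolidated result rather than polling every machine individually. A deliberately simplified sketch of the idea (not Tanium’s implementation):

```python
# Conceptual sketch of linear-chain aggregation: answers accumulate as the
# question travels down the chain; only the tail node reports to the server.
from collections import Counter

def local_answer(host):
    # stand-in for a local 'sensor', e.g. "which OS build is installed?"
    return {"host-1": "19H2", "host-2": "19H2", "host-3": "1903"}.get(host, "unknown")

def linear_chain_query(hosts):
    aggregate = Counter()
    for host in hosts:            # each hop merges one answer into the relay...
        aggregate[local_answer(host)] += 1
    return aggregate              # ...one consolidated answer, not N round-trips

print(linear_chain_query(["host-1", "host-2", "host-3"]))
# Counter({'19H2': 2, '1903': 1})
```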

Tanium modules

On top of the Tanium platform sit the modules that offer additional features. The company positions these module products as powerful enough to render other dedicated software solutions (such as an Application Performance Management – APM – layer, for example) redundant.

Currently, Tanium has modules within four primary IT management and security areas:

Threat Management 

  • Interact: Real-time visibility and control over endpoints.
  • Trends: Trend data to obtain insights.
  • Connect: Integrate and exchange data with third party tools.
  • Detect: Apply threat data and continuously monitor and alert on malicious activity.

Security

  • Threat Response: Active processes, network connections, loaded files, in-memory artifacts.
  • Protect: Manage native operating system security controls at enterprise scale.

Operations

  • Discover: Discover unmanaged interfaces.
  • Patch: Install patches on endpoint devices.
  • Deploy: Rapidly install, update, or remove software across large organisations with minimal infrastructure requirements.
  • Asset: Complete and accurate report of assets quickly, with real-time data.
  • Map: Create on-demand, precise, comprehensive views — at scale and without manual work.
  • Performance: Track critical performance metrics related to hardware resource consumption, application health, and system health.

Risk

  • Comply: Address regulatory compliance needs using an agent-based scan and rapid remediation.
  • Integrity Monitor: File integrity monitoring satisfies compliance requirements.
  • Reveal: Detect sensitive data at-rest on endpoint devices and define sensitive data patterns.


November 19, 2019  3:55 PM

CI/CD series – MuleSoft: An API-led approach to continuous integration

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This contribution is written by Paul Crerand in his capacity as senior director for solution engineering EMEA at MuleSoft – the company (now acquired by Salesforce) is known as a vendor that provides an integration platform to help businesses connect data, applications and devices across on-premises and cloud computing environments.

Crerand reminds us that a robust DevOps practice goes hand-in-hand with having continuous integration mechanisms in place.

He says that the goal of continuous integration is to automate and standardise the way development teams build, test and package applications — however, many of today’s applications have been developed with a variety of tools, in various languages, across multiple platforms.

So what to do?

Crerand writes as follows…

Because of this distributed approach to application development [that exists in the real world], IT teams need a way to consolidate workflows and test for duplicate code or bugs, while ensuring that ‘business as normal’ continues to operate.

Continuous integration has a number of benefits, from encouraging collaboration between teams to increasing overall visibility and transparency. Although DevOps and Agile methodologies — including continuous integration — are becoming the norm, some organisations are struggling to support these practices effectively, as it’s difficult to reorganise people and rethink the relationship between development and operations teams.

It is here where API-led connectivity can help organisations enhance their continuous integration practices by streamlining operations.

Discover, consume, reuse

API-led connectivity is a standardised approach to integrating applications, data sources and devices through reusable APIs. By building composable API ‘building blocks’ that can be easily managed, secured and reused for new projects, organisations avoid the hard-wired dependencies between systems that typically arise from custom integrations built on an as-needed basis. The reality is that custom integration can create more problems than it solves, as the close dependencies between systems and applications can make future changes expensive and time-consuming. It’s incredibly difficult to untangle and reconnect these rigid integrations as organisations add new systems and data sources to their technology stacks.

When organisations use APIs to connect disparate systems, an application network naturally emerges over time. New projects add more ‘nodes’ to the network, each marked by an API specification, so it is secure by design, easy to change, and readily discoverable. By building an application network, organisations can ensure the APIs they create offer enduring business value and help drive greater agility.

One company that has adopted an API-led approach to integration is global athletic and apparel company, Asics. The company had chosen Salesforce Commerce Cloud as its new e-commerce platform but needed a solution to integrate Commerce Cloud with pre-existing systems like order management, product inventory, and shipping. Using MuleSoft’s Anypoint Platform, Asics connected these systems with standardised APIs to build a robust global e-commerce platform in less than six months.

Now, every piece of integration serves as a reusable asset across its growing portfolio of brands. For example, the ‘Asics Email API’, which was originally created to streamline communications to customers about order shipments and changes in product inventory, will be reused dozens of times as the e-commerce platform is rolled out globally, allowing the developer teams to complete projects 2.5x faster and eliminating multiple points of failure.

As a result, Asics does not need to build, test and deploy a new integration for every process that requires an email notification to be sent to a customer. Instead, they can reuse the same standardised email API and easily track changes, make improvements and control access in one place.
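To make the reuse point concrete, the pattern is a single standardised notification endpoint that every process calls, rather than per-team point integrations. Below is a minimal, hypothetical sketch of such an endpoint — the framework (Flask), route and field names are illustrative choices, not Asics or MuleSoft specifics.

```python
# Hedged sketch of a reusable 'email API': validation, templating and access
# control live in one place, so every consuming process inherits them.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/v1/notifications/email")
def send_email():
    payload = request.get_json(force=True)
    for field in ("to", "template", "context"):
        if field not in payload:
            return jsonify(error=f"missing field: {field}"), 400
    # ... hand off to the mail provider here ...
    return jsonify(status="queued"), 202  # accepted for asynchronous delivery

if __name__ == "__main__":
    app.run(port=8080)
```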

Holistic connections to streamline workflows 

Organisations looking to reap the benefits of continuous integration must move beyond custom integration, which creates data silos and results in a build-up of outdated, incomplete or duplicate code.

Instead, organisations should follow an API-led approach to integration, which allows them to holistically connect their applications, data sources and devices with governance and flexibility built into each connection.

With the combined power of continuous integration and API-led connectivity, development teams will be a step closer to producing collaborative, error-free and ready-to-deploy code.

Crerand: Go API – because custom integration can create more problems than it solves.

 


November 19, 2019  3:54 PM

Tanium Converge 2019: keynote notes, quotes & anecdotes

Adrian Bridgwater

The Computer Weekly Developer Network team found itself ‘down south’ in Nashville, Tennessee this week for Tanium CONVERGE 2019.

Now in its fourth year, the event has gained some critical mass and this year is due to host close to 1000 attendees for the first time.

For those that would benefit from a reminder, Tanium provides insight and control into endpoints including laptops, servers, virtual machines, containers or elements of cloud infrastructure.

The technology can help IT security and operations teams ask questions about the state of every endpoint across the enterprise, retrieve data on their current and historical state and so manage and control areas of risk.

Keynote notes

The event keynote kicked off with a welcome from Tanium co-founder and executive chairman David Hindawi.

Hindawi welcomed the cybersecurity and systems management crowd, who hailed from both the public and private sectors.

“A lot of what you will see [at this event, in terms of feature and product requests] comes in direct response to the advice you [the users] have been giving us over the last 12 months. We started Tanium on the belief that enterprises need to know what [endpoints] they’ve got in real time if they are going to be able to secure their networks,” said Hindawi.

The Tanium chief suggests that many customers are managing their technology estate of machines using what he called ‘last decade’ technology — and the advances we have seen over the last decade in terms of virtualisation and the cloud only compound the problem.

“We believe that security and operations have been silo’d for too long — how can you have security without patching? We aspire to deliver a platform that bridges security and operations that offers features that point solutions offer but in a connected way that is integrated in ways that point solutions do not offer,” said Hindawi.

In terms of design, Hindawi recounted a story from when he asked his engineering team what they thought was the easiest possible user interface available — the answer that came back was Google. It was because of this revelation that Hindawi drove a Google-like experience into the way Tanium is built, for ease of use throughout every module it delivers.

Empowering ‘truthsayers’

Going forward, Hindawi says he wants to be able to draw and deliver the power of the Tanium system with what he called ‘minimal user intervention’. This will allow users to know what’s going on in their enterprise systems so that they become ‘truthsayers’ — but (and here’s the essential control element) the company will also deliver guardrails so that operations itself is not disrupted.

If there are as many as 51 silos in a typical enterprise IT environment, then Tanium has a big task in front of it. Hindawi asked the audience to pay attention to what the company is doing to manage cyber threats… and he openly asked attendees to give feedback to the Tanium engineering team on what works and what other capabilities and functionality are needed.

Hindawi is unusual in his honesty and approach. He openly urged every attendee to stop him during the event (well, as many as time permits) and tell him what they need in terms of IT operations management for security. He’s trying to get people to actually tell him how they think Tanium services should develop in the future. Obviously the company will continue to ‘push’ its platform forward as it sees best, but the invitation to ‘pull’ from real users is very real here.

Analyst analysis

Hindawi was followed by Joseph Blankenship in his position as VP & research director for security & risk at Forrester.

Blankenship suggests that there is a ‘strained relationship’ between the security and operations functions. Of course work itself has developed into a state of silos — and there are ‘wedge issues’ that are created as a result of the diversity of IT (and process) systems that are used… and there are differences in tools, departmental missions, data access… and there are silo issues as a result of internal politics.

These wedges increase enterprise operational risk. But Blankenship called for more unity and insisted that IT Ops and security should ‘never be a contact sport’ in real world enterprise environments.

According to a Forrester study, conducted for Tanium in advance of this conference: “IT leaders today face pressure from all sides … To cope with this pressure, many have invested in a number of point solutions. However, these solutions often operate in silos, straining organisational alignment and inhibiting the visibility and control needed to protect the environment.”

You can read the full Tanium Forrester IT Ops and security survey story here.

Tanium’s Hindawi: open to talking about endpoint issues.


November 19, 2019  3:52 PM

Tanium taps the ‘cranium strain’ in security & IT Ops

Adrian Bridgwater

We know that the software application development (Dev) function has been struggling for some years to overcome its previous disconnects with the operations (Ops) function.

The coming together of Dev and Ops inside DevOps has worked hard to try and build new bridges between these sometimes opposing forces… but there are further disconnects inside of Ops itself.

Research from Tanium conducted by Forrester Consulting suggests that there are also ‘strained relationships’ in place between security (essentially a ‘close cousin’ to the Ops function by most people’s definition) and IT Ops overall.

This strain factor has the potential to leave businesses vulnerable to disruption, even where there has been spending on IT security and management tools.

The study itself quizzed 400+ ‘IT leaders’ at large enterprises.

A total of 67 percent of businesses say that driving collaboration between security and IT Ops teams is a major challenge, which not only hampers team relationships, but also leaves organisations open to vulnerabilities.

IT hygiene

Over forty percent of businesses with strained relationships consider maintaining basic IT hygiene more of a challenge than those with good partnerships (32 percent). The proposition here is that it takes teams with strained relationships nearly two weeks longer to patch IT vulnerabilities than teams with healthy relationships (37 business days versus 27.8 business days).

The study also suggested that increased investment in IT solutions has not translated to improved visibility of computing devices and has created false confidence among security and IT ops teams in the veracity of their endpoint management data.

In recent years, there has been a considerable investment in security and IT operations tools, as well as an increased focus at the board level on cybersecurity. According to the study, 81 percent of respondents feel very confident that their senior leadership/board has more focus on IT security, IT operations and compliance than two years ago.

Enterprises that reported budget increases said they have seen considerable additional investment in IT security (18.3 percent) and operations (10.9 percent) over the last two years, with teams procuring an average of five new tools over this same time period.

Despite the increased investment in IT security and operational tools, businesses have a false sense of security regarding how well they can protect their IT environment from threats and disruption. Eighty percent of respondents claimed that they can take action instantly on the results of their vulnerability scans and 89 percent stated that they could report a breach within 72 hours.

“According to our research, most teams are confident in their ability to take timely action on the results of their vulnerability scans. However, further investigation shows teams are admittedly suffering from visibility gaps of all hardware and software assets in their environment, which undermine these efforts to take action. With around 50 percent of IT leaders showing confidence in asset and vulnerability visibility, you’re essentially leaving your security to a coin flip,” said Chris Hallenbeck, chief information security officer for Americas at Tanium.

Why would endpoint visibility specialist Tanium conduct such a study?

Well, one obvious reason is so that the company can table a proposition: that a unified endpoint management and security solution – i.e. a common toolset for both security and IT Ops – can help address these challenges.

In the study, IT decision makers stated that a unified solution would allow enterprises to operate at scale (59 percent), decrease vulnerabilities (54 percent), and improve communication between security and operations teams (52 percent).

Endpoint point solutions

According to the Forrester study: “IT leaders today face pressure from all sides … To cope with this pressure, many have invested in a number of point solutions. However, these solutions often operate in silos, straining organisational alignment and inhibiting the visibility and control needed to protect the environment … Using a unified endpoint security solution that centralizes device data management enables companies to accelerate operations, enhance security, and drive collaboration between Security and IT Ops teams.”

IT decision makers also say that a unified endpoint solution would help them see faster response times (53 percent) and have more efficient security investigations (51 percent), while improving visibility through improved data integration (49 percent) and accurate real-time data (45 percent).

Forrester’s Blankenship at Tanium Converge 2019.

 


November 14, 2019  8:05 PM

CI/CD series – Altran: What’s driving the continuum?

Adrian Bridgwater

This is a guest post for the Computer Weekly Developer Network in our Continuous Integration (CI) & Continuous Delivery (CD) series.

This post is written by Jitendra Thethi, AVP of Technology and Innovation at Altran — the company is known for its software product development and consultancy services.

Thethi writes as follows…

Continuous Integration and Continuous Delivery help to accelerate product development in an Agile manner and deliver features faster than long release cycles allow. There are many open source and commercial tools that support CI/CD — and most fall into the following categories: source code management, build management, static code analysis, continuous testing and validation, continuous deployment, penetration testing, regression testing, infrastructure monitoring and application monitoring.

So, how do you get started? First things first.

CI/CD goes hand-in-hand with DevOps and the move to DevOps is a culture change for many teams. They need to assess requirements and take incremental steps, rather than taking a big bang approach. Development and operations teams need to start working together on automating the build and deployment to remove the dependency on any manual tasks.

Measuring effectiveness

Of course, DevOps has many pitfalls. Rather than focusing on the tools, examine the processes. There should be a clear measurement of the efficiency gained from the adoption of these practices and tools. Skills and experience of adoption are a key ingredient in the transformation journey to DevOps.

CI/CD is not another Agile iteration; instead, it has to be considered part of the development cycle. Anything that is developed has to be automatically built, tested and deployed. Measuring the effectiveness of a CI/CD system is critical. Effectiveness is measured as ‘agility metrics’ that are derived from events coming from different DevOps tools and reflect KPIs such as build velocity, build quality, cycle time, developer productivity and release cycle times. Ideally, the goal is to deploy any pull request and have the ability to merge it to the master branch directly within a staging environment. The deployment into the production environment may depend on other business factors.
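As a flavour of what deriving one such metric might look like, the small sketch below computes mean cycle time from hypothetical pipeline events; the event shape is an assumption for illustration.

```python
# Hedged sketch: derive an 'agility metric' (mean cycle time) from events
# emitted by the CI/CD toolchain. The event format here is invented.
from datetime import datetime
from statistics import mean

events = [
    {"pr": 41, "opened": "2019-11-01T09:00", "deployed": "2019-11-01T15:30"},
    {"pr": 42, "opened": "2019-11-02T10:00", "deployed": "2019-11-03T11:00"},
]

def hours(event):
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(event["deployed"], fmt) - datetime.strptime(event["opened"], fmt)
    return delta.total_seconds() / 3600

print(f"mean cycle time: {mean(hours(e) for e in events):.1f} hours")
```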

CI/CD is more disruptive for legacy systems.

It can therefore take more effort and be time intensive for teams to adopt new practices. For some teams, build cycles can get longer because of the additional computation required in the CI/CD pipeline.

Second nature CI/CD

However, for new systems, as long as the DevOps discipline is instilled at the outset, it is not at all disruptive. Over time it becomes second nature and then it becomes part and parcel of the development cycles.

… and finally, security is critical. It should never be an afterthought or a bolt-on capability. It has to be followed throughout the cycle, from architecture to deployment.

Integrate security as part of the build and deployment cycles.

That way, products get developed as ‘secure-by-default’ and security does not become a barrier to agile deployments. Build a foundation of DevOps and CI/CD knowledge and this will lead to deeper security insights across development, operations, infrastructure and applications.

Thethi: Continuous about continuum.

