Innovation Through Infrastructure

September 7, 2018  5:47 PM

The boom of automation and where it’s heading

Michael Tidmarsh

Business automation is booming as organisations shift to a digital-first model with the aim of improving customer experience and increasing productivity at an unprecedented scale, all while driving down costs.

Automation is already helping to improve customer interactions in data-rich industry sectors such as financial services, healthcare, insurance, transportation and logistics, travel and hospitality.

As a result, enterprise investment in intelligent automation, which includes artificial intelligence (AI), machine learning and robotic process automation (RPA), is growing fast – expected to reach $232bn by 2025, up from $12.4bn today, according to research by KPMG.

“We are moving into a customer-centric world more geared to individual situations,” says Brian Safron, worldwide programme director for digital business automation at IBM.

Personalised services

Companies that are used to offering a limited range of products to customers are being outpaced by rivals that use big data, analytics, decision management and software ‘robots’ to personalise products and services.

“Organisations are in a position where they can serve each customer’s individual needs without the overhead of configurations. They are able to scale up and to change the way they serve customers,” says Safron.

This customer-centric evolution is fuelling the demand for automation and tailor-made services on a huge scale.

“Instead of a bank offering 10 or 20 types of loans, they can now offer 250 types of loans and keep track of everything. The concept of automation enables scale and pushes the limits of individualisation and personalisation and enables organisations to provide customers with what they really need,” says Safron.

New technology-savvy competitors, such as fintechs in banking and Airbnb in travel, are often faster to exploit automation.

“Fintechs have built smart, automated systems that can service and manage a greater level of individualisation, but every company whether they are big or small must figure out how to do this,” says Safron.

Lost opportunities

Even if established companies believe they are maintaining retention rates, competitors will be cherry-picking the most profitable customer accounts by offering tailor-made alternatives. A retail bank might hold on to traditional services such as a customer’s checking or savings account, but fintech rivals will be edging their way in to grab the highly profitable loans or credit cards that same individual might want.

“Lost opportunities to existing customers are often the most profitable. Rivals are siphoning off the high-margin products and can offer more individualised services because they use analytics and AI to determine just what products their customers need, and flexible automation to provide a streamlined self-service experience,” says Safron.

The ability to automate is changing every sector. Thousands of decisions can be made, informed by data and analytics. Automation enables organisations to work out the best offer to make to a customer and deliver it automatically, in real time.

PNC Bank, for example, has automated the majority of loan applications that just a few years ago had to be processed manually, reducing by over 80% the number of applications that must be manually reviewed.
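
Rule-driven loan triage of this kind can be sketched in a few lines of Python. This is a hypothetical illustration only: the thresholds, field names and three-way policy are invented for the example and are not any bank's actual criteria.

```python
# Hypothetical sketch of rule-based loan triage: auto-approve clearly
# low-risk applications, auto-reject clearly out-of-policy ones, and
# route only the borderline cases to a human reviewer.

def triage_loan(application: dict) -> str:
    """Return 'approve', 'reject' or 'manual_review' for an application."""
    score = application["credit_score"]
    ratio = application["debt_to_income"]  # monthly debt / monthly income

    if score >= 720 and ratio <= 0.35:
        return "approve"           # low risk: no human touch needed
    if score < 560 or ratio > 0.55:
        return "reject"            # clearly outside policy
    return "manual_review"         # borderline cases go to a person

applications = [
    {"credit_score": 750, "debt_to_income": 0.30},
    {"credit_score": 540, "debt_to_income": 0.60},
    {"credit_score": 650, "debt_to_income": 0.45},
]
print([triage_loan(a) for a in applications])
# ['approve', 'reject', 'manual_review']
```

The value of automating the first two branches is that human reviewers only ever see the third.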

Speed of automation

Many organisations remain encumbered by time-consuming manual processes. Management consultancy McKinsey found that in 60% of occupations worldwide, up to 30% of activities can be automated.

The speed and scale of automation and digital transformation are highlighted by lending: a few years ago, a medium-sized loan at a typical bank would take several days to approve, but today approval can come in minutes or seconds, almost in real time, says Safron.

“Business rules, real-time analytics, AI and decision management are tackling more and more complex problems. People can be assigned to work where they really add value to an organisation rather than doing repetitive, time-consuming tasks,” he says.

The need to scale operations, become more competitive and flexible, and put the customer at the centre of the business is increasingly driving organisations to digital business automation.

IBM Automation Platform for Digital Business offers the most comprehensive automation capabilities in the market, to automate more types of work at scale.

Learn how IBM can help companies automate all types of work at scale to drive growth. To find out more, click here.

September 7, 2018  5:42 PM

Key considerations when choosing an RPA supplier

Michael Tidmarsh

The market for robotic process automation (RPA) is forecast to reach $2.1bn per year by 2021, according to Forrester Research, as IT leaders see the benefit of automating repetitive tasks by replicating routine human-computer interactions with software.

Initial use of RPA is often a gateway to full automation, and to ensure the intended benefits in cost and productivity, it’s important to understand the key factors for selecting a supplier.

“Getting started with RPA is an easy decision to make; it’s a no-brainer. RPA is a painless technology that lets you easily automate certain tasks that are time-consuming and done on a regular basis,” says Brian Safron, worldwide programme director for digital business automation at IBM.

Think long term

Despite the low risk and quick return of introducing a standalone solution and using RPA to address a specific task such as data entry that would otherwise be done manually, it is worth considering a longer-term automation strategy.

“The question to ask is, ‘Do you want to be limited by an RPA that is finite and fenced off?’ You can automate specific tasks by mirroring the actions of a human at a PC, but RPA can be part of something bigger,” says Safron.

Limiting RPA to a specific task does not unlock its full value, because a business process comprises a complex series of tasks chained together. Only by considering how RPA fits within broader business automation, integrating technologies such as decision management and content management, can you see the bigger picture and derive greater value for the organisation.


CIOs need to think about RPA within the context of an automation strategy, or “RPA-plus”, when choosing a provider. Otherwise, CIOs risk losing out on longer-term gains, because RPA on its own is not a replacement for digital business automation.

“RPA is great for replicating repetitive tasks that otherwise can take a person several minutes or sometimes even hours to complete. But you also want the ability to run longer processes comprised of numerous tasks where you can track the status and progress of that process, apply operational analytics, and report on where processing can be improved,” says Safron.

By choosing a provider that offers RPA-plus, an organisation can report on which processes hit their service level agreements (SLAs), for example. This can determine how RPA can be leveraged to expand its value to the organisation.
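
The SLA reporting described here can be illustrated with a minimal sketch. The process instances and the four-hour SLA below are invented for the example; a real RPA-plus platform would pull this from its own operational data.

```python
# Illustrative sketch: check completed process instances against an SLA
# and report which ones breached it. Data and the 4-hour SLA are made up.
from datetime import datetime, timedelta

SLA = timedelta(hours=4)

instances = [
    {"id": "claim-001", "started": datetime(2018, 9, 3, 9, 0),
     "finished": datetime(2018, 9, 3, 11, 30)},   # 2.5 hours: within SLA
    {"id": "claim-002", "started": datetime(2018, 9, 3, 9, 15),
     "finished": datetime(2018, 9, 3, 15, 0)},    # 5.75 hours: breach
]

def sla_report(instances):
    """Return (% of instances meeting SLA, ids of breaching instances)."""
    breached = [i["id"] for i in instances
                if i["finished"] - i["started"] > SLA]
    met_pct = 100 * (len(instances) - len(breached)) / len(instances)
    return met_pct, breached

met_pct, breached = sla_report(instances)
print(f"{met_pct:.0f}% met SLA; breached: {breached}")
# 50% met SLA; breached: ['claim-002']
```

This is the kind of operational analytics that isolated, task-level RPA cannot provide on its own.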

RPA can be used with data capture, business rules, workflow and content as part of a richer business automation strategy. Finding a provider that offers this breadth of capabilities, integrated with RPA, is the way to unleash this potential.

“RPA is the sweet spot for automating repetitive, mundane, human tasks but it has limited use cases. Some solutions require RPA plus some other things. While RPA can provide quick returns, you need to make sure you don’t restrict your opportunities,” says Safron.

A roadmap for integration

When integrated with other automation technologies, RPA can be used in a variety of situations to bring wider benefits to the organisation. CIOs should have this roadmap in mind when selecting a supplier.

“By using RPA in the broader context, from an architectural standpoint you can combine RPA with things like business rules, decision management, data capture and workflow,” says Safron.

This allows organisations with complex processes that may run over hours or days to use RPA as part of a sophisticated automation strategy. RPA can be integrated with decision-making and non-routine tasks that have a variety of outcomes, leading to a high return on investment.

An automation portfolio

Safron advises that, in choosing a supplier, CIOs need to look at their portfolio of automation requirements and map out the different types of solutions. Automation is not one size fits all, so it is important to consider factors such as the volume of automation, the amount of human involvement required and the uniqueness of each instance.

“Think about automation from a portfolio perspective,” says Safron.

Identify repetitive tasks carried out by humans that can be replicated by a computer to reduce human error and speed up the process. Look at where repeatable tasks end and where decision management begins.

“Look at how those intersect and what combinations you need to accomplish your business goals,” says Safron.

Finally, it is vital to weigh the opportunities of RPA-plus when evaluating a provider, to ensure your organisation can assemble the right combination of solutions to solve future challenges and improve customer and employee experiences. The ability to deliver a broader portfolio will help create a clear roadmap towards full automation and a digital-first model.

IBM Automation Platform for Digital Business offers the most comprehensive automation capabilities in the market, to automate more types of work at scale.

Learn how IBM can help companies automate all types of work at scale to drive growth. To find out more, click here.

December 27, 2017  3:23 PM

Using machine learning to bolster unified endpoint management

Michael Tidmarsh
Endpoint management, MDM

Machine learning is critical to your future. According to IDC, 40% of all digital transformation initiatives will be supported by cognitive/AI models by 2019. However, if you’re looking to modernise your approach to endpoint management, you don’t have to wait until next year. You can leverage machine learning now. Here’s why and how.

Why unified endpoint management?

The first step in leveraging the benefits of machine learning is to understand where there are gaps in your overall approach to endpoint management. If yours is like most companies, you support a wide range of devices, running various platforms and different operating systems within those platforms.

Centralising management of that environment has been like herding cats. Traditional solutions—typically mobile device management (MDM)—have not been up to the task. They are cumbersome to use and don’t provide centralised visibility and control for all your endpoints. This means your organisation is exposed to heightened risks of security breaches, compliance violations and cost overruns.

There is a new class of unified endpoint management solutions that offer much greater control and protection. These new solutions are cloud native and offer a way to centralise management of all devices under a single pane of glass. IT can much more efficiently manage diverse devices, ensure timely updates and consistently apply and enforce policies. These solutions are also more agile and less expensive than traditional MDM because they offer cloud scalability, simplicity and economics.

Why machine learning?

The most effective, advanced and intelligent versions of this new class of cloud-native unified endpoint management solutions also leverage machine learning as an integrated part of their core functionality. Machine learning delivers contextual analytics and actionable insights that allow you to strengthen security, identify new opportunities and reduce costs through smarter and more efficient lifecycle management of your devices.

Here’s an example of how it can work: the machine learning engine reads hundreds of thousands of sources of information and discovers that 5% of Android devices are running without the latest security patch. An alert goes out to Android users. But what about users of iOS devices? They may have similar vulnerabilities. The system understands that it should run a similar scan over iOS devices to determine which ones are missing the latest security patch. There is no need for manual intervention: the system sends out the appropriate alert with a recommendation on how to remediate the problem.
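
A much-simplified sketch of that cross-platform patch check follows. The device inventory and patch levels are invented for illustration; a real UEM engine draws on far richer signals than a static dictionary.

```python
# Simplified sketch of a cross-platform patch audit: flag devices on
# each platform that are behind the latest known patch level.
# All identifiers and patch levels here are invented for the example.

LATEST_PATCH = {"android": "2017-12", "ios": "11.2"}

devices = [
    {"id": "d1", "platform": "android", "patch": "2017-12"},
    {"id": "d2", "platform": "android", "patch": "2017-10"},
    {"id": "d3", "platform": "ios", "patch": "11.1"},
]

def unpatched(devices):
    """Return device ids, grouped by platform, that lack the latest patch."""
    out = {}
    for d in devices:
        if d["patch"] != LATEST_PATCH[d["platform"]]:
            out.setdefault(d["platform"], []).append(d["id"])
    return out

print(unpatched(devices))  # {'android': ['d2'], 'ios': ['d3']}
```

The point of the machine-learning layer in the text above is that it decides to run the iOS scan itself, rather than waiting for an admin to write this check by hand.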

Machine learning is also valuable in resource management and lifecycle management. With the right solution you can leverage advanced analytics to help the organization make more informed endpoint decisions, including those around spending. Machine learning capabilities enable you to:

  • Discover best practices for user productivity.
  • Receive automatic recommendations to improve IT optimization and address potential security threats.
  • Define which insights are most important for your organization.
  • Assess the impact of best practices or security risks to your devices, users and applications.
  • Act on the intelligence to identify business opportunities or remediate security threats.

Taking the next step

It is important to regard machine learning as just one function—albeit a very exciting and innovative function—within a holistic and unified approach to endpoint management. Machine learning provides extreme value when it is leveraged as part of a broader solution that lets you manage all of your endpoints from a single pane of glass, regardless of form factor, platform or operating system version. Machine learning enhances the capabilities of the solution with actionable insights and contextual analytics. Your organization can be more secure and more strategic.

When evaluating potential unified endpoint management solutions that incorporate machine learning, any search will lead you first to IBM MaaS360 with Watson. That’s because this is the first solution that truly uses machine learning as part of a cloud-native approach to unified endpoint management. You get the benefits of machine learning, plus all of the additional benefits of a cloud-native model—enhanced agility, simplified deployments and cost savings through cloud economics.

To learn more about how machine learning can benefit your organisation, visit IBM here for a 30-day free trial of IBM MaaS360 with Watson.

December 27, 2017  3:22 PM

Top 6 things to look for in a unified endpoint management solution

Michael Tidmarsh
Endpoint management, MDM

Unifying the management of your various endpoint devices is essential in 2018. Mobility is more critical than ever to successful business outcomes, while risks are increasing, regulations are more stringent and the impact of security breaches is more harmful. It’s a perfect storm.

The question is not whether to unify endpoint management, but how to give your IT teams and users the greatest flexibility and the most effective security safeguards. That’s where we come in. Here are the top six things to look for in a unified endpoint management solution in 2018.

No. 1: Ease of use

If you’re using legacy mobile device management (MDM) solutions, or if you have no solution at all, you well understand the complexity involved in managing your endpoints. You’re also probably keenly aware of the lack of features and user-friendliness in existing solutions. Your admins are probably weary from all the manual processes involved in updating systems and enforcing consistent policies. It doesn’t have to be that way. In 2018, you can deploy a simple-to-use, user-friendly cloud-native unified endpoint management solution. This will simplify all aspects of mobile endpoint management by giving you a centralized, single-pane-of-glass console to view, manage and secure all of your devices.

No. 2: Support for all platforms

Users demand extraordinary levels of convenience, flexibility and reliability. IT consumerization leaves you no room for error and you are at a competitive disadvantage if you don’t empower your workforce with BYOD. That means you are supporting a wide range of endpoints, including laptops, tablets, smartphones, wearables and Internet of Things (IoT) devices. Plus, myriad platforms and versions of those platforms: iOS, MacOS, Android and all the various flavors of Windows. You may still have users fondly attached to their BlackBerry devices. You must support and secure all these devices. With the right unified endpoint management solution, you get a consolidated view of device security status and user activity across all devices, regardless of form factor, platform or operating system version.

No. 3: Machine learning

Machine learning is a new capability in unified endpoint management and most solutions do not offer it, which puts them at a distinct disadvantage to those that do. With machine learning you can use advanced analytics and actionable insight to identify risks before they cause damage, while also achieving efficiencies that are simply not possible with legacy solutions. For example, if Android users receive a security alert that they are running without the latest patches, the system can automatically trigger a similar alert to users of other operating systems.

No. 4: Expense management

One of the biggest challenges with giving users a high degree of flexibility is that they can abuse their privileges—if you don’t have the proper controls in place. Soon you can find yourself paying overage charges because users are streaming movies on company-managed devices. With a unified management platform that uses machine learning, you can identify where and when users are breaking policy and take remedial action. You can also use machine learning to be smarter about device lifecycle management, thereby saving the organization money and reducing risk.
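
The overage detection described here can be sketched very simply; the data allowance and per-user usage figures below are made up for illustration, and a real platform would pull usage from carrier or device telemetry.

```python
# Hypothetical sketch of mobile-data overage detection: compare each
# managed user's usage against a plan allowance and rank the offenders.
# The 5 GB allowance and the usage numbers are purely illustrative.

ALLOWANCE_GB = 5.0

usage_gb = {"alice": 2.1, "bob": 9.7, "carol": 5.4}

def over_allowance(usage, allowance=ALLOWANCE_GB):
    """Return users over the allowance, mapped to GB of overage, worst first."""
    over = {u: round(g - allowance, 1)
            for u, g in usage.items() if g > allowance}
    return dict(sorted(over.items(), key=lambda kv: -kv[1]))

print(over_allowance(usage_gb))  # {'bob': 4.7, 'carol': 0.4}
```

Flagging "bob" before the bill arrives is exactly the remedial action the paragraph above describes.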

No. 5: Native malware detection

Hackers are increasingly targeting mobile endpoints for malware. Twenty percent of security professionals say their organizations have experienced a mobile breach, and 94% expect the frequency of mobile attacks to increase, according to a survey by Dimensional Research. Ransomware is a particularly insidious threat: Ransomware payments reached a billion dollars in 2016, up from just $24 million paid in 2015, according to statistics from the U.S. Federal Bureau of Investigation. A unified endpoint management platform can significantly reduce risk if it incorporates native malware detection and defence with automated remediation capabilities.

No. 6: Cloud-native

It’s 2018, which means it’s time to leverage cloud services wherever possible. The benefits of using a cloud-native unified endpoint management solution are significant:

  • Faster deployments.
  • Lower costs for both Capex and ongoing operations.
  • Cloud economics for more accurate budgeting.
  • Easier to scale, much more agile to use, less stress on IT operations teams and admins.
  • Assurance that you always have the latest software versions.
  • Automatically updated code as platform vendors release expanded APIs.


If your organization is using an older MDM solution, or if you are still manually managing endpoints, you are bearing unnecessary costs and taking unnecessary risks. Granted, endpoint management has not been easy. But it is now.

With a cloud-native solution, specifically MaaS360 with Watson from IBM, you can simplify endpoint management, reduce risk and gain control over all your myriad mobile devices and platforms. You also take a giant leap into the future by using the machine learning capabilities of IBM Watson to identify opportunities and efficiencies through contextual analytics and actionable insights.

Don’t wait until your organization has been breached. Contact IBM here and take advantage of a free 30-day trial.

December 27, 2017  3:21 PM

Forget MDM: 2018 is the time for unified endpoint management

Michael Tidmarsh
Endpoint management, MDM

It’s time to take a smarter approach to endpoint management. The unfortunate reality is that most organisations have not made endpoint management a top priority. On one level, this is understandable. The proliferation of devices and platforms has made unified management difficult using traditional mobile device management (MDM) solutions.

Legacy MDM solutions are often not kept current because they are not simple to use and scale. In addition, they typically can’t integrate with one another and don’t make it easy for IT administrators to apply and enforce management policies.

In 2018, this posture is too risky. And it’s no longer necessary: Today there are cost-efficient cloud-native solutions that use machine learning to bring advanced unified management capabilities to all endpoints—laptops, desktops, smart phones, tablets, wearables and Internet of Things (IoT) devices.

In this blog post we discuss three key reasons that make unified endpoint management essential in 2018. In a companion post we discuss the top 6 things to look for in a unified endpoint management solution. And in a third post we discuss why machine learning must be a critical aspect of your unified endpoint management solution.

First let’s look at the three critical risk factors that make unified endpoint management the only way to go in 2018:

Risk Factor #1: The Changing Regulatory Climate, Particularly GDPR

With the General Data Protection Regulation (GDPR) deadline coming in May 2018, it is essential that all businesses—large, midsize and small—prepare now. GDPR requires continuous compliance and levies huge penalties for breaches of personally identifiable information. It also requires immediate notification to the proper authorities. Unified endpoint management can address your compliance challenges in a robust and systematic way. With the right solution, you can bring management of all devices, including BYOD, smartphones and IoT, into a single pane of glass for visibility and control. You can centralise patch management, policy consistency and enforcement to enable continuous compliance and ensure early warnings of potential problems.

Risk Factor #2: A More Dangerous and Sophisticated Threat Landscape

More than 60% of security executives say mobility comes with a greater number of security risks and concerns than expected, according to research from Information Security Media Group. And threats continue to grow as hackers increasingly seek to exploit mobile vulnerabilities. Research from AT&T reveals that:

  • Employee mobile devices were the primary source of breaches (51%) due to the exploitation of known vulnerabilities over the past year.
  • More than a third of respondents said IoT devices were the primary source of a data breach experienced over the past year; 68% expect IoT threats to increase in 2018.
  • Ransomware has eclipsed almost all other cyber threats as a top concern, cited by nearly 50% of respondents.

Endpoint protection gaps are caused by a multitude of issues, including lack of visibility and the inability to support and patch all of your devices consistently and promptly. Remember, it only takes one vulnerable endpoint for an intruder to penetrate your system. You will have a much better chance of succeeding if you modernise your approach with a cloud-native unified endpoint management system that enables you to consolidate management of all types of devices, regardless of form factor, platform or operating system version.

Risk Factor #3: The Growth of Rogue User Behaviour, Including Shadow IT

When it comes to user behaviour, what you don’t know CAN hurt you. Rogue user behaviour can rear its ugly head in a number of ways: software-as-a-service applications such as Box and Dropbox; shadow IT initiatives; failure to follow BYOD guidelines; and the use of personal applications, such as Gmail or Netflix. Some of these activities can be dangerous, impacting compliance and security. Others can be expensive, with the company incurring overage charges when users stream movies. The right unified endpoint management solution not only gives you single-pane-of-glass visibility; it also gives you a single management console to consolidate endpoint management tasks across all devices, with machine learning for contextual analytics and actionable insights.

Taking the Next Step

Mobility is an important differentiator in business today. Organizations that safely and securely empower mobile users achieve competitive advantage and make their workplaces more attractive in recruiting and retaining employees. But increasing mobility can also mean increasing risk in compliance, security and the impact of rogue user behaviour.

A cloud-native unified endpoint management solution is the best way to mitigate your risks and empower your teams in 2018. Of all the solutions on the market, only IBM MaaS360 with Watson can identify risks and opportunities through the machine-learning capabilities of Watson. To learn more about how IBM MaaS360 with Watson can help you modernise your approach to endpoint management, visit IBM here.

December 19, 2017  2:19 PM

Top 6 factors in choosing the best CMS for 2018

Michael Tidmarsh
CMS, Content Management

It’s 2018, which means it’s time to modernise your approach to content management. We’re in the era of machine learning, cloud computing and digital transformation. If your content management system (CMS) doesn’t have cloud agility and isn’t powered with machine-learning capabilities, then you are officially in jeopardy of falling behind your competition.

Fear not. The path to a faster, smarter and simpler-to-use CMS is clear. And, if you choose wisely, you’ll end up with a solution that makes life easier for both marketers and developers by leveraging artificial intelligence to improve agility, business smarts and speed to market. Get ready. Here are the six factors that will help you choose the best CMS for 2018.

Factor No. 1: Headless Design

The rush to headless CMS models has been a boon to developers and a bane to marketers. In 2018, you don’t have to settle: You can utilize a solution that delivers great tools for marketers and developers. By leveraging a cloud-native headless solution, developers can get the rich API environment they need, while marketers get easy access to their most critical tools, such as email marketing systems and personalization engines.

Factor No. 2: Machine Learning

IDC predicts 40% of all digital transformation initiatives will be supported by cognitive/AI models by 2019. Why wait till next year to build for the future when modern, simple-to-use CMSs are already available? With a CMS powered with machine learning capabilities, you can quickly and easily find the right content for the right platform, whether for the Web, mobile apps, kiosks, cars or any other potential use case.

Factor No. 3: Lower Costs Via Cloud Economics

A cloud-native CMS delivers huge benefits in speed, costs, agility and functionality versus an on-premises solution. With a cloud-native hub, marketers can consolidate and access data from locations all around the world, scaling easily as requirements change. More than that, the organisation lowers costs by using a pricing model that supports multiple users under one price. It’s 2018 and per-user pricing is a relic of the past. And an expensive one, at that.

Factor No. 4: Omni-Channel Delivery to Support Developers

To accelerate creation and delivery of new content, services and features, front-end developers require a rich array of APIs, with tools that enable sites to be developed either headlessly or traditionally. The CMS should enable developers to integrate content on all channels without duplicating it, so it is always up to date. In addition to demanding a wide array of APIs, look for a solution that offers a command-line client so developers can work without the CMS getting in the way.
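
The headless idea, storing content once as structured data and letting each channel render it, can be illustrated with a toy sketch. The item shape and the two renderers below are invented for the example and are not any particular CMS's API.

```python
# Toy illustration of headless delivery: one structured content item,
# rendered differently per channel, with nothing duplicated at source.
# The item fields and renderers are hypothetical.

item = {"title": "Winter Sale", "body": "Up to 50% off selected lines."}

def render_web(item):
    """Web channel: full HTML fragment."""
    return f"<h1>{item['title']}</h1><p>{item['body']}</p>"

def render_push(item, limit=40):
    """Mobile-push channel: plain text, truncated to the channel limit."""
    text = f"{item['title']}: {item['body']}"
    return text if len(text) <= limit else text[:limit - 1] + "…"

print(render_web(item))
print(render_push(item))
```

Because both renderers read the same `item`, updating the source content updates every channel at once.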

Factor No. 5: Simple to Use and Elegant in Design

The CMS should be both simple to use and built for teams, empowering marketers and developers to be faster, more agile and more collaborative. Marketers are far more effective when they can easily organize and orchestrate their content to execute campaigns. Developers can do a better job when they can easily provide marketers with the tools they need to quickly and easily leverage content across all channels.

Factor No. 6: A Coordinated Content Ecosystem

With the right solution, the CMS can be a content hub for all kinds of content from a wide range of sources – Web, marketing emails, social networking posts, the mobile push platform, merchandising campaigns and more. One of the biggest challenges for marketers is organizing the massive amounts of constantly changing content at their disposal so that they can use it most effectively. With a modern CMS that leverages cognitive content management, what had been a challenge can be transformed into a competitive advantage.

Taking the Next Step

One of the best ways to evaluate the best CMS for your organization is to try it for yourself. Potential customers of IBM Watson Content Hub can take advantage of a free trial offer to determine how they can leverage the benefits of a headless, cloud native CMS that is powered with the machine learning capabilities of IBM Watson artificial intelligence. For information on how you can take advantage of a free trial to modernize your approach to content management in 2018, please visit IBM Watson Content Hub.

October 10, 2017  2:37 PM

Are you GDPR ready?

Michael Tidmarsh

Organisations are becoming aware that the General Data Protection Regulation (GDPR) may require a transformational shift in how to manage the personal information of EU data subjects – but they may not know the best approach to take.

The immediate challenge is less about the new data protection regulations themselves, which become enforceable in May 2018, and more about knowing what is required to avoid the potential financial and reputational damage of a breach.

A potential fine of up to €20m or 4% of global annual turnover (whichever is greater) for a GDPR breach is galvanising action, but many are struggling with aspects of the journey towards GDPR readiness, and we believe having a roadmap and the right partner can assist with safe arrival.

The GDPR journey

Richard Hogg, global GDPR & governance offerings evangelist at IBM Cloud, points out that companies are at different stages of the journey, but may be struggling with the full ramifications of the new rules and the next steps.

“GDPR is all around the personal data of data subjects, which includes any employees, and any external customers and clients you have. Data privacy regulations across the 28 EU states have raised the bar for obligations surrounding personal data. You must know what data you have, where it is stored, how it is processed, secured and protected,” says Hogg.

EU data subjects have new rights. For example, a customer who no longer wants a business relationship has the “right to erasure”, which means the deletion of any personal data held on them. They can also submit a subject access request to discover what personal data an organisation holds on them, which must be answered within a month, free of charge. The right to portability means data can be easily moved to a new supplier; and GDPR also prescribes strict data quality standards.
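
Two of these rights can be sketched against a toy in-memory record store. This is illustrative only (and not legal advice): a real implementation must cover every system of record in the organisation, not a single dictionary.

```python
# Toy sketch of servicing GDPR rights: a subject access request returns
# everything held on a person; a right-to-erasure request deletes it.
# The record store and its fields are invented for the example.

records = {
    "anna@example.com": {"name": "Anna", "orders": 3},
    "ben@example.com": {"name": "Ben", "orders": 1},
}

def subject_access_request(email):
    """Return a copy of all personal data held for this subject."""
    return dict(records.get(email, {}))

def erase(email):
    """Right to erasure: delete all personal data for this subject."""
    return records.pop(email, None) is not None

print(subject_access_request("anna@example.com"))  # {'name': 'Anna', 'orders': 3}
erase("anna@example.com")
print("anna@example.com" in records)  # False
```

The hard part in practice is not the deletion itself but knowing every place the data lives, which is where the discovery work discussed later comes in.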

Organisations must have explicit consent from data subjects for their data processing purposes. Data transparency is fundamental and data security and privacy is vital.

If there is a personal data breach, a company must report it within 72 hours to the relevant supervisory authority and, in some cases, to the affected data subjects.
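
The 72-hour window itself is simple to encode. A hypothetical sketch of the deadline calculation (function and variable names are illustrative, not from any IBM tool):

```python
from datetime import datetime, timedelta

# GDPR requires notifying the supervisory authority within 72 hours
# of becoming aware of a personal data breach.
BREACH_WINDOW = timedelta(hours=72)

def report_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the regulator must be notified."""
    return detected_at + BREACH_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True if the notification window has already closed."""
    return now > report_deadline(detected_at)

detected = datetime(2018, 5, 25, 9, 0)
print(report_deadline(detected))  # 2018-05-28 09:00:00
```

The hard part in practice is not the arithmetic but detection: as noted later in this post, discovering a breach commonly takes far longer than 72 hours.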

Role of the data processor

Many organisations will turn to a cloud service provider to help with these GDPR challenges, but using a third party – what the regulation calls a data processor – is not a way to abdicate responsibility for ensuring compliance, because that obligation stays with the company as the data controller. Nonetheless, using a cloud service provider can undoubtedly ease readiness and reduce the workload.

“One of the biggest challenges facing organisations is that they don’t know what personal data they have in multiple systems, with no clear view of where it comes from and where it goes during processing,” says Hogg.

IBM Cloud can help with these challenges by conducting a privacy impact assessment (PIA) to provide a Record of Processing Activities, which is an obligation under Article 30 of GDPR.

“A PIA can help with determining data lineage, and categories of data and how it is processed and the requirement of consent. It surveys across the enterprise and identifies potential gaps against GDPR policies,” says Hogg.

Following analysis with a PIA, which can take just weeks, IBM Cloud can help you migrate your data to its platform, simplifying your GDPR readiness journey by categorising and protecting data.
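
The Record of Processing Activities that a PIA produces is, at heart, structured data. A hypothetical sketch of one entry, with field names drawn loosely from the headings in Article 30(1) (the example values are invented):

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch of one entry in an Article 30 Record of
# Processing Activities, the kind of output a privacy impact
# assessment might produce.

@dataclass
class ProcessingRecord:
    purpose: str                  # why the data is processed
    data_categories: List[str]    # e.g. contact details, payment data
    subject_categories: List[str] # e.g. customers, employees
    recipients: List[str]         # parties the data is disclosed to
    retention: str                # envisaged time limit for erasure
    security_measures: List[str]  # technical/organisational safeguards

record = ProcessingRecord(
    purpose="loan application scoring",
    data_categories=["contact details", "financial history"],
    subject_categories=["customers"],
    recipients=["credit reference agency"],
    retention="6 years after account closure",
    security_measures=["encryption at rest", "role-based access"],
)
print(record.purpose)  # loan application scoring
```

Keeping such records in a structured, queryable form is what makes later steps, such as answering subject access requests or identifying obsolete data, tractable.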

Accelerating compliance

High-level data mapping, which identifies data at risk of breaching GDPR, can be accelerated using IBM tools. They work from the bottom up, searching for pre-defined data types and cataloguing what data is where.

“You can identify and delete obsolete data and determine whether you need explicit consent for data processing from a data subject,” says Hogg.
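
A bottom-up scan of this kind can be illustrated with a toy example. The patterns, function and file name below are purely illustrative (they are not IBM's tooling, and real personal-data detectors are far more sophisticated):

```python
import re

# Toy bottom-up scanner: search text for pre-defined personal-data
# patterns and catalogue what was found, and where.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{4}[ ]?\d{6}\b"),
}

def scan(source_name, text):
    """Return a catalogue of {data type: matches} for one source."""
    found = {}
    for label, pattern in PATTERNS.items():
        hits = pattern.findall(text)
        if hits:
            found[label] = hits
    return {source_name: found}

print(scan("crm_export.csv", "Contact jo@example.com on 01632 960983"))
```

Run across every system, the catalogues such a scan produces are the raw material for the data map: they show which stores hold personal data, and of which categories, before any consent or retention decisions are made.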

IBM Cloud can help organisations identify where their data resides – in the cloud or on premises – and help to discover and map it for GDPR readiness.

“Organisations could choose to move more of their data to the cloud going forward. IBM Cloud can help with infrastructure deployment around security and privacy of data processing. They can benefit from its strong capabilities for data privacy, security and protection,” says Hogg.

If unauthorised access occurs, IBM Cloud offers “incident management as a service” to help clients discover the source of the problem.

“GDPR gives organisations only 72 hours in which to report a breach. Discovery commonly takes a company 150 days. IBM Cloud can help a client meet that deadline. It has monitoring ability in spades,” says Hogg.

To get started, speak with a cloud seller about a GDPR readiness assessment.


Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.  The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability.

October 10, 2017  2:34 PM

Simplify GDPR readiness with IBM Cloud’s platform

Profile: Michael Tidmarsh

Trust is a key factor for clients and service providers working together to prepare for the EU General Data Protection Regulation (GDPR). The regulation comes into force in May 2018 and requires that personal data is processed according to strict privacy and security requirements.

Fines of up to €20m or 4% of global revenue can be levied for non-compliance with how the personal data of EU data subjects is processed, stored and accessed – which could be enough to put some companies out of business.

However, choosing a third party to handle data processing to simplify your organisation’s journey to GDPR readiness does not mean you hand over responsibility for that data if any breach of the regulations occurs.

The data processor

GDPR includes the concepts of a data controller and a data processor:

“Data controller” means a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed.

“Data processor”, in relation to personal data, means any person (other than an employee of the data controller) who processes the data on behalf of the data controller.

“Processing”, in relation to information or data, means obtaining, recording or holding the information or data or carrying out any operation or set of operations on the information or data, including:

  • organisation, adaptation or alteration of the information or data;
  • retrieval, consultation or use of the information or data;
  • disclosure of the information or data by transmission, dissemination or otherwise making available; or
  • alignment, combination, blocking, erasure or destruction of the information or data.

As such, the choice of data processor is critical, and IT directors and data protection officers should consider the benefits around working with an experienced cloud service provider.

Crucially, a provider should be able to supply the evidence to show it adheres to specific security and privacy standards. One way for a cloud service provider to do this, under GDPR, is to adhere to a Code of Conduct, which is designed to do precisely that.

EU Cloud Code of Conduct

IBM Cloud is one of the first organisations to declare 24 IBM Cloud infrastructure and IBM Cloud Platform services to the EU Cloud Code of Conduct (“Code”). Development of the Code began in 2013, and it is the only Code specific to GDPR developed in collaboration with EU authorities and the cloud computing community.

The Code provides assurance to organisations that data processors signed up to it are focusing on data privacy, security and information governance, to ensure GDPR’s strict requirements are met.

Furthermore, it is the only Code that is independently governed, by the monitoring body SCOPE Europe. It is also the only Code that covers the full spectrum of cloud services, from software as a service (SaaS) and platform as a service (PaaS) through to infrastructure as a service (IaaS).

IBM Cloud has already signed up 24 of its IaaS and PaaS services to the Code since March 2017 and can help its clients towards GDPR readiness.

“The Code comes from existing security standards – ISO 27001 and ISO 27018 – and will map to emerging data privacy standards such as ISO 27552, and it requires evidence that companies adhere to those standards,” says Jonathan Sage, government and regulatory affairs executive at IBM. He adds: “Self-declaration of compliance has no impact. Ticking a box saying you’ve done all that is required is not enough. Behind the Code there are supervisory controls that will document and manifest whether cloud service providers really do comply.”

A quality standard

IBM has 16 datacentres in Europe, including a new datacentre in Frankfurt offering the Bluemix platform, giving customers choice about data residency and whether this needs to be within the EU. Clients can be reassured that IBM Cloud infrastructure has signed up to a Code that is transparent and that its services can provide a quality standard that is GDPR-specific.

“Transparency is very important to the Code. It means that clients can check that third party audits or other mechanisms to comply are in place, rather like a one-stop shop. It can save them a lot of work as it can offer assurances to customers and to the data protection authorities on GDPR readiness,” says Sage.

Organisations working towards compliance with GDPR and concerned about meeting the May 2018 deadline can be reassured that by working with IBM Cloud they can be well-positioned in their readiness journey.

A tool to reach compliance

IBM Cloud, as a signatory to the EU Cloud Code of Conduct, demonstrates its commitment to helping assure that the personal information of EU data subjects is kept private.

“No company can claim they are compliant with GDPR as it is not in existence until May 2018. The Code is a tool to reach compliance and a great way of driving compliance for cloud service providers, and their clients,” says Sage.

By engaging with the Code early, IBM Cloud can demonstrate its internal change programme towards GDPR readiness.

“The fact it is demonstrable and externally transparent is proof to the market. IBM Cloud had an important role in developing the Code and there is a real buzz in the community with co-developers. It is a feather in our cap and shows we have taken leadership and offer transparency around GDPR readiness,” says Sage.

To learn more about how IBM Cloud’s platform can help simplify GDPR readiness, speak with a cloud seller.







September 28, 2017  4:10 PM

Think Beyond x86 Clustering For an Enterprise-Class Linux Solution

Profile: Michael Tidmarsh

Linux has made impressive inroads into enterprise-class computing environments because of its performance, security, scalability and global open source support network. As a result, more and more enterprise-class workloads—such as analytics, databases and transaction processing—are moving to Linux.

Now, dramatic increases in data volume and velocity are being driven by such trends as big data, pervasive enterprise mobility and the Internet of Things. This surge in data for lightning-fast transactions and transformative business insights has put big pressure on data center infrastructure to handle the load. The question is: What’s the best approach to meet those needs?

Clustering of x86 servers is one option that inevitably comes up. After all, x86 servers come with relatively low price tags, are easy to install and support a wide range of applications. For specific environments, like fast-growing small businesses or departmental applications, clustered x86 servers may be a good way to go.

But for enterprise-class, mission-critical workloads that demand high-end processing power, throughput, infinitely scalable storage, fast network connectivity and rock-solid security, x86 clusters are likely to come up short. There are several reasons why:

  • Economics: Despite a typically low initial capital expense, x86 clusters are likely to cost more money as those clusters expand and proliferate to meet increased need for storage, processing power and connectivity. Adding more “oomph” to handle enterprise workloads means adding more servers, as well as more storage in the form of NAS appliances and expensive storage-area networks. Software licenses and support fees also expand in lockstep with increased cluster size.
  • Workload fit: Enterprise workloads are built around applications that require a lot of memory, which often means expanding the x86 cluster farms. Also, not all enterprise applications lend themselves to cluster-based architectures, something IT executives and their business users can’t afford to find out after the fact.
  • Management and monitoring: More boxes mean more management complexity, especially if those clusters are built on different server brands and operating system versions. That makes automation more challenging, which in turn puts a bigger burden on IT staffs to monitor system operations and application performance.
  • Security: For enterprise workloads, organizations want to limit the potential points of entry for hackers and other threat vectors. More servers and more clusters increase the threat vulnerabilities, not reduce them.

Fortunately, there’s a better option: A mainframe-class Linux server with vastly improved economics. IBM’s LinuxONE server is optimized for enterprise-class Linux workloads that demand performance, scalability, security and manageability, as well as an attractive total cost of ownership model.

One major area where IBM has focused LinuxONE is disaster recovery—a critical requirement for enterprise workloads that cannot afford downtime, or even short interruptions in application availability. “Because LinuxONE systems utilize a shared-everything data architecture, there is no need for multiple copies of files or databases,” according to an IBM-sponsored report issued by IT consultants Robert Frances Group. “This not only eliminates out-of-sync conditions, but also simplifies the set-up and execution of the recovery point objectives and recovery time objectives.”[1]

Three key principles underscore the IBM LinuxONE philosophy:

  • Open source delivered on the best environment for your organization—on-premises or in the cloud, be it public, private or hybrid.
  • Limitless flexibility and scalability for expanding workloads, combining the best of open source and enterprise computing.
  • Risk reduction, utilizing proven hardware and software platforms from a trusted source and heightened security features including blockchain technology.

As the performance, security and cost efficiency of Linux-based solutions become increasingly apparent, more enterprise workloads are migrating to Linux. Smart IT and business decision-makers will continue gravitating toward purpose-built Linux solutions, rather than built-on-the-fly x86 clusters, to handle enterprise workloads in modernized data center environments.

[1] “10 reasons LinuxONE is the best choice for Linux workloads,” Robert Frances Group, 2015

September 28, 2017  4:08 PM

Successful Blockchain Adoption Requires Smart Infrastructure Decisions

Profile: Michael Tidmarsh

Blockchain technology, initially associated exclusively with exotic new cryptocurrencies like Bitcoin, is aggressively expanding its market footprint as organizations look to leverage its core capabilities in security and cost efficiency.

Analysts project blockchain adoption will increase more than tenfold between 2016 and 2021, eclipsing $2.3 billion in global revenues.[1] And data presented at the World Economic Forum indicates that 10% of global gross domestic product will be stored on blockchain technology by 2027.[2]

Although blockchain—a distributed ledger and decentralized database for secure transactional data—is most widely utilized in financial services because of its Bitcoin relationship, it also is widely used for such use cases as counterfeit prevention, trading market clearings, claims fraud, Internet of Things and public records management. Those and other uses make it very attractive in such industries as retailing, public sector, insurance and healthcare.

It’s easy to lapse into deeply technical discussions about blockchain’s underpinnings—algorithms, permissioning and hashing. But blockchain’s main benefits are widely agreed upon: It is a highly secure, scalable and cost-efficient way to maintain and validate rights and privileges for all kinds of digital transactions.

“Blockchain has no single point of failure,” as one analysis put it. “If one node goes down, it’s not a problem because all of the other nodes have a copy of the ledger.”[3] With that strength, it’s easy to see the attraction of blockchain in an era when security threats are increasing—dramatically—in their frequency, geographic scale and magnitude of data compromised.
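
Why replicated ledgers make tampering so detectable can be shown with a minimal hash-chained sketch (illustrative only; real platforms such as Hyperledger Fabric add consensus, permissioning and much more on top of this idea):

```python
import hashlib

# Minimal hash-chained ledger: each block commits to the hash of its
# predecessor, so altering any record breaks verification on every
# node that holds a copy of the chain.

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(records):
    """Build a list of (payload, hash) blocks linked by hashes."""
    chain, prev = [], "genesis"
    for payload in records:
        prev = block_hash(prev, payload)
        chain.append((payload, prev))
    return chain

def verify(chain) -> bool:
    """Recompute every link; any edited payload fails the check."""
    prev = "genesis"
    for payload, h in chain:
        if block_hash(prev, payload) != h:
            return False
        prev = h
    return True

chain = build_chain(["pay A 10", "pay B 5"])
print(verify(chain))  # True
chain[0] = ("pay A 999", chain[0][1])  # tamper with the first record
print(verify(chain))  # False
```

Because every honest node can run this verification independently against its own copy, a tampered ledger on one compromised node is simply rejected by the rest of the network.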

So, adopting blockchain technology seems like a good idea, regardless of industry or geography. But with a commitment to blockchain also comes a responsibility to make smarter, longer-lasting infrastructure choices that match blockchain’s security, reliability and cost efficiency.

What should you look for in your blockchain infrastructure? According to a report from consulting firm Pund-It, commissioned by IBM, there are some essential capabilities and functionality you should require when evaluating blockchain infrastructure.[4]

  • Multi-tenant separation and isolation, to ensure that participating entities’ data and activities are walled off.
  • External attack security to protect blockchain deployments through encapsulated software in a secure, signed, trusted, appliance-style container.
  • Cryptographic key safety to prevent privileged users from creating snapshots of blockchain data.
  • Integrated protection, rather than technology bolt-ons that often leave data exposed and accessible to attacks.

As organizations increase their use of blockchain technology across multiple use cases in order to improve security and cost efficiency, the selection of the underlying platform and infrastructure is critical. IBM has made a strategic commitment to this technology with the IBM Blockchain Platform, powered by IBM LinuxONE systems.

IBM Blockchain Platform represents an integrated approach, because it allows organizations to develop, govern and operate blockchain networks. The platform aligns with the Linux Foundation’s Hyperledger Project, an open source-based collaboration for blockchain development. IBM Blockchain Platform is optimized for cloud environments—specifically IBM Cloud—because it utilizes techniques designed to close potentially devastating blockchain vulnerabilities.

The platform is underpinned by IBM LinuxONE, a robust, highly available and mission-critical hardware system offering users the security, speed and scale required in blockchain use cases. IBM LinuxONE delivers rock-solid security through unique service containers and crypto-cards, while ensuring high throughput with dedicated I/O processing and high scalability supporting up to 2 million Docker containers.

Blockchain is a big, big deal for enterprises that need the combination of security, availability, performance and cost efficiency for their most important applications and workloads. That’s why IBM has made blockchain a major component of its enterprise solutions strategy via IBM Blockchain Platform and IBM LinuxONE.




[4] “Ensuring Secure Enterprise Blockchain Networks: A Look at IBM Blockchain and LinuxONE,” Pund-It Inc., August 2017
