Innovation Through Infrastructure


October 10, 2017  2:37 PM

Are you GDPR ready?

Profile: Michael Tidmarsh

Organisations are becoming aware that the General Data Protection Regulation (GDPR) may require a transformational shift in how to manage the personal information of EU data subjects – but they may not know the best approach to take.

The current challenge is less about the prospect of breaching the new data protection regulations, which become enforceable in May 2018, and more about knowing what is required to avoid the potential financial and reputational damage of a breach.

A potential fine of up to €20m or 4% of global revenue (whichever is greater) for a GDPR breach is galvanising action, but many are struggling with aspects of the journey towards GDPR readiness, and we believe having a roadmap and the right partner can assist with safe arrival.

The GDPR journey

Richard Hogg, global GDPR & governance offerings evangelist at IBM Cloud, points out that companies are at different stages of the journey, but may be struggling with the full ramifications of the new rules and the next steps.

“GDPR is all around the personal data of data subjects, which includes any employees, and any external customers and clients you have. Data privacy regulations across the 28 EU states have raised the bar for obligations surrounding personal data. You must know what data you have, where it is stored, how it is processed, secured and protected,” says Hogg.

EU data subjects have new rights. For example, a customer who no longer wants a business relationship has the “right to erasure”, which means the deletion of any personal data held on them. They can also submit a subject access request to discover what personal data an organisation holds on them, and the organisation must comply within a month without charging a fee. The right to portability means data can be easily switched to a new supplier; and GDPR prescribes strict data quality.
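To make these rights concrete, here is a minimal sketch of how an organisation might model them internally. All class and field names are hypothetical illustrations, not any real product's API, and the one-month response window is approximated as 30 days:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SubjectAccessRequest:
    """A data subject's request to see the personal data held on them."""
    received: date

    def response_due(self) -> date:
        # GDPR allows roughly one month to respond, free of charge;
        # 30 days is used here as a simple approximation.
        return self.received + timedelta(days=30)

@dataclass
class DataSubjectRecord:
    subject_id: str
    personal_data: dict = field(default_factory=dict)

    def erase(self) -> None:
        """Right to erasure: delete all personal data held on the subject."""
        self.personal_data.clear()

    def export_portable(self) -> dict:
        """Right to portability: a machine-readable copy for a new supplier."""
        return dict(self.personal_data)

record = DataSubjectRecord("cust-42", {"email": "a@example.com"})
portable_copy = record.export_portable()  # hand this to the new supplier
record.erase()                            # then honour the erasure request
```

The point of the sketch is that each right maps to a concrete, auditable operation on the data you hold, which is only possible once you know where that data lives.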

Organisations must have explicit consent from data subjects for their data processing purposes. Data transparency is fundamental and data security and privacy is vital.

If there is a breach of GDPR, a company must report it within 72 hours to the relevant supervisory authority and possibly to the data subjects affected.

Role of the data processor

Many organisations will turn to a cloud service provider to help with these GDPR challenges, but using a third party – what the regulation calls a data processor – is not a way to abdicate responsibility for ensuring compliance, because that obligation stays with the company as the data controller. Nonetheless, using a cloud service provider can undoubtedly help ease readiness and reduce load and work.

“One of the biggest challenges facing organisations is that they don’t know what personal data they have in multiple systems, with no clear view of where it is from and where it goes to during processing,” says Hogg.

IBM Cloud can help with these challenges by conducting a privacy impact assessment (PIA) to provide a Record of Processing Activities, which is an obligation under Article 30 of GDPR.

“A PIA can help with determining data lineage, and categories of data and how it is processed and the requirement of consent. It surveys across the enterprise and identifies potential gaps against GDPR policies,” says Hogg.

Following analysis with a PIA, which can take just weeks, IBM Cloud can help you migrate your data to its platform to help simplify your GDPR readiness journey by categorising and protecting data.

Accelerating compliance

High-level data mapping, which identifies data at risk of breaching GDPR, can be accelerated using IBM tools. They work from the bottom up to search for pre-defined data types and catalogue what data is where.
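The bottom-up approach can be illustrated with a toy scanner, in the spirit of the data-mapping tools described here but in no way IBM's actual tooling. The pattern set and record identifiers are invented for the example:

```python
import re

# Pre-defined personal data types to search for (illustrative patterns only;
# production tools use far richer detectors than two regexes).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def catalogue(records: dict) -> dict:
    """Scan each record bottom-up and return {data_type: [record_ids]},
    i.e. what personal data is where."""
    found = {}
    for record_id, text in records.items():
        for data_type, pattern in PATTERNS.items():
            if pattern.search(text):
                found.setdefault(data_type, []).append(record_id)
    return found

inventory = catalogue({
    "crm-1": "Contact: jane@example.com",
    "hr-7": "Mobile: +44 7700 900123",
})
```

The resulting inventory is the raw material for the Record of Processing Activities: only once you know which systems hold which data types can you decide what to delete and where consent is needed.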

“You can identify and delete obsolete data and determine whether you need explicit consent for data processing from a data subject,” says Hogg.

IBM Cloud can help organizations identify where their data resides – in the cloud or on premises – and help to discover and map for GDPR readiness.

“Organisations could choose to move more of their data to the cloud going forward. IBM Cloud can help with infrastructure deployment around security and privacy of data processing. They can benefit from its strong capabilities for data privacy, security and protection,” says Hogg.

If unauthorised access occurs, IBM Cloud offers “incident management as a service” to help clients discover the source of the problem.

“GDPR gives organisations only 72 hours in which to report a breach. Discovery commonly takes a company 150 days. IBM Cloud can help a client meet that deadline. It has monitoring ability in spades,” says Hogg.

For a GDPR readiness assessment, visit the IBM Cloud website or speak with a cloud seller about GDPR readiness.


Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation. Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations.  The products, services, and other capabilities described herein are not suitable for all client situations and may have restricted availability.

October 10, 2017  2:34 PM

Simplify GDPR readiness with IBM Cloud’s platform

Profile: Michael Tidmarsh

Trust is a key factor for clients and service providers working together to prepare for the EU General Data Protection Regulation (GDPR). These stringent regulations come into force in May 2018 to ensure that personal data is processed according to strict privacy and security requirements.

Fines of up to €20m or 4% of global revenue can be levied for non-compliance with how the personal data of EU data subjects is processed, stored and accessed – which could be enough to put some companies out of business.

However, choosing a third party to handle data processing to simplify your organisation’s journey to GDPR readiness does not mean you hand over responsibility for that data if any breach of the regulations occurs.

The data processor

GDPR includes the concept of a data controller and a data processor:

“Data controller” means a person who (either alone or jointly or in common with other persons) determines the purposes for which and the manner in which any personal data are, or are to be, processed.

“Data processor”, in relation to personal data, means any person (other than an employee of the data controller) who processes the data on behalf of the data controller.

“Processing”, in relation to information or data, means obtaining, recording or holding the information or data or carrying out any operation or set of operations on the information or data, including:

  a) Organisation, adaptation or alteration of the information or data;
  b) Retrieval, consultation or use of the information or data;
  c) Disclosure of the information or data by transmission, dissemination or otherwise making available; or
  d) Alignment, combination, blocking, erasure or destruction of the information or data.

As such, the choice of data processor is critical, and IT directors and data protection officers should consider the benefits around working with an experienced cloud service provider.

Crucially, a provider should be able to supply the evidence to show it adheres to specific security and privacy standards. One way for a cloud service provider to do this, under GDPR, is to adhere to a Code of Conduct, which is designed to do precisely that.

EU Cloud Code of Conduct

IBM Cloud is one of the first organisations to declare 24 IBM Cloud infrastructure and IBM Cloud Platform services to the EU Cloud Code of Conduct (“Code”). Development on the Code began in 2013, and it is the only Code developed in collaboration with EU authorities and the cloud computing community specific to GDPR.

The Code provides assurance to organisations that data processors signed up to the Code are focusing on data privacy, security and information governance to assure GDPR’s strict requirements are adhered to.

Furthermore, it is the only Code that is independently governed, by the monitoring body SCOPE Europe. It is also the only Code that covers the full spectrum of cloud services, from software as a service (SaaS) and platform as a service (PaaS) through to infrastructure as a service (IaaS).

IBM Cloud has already signed up 24 of its IaaS and PaaS services to the Code since March 2017 and can help its clients towards GDPR readiness.

“The Code comes from existing security standards – ISO 27001, ISO 27018 – and will map to emerging data privacy standards such as ISO 27552, and it requires evidence that companies adhere to those standards,” says Jonathan Sage, government and regulatory affairs executive at IBM. He goes on to clarify: “Self-declaration of compliance has no impact. Ticking a box saying you’ve done all that is required is not enough. Behind the Code there are supervisory controls that will document and manifest whether cloud service providers really do comply.”

A quality standard

IBM has 16 datacentres in Europe, which gives customers choice about data residency and whether this needs to be within the EU, including a new datacentre built in Frankfurt offering the Bluemix platform. Clients can be reassured that IBM Cloud infrastructure has signed up to a Code that is transparent and that its services can provide a quality standard that is GDPR-specific.

“Transparency is very important to the Code. It means that clients can check that third party audits or other mechanisms to comply are in place, rather like a one-stop shop. It can save them a lot of work, as it can offer assurances to customers and to the data protection authorities on GDPR readiness,” says Sage.

Organisations working towards compliance with GDPR and concerned about meeting the May 2018 deadline can be reassured that by working with IBM Cloud they can be well-positioned in their readiness journey.

A tool to reach compliance

IBM Cloud, as a signatory to the EU Cloud Code of Conduct, demonstrates its commitment to helping assure that the personal information of EU data subjects is kept private.

“No company can claim they are compliant with GDPR as it is not in existence until May 2018. The Code is a tool to reach compliance and a great way of driving compliance for cloud service providers, and their clients,” says Sage.

By engaging with the Code early, IBM Cloud can demonstrate its internal change programme towards GDPR readiness.

“The fact it is demonstrable and externally transparent is proof to the market. IBM Cloud had an important role in developing the Code and there is a real buzz in the community with co-developers. It is a feather in our cap and shows we have taken leadership and offer transparency around GDPR readiness,” says Sage.

Clients wanting to know more about how IBM Cloud’s platform can help simplify GDPR readiness can visit the IBM Cloud website.







September 28, 2017  4:10 PM

Think Beyond x86 Clustering For an Enterprise-Class Linux Solution

Profile: Michael Tidmarsh

Linux has made impressive inroads into enterprise-class computing environments because of its performance, security, scalability and global open source support network. As a result, more and more enterprise-class workloads—such as analytics, databases and transaction processing—are moving to Linux.

Now, dramatic increases in data volume and velocity are being driven by such trends as big data, pervasive enterprise mobility and the Internet of Things. This surge in data for lightning-fast transactions and transformative business insights has put big pressure on data center infrastructure to handle the load. The question is: What’s the best approach to meet those needs?

Clustering of x86 servers is one option that inevitably comes up. After all, x86 servers come with relatively low price tags, are easy to install and support a wide range of applications. For specific environments, like fast-growing small businesses or departmental applications, clustered x86 servers may be a good way to go.

But for enterprise-class, mission-critical workloads that demand high-end processing power, throughput, infinitely scalable storage, fast network connectivity and rock-solid security, x86 clusters are likely to come up short. There are several reasons why:

  • Economics: Despite a typically low initial capital expense, x86 clusters are likely to cost more as they expand and proliferate to meet increased need for storage, processing power and connectivity. Adding more “oomph” to handle enterprise workloads means adding more servers, as well as more storage in the form of NAS appliances and expensive storage-area networks. Software licenses and support fees also expand in lockstep with increased cluster size.
  • Workload fit: Enterprise workloads are built around applications that require a lot of memory (which often means expanding the x86 cluster farms). Also, not all enterprise applications lend themselves to cluster-based architectures—something IT executives and their business users can’t afford to find out after the fact.
  • Management and monitoring: More boxes mean more management complexity, especially if those clusters are built on different server brands and operating system versions. That makes automation more challenging, which in turn puts a bigger burden on IT staffs to monitor system operations and application performance.
  • Security: For enterprise workloads, organizations want to limit the potential points of entry for hackers and other threat vectors. More servers and more clusters increase the threat vulnerabilities, not reduce them.
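The economics point can be made with back-of-envelope arithmetic: if server, storage, licence and support costs all scale with node count, total cost grows linearly as the cluster grows. All figures below are hypothetical placeholders, not real pricing:

```python
def cluster_annual_cost(nodes: int, server: int = 5_000, storage: int = 3_000,
                        licence: int = 2_000, support: int = 1_000) -> int:
    """Toy model: per-node costs (hardware, storage, licences, support)
    scale in lockstep with cluster size."""
    return nodes * (server + storage + licence + support)

small = cluster_annual_cost(4)    # a modest departmental cluster
grown = cluster_annual_cost(40)   # 10x the nodes means 10x the cost
```

Even this crude model shows why a low entry price does not translate into low total cost once an enterprise workload forces the cluster to scale out.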

Fortunately, there’s a better option: A mainframe-class Linux server with vastly improved economics. IBM’s LinuxONE server is optimized for enterprise-class Linux workloads that demand performance, scalability, security and manageability, as well as an attractive total cost of ownership model.

One major area where IBM has focused LinuxONE is disaster recovery—a critical requirement for enterprise workloads that cannot afford downtime, or even short interruptions in application availability. “Because LinuxONE systems utilize a shared-everything data architecture, there is no need for multiple copies of files or databases,” according to an IBM-sponsored report issued by IT consultants Robert Frances Group. “This not only eliminates out-of-sync conditions, but also simplifies the set-up and execution of the recovery point objectives and recovery time objectives.”[1]

Three key principles underscore the IBM LinuxONE philosophy:

  • Open source delivered on the best environment for your organization—on-premises or in the cloud, be it public, private or hybrid.
  • Limitless flexibility and scalability for expanding workloads, combining the best of open source and enterprise computing.
  • Risk reduction, utilizing proven hardware and software platforms from a trusted source and heightened security features including blockchain technology.

As the performance, security and cost efficiency of Linux-based solutions become increasingly apparent, more enterprise workloads are migrating to Linux. Smart IT and business decision-makers will continue gravitating toward purpose-built Linux solutions, rather than built-on-the-fly x86 clusters, to handle enterprise workloads in modernized data center environments.

[1] “10 reasons LinuxONE is the best choice for Linux workloads,” Robert Frances Group, 2015

September 28, 2017  4:08 PM

Successful Blockchain Adoption Requires Smart Infrastructure Decisions

Profile: Michael Tidmarsh

Blockchain technology, initially associated exclusively with exotic new cryptocurrencies like Bitcoin, is aggressively expanding its market footprint as organizations look to leverage its core capabilities in security and cost efficiency.

Analysts project blockchain adoption will increase more than tenfold between 2016 and 2021, eclipsing $2.3 billion in global revenues.[1] And data presented at the World Economic Forum indicates that 10% of global gross domestic product will be stored on blockchain technology by 2027.[2]

Although blockchain—a distributed ledger and decentralized database for secure transactional data—is most widely utilized in financial services because of its Bitcoin relationship, it also is widely used for such use cases as counterfeit prevention, trading market clearings, claims fraud, Internet of Things and public records management. Those and other uses make it very attractive in such industries as retailing, public sector, insurance and healthcare.

It’s easy to lapse into deeply technical discussions about blockchain’s underpinnings—algorithms, permissioning and hashing. But blockchain’s main benefits are widely agreed upon: It is a highly secure, scalable and cost-efficient way to maintain and validate rights and privileges for all kinds of digital transactions.

“Blockchain has no single point of failure,” as one industry analysis puts it. “If one node goes down, it’s not a problem because all of the other nodes have a copy of the ledger.”[3] With that strength, it’s easy to see the attraction of blockchain in an era when security threats are increasing—dramatically—in their frequency, geographic scale and magnitude of data compromised.
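The tamper evidence behind these claims comes from hash chaining: each block carries the hash of its predecessor, so altering any earlier entry breaks the chain. A minimal teaching sketch follows; it is not Hyperledger's (or any real platform's) actual data structure:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """New blocks commit to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """The chain is intact only if every block still matches its
    predecessor's recorded hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, ["alice->bob: 5"])
append_block(chain, ["bob->carol: 2"])
ok = verify(chain)                                # intact chain verifies
chain[0]["transactions"][0] = "alice->bob: 500"   # tamper with history...
tampered_ok = verify(chain)                       # ...and verification fails
```

Because every node holds a copy and can run this check independently, a single compromised node cannot silently rewrite history.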

So, adopting blockchain technology seems like a good idea, regardless of industry or geography. But with a commitment to blockchain also comes a responsibility to make smarter, longer-lasting infrastructure choices that match blockchain’s security, reliability and cost efficiency.

What should you look for in your blockchain infrastructure? According to a report from consulting firm Pund-IT (commissioned by IBM), there are some essential capabilities and functionality you should require when evaluating blockchain infrastructure.[4]

  • Multi-tenant separation and isolation, to ensure that participating entities’ data and activities are walled off.
  • External attack security to protect blockchain deployments through encapsulated software in a secure, signed, trusted, appliance-style container.
  • Cryptographic key safety to prevent privileged users from creating snapshots of blockchain data.
  • Integrated protection, rather than technology bolt-ons that often leave data exposed and accessible to attacks.

As organizations increase their use of blockchain technology across multiple use cases in order to improve security and cost efficiency, the selection of the underlying platform and infrastructure is critical. IBM has made a strategic commitment to this technology with the IBM Blockchain Platform, powered by IBM LinuxONE systems.

IBM Blockchain Platform represents an integrated approach, because it allows organizations to develop, govern and operate blockchain networks. The platform aligns with the Linux Foundation’s Hyperledger Project, an open source-based collaboration for blockchain development. IBM Blockchain Platform is optimized for cloud environments—specifically IBM Cloud—because it utilizes techniques designed to close potentially devastating blockchain vulnerabilities.

The platform is underpinned by IBM LinuxONE, a robust, highly available and mission-critical hardware system offering users the security, speed and scale required in blockchain use cases. IBM LinuxONE delivers rock-solid security through unique service containers and crypto-cards, while ensuring high throughput with dedicated I/O processing and high scalability supporting up to 2 million Docker containers.

Blockchain is a big, big deal for enterprises that need the combination of security, availability, performance and cost efficiency for their most important applications and workloads. That’s why IBM has made blockchain a major component of its enterprise solutions strategy via IBM Blockchain Platform and IBM LinuxONE.




[4] “Ensuring Secure Enterprise Blockchain Networks: A Look at IBM Blockchain and LinuxONE,” Pund-IT Inc., August 2017

June 23, 2017  3:46 PM

Top 5 reasons SMBs should invest in software-defined hybrid cloud storage

Profile: Michael Tidmarsh
Cloud storage, Hybrid cloud

IT storage has become a critical decision point for small and midsize businesses (SMBs). Without the right storage solutions, SMBs run the risk of being overwhelmed by spiraling data growth, which can negatively impact IT costs, complexity, security and availability. At the same time, SMBs face the same pressures as larger companies to use data strategically to drive business operations, improve the customer experience and support digital transformation.

SMBs typically have the added challenge of tighter IT budgets and less IT personnel resources than enterprise companies. Therefore, the importance of managing return on investment (ROI) and delivering measurable business benefits can be even greater for IT decision-makers in SMB environments. Storage is a particularly important investment because data has become the lifeblood of most businesses and is growing significantly in volume, velocity and variety.

To address these issues, many SMBs have turned to public cloud storage options. However, using just the public cloud can be a challenge because IT has less control over costs, performance, security and compliance. In addition, if the public cloud storage is not centrally managed, the organisation can face additional risks caused by shadow IT. For those reasons and others, approximately 80% of midmarket customers are looking at how to integrate cloud-based storage with on-premises storage.[1]

The hybrid cloud storage environment created by the mix of public cloud and on-premises storage allows SMBs to maximise ROI while delivering the performance, capacity, scalability and flexibility required by developers, users and line-of-business managers throughout their organisations. The top five benefits of hybrid cloud storage for SMBs are:

  1. Improved ROI: With hybrid cloud storage, IT teams can manage costs more strategically, using the strengths of public cloud and on-premises infrastructure to their best advantage. For example, public cloud can be used to support seasonal spikes in demand and new infrastructure for DevOps, while on-premises storage can be used for applications where IT needs to maintain strict control over security and data protection. Both environments can be centrally managed from the same platform.
  2. Increased agility: With the right solution, IT organisations can use the storage platform of their choice, including all-flash storage or hybrid flash/spinning disk. They can also leverage features such as intelligent tiering and hybrid cloud object storage to maximise their investment. When IT is more agile, businesses benefit in a number of ways, including increased productivity, higher customer satisfaction, improved quality assurance and a greater ability to strategically leverage data analytics to meet the needs of employees, partners and customers.
  3. Reduced risk: SMBs can take more control over security, compliance and disaster recovery in a hybrid cloud storage environment. Hybrid cloud gives IT teams more options to support best practices in backup and reduce recovery time objectives and recovery point objectives. With disaster recovery as a service and backup as a service, IT teams can reduce overall storage costs and improve performance of production storage environments.
  4. Reduced complexity: With a software-defined model, IT teams can use automation and orchestration features in the management platform to streamline deployments, leverage shared resources and utilise the self-service capabilities of cloud environments. Software-defined hybrid storage solutions make scaling much simpler, and overall management is much less of a strain on IT resources—SMBs can have a single person managing the entire storage infrastructure from a single pane of glass.
  5. Ability to future-proof the business: SMBs are subject to the same volatility as larger organisations. Initiatives such as the Internet of Things, big data analytics and digital transformation are disrupting business processes across many industries. By leveraging a hybrid cloud storage platform, SMBs give themselves the best opportunity to maximise their storage investment over the next five years and beyond. The key to maximising hybrid cloud storage is to use a software-defined architecture. As noted by Neuralytix, “SDS provides a flexibility and consistency to span different deployment models with a consistent operational experience.”[2]
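The placement logic behind the ROI point above can be sketched as a simple policy function. The attribute names and rules are purely illustrative assumptions, not any vendor's actual policy engine:

```python
def place_workload(workload: dict) -> str:
    """Decide where a workload's storage should live in a hybrid setup:
    strictly controlled data stays on premises, bursty or DevOps work
    goes to public cloud, and anything else defaults to on premises."""
    if workload.get("strict_control"):
        return "on-premises"   # keep security and data protection in-house
    if workload.get("seasonal_spike") or workload.get("devops"):
        return "public-cloud"  # elastic capacity for bursts and dev/test
    return "on-premises"       # conservative default

placements = {
    name: place_workload(attrs)
    for name, attrs in {
        "holiday-web": {"seasonal_spike": True},
        "payroll-db": {"strict_control": True},
        "ci-pipeline": {"devops": True},
    }.items()
}
```

In a real software-defined platform such rules would be richer and centrally managed, but the principle is the same: one policy spans both environments rather than two teams making ad hoc decisions.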


SMBs need to focus on the bottom line. They must invest in technologies that help them make more money now, while supporting the business through the unanticipated changes—and challenges—looming on the horizon. Software-defined hybrid cloud storage enables SMBs to improve ROI today with a platform that also prepares them for the future. It’s time to take the next step forward.

[1] “SDS: The Road to a Hybrid Cloud Storage Strategy,” Neuralytix, May 2017

[2] Ibid

June 23, 2017  12:54 PM

Improving SAP HANA Performance, Resiliency and Flexibility: It’s a lot easier than you think

Profile: Michael Tidmarsh

Early adopters of the SAP HANA in-memory database were stuck with one infrastructure platform option: industry-standard x86 servers. As some of these organisations have discovered the hard way, limited options have left them with limited performance, resiliency and flexibility.

These limitations are becoming problematic for many customers. It’s been more than five years since SAP HANA arrived on the scene, and it is taking on a much more important role in driving innovation and competitive differentiation for many organisations. Some may be new to SAP HANA, while others have older x86 implementations. But wherever they are coming from, organisations need to get more out of their SAP HANA investments.

There is a better way. Companies are no longer confined to using x86 servers for SAP HANA. They can now use more powerful enterprise-grade Linux servers. This is an important development in the evolution of the SAP HANA market. By leveraging more powerful servers, IT teams can boost performance, reliability and flexibility while lowering operational costs.

What kind of an impact can IT make with more powerful servers? Some companies see a price/performance gain of as much as 80% using IBM Power Systems versus servers built on Intel Ivy Bridge EX. They also benefit from twice as many transactions per core, four times as many simultaneous threads per core, and about four times the memory bandwidth.

Cost savings are another benefit. IT can run as many as eight SAP HANA instances on a single server, versus two on an x86 server. This provides a greater ability to run mixed workloads and reduces energy, space and maintenance costs. It also reduces scale-out sprawl for organisations that need to refresh older SAP HANA appliances. There is also more flexibility, particularly when using Tailored Data Center Integration (TDI), which is more versatile than an x86 appliance. With TDI, organisations can maximise the value of existing server, storage and networking assets.
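The consolidation arithmetic implied by those instance densities is straightforward. Using the figures quoted above (eight SAP HANA instances per enterprise Linux server versus two per x86 server):

```python
import math

def servers_needed(instances: int, instances_per_server: int) -> int:
    """How many servers a given number of HANA instances requires,
    rounding up because partial servers cannot be bought."""
    return math.ceil(instances / instances_per_server)

# For a hypothetical estate of 16 HANA instances:
x86_servers = servers_needed(16, 2)    # x86: two instances per server
power_servers = servers_needed(16, 8)  # Power: eight instances per server
```

Fewer boxes is where the knock-on savings in energy, floor space and maintenance come from, and it is also why scale-out sprawl shrinks during an appliance refresh.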

Dispelling the complexity myth

The combination of these benefits leads to better business results, including faster and more accurate insights, better forecasting, improved customer service, greater support for individual lines of business, higher availability and myriad other competitive differentiators. Such gains are particularly important as data growth continues unabated and companies embrace big data analytics as part of their digital transformation endeavors.

As attractive as it may be to maximise the value of SAP HANA, there is still one obstacle some organisations must overcome: the inaccurate belief and unnecessary fear that deploying these more powerful servers—or migrating to them from an x86 appliance—is overly complex and requires resources that many IT teams don’t have.

This is simply not the case. In fact, with the open architecture of IBM Power Systems, combined with the high levels of support customers receive from IBM and SAP, IT teams will discover that once they make the move, they will actually reduce complexity in managing ongoing operations and scaling their deployments. These are the facts:

  • IBM Power Systems use the same Linux interface as x86 appliances, which means that the tasks involved in configuration are very similar.
  • By running SAP HANA on more instances on a single server, IT teams can scale far more easily, without the need to purchase, deploy and manage more hardware.
  • IT can increase physical and virtual capacity on demand through a cloud architecture that enables components to be added without disruption.
  • Greater reliability and resiliency increase availability and productivity, while reducing the amount of time IT teams must devote to troubleshooting and problem resolution.
  • Migrating an existing database is not nearly as difficult as you might imagine, and there is plenty of easily accessible support from IBM and SAP. In fact, both companies are leaders in customer support and are committed as partners to helping customers accelerate digital transformation with SAP HANA.
  • By using a more powerful, flexible, scalable and resilient server platform, you future-proof your SAP HANA deployment. Your team won’t have to go through a refresh cycle nearly as often, which saves time, hassle and the complexity involved in specifying, purchasing and deploying new systems every few years.


SAP HANA has the potential to be one of the most important drivers of innovation and competitive differentiation—now and in the future. Organisations can get far more value out of their deployments by using a server platform that delivers greater performance, resiliency and flexibility, specifically SAP HANA on IBM Power Systems.

With this solution, organisations can leverage the same architecture that enables top performing high-performance computing applications. SAP HANA on IBM Power Systems not only delivers greater performance, but is also simple to deploy, manage and scale.

If you are holding back from moving to a better platform over concerns about making a change, perhaps it’s time to re-examine things and re-evaluate the risks and rewards. You may find that what you now think of as a risk will actually turn out to be a benefit.

May 3, 2017  3:11 PM

How you can leverage hybrid cloud object storage to reduce complexity, increase agility and improve security

Profile: Michael Tidmarsh
Hybrid cloud, Object storage

IT teams are increasingly turning to hybrid cloud storage as a means to increase flexibility and agility, particularly in dealing with the challenges created by exponential data growth. At the same time, object storage has emerged as an important technology in managing and storing unstructured data in cloud environments.

While object storage has been integral to the development of the public cloud market, its usage has been more limited among enterprises and midsize businesses. That is partly because these customers have had only two options: They could use public cloud or on-premises object storage solutions.

Now, however, organizations can take advantage of object storage in hybrid cloud storage models thanks to the development of a new class of solution called hybrid cloud object storage. With hybrid cloud object storage, IT can deploy object storage on premises, in the public cloud and in a hybrid cloud, leveraging a streamlined, simplified object storage approach that uses the same technology wherever the data is stored.
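The core idea — the same object storage technology wherever the data lives — can be illustrated with a short sketch. This is not IBM COS code; the endpoints are made-up placeholders and the backends are in-memory stand-ins for real S3-compatible stores, purely to show that one code path can serve every deployment location.

```python
# Illustrative sketch (not IBM COS code): one object-storage interface
# used unchanged whether the backend is on premises or in the public cloud.
# Endpoints are placeholders; backends are in-memory stand-ins.

class ObjectStore:
    """Minimal put/get interface shared by every deployment location."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint      # e.g. an on-prem or cloud URL
        self._objects = {}            # stand-in for the actual store

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self._objects[(bucket, key)] = data

    def get(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]

# The same application code serves both tiers; only the endpoint differs.
on_prem = ObjectStore("https://objectstore.internal.example.com")
cloud = ObjectStore("https://s3.example-cloud.com")

for store in (on_prem, cloud):
    store.put("backups", "db/2017-05-03.dump", b"nightly dump")
```

Because the interface is identical, moving a workload between on-premises and cloud tiers changes a configuration value, not the application.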

Hybrid cloud object storage is an important breakthrough in the evolution of the hybrid cloud storage market because it provides a cost-efficient, agile and highly secure method for organizations to manage unstructured data and use that data for competitive advantage.

Object storage works particularly well in cloud use cases because it is much simpler to scale than traditional file and block storage solutions. It is also more flexible and manageable in handling large data volumes in multi-use cloud environments. Among the key advantages:

  • Greater flexibility: You can choose the deployment options that work best for your data workloads, moving easily between cloud and on-premises environments. You can also leverage highly adaptive object storage that you can scale and adjust by workload.
  • Improved security: IBM’s Cloud Object Storage (COS) solution incorporates a range of features designed to help organizations meet security and compliance requirements. These include built-in encryption of data at rest and in motion, as well as authentication and data control capabilities.
  • Increased scalability: Object storage has been utilized in the largest cloud environments in the world and is designed for virtually unlimited scalability. IBM COS software has been tried and tested at web-scale with production deployments exceeding 100 petabytes of capacity at multiple customers. It has the ability to scale to exabytes while maintaining reliability, availability, manageability and cost efficiencies.
  • Simplified manageability: Hybrid cloud object storage can provide always-on availability, supporting a wide range of tasks with virtually zero downtime. These include software upgrades, hardware maintenance, storage capacity expansion, hardware refreshes and physical relocation of the storage system.


If you’re not yet familiar with hybrid cloud object storage, it’s time to catch up. Hybrid cloud object storage has the potential to be an important technology innovation for enterprise IT, particularly as companies continue to generate more and more unstructured data.

The ability to use object storage on premises or in the cloud gives IT teams much more flexibility to enhance agility, increase availability, simplify management, strengthen security and dramatically improve scalability. When it comes to object storage in the hybrid cloud, the future is now.

May 3, 2017  3:08 PM

The real cost of flash storage…and the higher cost of not using flash

Profile: Michael Tidmarsh
flash storage

Flash storage is rapidly displacing spinning disk drives for primary applications. IDC says 76% of enterprises plan to move more primary storage workloads into All Flash storage as legacy platforms come up for technology refresh.[1] 451 Research says almost 90% of organizations now have flash storage in their data centers while All Flash approaches “are becoming increasingly standard to support transactional applications.”[2]

Performance, of course, has been the main driver of the All Flash market. All Flash storage delivers orders-of-magnitude greater IOPS performance than spinning disks. With All Flash storage, organizations can modernize and upgrade their infrastructures to drive major business improvements and enable critical initiatives such as cloud computing, data analytics, mobile and social engagement, the Internet of Things (IoT) and security.

If there has been any impediment to the growth of the All Flash market, it has been cost. All Flash storage has historically been more expensive than spinning disks when measured on a per-gigabyte basis and, in some cases, IT decision-makers haven’t been able to justify the increased capital investment. Fair enough.

However, the dynamics are changing rapidly and dramatically. With significant price declines over the past few years, the cost of All Flash storage is approaching that of spinning disks—even on a per-gigabyte basis. Here’s what IDC has to say: “Continuing cost declines, coupled with flash-driven data reduction in primary storage environments in particular, should have the effective cost per gigabyte (the cost when factoring in storage efficiency technologies like data reduction) of enterprise-class flash media actually lower than 10,000-rpm and 15,000-rpm HDD raw capacity costs by the end of 2017 for most primary storage workloads.”
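The “effective cost per gigabyte” in the IDC quote above is simple arithmetic: raw media cost divided by the data reduction ratio that compression and deduplication achieve. The dollar figures and the 4:1 ratio below are illustrative assumptions for the calculation, not vendor pricing.

```python
# Worked example of effective cost per gigabyte: raw $/GB divided by the
# data reduction ratio. Prices and the 4:1 ratio are illustrative only.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per usable gigabyte once data reduction is factored in."""
    return raw_cost_per_gb / reduction_ratio

# Flash benefits from aggressive data reduction; 15K HDD tiers typically
# run with little or none, so raw and effective costs are about the same.
flash = effective_cost_per_gb(raw_cost_per_gb=0.40, reduction_ratio=4.0)
hdd_15k = effective_cost_per_gb(raw_cost_per_gb=0.25, reduction_ratio=1.0)

print(f"Flash effective: ${flash:.2f}/GB")    # $0.10/GB
print(f"15K HDD raw:     ${hdd_15k:.2f}/GB")  # $0.25/GB
```

Under these assumed numbers the effective flash cost comes in below raw HDD cost — the crossover the IDC quote describes.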

Even as the price gap closes, the reality is that per-gigabyte costs don’t even begin to measure the real incremental value that All Flash storage delivers to today’s businesses when viewed through the lens of overall contribution to total cost of ownership (TCO).

And, with All Flash solutions that leverage modern software-defined storage architectures, such as the IBM FlashSystem V9000, the TCO advantages of All Flash storage are magnified even further. What are the TCO advantages of using software-defined All Flash storage arrays? Here are just a few:

  • Increase revenue: Organizations can process more transactions in shorter time frames and be far more responsive to the needs of customers, leveraging tools such as big data analytics and social networking.
  • Reduce costs and overhead: IT teams can reduce software licensing fees on databases and other applications, while lowering energy consumption costs and reducing the physical space required by the storage infrastructure.
  • Simplify IT: All Flash storage is typically much simpler to deploy, scale and manage than traditional spinning disks. With software-defined storage, IT teams can leverage automation and orchestration capabilities to reduce costs and risks.
  • Shrink the storage footprint: IT can leverage features such as virtualization, compression, data tiering, deduplication and data copy services to significantly reduce the storage footprint. This is critical in today’s era, where unabated data growth continues.
  • Accelerate time to market: Infrastructure can be deployed faster, which means users and IT can be more productive. DevOps teams can be faster and more efficient by processing larger and more complex datasets in shorter time periods to accelerate development and improve quality assurance.
  • Achieve better, faster, more accurate decision-making: With All Flash storage, the organization can be far more effective and efficient in using its data to drive real-time insights and decision-making.
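Deduplication, one of the footprint-reducing features in the list above, can be sketched in a few lines: identical blocks are stored once, keyed by a hash of their content. The block size and data here are made up for illustration.

```python
# Illustrative sketch of why deduplication shrinks the storage footprint:
# duplicate blocks are stored once, keyed by a content hash.
import hashlib

def dedup_store(blocks):
    """Return unique blocks keyed by the SHA-256 of their content."""
    store = {}
    for block in blocks:
        store[hashlib.sha256(block).hexdigest()] = block
    return store

# Ten logical copies of the same 4 KiB block occupy one physical slot.
blocks = [b"\x00" * 4096] * 10 + [b"\x01" * 4096]
store = dedup_store(blocks)
print(len(blocks), "logical ->", len(store), "physical")  # 11 logical -> 2 physical
```

Real arrays work at the block-device level with far more engineering, but the space saving comes from exactly this principle.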

When it comes to All Flash storage and TCO, there’s a new reality for IT professionals these days: it is actually more expensive – and risky – not to use All Flash storage for primary applications than it is to invest in the right software-defined All Flash storage platform.

[1] “IDC’s Worldwide Flash in the Data Center Taxonomy 2017,” IDC, January 2017

[2] “Flash-based approaches are increasingly becoming mainstream for primary storage,” 451 Research, June 2016

May 3, 2017  3:06 PM

Four key mistakes to avoid in implementing big data analytics

Profile: Michael Tidmarsh
Analytics, Big Data

Competitive advantage is increasingly determined by how quickly and effectively organizations can leverage advanced analytics and insights to drive measurable results. McKinsey describes this as “The Age of Analytics” and says the critical question facing companies today is how to “position themselves in a world where analytics can upend entire industries.”[1]

Despite the growing importance of big data analytics, many organizations are still figuring out how to maximize the value it can deliver across the enterprise. According to one survey, nearly 50% of big data decision-makers said they are not leveraging big data extensively across all business units.[2]

Where are they coming up short? Here are four mistakes that can impede big data analytics efforts:

  1. Not having a plan. Big data analytics is not just a technology issue; it is also cultural. Management needs to buy in, and new processes have to be baked into the culture. If you don’t have a strategic plan in place, you won’t know which technologies to invest in or be able to establish the necessary governance and data management policies and practices.
  2. Not focusing on talent. Big data analytics requires specific skill sets. The Harvard Business Review has called data scientist “the sexiest job of the 21st century,” but, in reality, there is still a shortage of individuals with the knowledge and experience to drive enterprise-wide deployments. Make sure you either have the talent in house or are working with technology partners that can supplement the skills and experience of your own teams.
  3. Not modernizing your infrastructure. The need for speed and accuracy in analytics puts intense pressure on the underlying infrastructure, particularly for workloads that have larger datasets and growing volumes and varieties of data. This is one of the reasons why companies are rushing to embrace All Flash storage and hybrid cloud storage solutions.
  4. Not upgrading the server platform. Because the storage infrastructure is so central to the delivery of big data analytics, many IT leaders believe that if they invest in the right storage platform their infrastructure challenges will be addressed. This is not the case. Big data analytics also puts enormous pressure on the compute infrastructure for processing speed, reliability, operational simplicity and resiliency.

Leveraging modern servers for analytics workloads

Modernizing the server platform is one of the first steps that organizations can take to support big data analytics. As you build your strategic plan and bring on the requisite talent, having a modern server infrastructure in place will accelerate your ability to deliver real-time actionable insight to your managers, employees and customers.

Many organizations are finding that they can drive immediate performance gains in their analytics workloads by using integrated server and software platforms that have been designed specifically for big data environments.

As an example, IBM’s high-performance Power Systems Linux-based servers are now available in configurations designed specifically for big data and analytics workloads, including packages for SAP HANA, NoSQL, operational analytics, data warehouses and several others. Research shows that there are clear advantages to modernizing your analytics infrastructure with these types of solutions, including:

  • Increased performance: The IBM POWER8 server has demonstrated 1.8x faster per-core performance for SAP HANA compared to x86 platforms, resulting in faster and more efficient analytics of business data and setting a world record in the 2-billion-record category.
  • Accelerated time to value: Organizations can save setup time and maintenance costs by utilizing a complete pre-assembled infrastructure that has been designed specifically for analytics workloads with pre-installed and tested software.
  • High reliability and resiliency: As companies increase their reliance on analytics to drive business initiatives, uptime becomes more important than ever. IBM Power Systems are designed for 99.997% uptime and use self-monitoring and predictive failure alerts to pre-emptively migrate applications before failures occur.
  • Flexibility and agility: Organizations should be able to leverage either a scale-out or scale-up architecture for their analytics workloads, while also incorporating features such as server virtualization and support for multi-tenant cloud functionality.
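As a quick sanity check on the reliability claim in the list above, the 99.997% uptime design target translates into expected downtime per year with one line of arithmetic (the availability figure comes from the text; the calculation is the only addition):

```python
# Convert an availability percentage into expected downtime per year.
# The 99.997% figure is the design target cited in the text.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Expected unavailable minutes per year at a given availability."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(99.997), 1))  # ~15.8 minutes/year
```

In other words, the design target allows for roughly a quarter of an hour of downtime per year.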

Turning information into power

In the years ahead, competitive advantage will increasingly go to those organizations that are best able to use their information to create business value and drive innovation through real-time analytics and insight. The foundation IT puts in place today will go a long way in determining the success of the business in the future.

In order to put that foundation in place, IT leaders must address the potential pitfalls discussed in this article, namely:

  • Put a strategic plan in place.
  • Make sure you have the right talent, either on staff or through your tech partners.
  • Modernize your infrastructure.
  • Upgrade your server platform.

Any successful analytics initiative will be built on an infrastructure foundation that can deliver the requisite performance, capacity, scalability, reliability, resiliency and agility. As you’re building your plan and hiring the right talent, make sure you are investing in infrastructure solutions that will give you the best chance of success.

[1] “The age of analytics: Competing in a data-driven world,” McKinsey & Company, December 2016

[2] “Survey of Big Data Decision-Makers,” Attivio, May 25, 2016

May 3, 2017  2:58 PM

How transparent tiering can help you reduce complexity and increase agility in hybrid cloud storage

Profile: Michael Tidmarsh
Cloud storage, Hybrid cloud

Cloud computing has caused a seismic shift in how IT teams manage and deploy storage. The ability to quickly add storage resources in the public cloud—and to use cloud services for backup, archiving and disaster recovery—has created a groundswell of demand for cloud storage.

The overall cloud storage market reached nearly $24 billion in 2016 and will grow at a compound rate of nearly 26% a year through 2021, when sales are expected to reach a staggering $75 billion.[1]

However, while cloud storage has delivered significant value to many businesses, it has also created its fair share of risks and challenges—particularly for the IT teams charged with managing storage.

With public cloud, IT runs the risk of not having full control over key areas such as costs and performance. According to one survey, between 30% and 45% of the money spent on public cloud is wasted.[2] Security and compliance are also potential problem areas with public cloud, particularly when users and line-of-business managers deploy cloud services as shadow IT separate from the corporate IT department.

Why hybrid cloud storage?

For these and a host of other reasons, organizations are increasingly embracing hybrid cloud storage models that use a mix of public cloud services and on-premises storage infrastructure. Hybrid cloud storage gives IT more control over costs, performance, security and compliance.

Hybrid cloud storage also enables IT to be much more strategic in how, where and when it uses the public cloud to offload and augment important functions such as disaster-recovery-as-a-service or backup-as-a-service.

One of the fundamental benefits of hybrid cloud storage is that it provides infrastructure teams with another storage tier that can be deployed and scaled quickly, easily and strategically. Not only can public cloud be used for second-tier storage functions such as archiving, backup and recovery; it can also be used to scale production environments and support new initiatives or business-critical workloads, such as DevOps.

The benefits of transparent tiering

To achieve these benefits, IT teams have had to work long and hard to manage cloud storage in conjunction with existing on-premises infrastructure. They have not had access to technology that would let them use public cloud storage as transparently and as easily as a local disk array in a hybrid cloud environment.

That was then. Now the technology is available.

It is called “transparent cloud tiering” and it is being offered for the first time in the IBM Spectrum Scale solution. With transparent cloud tiering, cloud storage becomes another software-defined storage tier on the menu, along with flash, disk and tape.

Intelligent tiering capabilities allow file, block and object stores to migrate non-disruptively among all tiers, based on information lifecycle management criteria and policies established by the IT department.
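The policy-driven placement described above can be sketched in a few lines. This is an illustrative model, not the Spectrum Scale API: the tier names and the age thresholds are assumptions standing in for whatever lifecycle rules the IT department defines.

```python
# Illustrative sketch of policy-driven tiering (not the Spectrum Scale API):
# each object is assigned a tier based on days since last access.
# Tier names and thresholds are assumed for the example.

def choose_tier(days_since_access: int) -> str:
    """Map an object's age to a storage tier per a simple lifecycle policy."""
    if days_since_access <= 7:
        return "flash"   # hot data stays on the fastest tier
    if days_since_access <= 90:
        return "disk"    # warm data sits on capacity disk
    return "cloud"       # cold data tiers out transparently to the cloud

objects = {"orders.db": 2, "q4-report.pdf": 30, "2015-archive.tar": 400}
placement = {name: choose_tier(age) for name, age in objects.items()}
print(placement)
```

In a real deployment the migration itself is handled by the storage software, so applications keep reading and writing the same namespace while data moves among flash, disk, tape and cloud underneath.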

With transparent cloud tiering, enterprises can easily bridge storage silos on-premises while adding the benefits of cloud storage to their overall storage solutions. Transparent cloud tiering also reduces IT complexity and increases agility, giving IT more control over performance and security.


As the use of public cloud storage keeps growing, IT teams continue to look for simple ways to incorporate public cloud services into their overall hybrid cloud storage strategies. The availability of transparent cloud tiering is an important breakthrough in adding versatility, simplicity and control to hybrid cloud storage models.

[1] “Cloud Storage Market Worth $74.94 Billion USD by 2021,” MarketsandMarkets, September 2016

[2] “2017 State of the Cloud Report,” RightScale, Feb. 15, 2017

