Ahead in the Clouds


June 3, 2016  1:49 PM

Why unfair contract terms put end-user trust in cloud at risk

Caroline Donnelly Profile: Caroline Donnelly
Cloud compliance, Cloud Security, Contracts, trust

Cloud contracts are notorious for being weighted in favour of providers but, for an industry still grappling with how best to win the trust of users, it’s a risky way to do business, argues Caroline Donnelly

Whenever news breaks about a cloud company going out of business or announcing shock price hikes, the first thought that usually crosses its customers’ minds is, “what are my rights?”

Having covered the demise of a few high-profile cloud firms over the years, experience has taught Ahead in the Clouds (AitC) that if people only ask this question once their provider runs into trouble (or does something they are not happy with) it’s probably too late.

Where customers stand, should the company decide to up its prices, terminate a service they rely on or carry out some other dastardly deed, should – ideally – be established well before they sign on the dotted line.

Experience also tells us that, in the rush to get up and running in the cloud, not everyone does. In fact, AitC would wager, when faced with pages and pages of small print, written in deathly-dull legal speak, very few actually do.

So, one might argue, when something goes wrong, the customer has no-one to blame but themselves if the terms and conditions (T&Cs) give the provider the right to do whatever the heck they like with very little notice or regard for the impact these actions may have on users.

But is that right, and should the cloud provider community be doing more to ensure their T&Cs are fairly weighted in favour of users, and are not riddled with clauses designed to trip them up?

In AitC’s view, that’s a no-brainer. End-users aren’t as fearful as they once were about entrusting their data to the cloud, but if providers are not willing to play fair, all the good work that’s gone into getting to this point could be quickly undone.

And it’s not just AitC that feels this way, because the behaviour of the cloud provider community has emerged as a top concern for consumer rights groups and regulators of late – and rightly so.

Held to account

The Competition and Markets Authority’s (CMA) 218-page Consumer Law Compliance Review, published in late May 2016, raised red flags about five dubious behaviours it claims cloud storage companies have a habit of indulging in that risk derailing the public’s trust in off-premise services.

And, while the CMA’s review set out to examine whether the way online storage firms behave could be considered at odds with consumer law, a lot of what it covers could be easily applied to any type of cloud service provider and how it operates.

Examples of bad behaviour outlined in the report include failing to notify end-users about their automatic contract renewal procedures, which could result in them getting unexpectedly locked in for another year of service or hit with surprise charges.

Remote device management company LogMeIn’s activities in this area have come under close scrutiny from Computer Weekly, with customers accusing the firm of failing to tell them – in advance of their renewal date – that the price they pay for its services was set to rise.

LogMeIn rejects the allegations, and claims customers are notified via email and through in-product messaging when they log in to the company’s control panel, even though its T&Cs suggest it’s under no legal obligation to do so.

Other areas of concern raised by the report include T&Cs that allow cloud firms to terminate a service at short notice and without offering users compensation for any inconvenience this may cause.

Microsoft’s decision in November 2015 to drop its long-standing unlimited cloud storage offer for OneDrive customers, after users (unsurprisingly) abused its generosity, would fall under this category.

The 2013 demise of cloud storage firm Nirvanix also springs to mind here, when users were given just two weeks to shift their data off its servers or risk losing it forever after the company filed for bankruptcy.

The borderless nature of the cloud often works against users intent on seeking some form of legal redress in some of these scenarios, as the provider’s behaviour might be permissible in their own country, but not in the jurisdiction where the customer resides.

The costs involved with trying to pursue something like this through the courts may vastly outweigh any benefit the customer hopes to get out of doing so, anyway.

In cloud we trust

On the back of its report, the CMA is now calling on cloud storage firms to commit to providing fairer terms of use for customers, so – if a consumer fails to read (or understand) the small print – the risk posed to the financial and operational health of their organisation is minimised.

It’s certainly a step in the right direction, and here’s hoping similar initiatives, incorporating a wider range of suppliers spanning cloud software and infrastructure, start to emerge as time goes on. Because if customers can’t trust a provider to put their interests first, why should they assume it will treat their data any differently?

April 22, 2016  1:24 PM

Think the Open Compute Project isn’t for you? Think again

Caroline Donnelly Profile: Caroline Donnelly
Cloud Computing, datacentre, Green energy, Open source

In this guest post, James Bailey, director of datacentre hardware provider Hyperscale IT, busts some enterprise-held myths about the Open Compute Project

Market watcher Gartner predicts the overall public cloud market will grow by 16.5% to be worth $203.9bn by the end of 2016.

This uptick in demand for off-premise services will put pressure on service providers’ hardware infrastructure costs at a time when many of the major cloud players are embroiled in a race to the bottom in pricing terms, meaning innovation is key.

On the back of this, the Open Compute Project (OCP) is slowly (but surely) gaining traction.

Now in its fifth year, the initiative is designed to facilitate the sharing of industry know-how and best practice between hardware vendors and users so that the infrastructures they design and produce are efficient to run and equipped to cope with 21st century data demands.

Over time, a comprehensive portfolio of products has been created with the help of OCP. For the uninitiated, these offerings may appear to only suit the needs of an elite club of hyperscalers, but could they have a role to play in your average enterprise’s infrastructure setup?

To answer this question, it is time to bust a few myths around OCP.

Myth 1: Datacentre efficiency is all that matters to OCP

This is largely true. After all, the mission statement of OCP founder, Facebook, was to create the most efficient datacentre infrastructure, combined with the lowest operational costs. The project encompasses everything from servers, networking and storage to datacentre design.

The server design is primarily geared around space and power savings. For example, many of the servers can be run at temperatures exceeding 40C, which is way higher than the industry norm, resulting in lower cooling costs.

This efficiency adds up to an important cost saving and a smaller carbon footprint. When Facebook published the initial OCP designs back in 2011, they were already 38% more energy-efficient to build and 24% less expensive to run than the company’s previous setup.

Myth 2: Limited warranty

Most OCP original design manufacturers (ODMs) offer a three-year return to base with an upfront parts warranty as standard. This can often be better than what is offered by other OEM hardware vendors today.

The warranty options do not stop there. Given the quantities most customers purchase, vendors are open to creating bespoke support and SLAs.

In recent times, some of the more mainstream players have got in on the action. Back in April 2014, HPE announced a joint venture with Foxconn, resulting in HPE Cloudline servers aimed specifically at service providers.

Myth 3: Erratic hardware specifications

Whilst specifications do indeed evolve, the changes are not taken lightly. Any specification change is submitted to the OCP body for scrutiny and acceptance.

The reality of buying into the OCP ecosystem is that you are protecting yourself from vendor lock-in. Many manufacturers build the same interchangeable systems from the same blueprints, thus giving you a good negotiation platform.

That said, there is a splintering of design. A clear example is the difference in available rack sizes.

The original 12-volt OCP racks are 21 inches wide but – more recently – ‘OCP-inspired’ servers have emerged that fit into a standard 19-inch space.

Overall, this is positive as you can integrate OCP-inspired machines into your existing racks, which has created a good transition path for datacentre operators looking to kit out their sites exclusively with OCP hardware.

Google’s first submission to the community is for a 48V rack, which would create a third option. But surely this is all healthy?

Google estimates this could deliver energy-loss savings of over 30% compared to the current 12V offering, and who would not want that? There are also enough ODMs to ensure older designs will not disappear overnight.
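
To see why a higher distribution voltage helps, remember that for a fixed power draw the current falls as the bus voltage rises, and resistive losses scale with the square of the current. The sketch below is only a back-of-the-envelope illustration of that relationship, with invented figures – it is not Google’s published methodology, and the quoted 30% figure also reflects conversion-stage efficiencies that this toy model ignores.

```python
# Back-of-the-envelope only: resistive loss in a rack busbar is I^2 * R, and
# for a fixed power draw P the current is I = P / V, so loss scales with 1/V^2.
# All figures below are invented for illustration.

RACK_POWER_W = 10_000          # hypothetical 10 kW rack
BUSBAR_RESISTANCE_OHM = 0.001  # hypothetical distribution resistance

def distribution_loss_watts(power_w: float, bus_voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss when delivering power_w over a bus at bus_voltage_v."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * resistance_ohm

for voltage in (12, 48):
    loss = distribution_loss_watts(RACK_POWER_W, voltage, BUSBAR_RESISTANCE_OHM)
    print(f"{voltage}V bus: ~{loss:.0f} W lost in distribution")

# Quadrupling the voltage cuts the current to a quarter, so I^2*R losses fall
# by a factor of 16. The real-world saving depends mostly on the conversion
# stages that this toy model leaves out.
```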

Myth 4: OCP is only for the hyperscalers

Jay Parikh, vice president of infrastructure at OCP founder Facebook, claims using OCP kit saved Facebook around $1.2 billion in IT infrastructure costs within its first three years of use, by doing its own designs and managing its supply chain.

Goldman Sachs have a ‘significant footprint’ of OCP equipment in their datacentres, and Rackspace – another founding member – heavily utilises OCP for its OnMetal product. Microsoft is also a frequent contributor and runs over 90% of its hardware as OCP.

Additionally, there are a number of telcos – including AT&T, EE, Verizon, and Deutsche Telekom – that are part of the adjacent OCP Telco Infra Project (TIP).

Granted, these are all very large companies, but that scale of demand drives the price down for everyone else. So, if you are buying a rack of hardware a month, OCP could be a viable option.

Opening up the OCP

In summary, the cloud service industry has quickly grown into a multi-billion dollar concern, with hardware margins coming under close scrutiny.

The only result can be the rise of vanity-free whitebox hardware (ie hardware with all extraneous components removed). Recent yearly Gartner figures show Asian ODMs like Quanta and Wistron growing global server market share faster than the traditional OEMs. That said, if Google is one of your customers, it is easy for these numbers to get skewed.

Even for those not at Google’s scale, the commercials of whitebox servers are attractive, and it might give smaller firms that are unable to afford their own datacentre a foot in the door.

However, most importantly, the project has also led to greater innovation and that is where it really gains strength. 

OCP brings together a community from a wide range of business disciplines, with a common goal to build better hardware. They are not sworn to secrecy and can work together in the open, and that really takes the brakes off innovation.


March 18, 2016  11:32 AM

What Apple, Dropbox and Spotify’s shifting cloud strategies really mean for AWS

Caroline Donnelly Profile: Caroline Donnelly
Apple, AWS, Cloud Computing, customer story, Google

Amazon Web Services (AWS) celebrated its 10th anniversary on 14 March, having devoted the past decade to popularising the cloud computing concept and – in turn – shaking up the IT industry.

To mark the occasion, the Infrastructure-as-a-Service (IaaS) giant released a series of blog posts that saw execs – such as CTO Werner Vogels – taking a fond look back at some of the high points of AWS’ first decade in business.

These include signing up a million active users – such as Netflix, AirBnB, Lebara, Guardian Media Group, Trinity Mirror Group and Aviva – to its cloud platform, while cultivating a product release cadence that sees it roll out hundreds of new features and services for subscribers each year.

However, while the firm and its execs set about looking back over its successes, industry watchers were busy pondering what the company’s next 10 years in business are likely to look like. Particularly in light of the news that several of the firm’s high-profile customers have started scaling back their use of its services. Or have they?

Music streaming site Spotify announced in February that it was in the throes of moving its IT infrastructure over to the Google Cloud Platform, having previously been hailed as a reference customer of AWS.

Earlier this week, Dropbox, a major user of Amazon’s Simple Storage Service (S3), outlined details of the work it is doing to curtail its use of cloud, resulting in 90% of its users’ data now being stored on-premise.

A few days later, a (source-led) report emerged claiming consumer electronics giant Apple was following Spotify over to Google’s cloud. The company is already known to run unspecified parts of its operations in both AWS and the Microsoft Azure cloud, incidentally.

Shifting sands of enterprise IT

This apparent “mass exodus” of big AWS customers has prompted a degree of debate online about whether it is indicative of a wider industry trend, and whether – after a decade of steady growth and big customer wins – Amazon might be losing its hold on the cloud market. I personally don’t subscribe to that notion.

You see, while the Spotify, Apple and Dropbox news is certainly interesting (I wouldn’t have written about it, if it wasn’t), I personally don’t think what we’re bearing witness to here is necessarily a sign that Amazon’s grip on the cloud market is weakening.

According to Ahead in the Clouds sources, Spotify and Dropbox are still using Amazon’s cloud. And the latter, certainly, looks set to use more of its capacity over time to prop up its international operations for data sovereignty purposes.

So, no, I don’t think we’re witnessing the beginning of the end of AWS, despite what some rather over-excited folks on Twitter might claim.

Instead, what we’re actually seeing is the cloud market coming of age. And, by that, I mean really starting to deliver on the promises the industry’s great and good have made in the past about off-premise services giving enterprises greater freedom when it comes to IT.

I’ve spent more time than I ever care to think about sat in IT conference keynotes, listening to vendor execs wax lyrical about how cloud will allow enterprises to move their workloads – based on their cost, performance and security requirements – to wherever makes most sense to run them.

With that in mind, what we’re really seeing – in the case of Spotify, Dropbox and (allegedly) Apple – is simply them exercising their right to do this.

It’s also worth mentioning that cloud is still a relatively nascent technology concept, and many companies are still getting to grips with how best to use it, undoubtedly resulting in several tweaks to their product and supplier strategy as time goes on. Again, what we’re seeing with Spotify, Dropbox and Apple (reportedly) is probably them going through the same process.

Social gaming firm Zynga went through something similar several years ago, when it set out plans to ditch AWS in favour of building out its own datacentres because – given the sheer number of people playing its games – it could achieve the economies of scale needed to make the move worthwhile.

Unfortunately, this change in strategy occurred just before demand for its flavour of desktop- and browser-based games dropped through the floor, and mobile gaming took off, prompting it to abandon its build-your-own datacentre strategy and ramp up its use of AWS again.

All-in or all-out? 

What the Zynga example neatly serves to highlight is the futility of discussing companies’ cloud strategies in absolute terms: you’re either all-in or you’re not, and once you’ve moved your final workload or whatever to a certain provider’s cloud, the job’s done.

The reality is, for many firms, their cloud strategies will probably end up being a lot more fluid than that, with end users moving to shift workloads from one provider to another or back on-premise as and when they want and need to.

As the price of using cloud continues to drop and providers add more features and functionality to their platforms, end users will get more comfortable with using off-premise services. This, in turn, means they will become more adept at switching providers – if someone is offering a sweeter deal elsewhere – or move to adopt a multi-provider cloud strategy.

While we watch and wait for all this to play out, here’s to the next ten years of cloud. Or whatever we’re calling it by then.


February 24, 2016  3:08 PM

Could cloud be the gateway to innovation for financial services firms?

Caroline Donnelly Profile: Caroline Donnelly
Cloud Computing, Cloud Security, Guest Post

In this guest post, Ashish Gupta, BT’s UK president of corporate and global banking financial markets, shares his views on how the banking sector should go about embracing cloud.

The need to scale – add more customers, trade new asset classes, expand locations – at speed has overcome the financial services sector’s initial reluctance to use cloud services. What’s more, the flexibility of cloud-based resources and services is an attractive alternative to the expense of owning and running large datacentres.

When talking cloud, the first question is always about security. Just how secure can customer data and commercial operations be when stored on someone else’s infrastructure? The short answer is: very secure indeed. Cloud services should be at least as secure as (if not inherently more secure than) their in-house equivalents.

However, the absence of industry-wide standards and the ease with which an individual department or business unit can sign up to the cloud mean some organisations are using cheaper, consumer-grade cloud services that could leave them vulnerable to security breaches.

A piece of research by BT exploring attitudes and levels of preparedness towards distributed denial of service (DDoS) attacks found more than a third of financial services organisations admit using mass market cloud services. Others may not even know they are.

Innovation is key to success

Of course, one of the great positives about cloud computing is that it encourages innovation, helping to build a more responsive, agile organisation. But if allowed to flourish uncontrolled, so called ‘shadow IT’ can open up a host of problems.

As such, banks and financial services companies need to know where their customer data is at all times, and details about how it is being handled. They need to be sure that an external cloud service isn’t going to leave the door open to malicious activity and DDoS attacks.

For the CIO, the challenge is how to let the organisation exploit the choice and flexibility of on-demand services without compromising corporate security or contravening regulatory requirements.

A CIO must – somehow – exercise a degree of control over the whole varied and shifting cloud estate.

Specialised cloud services for the financial community are part of the solution; they provide a highly secure ecosystem that connects thousands of applications and services with users worldwide. But what about your broader enterprise cloud applications? They also need to be secure.

The answer is to roll all your distinct cloud services – public, private and hybrid – into one single cloud that you can manage and secure centrally.

Adopting this type of approach without the support of an external service partner is quite a big task, even for the most experienced of IT professionals. The pragmatic CIO will look for an expert partner, such as an independent global network provider with skills in connectivity, security and integration.

Or, as industry analyst Ovum puts it: “Enterprises are increasingly likely to discriminate toward cloud service providers with combined datacentre and networking orchestration skills as their trusted brokers across hybrid clouds.”

Bursting the cloud of uncertainty

Centralising control with this type of strategy will help build security into the whole cloud environment, so employees (or customers) will be able to connect securely from anywhere on any device to any service.

With the right controls, there’s no reason why mobile devices cannot be as secure as a desktop PC. Cloud-based proxy servers, for example, let users connect securely via the internet over wi-fi, fixed and mobile lines.

You can remotely lock down the microphone and camera on smartphones so they can be used securely on the trading floor. Your own app store gives you control over what your users can download and use over the cloud.

Financial regulators including the SEC and the Financial Conduct Authority are taking a keen interest in cyber security. Taking an approach like this will help financial services companies demonstrate that they understand the operational risks of cloud computing and have the right measures in place for secure trading and to protect data. For business, it offers the best of both worlds: the freedom to innovate, in a secure and compliant environment.


February 11, 2016  2:54 PM

EU-US Privacy Shield: A viable alternative to Safe Harbour?

Caroline Donnelly Profile: Caroline Donnelly
Data protection, Guest Post, safe-harbour

In a joint guest post, Rafi Azim-Khan, European head of data privacy, and Steven Farmer, counsel, at Pillsbury Law set out the reasons why cloud firms and users must tread carefully around Safe Harbour’s replacement.

The European Commission and the US Department of Commerce have reached an accord on a new transatlantic data transfer protocol to replace the defunct ‘Safe Harbour’ agreement.

Known as the EU-US Privacy Shield, the new-look agreement was met with a mixed reaction from those relying on Safe Harbour (which was invalidated in October 2015) to shift EU data to the US. But, is it really the cure-all solution that industry watchers in some quarters have heralded it to be?

Although the text of the new framework is not yet available, reported key features of the Privacy Shield include:

  • Stronger obligations to be imposed on U.S. companies to protect the personal data of EU citizens, and stronger monitoring and enforcement to be carried out by the US Department of Commerce and Federal Trade Commission. It is yet to be confirmed how such activities will take shape.
  • Written assurances from the US that its government will not commit indiscriminate mass surveillance of data transferred pursuant to the Privacy Shield, and that government access to EU citizens’ data for law enforcement and national security purposes will be subject to clear limitations, safeguards, and oversight mechanisms.
  • Similar to Safe Harbour, US companies wishing to rely upon the Privacy Shield will have to register their commitment to do so with the US Department of Commerce.
  • Imposing a “necessary and proportionate” requirement for when the US government can snoop on EU citizens’ data that would otherwise be protected.
  • New contractual privacy protections and oversight for data transferred by participating US companies to third parties (or processed by those companies’ agents).
  • A privacy ombudsman within the US to whom EU citizens can direct data privacy complaints and, as a last resort, the Privacy Shield would offer EU citizens a no-cost, binding arbitration mechanism.
  • An annual joint review of the Shield that would also consider issues of national security access.

While adoption of the Privacy Shield is arguably preferable to the gaping hole that was left by the defunct Safe Harbour, there are several issues that may undermine its value.

With the new framework not yet finalised, it is possible the threshold for keeping tabs on EU citizen data may not be satisfactorily defined. 

This could lead to the re-establishment of a vague legal standard subject to political whims on both sides of the Atlantic. The end result being that companies relying on the Privacy Shield could be subjected to shifting policies and interpretations.

Additionally, if the annual joint review of the framework allows for it to be dismantled or substantially changed each year, then this could also diminish the certainty that US companies would seek to achieve through compliance.

All this raises the question of whether the Privacy Shield will offer a more valuable solution than those currently available to US importers of data. At this point, maybe not.

With uncertainty surrounding the Privacy Shield, other options for transatlantic data transfers – namely model contract clauses and binding corporate rules – are arguably more attractive alternatives for US companies transferring data from Europe at this point.

More will be revealed as the EU and US move closer towards a binding agreement but at this stage companies might be better off considering the alternatives rather than putting all of their faith in the Privacy Shield.


January 20, 2016  5:59 PM

Using the Working Set to improve datacentre workload efficiency

Caroline Donnelly Profile: Caroline Donnelly
datacentre, Efficiency, Guest Post, workloads

In this guest post, Pete Koehler, technical marketing engineer for PernixData, explains why datacentre operators need to get a handle on the Working Set concept to find out what’s really going on in their facilities. 


There is no shortage of mysteries in the datacentre, where unknown influences undermine the performance and consistency of these environments while remaining difficult to identify, quantify and control.

One such mystery as it relates to modern day virtualised datacentres is known as the “working set.” This term has historical meaning in the computer science world, but the practical definition has evolved to include other components of the datacentre, particularly storage. 

What is a working set?

The term refers to the amount of data a process or workflow uses in a given time period. Think of it as hot, commonly accessed data within the overall persistent storage capacity.

But that simple explanation leaves a handful of terms that are difficult to qualify, and quantify.

For example, does “amount” mean reads, writes, or both? Does this include the same data written over and over again, or is it new data? 

There are a few traits of working sets that are worth reviewing. Working sets are:

• Driven by the applications generating the workload, and the virtual machines (VMs) they run on. Whether the persistent storage is local, shared or distributed doesn’t matter from the perspective of how the VMs see it.
• Always related to a time period, but it’s a continuum, so there will be cycles in the data activity over time.
• Comprised of reads and writes. The amount of each is important to know, because they have different characteristics and demand different things from the storage system.
• Changing as your workloads and datacentre evolve – they are not static.

If a working set is always related to a period of time, then how can we ever define it? Well, a workload often has a period of activity followed by a period of rest. 

This is sometimes referred to as the “duty cycle.” A duty cycle might be the pattern that shows up after a day of activity on a mailbox server, an hour of batch processing on a SQL server, or 30 minutes compiling code.

Working sets can be defined at whatever time increment you desire, but the goal in calculating a working set should be to capture, at minimum, one or more duty cycles of each individual workload.
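
To make that concrete, here is a minimal sketch of the idea: count the unique blocks each VM touches within a chosen duty-cycle window, split into reads and writes. The trace format, block size and function names are assumptions made for illustration – this is not PernixData’s implementation.

```python
# Minimal sketch: estimate per-VM working sets from a hypothetical I/O trace of
# (timestamp, vm, op, block_address) tuples. Illustrative only.
from collections import defaultdict

BLOCK_SIZE_BYTES = 4096  # assumed block granularity

def working_set_bytes(trace, window_start, window_end):
    """Unique blocks read/written per VM inside [window_start, window_end)."""
    blocks = defaultdict(lambda: {"read": set(), "write": set()})
    for timestamp, vm, op, block in trace:
        if window_start <= timestamp < window_end:
            blocks[vm][op].add(block)
    return {
        vm: {op: len(addrs) * BLOCK_SIZE_BYTES for op, addrs in ops.items()}
        for vm, ops in blocks.items()
    }

# Tiny synthetic trace covering one 60-second window.
trace = [
    (1, "sql01", "read", 100), (2, "sql01", "read", 100),  # repeat reads of one block
    (3, "sql01", "write", 205), (30, "mail01", "read", 900),
]
print(working_set_bytes(trace, 0, 60))
# {'sql01': {'read': 4096, 'write': 4096}, 'mail01': {'read': 4096, 'write': 0}}
```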

Why it matters

Determining a working set size helps you understand the behaviours of your workloads, paving the way for a better designed, operated, and optimised environment.
 
For the same reason you pay attention to compute and memory demands, it is also important to understand storage characteristics, which include working sets.

Therefore, understanding and accurately calculating working sets can have a profound effect on a datacentre’s consistency. For example, have you ever heard about a real workload performing poorly, or inconsistently on a tiered storage array, hybrid array, or hyperconverged environment? 

Not accurately accounting for working set sizes of production workloads is a common reason for such issues.

Calculation procedure

The hypervisor is the ideal control plane for measuring a lot of things, with storage I/O latency being a great example of that.

What matters is not the latency a storage array advertises, but the latency the VM actually sees. So why not extend the functionality of the hypervisor kernel so that it provides insight into working set data on a per-VM basis?

Then, once you’ve established the working set sizes of your workloads, you can start taking corrective action and optimising your environment.

For example, you can:

• Properly size your top-performing tier of persistent storage in a storage array
• Size the flash and/or RAM on a per-host basis correctly to maximise the offload of I/O from an array
• Look at the writes committed in the working set estimate to gauge how much bandwidth you might need between sites, which is useful if you are looking at replicating data to another datacentre (see the sketch below)
• Learn how much of a caching layer might be needed for your existing hyperconverged environment
• Demonstrate chargeback/showback. This is one more way of conveying who the heavy consumers of your environment are, and fits nicely into a chargeback/showback arrangement
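
As a rough illustration of the replication-bandwidth point above, the back-of-the-envelope sketch below converts a write working set into an average inter-site link requirement. The figures, function name and 50% headroom factor are all invented for the example.

```python
# Illustrative only: estimate inter-site replication bandwidth from the write
# portion of a working set. All figures are invented for the example.

def required_replication_mbps(write_working_set_gb: float,
                              duty_cycle_hours: float,
                              headroom: float = 1.5) -> float:
    """Average bandwidth needed to ship the write working set within one duty cycle."""
    megabits = write_working_set_gb * 8 * 1000  # GB -> megabits (decimal units)
    seconds = duty_cycle_hours * 3600
    return megabits / seconds * headroom

# e.g. 40 GB of unique writes per 8-hour duty cycle, with 50% headroom
print(f"{required_replication_mbps(40, 8):.1f} Mbit/s")  # ~16.7 Mbit/s
```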

In summary

Determining working set sizes is a critical factor in the overall operation of your environment. A detailed understanding of working set sizes helps you make smart, data-driven decisions. Good design equals predictable and consistent performance, and paves the way for better datacentre investments.


December 17, 2015  9:00 AM

The cloud migration checklist: What to consider

Caroline Donnelly Profile: Caroline Donnelly
Cloud Computing, Guest Post, Skills, Training

In this guest post, Sarvesh Goel, an infrastructure management services architect at IT consultancy Mindtree, offers enterprises a step-by-step guide to moving to the cloud.

There are many factors that influence the cloud migration journey for any enterprise. Some of them may trigger changes in the way software development is approached, or even in internal service level agreements and information security standards.

The risk of downtime, and the knock-on effect this could have on the company’s brand value and overall reputation, should the switch from on-premise to cloud not go to plan, is often a top concern for some.

Below, we run through some of the other issues that can dictate how an enterprise proceeds with their cloud migration, and how their IT team should set about tackling them.

Application architecture

If there are multiple applications that talk to each other often, and require high speed connections, it is best to migrate them together to avoid any unforeseen timeout or performance issues.

The dependency of applications should be carefully determined before moving them to the cloud. Standalone apps are usually easier to move, but it’s worth being mindful that there are likely to be applications that simply aren’t cloud compatible at all.
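
One simple way to reason about those dependencies – purely as an illustration, with made-up application names rather than any Mindtree tooling – is to treat chatty application-to-application links as edges in a graph and migrate each connected group as a single wave.

```python
# Illustrative only: group applications that talk to each other into migration
# "waves" by finding connected components of a dependency graph.
from collections import defaultdict

dependencies = [("crm", "billing"), ("billing", "reporting"), ("intranet", "wiki")]

graph = defaultdict(set)
for a, b in dependencies:          # build an undirected adjacency map
    graph[a].add(b)
    graph[b].add(a)

def migration_waves(graph):
    seen, waves = set(), []
    for app in graph:
        if app in seen:
            continue
        wave, stack = set(), [app]
        while stack:               # depth-first walk of one connected component
            current = stack.pop()
            if current in wave:
                continue
            wave.add(current)
            stack.extend(graph[current] - wave)
        seen |= wave
        waves.append(sorted(wave))
    return waves

print(migration_waves(graph))
# [['billing', 'crm', 'reporting'], ['intranet', 'wiki']]
```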

Network architecture

There could be a few applications that require fast access to internal infrastructure, telephone systems, a partner network or even a large user base located on-premise. These can rely on a complex network environment and present challenges that will need to be addressed before moving to cloud.

Alternatively, if there are applications being served to global users that require faster downloads of static content, cloud can still be the top choice, giving customers access to content from local or closer locations. Examples include the media, gaming and content delivery industries.

Business continuity plan

Business continuity and internal/external SLA with customers often drive the application migration journey to cloud for disaster recovery purposes.

Cloud is an ideal target for hosting content for disaster recovery. It provides businesses with access to certified datacentres, hybrid offerings, bandwidth and storage, all at a lower cost.

Applications can be easily tested for failover, and their hardware sizing can be customised if and when disasters occur.

Compliance requirements

There could be legal reasons why personal or sensitive information needs to remain within the enterprise’s firewalls or in on-premise datacentres.

Such requirements should be carefully analysed before making any decision on moving applications to cloud, even when they are technically ready.

IT support staff training

Undertaking a migration requires having people on hand who understand the cloud fundamentals and can support the move.

Such fundamentals include knowledge of storage, backup, building fault tolerant infrastructures, networking, security, recovery, access control and, most importantly, keeping a lid on costs.

Disaster recovery

For businesses around the world, including in Europe, building a disaster recovery solution can be expensive and difficult, and requires regular testing.

Many European cloud vendors offer services on a pay as you use basis, with built-in disaster recovery, application or datacentre failure recovery, and continuous replication of content.

Using cloud for disaster recovery could provide a significant cost reduction in terms of infrastructure hardware procurement and the maintenance of the datacentre footprint.

Organisations could also choose disaster recovery locations in the same region as the business or several thousand miles away.

To conclude, once the applications are tested on cloud, and the legal/compliance concerns are addressed, organisations can opt for rapid cloud transformation.

This allows the development team to adopt the cloud fundamentals and use the relevant tool sets to scale applications rapidly and create a more robust application experience, embracing all the power that cloud provides – not to mention the fallback option that gradual cloud migration gives enterprises.


December 14, 2015  12:58 PM

Addressing the datacentre skills gap by changing the cloud conversation

Caroline Donnelly Profile: Caroline Donnelly
Business strategy, Cloud Computing, datacentre, Skills, Training

Ahead in the Clouds recently attended a tour of IO’s modular datacentre facility in Slough, along with a handful of PhD students from University College London (UCL).

The event’s aim was to open up the datacentre to a group of people who may never have stepped inside one before, and to enlighten them about the important (and growing) role these facilities play in keeping the digital economy ticking over.

And, based on the reactions of some of the students on the tour, it’s a lesson that’s long overdue.

For example, all of them largely understood the concept of cloud computing, but seemed surprised to learn that it is a little more grounded in the on-premise world than its name may suggest.

Indeed, the idea that “cloud” has a physical footprint – in the form of an on-premise datacentre – seemed to come as news to almost all of them.

For most people working in the technology industry today, that’s either a realisation they made a very long time ago or can be simply filed away in a folder marked “things I’ve always sort of known”. But, if you’re an outsider, why would you?

The datacentre industry prides itself on creating and running facilities that, to most people, resemble nondescript office blocks, if they bother to cast their eye over them at all.

Given the sensitivity of the data these sites house, as well as the cost of the equipment inside, it’s not difficult to work out why providers aren’t keen on drawing attention to them.

At the same time, datacentre operators often talk about the challenges they face when trying to recruit staff with the right skills, particularly as the push towards converged infrastructure and the use of software-defined architectures gathers pace. 

On top of that is all the talk about how the growth in connected devices, The Internet of Things (IoT), big data and future megatrends look set to transform how the datacentre operates, as well as the role it will play in the enterprise in years to come.

The latter point is one of the reasons why IO is keen to broaden the profile of people, aside from sales prospects, who visit its site.

 “Getting people from different walks of life with different skillsets and different capabilities to comment on what we’re doing, why we’re doing it and what the future might look like is really important,” said Andrew Roughan, IO’s business development director, during a follow-up chat with AitC.

“We’ve got to listen to them and get involved with their line of thinking as that group will be tomorrow’s customers.”

Opening up the datacentre

The range of PhD students the company invited along to the IO open day included some from artsy, more creative backgrounds, while others were in the throes of complex research projects into the impact of the technology industry’s activities on the world’s finite resources.

It was a diverse group, but isn’t that what the datacentre industry is crying out for? A mix of mechanical and software engineers, business-minded folks, creatives, as well as sales and marketing types.

But, if these people don’t know the datacentre exists, thanks in no small part to the veil of secrecy the industry operates under, why would they ever think to work in one?

In this respect, IO could be on to something by opening up its facilities and holding open days, but – as previously touched upon – that’s not something all operators will be able or willing to do.

IO is in a better position than most to do so, as its customers’ IT kit is locked away in self-contained datacentre chambers that only they have access to. It’s a setup akin to a safety deposit box, and means the risk of some random passer-by on a datacentre tour tampering with the hardware is extremely low.

What might be altogether more effective is getting the entire industry to rethink how it positions the datacentre in the cloud conversation more generally, so its vital contribution is more explicitly stated.

Otherwise, there is a real risk the datacentre will continue to be overlooked by the techies and engineers that UK universities produce simply because they don’t know it’s there. 


November 26, 2015  11:50 AM

The benefits of adopting a “what if…?” approach to datacentre management

Caroline Donnelly Profile: Caroline Donnelly
datacentre, Guest Post

In this guest post, Zahl Limbuwala, CEO of datacentre optimisation software supplier Romonet, explains why IT departments should be employing a more philosophical approach when solving business issues

The question “what if…?” is often used to refer to the past. What if a few hundred votes in Florida had gone the other way in 2000? What if Christopher Columbus had travelled a little further north or south? What if Einstein had concentrated on his patent clerk career?

For the IT department, the question can equally be applied to the future, as it needs to know that the decisions it makes will have the best possible impact for the business. Yet the IT department is often under financial constraints, meaning that for every choice it faces, it needs to bear in mind both the business and budgetary impact of its actions.

Asking the right questions

This need is exemplified by the datacentre – one of the most complex and cost-intensive parts of modern IT. While any organisation will want to know how datacentre decisions will affect the business, in too many cases IT teams simply don’t know what questions they should ask in the first place.

For example, an organisation might ask what servers they need to buy in order to meet a 10-year energy reduction target. Yet this won’t tell them what to do when those servers become obsolete in three years’ time. Or what proportion of their energy use will actually be reduced by choosing more efficient servers (hint: not a huge proportion). Or whether there’s a better way to reduce energy use and costs.

Instead, the IT team should be asking “what if…” for every potential change it could make to the datacentre to shape its strategy. In the example above, the organisation might ask what the effect would be if it replaced expensive, branded energy-efficient servers with a lower-cost commoditised alternative. It might ask what happens if it removes cooling systems. It might even ask what happens if it moves a large part of its infrastructure to the cloud. Regardless, by asking the right questions the IT team will have a much clearer idea of the options available.

Getting the right answers

Once an organisation knows the questions to ask, it needs to consider how it wants them answered. A simple question about energy usage and cost could produce answers using a variety of measurements, some of which will be more useful than others.

For instance, does the IT department benefit most from knowing the Power Usage Effectiveness (PUE) of proposed datacentre changes? Or the total energy used? Or the cost of that energy? While PUE can provide some indication of efficiency, it certainly doesn’t tell the entire story.

A datacentre could have an excellent PUE and still use more energy and be more expensive than a smaller (or older) datacentre that better fits the organisation’s needs. A much better metric in most cases would be the total energy use or cost of any options, so that the organisation can see the precise, real-world impact of any changes.
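
A quick worked example, with invented figures, shows why: total facility power is roughly the IT load multiplied by PUE, so a facility with a worse PUE can still draw less power overall.

```python
# Invented figures: total facility power = IT load x PUE.
sites = {
    "large, new facility":  {"it_load_kw": 1000, "pue": 1.2},
    "smaller, right-sized": {"it_load_kw": 700,  "pue": 1.5},
}

for name, site in sites.items():
    total_kw = site["it_load_kw"] * site["pue"]
    print(f"{name}: PUE {site['pue']} -> {total_kw:.0f} kW total draw")

# large, new facility:  PUE 1.2 -> 1200 kW total draw
# smaller, right-sized: PUE 1.5 -> 1050 kW total draw
```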

Working it out

Once the organisation knows the right question, and the right way to answer it, the actual calculations might seem simple. However, there is still a large amount of misunderstanding around what influences datacentre costs. A single datacentre can produce hundreds of separate items of data every second, all of which may or may not be useful for answering IT teams’ questions.

This can make the calculation a catch-22 situation. Does the organisation consider every single possible piece of data, making calculations a time-consuming, complex process? Or does it aim to simplify the factors involved, making calculations faster but making any answer an approximation or guesstimate at best?

To solve this, IT teams need to look at how they answer questions for the rest of the business. We are increasingly seeing big data and data-driven decision making used to support business activity in all areas, from marketing to overall strategy.

IT should be able to turn these practices inwards, using the same data-driven approach to answer questions about its own strategy. For instance, there is actually a relatively small number of factors that can be used to predict datacentre costs.

Combining these with the right calculations and big data tools, IT teams can quickly and confidently predict the precise impact of any potential decision they make. Paired with the right “what if…?” questions, this lets IT departments see precisely which course of action will be best for the business, whatever its goals.


November 13, 2015  10:04 AM

How green is your datacentre?

Caroline Donnelly Profile: Caroline Donnelly
Apple, AWS, Cloud Computing, Google, Green energy, Guest Post, Renewable

In this guest post, Dominic Ward, vice president of corporate and business development at datacentre provider Verne Global, explains why the green power commitments of the tech giants may not be all that they seem.

The rise of the digital economy has a well-kept dirty secret. The movies we stream, the photos we store in the cloud and the entire digital world we live in means, on a global basis, the power used by datacentres now generates more polluting carbon than the aviation industry.

Perhaps to defuse any concerns and attention with regard to their growing use of power, tech giants like Microsoft, Apple and Google have announced plans to open datacentres supposedly run on renewably-produced electricity.

Apple, for instance, claims all of the energy used by its US operations – including its corporate offices, retail stores and datacentres – came from renewable sources, winning the consumer tech behemoth praise from environmental lobbying group Greenpeace.

The reality is, however, a little different.

If a company sources power from a solar or wind farm, what happens when night falls or the wind drops? The company will revert to power from the main electricity grid.

In the US, around 10% of power comes from renewable sources, while Iceland is the only country in the world with 100% green energy production. So how can Apple claim to be 100% green at its Cork facility, or at its soon-to-open Galway datacentre, when only about 20% of Ireland’s power grid is from renewable sources?

The answer is a little-publicised renewable market mechanism that is allowing companies from Silicon Valley and around the world to get away with a big green marketing scam: Renewable Energy Certificates (RECs).

This system and its sister scheme, the European Energy Certificate System (EECS), operate like airline carbon trading, allowing power users to buy ‘certificates’, which testify that their dollars have financed production of renewable electrons elsewhere.

In essence, if you use 1 kWh of coal or nuclear energy, you can buy a certificate to claim an equivalent 1 kWh of renewable energy.
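
Spelled out with invented numbers (not any company’s real figures), the accounting works roughly like this:

```python
# Illustrative REC arithmetic with invented numbers.
consumption_gwh = 100          # electricity actually consumed
grid_renewable_share = 0.20    # share of the local grid that is renewable

renewable_gwh = consumption_gwh * grid_renewable_share
certificates_gwh = consumption_gwh - renewable_gwh   # RECs bought to cover the gap

claimed_share = (renewable_gwh + certificates_gwh) / consumption_gwh
print(f"Physically renewable: {renewable_gwh:.0f} GWh ({grid_renewable_share:.0%}), "
      f"RECs purchased: {certificates_gwh:.0f} GWh, "
      f"claimed renewable share: {claimed_share:.0%}")
# Physically renewable: 20 GWh (20%), RECs purchased: 80 GWh, claimed renewable share: 100%
```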

This is not real renewable energy, and it does not support the claims made by large tech companies about the provenance of their power.

We have known about this smokescreen for some time, but the issue gained prominence recently when Truthout, a campaigning journalism website, called the practice “misrepresentation” and “a boldfaced lie on Apple’s part“.

The problem is becoming endemic amongst tech companies, though, with the big Silicon Valley tech giants being the worst offenders.

All the major internet firms are using this strategy, and many of them now state that their new datacentres are 100% renewable.

Given their location and disclosed sources of power, this simply is not true, save for their use of purchased certificates.

Unless a datacentre generates all of its own power from renewable sources, or sources power from a national grid that uses entirely renewable energy, enterprises and consumers will continue to underestimate the true environmental impact of their computing. Google, to its credit, has at least publicly recognised this problem.

Several firms, including Google and Apple, this summer allowed their various initiatives to be highlighted by the White House as an indication of US commitment to the upcoming United Nations Climate Change Conference in December. Their commitments to increased generation of renewable power are welcome. But, until they abandon this certification charade, these commitments will continue to appear as hollow claims.

So, what needs to happen?

1. Increased transparency on power sources

The RECs and EECS schemes currently allow tech companies and datacentre operators to hide the truth about their power cleanliness. Companies should be obliged by law to disclose the true nature of their power sources, including an explicit disclosure on the purchase of energy certificates. Only then will enterprise customers and consumers know the truth about their energy consumption from computing.

2. Upgrade the RECs and EECS schemes

The current systems are massively flawed. What began as a well-intended mechanism to promote new generation of renewable power has been poorly executed. It is time to upgrade the system to guarantee that every dollar, euro and pound spent on an energy certificate is truly invested in the installation of new renewable power generation.

3. Go green for real

Most renewable energy is not naturally suited to the tech industry: wind drops and the sun sets. Yet the technology industry needs constant energy. The easy marketing ‘win’ is to simply pay for an energy certificate rather than shift to an entirely renewable energy source.

However, as we enter an era in which the technology and datacentre industry now has a carbon footprint in excess of the airline industry, surely this is not the right attitude.

Take the time to understand the finer points of your own energy contract and where the power you are using really comes from. Is it truly ‘green’? And if you haven’t taken the time to do so before, put some research into green datacentre options. The reality is that the only way to move the tech industry to 100% true clean energy is to clean up the power grids or move the tech industry to grids that are already clean.

