The datacentre industry operates under a veil of secrecy, which could be having a detrimental impact on its ability to attract new talent, market experts fear.
When you tell people you write about datacentres for a living, the most common response you get is one of bafflement. Not many people – outside of the IT industry – know what they are, to be honest.
Depending on how much time I have (and how interested they look), I often try to explain, while making the point that datacentres are a deceptively rich and diverse source of stories that cover a whole range of subjects, sometimes beyond the realms of traditional IT reporting.
For instance, I cover everything from planning applications to M&A deals, sustainability, data protection, skills, mechanical engineering, construction, not to mention hardware, software and all the innovation that goes on there.
But, while it means there is no shortage of things to write about, it makes the datacentre industry a difficult one for outsiders to get to grips with.
Clearing the datacentre confusion
This was a point touched upon during a panel debate on skills at the Datacentre Dynamics Europe Zettastructure event in London this week, overseen by Peter Hannaford, chairman of recruitment company Datacenter People.
During the discussion – the core focus of which was on how to entice more people to work in datacentres – Hannaford said the industry is in the midst of an on-going “identity crisis” that may be contributing to its struggle to draw new folks in.
“Are we an industry or are we a sector? We’re an amalgam of construction, electrical, mechanical engineering, and IT,” he said. “That’s a bit of a problem. It’s an identity crisis.”
To emphasise this point, another member of the panel – Jenny Hogan, operations director for EMEA at colocation provider Digital Realty – said she struggles with how to define the industry she works in on questionnaires for precisely this reason.
“If you go on LinkedIn and you tick what industry you work under, there isn’t one for datacentres. There is one for flower arranging, and gymnastics, but there isn’t one for datacentres,” added Hannaford.
Meanwhile, Mariano Cunietti, CTO of ISP Enter, questioned how best to classify an IT-related industry whose biggest source of investment is from property firms.
“The question that was rising in my head was, [are datacentres] a sector of IT or is it a sector of real estate? Because if you think about who the largest investors are in datacentres, it is facilities and real estate,” he added.
While a discussion on this point is long overdue, it also serves to show the sheer variety of roles and range of opportunities that exist within the datacentre world, while emphasising the work the industry needs to do to make people aware of them.
Opening up the discussion
This is ground Ahead In the Clouds has covered before, about a year ago. Back then we made the point that – if the industry is serious about wanting more people to consider a career in datacentres – it needs to start raising awareness within the general population about what they are. And, in turn, really talk up the important role these nondescript server farms play in keeping our (increasingly) digital economy ticking over.
When you consider the size of this multi-billion dollar industry, it almost verges on the bizarre that so few people seem to know it exists.
According to current industry estimates, the datacentre industry could be responsible for gobbling up anywhere between two and five per cent of the world’s energy, putting it on par with the aviation sector in terms of power consumption.
The difference is it’s not uncommon to hear kids say they want to be a pilot who flies jet planes when they get older, but I think you’d be hard pushed to find a single one who dreams about becoming a datacentre engineer.
At least, not right now, but that’s not to say they won’t in future.
One of the really positive things that came across during the aforementioned panel debate was the understanding within the room that this needs addressing, and the apparent appetite among those there to do something about it.
And, as the session drew to a close, there were already discussions going on within the room about coordinating an industry-wide effort to raise awareness of the datacentre sector within schools and universities, which would definitely be a step in the right direction.
Because, as Ajay Sharman, a regional ambassador lead at tech careers service Stem Learning, eloquently put it during the debate, this is an industry where there are plenty of jobs and the pay ain’t bad, but it is up to the people working in it to make schools, colleges and universities aware of that.
“We are not telling the people who are guiding engineering students through university about our industry enough, because when you talk to academics, they don’t know anything about datacentres,” he said.
“We need to do that much more at all the universities in the UK and Europe, to promote the datacentre as a career path for engineers coming through, because there are lots of jobs there and it pays well. So why wouldn’t you steer your students into that?” Well, quite.
Alan Crawford, CIO of City & Guilds, is taking some time out of leading the cloud charge at the vocation training charity to join the thousands of IT workers taking part in Action for Children’s annual charity sleep out event, Byte Night, on Friday 7 October.
In this guest post, the former Action for Children IT director shares his past Byte Night experiences, and explains why he continues to take part year after year.
When I joined Action for Children as IT director in 2013 I knew Byte Night was an event every major IT company got involved with, and I considered it part of my job description to sleep out too.
On my first Byte Night, we heard from a teenager whose relationship with his mother had broken down to such an extent, he ended up spending part of his final A-Level year sofa surfing with friends. But when they were unable to give him somewhere to stay, he began sleeping in barns and public toilets.
It was at this point that, thanks to the intervention of his school, an Action for Children support worker stepped in.
By the time the October 2013 sleep out rolled round, the young man had shelter, was rebuilding the relationship with his family, had passed his A-Levels and started at university. Just thinking about his story gives me goose bumps.
Unfortunately, 80,000 young people each year find themselves homeless in the UK, and it is because of this I’ve agreed to sleep out again on Friday.
Byte Night: What’s in store?
Every Byte Night follows a similar pattern. Participants are treated to a hot meal, and take part in a quiz (or some other fun activities), which are often overseen and supported by a range of celebrities.
For some participants, who include CIOs, IT directors and suppliers, the evening also provides an opportunity to network and swap details, with a view to doing business together at a later date.
In the case of the London sleep out, all this takes place at the offices of global law firm Norton Rose Fulbright, and there will be more than 1,700 people taking part in the event at 10 locations across the UK this year. At the time of writing, the 2016 cohort are on course to raise more than £1m for Action for Children.
Regardless of where the sleep out is taking place, at 11pm we all head out with our sleeping bags under our arms, ready to spend the night under the stars.
While that may sound a tad whimsical and romantic, the fact is sleep will come in fits and starts, and by daybreak we will all be cold and tired. But, as I trudge up Tooley Street on my way home, my heart will be warmed by memories of the night’s camaraderie and the feeling I’ve spent the evening doing something good and worthwhile.
While Byte Night may only be a few days away, there is still time to get involved and support the cause by agreeing to take part in a sleep out local to you, or by sponsoring someone already taking part.
Thank you for reading, on behalf of Byte Night, Action for Children and the vulnerable young people at risk of homelessness in the UK.
In this guest post, Morten Brøgger, CEO of cloud-based collaboration firm Huddle, explains why enterprises need to look beyond price to get a good deal on cloud.
More than $1 trillion in IT spending will be directly or indirectly affected by the shift to cloud over the next five years, Gartner research suggests, yet getting a decent return on investment for their off-premise endeavours remains difficult for many enterprises.
And, while cloud spending rises, Gartner found the pace of end-user adoption is much slower than anticipated. But what’s causing this gulf between investment and adoption?
Confusion around cloud
A recent Huddle survey found end-users actively avoid company cloud collaboration and storage services, finding them restrictive and unintuitive.
For example, SharePoint is used by 70% of firms in the accountancy sector, but only 35% of them use it for collaboration purposes. The majority (75%) rely on email, USB drives and consumer cloud services instead.
The lack of suitable cloud services to support collaboration also hinders the promise of enterprise mobility. Mobile workers should, theoretically, be able to work on-the-go with minimal difference to their working processes.
This means access to documents, the ability to perform basic tasks, and regular communication with team members should be barely affected, regardless of device. While many workers do now use their smartphones and tablets for emails (73%) as well as access (34%), share (32%) and approve documents (25%), the Huddle survey suggests productivity has either been hampered or stopped entirely for these end-users.
With dedicated cloud services being avoided, this raises issues around security, productivity and client experience, which may have huge repercussions on the efficiency of both the business and client servicing.
As enterprises continue to look to cloud, how can companies ensure the gulf between investment and adoption is minimised and ROI is being delivered?
Value is more important than price
After years of SaaS vendors pricing for growth, the market is accustomed to the idea that cloud-based software will always be cheaper than on-premise, and suppliers should be assessed on price alone.
However, it’s important for companies to rid themselves of these preconceptions. For example, multi-national vendors who offer both on-premise and cloud-based products can price them in a way that only works within a certain deployment size, or takes up to five years for the total cost of ownership to drop to that level.
The ability to recognise the value these services can offer your company is also critical. Are you looking to save on infrastructure costs? Are you planning to ramp up operations in the coming months, and need a solution that offers greater scalability? Is the cost of on-site support and maintenance your biggest headache? Perhaps it’s none of these.
Regardless of the reason behind the cloud investment, companies must factor in the real value that it offers the business and not just the price tag.
Adoption is now a metric for ROI
The metrics for measuring ROI now extend beyond simple infrastructure savings. The cheapest vendor might deliver some up-front savings, but what happens to your ROI six months down the line when user adoption is at just 10%?
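The adoption point can be made concrete with a back-of-envelope calculation. The figures below are purely illustrative, not taken from the Huddle survey: dividing spend by the users who actually use the service shows how a "cheap" vendor with poor adoption can end up costing far more per productive seat.

```python
# Hypothetical figures for illustration: comparing two vendors on
# headline price alone vs. price adjusted for user adoption.

def cost_per_active_user(annual_licence_cost, seats, adoption_rate):
    """Effective annual cost per user who actually uses the service."""
    active_users = seats * adoption_rate
    if active_users == 0:
        raise ValueError("no active users - all spend is waste")
    return annual_licence_cost / active_users

# Vendor A: cheaper overall, but only 10% of staff use the service.
vendor_a = cost_per_active_user(annual_licence_cost=50_000, seats=1_000, adoption_rate=0.10)

# Vendor B: pricier, but 80% adoption.
vendor_b = cost_per_active_user(annual_licence_cost=120_000, seats=1_000, adoption_rate=0.80)

print(f"Vendor A: £{vendor_a:.0f} per active user")  # £500
print(f"Vendor B: £{vendor_b:.0f} per active user")  # £150
```

On these assumed numbers, the vendor with the bigger price tag delivers more than three times the value per active user.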
Companies must learn not to just throw technology at an existing problem. For example, if a cloud service is chosen to help teams share and store information, it must do just this. The service needs to actively support the user by being easy-to-use, accessible on all devices and make work processes like approvals simpler. At the same time, the technology must be secure and transparent.
Don’t forget the SLA
Cloud providers often shy away from talking about SLAs, favouring instead to publish generic T&Cs online. These tend to be buried deep on their site, making the deliverables easy to forget during a negotiation.
However, enterprise companies must be bold and ensure the solution meets both the technical and operational requirements. Typical SLA components should include service availability, maintenance periods, and support.
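When assessing the service availability component of an SLA, it helps to translate a provider's headline "nines" into the downtime they actually permit. A minimal sketch (using an average 730-hour month as an assumption):

```python
# Rough sketch: converting an SLA availability percentage into the
# downtime it permits per month. The 730-hour month is an average
# (8,760 hours per year / 12), used here as a simplifying assumption.

HOURS_PER_MONTH = 730

def allowed_downtime_minutes(availability_pct, hours=HOURS_PER_MONTH):
    """Minutes of permitted downtime per period at a given availability."""
    return hours * 60 * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.1f} min/month")
```

The gap is striking: 99% availability allows more than seven hours of downtime a month, while 99.99% allows under five minutes, which is exactly the kind of deliverable worth pinning down in negotiation rather than leaving buried in generic T&Cs.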
With more than $1 trillion in IT spending on the line, the shift to cloud must deliver an appropriate ROI for the enterprise. Businesses must now factor in usability, access and education to drive end-user adoption, and choose to deploy only cloud services that can add value to both the business and its workers.
VMworld US 2016 suggests VMware is under no illusions about the challenges it faces, as its traditional customers cede control for IT buying to line of business units and start exploring their multi-cloud options.
“There’s no such thing as challenges, only opportunities,” seems to be the mantra of VMware’s senior management team, based on Ahead in the Clouds’ (AitC) trip to VMworld US in Las Vegas this week.
Over the course of the week-long VMware partner and customer event, its execs adopted an almost Pollyanna-style approach to discussing the issues the company is facing, on the back of cloud computing changing the way enterprise IT is consumed, purchased and operated.
As a direct result of this trend, enterprises are becoming increasingly inclined to shutter the datacentres whose performance VMware has spent the past decade helping them optimise.
While some might consider this a challenge for a company that has previously sold a heck of a lot of server virtualisation software to the enterprise, VMware thinks differently.
After all, the workloads that previously whirred away in these private datacentres will need to run somewhere, which opens up sales opportunities for VMware’s network of service provider partners.
As more enterprises go down this path, this will presumably drive demand within its pool of service provider partners for more datacentre capacity, as well as for better-performing and more efficient facilities.
This, it is hoped, should serve to offset any downturn in enterprise sales of VMware’s software, as service providers snap up its wares to kit out their new facilities.
Line of business cloud
Another challenge is the shift in IT buying power that cloud has caused within the enterprise, which has seen line of business units dip into their own budgets to procure services without the involvement of the IT department.
For a company whose products have traditionally been purchased by CIOs and championed by IT departments, you might be forgiven for thinking this sounds like an awful development. But you would be wrong. It’s great, according to VMware.
While the marketing, HR and finance department might not need IT’s help with procuring cloud, they will almost certainly turn to them for support in addressing their security, compliance and uptime requirements, the company’s CEO Pat Gelsinger assured attendees during the opening day of VMworld 2016.
In turn, every line of business unit will come to realise they need the IT department (probably when something they’ve whacked on the company credit card goes kaput), paving the way for IT and the wider business to become more closely aligned. Or so the theory goes.
Even the prospect of VMware customers opting to run workloads on Amazon Web Services (AWS), Microsoft Azure or Google Cloud is something to be cheered, rather than feared, claims VMware.
While it would love customers to run their enterprises exclusively from a VMware software-laced datacentre, private cloud or public cloud, the reality of the situation is that enterprises are increasingly taking a multi-supplier approach to IT procurement.
Part of this is being driven by line of business units assuming responsibility for IT purchases, which can lead to a patchwork of cloud services being used within the walls of some companies, says Mark Chuang, senior director of VMware’s software-defined datacentre (SDDC) division.
“People can set up on that path, saying ‘I’m going to have a multi-vendor strategy and optimise for whatever is the best deal at the time,’ and by deal I mean performance, SLAs and coupled with the cost and all that. Other times it could be because of shadow IT, or because an acquisition has taken place,” he tells AitC.
VMware, he argues, is perfectly positioned to help enterprises manage their multi-cloud environments, because – after years of taking care of their datacentre investments – CIOs trust it to fulfil a similar role in the public cloud.
To this end, the company announced the preview release of its Cross-Cloud Services offering, which Computer Weekly has dug a little deeper into here, at VMworld US.
Crossing the cloud divide
CCS is effectively a central, online control hub that will allow enterprises to concurrently manage their AWS, Microsoft Azure and Google Cloud Platform deployments.
The setup relies on the public cloud giants’ open APIs and public interfaces to work, and – the impression AitC gets is – that’s as close as the collaboration between VMware and the big three gets on this. At least for now.
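The general pattern of a hub that normalises several clouds' public APIs behind one interface can be sketched roughly as follows. To be clear, the adapter classes, method names and return values below are entirely hypothetical illustrations of the pattern, not VMware's actual Cross-Cloud Services interfaces:

```python
# Illustrative sketch of a multi-cloud control hub: each provider's
# public API is wrapped behind one common interface so deployments
# across clouds can be inventoried together. All names here are
# invented for illustration; real adapters would call the providers'
# actual REST APIs.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Minimal common interface a multi-cloud hub might expose."""

    @abstractmethod
    def list_instances(self):
        """Return running instances as (name, region) tuples."""

class AWSAdapter(CloudProvider):
    def list_instances(self):
        return [("web-1", "eu-west-1")]   # stand-in for an EC2 API call

class AzureAdapter(CloudProvider):
    def list_instances(self):
        return [("db-1", "westeurope")]   # stand-in for an Azure API call

def inventory(providers):
    """Aggregate instances across every connected cloud."""
    return {type(p).__name__: p.list_instances() for p in providers}

print(inventory([AWSAdapter(), AzureAdapter()]))
```

The design point is that the hub owns only the thin abstraction layer; each cloud keeps its native API underneath, which is why such a service can work without formal agreements between the vendors.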
While the show saw VMware announce an extension of its on-going public cloud partnership with IBM, it appears we’re a little way off VMware embarking on formal agreements of a similar ilk with AWS, Microsoft and Google.
Making multi-cloud work
That’s not to say it will not happen, and – as outlined in Computer Weekly’s deep-dive into VMware’s Cross-Cloud Services vision – there are plenty of reasons why it would make sense for Amazon and Google, specifically, too.
Both companies are looking to drive up their share of the enterprise market, so aligning with VMware (given the size of its footprint in most large firms’ datacentres) wouldn’t be a bad move.
Such collaboration, geared towards making the process of managing multiple, complex cloud deployments easier, could also give IT departments the confidence to run more of their workloads in off-premise environments, which is only ever going to be good news for all concerned.
Obviously, this would require all parties to set aside their competitive differences to work, which is the biggest stumbling block of all. But, given the multi-supplier approach companies appear to be increasingly taking in the cloud, they might need to swallow their pride and just get stuck in.
In this guest post, Jessica Figueras, chief analyst at market watcher Kable, mulls over the role of the Government Digital Service (GDS) in the next wave of public sector cloud adoption.
The Government Digital Service (GDS) has been the subject of numerous Computer Weekly stories of late, as Whitehall sources claim some senior civil servants want to break it up and return IT to its previous departmental model.
For GDS, power struggles with other parts of Whitehall are a feature of normal operation. Given its mission to wage war on the old Whitehall CIOs and their big suppliers, it’s hardly surprising its demise has been predicted every year since its foundation.
But government’s take on digital has reached a turning point, and it looks likely the balance is tilting back to Whitehall departments.
I’d like to argue the shift we’re witnessing isn’t about personalities but technology, and a maturing market where government departments must take the lead in developing new skills and capabilities.
Government cloud: The first generation
Cloud has been one of the bright spots in an otherwise static public sector IT market in recent times. Kable’s data indicates public sector spend on cloud services (IaaS, PaaS and SaaS) has been growing at 45% per year.
GDS deserves credit here. It didn’t invent cloud, but it put off-premise IT services on the public sector IT road map, thanks in part to the introduction of the UK government’s ‘Cloud First’ policy in 2013.
Cloud First was greeted with enthusiasm by new digital teams who were committed to bringing agile working practices to Whitehall for the first time, and found commodity cloud services quicker to procure and easier than their own IT departments to work with. And the first generation of digital services were well suited: greenfield, handling little sensitive data, and with light back-end integration requirements.
GDS nurtured and grew G-Cloud, which has been an excellent vehicle for early cloud projects: not just commodity IaaS, but also PaaS, and SaaS tools supporting new collaborative ways of working.
It offers buyers a simple and quick route to access a wide range of suppliers, especially SMEs who were often shut out of the public sector market. On average, the public sector is now spending over £50m per month via G-Cloud (although some 80% of that has been invested in professional services).
Reaching a plateau
This all raises the question: if public sector cloud use is growing, and digital teams are getting what they need, why is GDS coming under fire?
A Kable analysis of G-Cloud sales data, supported by anecdotal evidence from suppliers, suggests sales growth through the procurement framework has reached a plateau, as many of the quick-win cloud projects that are typically funnelled through G-Cloud have been done.
The next wave of cloud activity – call it public sector cloud 2.0 – is not about G-Cloud or even Cloud First. It’s being driven by the digital transformation of existing services, rather than just new digital projects.
Opportunities will open up as legacy IT outsourcing contracts come to an end, and organisations look to shift workloads to the cloud.
Make no mistake, this is where the big wins are. Cloud progress from here on in will dwarf anything achieved in the last few years. But this is complex work.
Next-generation public sector cloud
As my Kable colleague Gary Barnett has argued, people sometimes talk about cloud as if it has magical, unicorn-like properties. The nitty gritty reality of cloud migration is ignored, and buyer enthusiasm is not always backed up by expertise.
We see evidence of this in digital services not being built for cloud, crashing under completely predictable demand spikes: the digital car tax and voter registration systems are good examples of this.
Another factor of note, interesting in the wake of the Brexit vote, is that the UK government’s stated preference for public cloud is very much out of step with its European counterparts, who are largely investing in secure, government-only private clouds.
Another instance of plucky British exceptionalism, or a sign that Cloud First is fundamentally misguided? I’d like to suggest, diplomatically, that hybrid cloud is the destination to which most departments are headed.
Making a success of cloud 2.0 is not just about swapping out expensive hardware for cheap public cloud infrastructure, and it’s not about G-Cloud either. It’s not even about trucking old servers down the M4 to the Crown Hosting datacentre.
It’s about enterprise architecture; data quality projects; new development and deployment processes; and governance models, security policies and service management. In short, it’s about hybrid cloud and orchestration.
GDS: Where does the buck stop?
It is not, however, in GDS’ gift to deliver this kind of complex departmental transformation.
Perversely, too much GDS involvement in departmental digital projects can actually reduce accountability when something goes wrong.
For instance, GDS played an important part in the fateful technology choices made for the Rural Payments Agency’s (RPA) digital service, resulting in senior RPA and GDS staff blaming each other for a major project failure that cost taxpayers dearly and caused misery for farmers.
Responsibility – and accountability – should be assumed by the party with the most skin in the game. This is not to say that GDS has no role to play, simply that it would benefit from handing the reins back to departments in some areas, and allowing them more say over shared infrastructure that affects their client groups, such as Government-as-a-Platform.
Much has been said about GDS stopping departments making bad decisions. But in the more complex cloud 2.0 world, there’s an increasing chance GDS will inadvertently stop them making good decisions that are appropriate to their unique circumstances.
Where next for GDS in a cloud 2.0 world?
Arguably, the need for high-profile interventions and vetoes is decreasing anyway, partly thanks to GDS’ own efforts to increase departmental capability and to establish common standards.
To my mind, GDS deserves huge credit for the creation of a new digital movement – culture, values and ways of working – which has influenced practice beyond Whitehall and even beyond the UK.
The Digital-by-default Service Standard; the Service Design Manual; the Digital, Data and Technology Profession which has replaced the moribund Civil Service IT profession.
The leaders networks, the blogs, conferences and culture of openness. Hiring new senior people and digital teams into departments. These are GDS’ real levers of power.
GDS’ proudest achievement is not the vanquishing of its political enemies, but its winning of hearts and minds. It’s by the continuing projection of that soft power that it can best support the whole of government to move forward with cloud.
The cloud industry has felt the wrath of Rupert Murdoch’s legal team more than most in recent years, as the broadcaster has taken against several firms for daring to use the word “Sky” in their branding.
Microsoft encountered the company’s commitment to preserving the use of the word “Sky” for its own business ventures back in August 2013, when it was made to change the name of its SkyDrive online storage service to OneDrive following a legal challenge.
Voice over IP messaging service Skype was subjected to something similar after taking steps to register its name with the Office for Harmonisation in the Internal Market (OHIM) in 2004, paving the way for a decade-long legal wrangle.
A European Union ruling in May 2015 concluded there was a risk consumers may confuse the two brands and their respective offerings, prompting Microsoft – which acquired Skype in May 2011 – to announce plans to appeal.
Microsoft has continued to operate the service under the Skype name since then and has faced no direct pressure to rebrand it. Meanwhile, as far as we know, Sky has not (yet) pursued an infringement claim against the company for doing so.
Sky’s earlier success with getting Microsoft to change the name of SkyDrive has been implicated in the firm’s latest trademark dispute involving public sector-focused provider Skyscape Cloud Services.
At least that was the name of the company until today, when Skyscape officially rebranded itself as UK Cloud, bringing an end to its long-running dispute with Sky, who claimed the name infringed on its trademarks.
While the pair are said to have been embroiled in a letter writing campaign for the past two years with regard to the name, Sky stopped short of launching legal proceedings against Skyscape.
Concerned about the uncertainty surrounding its right to use the name long-term, Skyscape sought to secure a declaration of non-infringement from Sky earlier this year by embarking on a legal challenge of its own.
The case was dismissed, and – as a result – Skyscape has decided to rebrand, rather than appeal or pursue any further claim to the name.
In a statement to Ahead in the Clouds, Simon Hansford, CEO of the company formerly known as Skyscape, said the rebrand was its way of drawing a line under the matter.
“With the High Court’s decision, we felt the logical way for us to move forward and continue delivering exceptional assured cloud services to our customers would be to rebrand as UK Cloud,” he said.
“As a company, we decided to focus our time and money on creating a brand that showcased our unequivocal focus on the UK public sector and reaffirms our commitment to the market, rather than tying ourselves up in endless and costly legal proceedings.”
According to the company’s most recent set of results, for the financial year ending 31 March 2016, its services are used in around 200 active public sector cloud projects, while its revenue has risen from £3.7m to £32.1m over the past two years.
Public sector vs. general public perception
What’s interesting about the Skyscape case is, unlike the Microsoft SkyDrive and Skype debacles, the company involved is not a consumer-facing brand and its services are not marketed to the general public. Skyscape only sells to the public sector.
So, while the brand and its services might be well-known within government procurement circles, awareness of it within the general population is likely to be far lower. And, so too, one might argue is the risk of consumers confusing the two brands, which is the crux of Sky’s past issues with SkyDrive and Skype.
Hansford even raised this point with the High Court, to little avail, stating: “Given the ways we promote our services and the procurement frameworks through which we contract, I do not believe the general public is ever likely to become aware of our business or the services we offer; they simply are not relevant to consumers.”
While the rationale behind the rebrand does make sense, the circumstances surrounding it seem a tad unfair, particularly when Skyscape has spent so much time building up its brand, as well as working to drive up the use of cloud services within the wider UK public sector.
In this guest post, Dr. Peter Agel, global segment leader for hotels at software giant Oracle, explains how pay-as-you-go computing is helping the hospitality sector improve profitability and respond to changing consumer demands.
Like every consumer-facing business today, the hospitality industry is confronted with unprecedented – and ever-increasing – demands from customers. To stand out from the competition, hoteliers at every price point are striving to provide a smoother, friendlier, and more personalised guest experience.
To do that successfully, organisations need to identify, adopt, and integrate new technological capabilities as soon as they become available. Otherwise, each new development becomes a negative differentiator – something you can’t do but the other guy can.
The CIO should lead this process of ongoing innovation, but assuming this role within the hospitality sector has typically been a challenge, because of the industry’s fragmented approach to IT.
Typically, each individual site has its own IT team running its own servers, carrying out maintenance and software upgrades, and collecting its own data.
In companies where a centralised system exists, hotel staff can and do bypass it in the interest of, say, getting a guest checked-in quickly. If there is no record of a certain guest in the hotel’s database, rather than waste time checking the central data repository, a busy desk clerk may simply create a new record on the spot.
In such an environment, it is extremely difficult to upgrade system capabilities without adding a lot of bolted-on point solutions, which in turn makes the system even more difficult to maintain and scale up.
Opportunity for innovation
Cloud offers CIOs an opportunity to leapfrog over these structural difficulties by moving, organisation-wide, to a simplified environment that is secure, stays current, and can scale rapidly. Through its cost efficiencies, it can also enable them to develop and add their own proprietary innovations.
By replacing the traditional, decentralised hotel-chain IT structure with a centralised, easily maintained and upgraded system, CIOs are afforded the opportunity to go from being the person who keeps things running to someone who works hand-in-hand with the organisation’s business stakeholders to drive innovation.
The case for innovation in this industry is not hard to make. The entire travel sector has been feeling the aftereffects of the economic crisis, while online travel agencies sought to disrupt hotel chains in terms of distribution cost and customer relationship management.
Hoteliers have responded by attempting to build a direct relationship with their customers, while seeking out opportunities to merchandise and upsell.
Creating these relationships, however, has yielded challenges of its own. Hoteliers are inundated with bits and pieces of information, which collectively promise to unlock vital insights into their business but lie scattered and inaccessible throughout their operations.
Bringing it all together requires a digital technology management system or repository, where the data can be tapped quickly and easily by any qualified user and shared enterprise-wide in an instant.
Greater security and data integrity
Moving from a local IT model to the cloud is a major change, though, and there is an understandable hesitation in the industry about making (what can appear to be) such a radical step. While some hotel companies are aware of the advantages of the cloud and are working toward making the transition, a greater contingent is held back by concerns about system security and data integrity. A centralised function that is off-site seems simpler, but triggers key concerns: What if I lose power? What if I lose my data?
Though seemingly counter-intuitive, it can be argued these problems are less likely to occur in a cloud-based system. By its very nature, data processing in the cloud is distributed across a large network of servers, which means it is less likely that there is a single point of failure.
This redundancy, coupled with 24/7 global support for the systems, enables major suppliers of cloud-based services to run at higher uptime than many local IT systems.
As for data integrity, a cloud-based system could handle all customer records in one place, automatically merging and updating them virtually in real time – providing a more robust customer database than decentralised systems.
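To make that merging concrete, here is a rough sketch of how a centralised system might fold duplicate guest records into a single canonical profile. The field names (`email`, `stays`, `preferences`) and matching-by-email rule are illustrative assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class GuestRecord:
    # Hypothetical fields for illustration only
    email: str
    name: str
    stays: int = 0
    preferences: set = field(default_factory=set)


def merge_records(records):
    """Fold duplicate guest records (keyed by normalised email) into one profile each."""
    merged = {}
    for rec in records:
        key = rec.email.strip().lower()
        if key not in merged:
            merged[key] = GuestRecord(key, rec.name)
        canonical = merged[key]
        canonical.stays += rec.stays
        canonical.preferences |= rec.preferences
    return merged


# Two records created at different front desks collapse into one profile
dupes = [
    GuestRecord("Ann@example.com", "Ann Lee", 2, {"late checkout"}),
    GuestRecord("ann@example.com", "Ann Lee", 1, {"high floor"}),
]
profiles = merge_records(dupes)
```

Running this continuously against a single central store is what lets the cloud model absorb the ad-hoc records a busy desk clerk creates, rather than letting them accumulate as orphans.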
To protect this asset, major cloud service providers offer encryption, virus scans and whitelist support, and these protective systems are continually maintained and upgraded. The same cannot always be said for local IT operations.
Arguably, the greatest benefit of cloud is that it takes responsibility for IT away from local hotel managers, allowing them to devote their attention to guests. Moreover, by creating a central customer database, the cloud enables hotel managers to provide even better, more personalised care as well as offer ancillary products and services.
On the enterprise level, the cloud makes it possible to stay abreast of business and technological developments without having to make unexpected investments in IT infrastructure.
Along with these reduced implementation and ongoing resource costs come automatic and universal software upgrades. Other benefits include increased power, storage capacity and performance, and a drastically shortened deployment process.
With the tumultuous challenges facing the industry today, it’s no wonder hoteliers are kept awake at night with weighty questions: How can we broaden our array of services or enhance interactions with customers? How can perishable inventory – a room that stands empty for a night is revenue lost forever – be better distributed? How can work efficiency be increased? Fortunately, some answers can be found in the cloud.
In this guest post, Frank Denneman, Chief Technologist at PernixData, advises operators to apply Rubik’s Cube-solving strategies to datacentre management problems.
When attempting to solve a Rubik’s cube most people pick a colour and complete one face of the cube before moving on to the next. While this approach is fun, it is ultimately doomed to fail, because addressing the needs of one side of the cube causes the remaining five to be thrown into chaos.
The components of a virtual datacentre are similarly inter-twined, as isolated changes in one part of the IT infrastructure can have massive implications elsewhere. For example, a network change might result in bad SAN behaviour. Or, the introduction of a new virtual machine might impact other physical and virtual workloads, residing on a shared storage array.
The shared components of the virtual datacentre are fundamental to its success, as they allow operating systems and applications to be decoupled from physical hardware. This allows workloads to move around dynamically, so they can be paired with the available resources.
This provides better ROI and arguably benefits the quest for business continuity. But it’s this dynamism that makes it so difficult to solve the problems when they arise.
Tackling datacentre management problems
Due to the sheer complexity of today’s datacentre, troubleshooting is typically done per layer. This is an interesting challenge in the world of virtual datacentres, where more virtual machines and workloads are introduced daily, with varying behaviour, activity and intensity.
While context is critical for virtual machine troubleshooting, it is very hard to attain because the hypervisor absorbs useful information from the layers below it.
Furthermore, applications running on top of the hypervisor are very dynamic, which makes traditional monitoring and troubleshooting methods inadequate. You need to take a step back and ask, “Are my current monitoring and management tools providing an answer to a single side of the cube, or are they providing enough perspective to solve the whole puzzle?”
The only way to solve all aspects of a datacentre management problem is to use big data analytics, which has been changing the way other industries operate for years.
Wal-Mart and Target, for example, are able to correlate many data points to accurately predict customer behaviour. Similarly, bridges are equipped with sensors and big data analytics to identify changes in heat signatures, vibrations and structural stress to prevent mechanical and structural failures. With this in mind, IT should use the power of big data analytics to improve results in their own datacentres.
Applying big data analytics inside the hypervisor taps into the vast collection of data present, with insight into application, storage and other infrastructure elements. You can create a context-rich environment that provides an in-depth understanding of the layers above and below the hypervisor.
For example, you can get unprecedented insight into workloads generated by virtual machines, and how they impact lower level infrastructure, like storage arrays. You can discriminate workloads from one another, and understand how the infrastructure reacts to these workloads.
This, in turn, helps developers optimise their code to the infrastructure, which then lets infrastructure teams optimise their systems as needed.
With big data analytics inside the hypervisor, everyone wins. You can view your datacentre in a holistic fashion, instead of solving individual problems one at a time.
Cloud contracts are notorious for being weighted in favour of providers but, for an industry still grappling with how best to win the trust of users, it’s a risky way to do business, argues Caroline Donnelly
Whenever news breaks about a cloud company going out of business or announcing shock price hikes, the first thought that usually crosses its customers’ minds is, “what are my rights?”
Having covered the demise of a few high-profile cloud firms over the years, experience has taught Ahead in the Clouds (AitC) that if people only ask this question once their provider runs into trouble (or does something they are not happy with) it’s probably too late.
Ideally, customers should establish where they stand – should the company decide to up its prices, terminate a service they rely on, or carry out some other dastardly deed – well before they sign on the dotted line.
Experience also tells us that, in the rush to get up and running in the cloud, not everyone does. In fact, AitC would wager, when faced with pages and pages of small print, written in deathly-dull legal speak, very few actually do.
So, one might argue, when something goes wrong, the customer has no-one to blame but themselves if the terms and conditions (T&Cs) give the provider the right to do whatever the heck they like with very little notice or regard for the impact these actions may have on users.
But is that right, and should the cloud provider community be doing more to ensure their T&Cs are fairly weighted in favour of users, and are not riddled with clauses designed to trip them up?
In AitC’s view, that’s a no-brainer. End-users aren’t as fearful as they once were about entrusting their data to the cloud, but if providers are not willing to play fair, all the good work that’s gone into getting to this point could be quickly undone.
And it’s not just AitC that feels this way, because the behaviour of the cloud provider community has emerged as a top concern for consumer rights groups and regulators of late – and rightly so.
Held to account
The Competition and Markets Authority’s (CMA) 218-page Consumer Law Compliance Review, published in late May 2016, raised red flags about five dubious behaviours it claims cloud storage companies have a habit of indulging in that risk derailing the public’s trust in off-premise services.
And, while the CMA’s review set out to examine whether the way online storage firms behave could be considered at odds with consumer law, a lot of what it covers could be easily applied to any type of cloud service provider and how it operates.
Examples of bad behaviour outlined in the report include failing to notify end-users about their automatic contract renewal procedures, which could result in them getting unexpectedly locked in for another year of service or hit with surprise charges.
Remote device management company LogMeIn’s activities in this area have come under close scrutiny from Computer Weekly, with customers accusing the firm of failing to tell them – in advance of their renewal date – that the price they pay for its services was set to rise.
LogMeIn denies the allegations, claiming customers are notified via email and through in-product messaging when they log in to the company’s control panel, even though its T&Cs suggest it is under no legal obligation to do so.
Other areas of concern raised by the report include T&Cs that allow cloud firms to terminate a service at short notice and without offering users compensation for any inconvenience this may cause.
Microsoft’s decision in November 2015 to drop its long-standing unlimited cloud storage offer for OneDrive customers, after users (unsurprisingly) abused its generosity, would fall under this category.
The 2013 demise of cloud storage firm Nirvanix also springs to mind here, when users were given just two weeks to shift their data off its servers or risk losing it forever after the company filed for bankruptcy.
The borderless nature of the cloud often works against users intent on seeking some form of legal redress in some of these scenarios, as the provider’s behaviour might be permissible in their own country, but not in the jurisdiction where the customer resides.
The costs involved with trying to pursue something like this through the courts may vastly outweigh any benefit the customer hopes to get out of doing so, anyway.
In cloud we trust
It’s certainly a step in the right direction, and here’s hoping similar initiatives, incorporating a wider range of suppliers spanning cloud software and infrastructure start to emerge as time goes on. Because if customers can’t trust a provider to put their interests first, why should they assume they’ll treat their data any differently?
In this guest post, James Bailey, director of datacentre hardware provider Hyperscale IT, busts some enterprise-held myths about the Open Compute Project
Market watcher Gartner predicts the overall public cloud market will grow by 16.5% to be worth $203.9bn by the end of 2016.
This uptick in demand for off-premise services will put pressure on service providers’ hardware infrastructure costs at a time when many of the major cloud players are embroiled in a race to the bottom in pricing terms, meaning innovation is key.
On the back of this, The Open Compute Project (OCP) is slowly (but surely) gaining traction.
Now in its fifth year, the initiative is designed to facilitate the sharing of industry know-how and best practice between hardware vendors and users so that the infrastructures they design and produce are efficient to run and equipped to cope with 21st century data demands.
Over time, a comprehensive portfolio of products has been created with the help of OCP. For the uninitiated, these offerings may appear to only suit the needs of an elite club of hyperscalers, but could they have a role to play in your average enterprise’s infrastructure setup?
To answer this question, it is time to bust a few myths around OCP.
Myth 1: Datacentre efficiency is all that matters to OCP
This is largely true. After all, the mission statement of OCP founder, Facebook, was to create the most efficient datacentre infrastructure, combined with the lowest operational costs. The project encompasses everything from servers, networking and storage to datacentre design.
The server design is primarily geared around space and power savings. For example, many of the servers can be run at temperatures exceeding 40C, well above the industry norm, resulting in lower cooling costs.
This efficiency adds up to an important cost saving and a smaller carbon footprint. When Facebook published the initial OCP designs back in 2011, they were already 38% more energy-efficient and 24% less expensive to run than the company’s previous setup.
Myth 2: Limited warranty
Most OCP original design manufacturers (ODMs) offer a three-year return-to-base warranty, with upfront parts, as standard. This can often be better than what is offered by other OEM hardware vendors today.
The warranty options do not stop there. Given the quantities most customers purchase, vendors are open to creating bespoke support and SLAs.
In recent times, some of the more mainstream players have got in on the action. Back in April 2014, HPE announced a joint venture with Foxconn, resulting in HPE Cloudline servers aimed specifically at service providers.
Myth 3: Erratic hardware specifications
Whilst specifications do indeed evolve, the changes are not taken lightly. Any specification change is submitted to the OCP body for scrutiny and acceptance.
The reality of buying into the OCP ecosystem is that you are protecting yourself from vendor lock-in. Many manufacturers build the same interchangeable systems from the same blueprints, thus giving you a good negotiation platform.
That said, there is a splintering of design. A clear example is the difference in available rack sizes.
The original 12-volt OCP racks are 21-inches but – more recently – ‘OCP-inspired’ servers have emerged that fit into a standard 19-inch space.
Overall, this is positive as you can integrate OCP-inspired machines into your existing racks, which has created a good transition path for datacentre operators looking to kit out their sites exclusively with OCP hardware.
Google’s first submission to the community is for a 48V rack which would create a third option. But surely this is all healthy?
Google estimates this could cut energy losses by over 30% compared with the current 12V offering, and who would not want that? There are also enough ODMs to ensure older designs will not disappear overnight.
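The physics behind the higher-voltage rack is worth a back-of-envelope illustration. Resistive losses on a distribution bus scale with the square of the current, so quadrupling the voltage (12V to 48V) for the same load cuts the current to a quarter and the I²R losses sixteen-fold. The load and resistance figures below are invented for illustration, and Google’s 30% figure covers the whole power-conversion chain, not just bus losses:

```python
def resistive_loss_watts(power_w, bus_voltage_v, bus_resistance_ohm):
    """I^2 * R loss for delivering power_w over a bus at bus_voltage_v."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * bus_resistance_ohm


# Illustrative figures only: 10 kW of rack load over a 2-milliohm bus.
load_w, r_ohm = 10_000, 0.002
loss_12v = resistive_loss_watts(load_w, 12, r_ohm)
loss_48v = resistive_loss_watts(load_w, 48, r_ohm)

# Quadrupling the voltage cuts the current to a quarter, so I^2R losses
# drop sixteen-fold -- one reason higher-voltage racks can be more efficient.
ratio = loss_12v / loss_48v  # 16.0
```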
Myth 4: OCP is only for the hyperscalers
Jay Parikh, vice president of infrastructure at OCP founder Facebook, claims using OCP kit saved Facebook around $1.2 billion in IT infrastructure costs within its first three years of use, by doing its own designs and managing its supply chain.
Goldman Sachs has a ‘significant footprint’ of OCP equipment in its datacentres, and Rackspace – another founding member – heavily utilises OCP for its OnMetal product. Microsoft is also a frequent contributor and runs over 90% of its hardware as OCP.
Additionally, there are a number of telcos – including AT&T, EE, Verizon, and Deutsche Telekom – that are part of the adjacent OCP Telco Infra Project (TIP).
Granted, these are all very large companies, but that scale of purchasing drives prices down for everyone else. So, if you are buying a rack of hardware a month, OCP could be a viable option.
Opening up the OCP
In summary, the cloud service industry has quickly grown into a multi-billion dollar concern, with hardware margins coming under close scrutiny.
The only result can be the rise of vanity-free whitebox hardware (ie hardware with all extraneous components removed). Recent yearly Gartner figures show Asian ODMs like Quanta and Wistron growing global server market share faster than the traditional OEMs. Nevertheless, if Google is one of your customers, it is easy for these numbers to get skewed.
Even for those not at Google’s scale, the commercials of whitebox servers are attractive, and it might give smaller firms that are unable to afford their own datacentre a foot in the door.
However, most importantly, the project has also led to greater innovation and that is where it really gains strength.
OCP brings together a community from a wide range of business disciplines, with a common goal to build better hardware. They are not sworn to secrecy and can work together in the open, and that really takes the brakes off innovation.