The government has confirmed its long-standing public cloud-first policy is under review, and that it is seeking to launch an alternative procurement framework to G-Cloud. But why?
It is hard not to read too much into all the change occurring in the public sector cloud procurement space at the moment, and all too easy to assume the worst.
First of all there is the news, exclusively broken by Computer Weekly, that the government’s long-standing cloud-first policy is under review, six years after it was first introduced.
For the uninitiated, the policy effectively mandates that all central government departments should take a public cloud-first approach on any new technology purchases. No such mandate exists for the rest of the public sector, but they are strongly advised to do the same.
The policy itself was ushered in around the same sort of time as the G-Cloud procurement framework launched. Both were hailed by the austerity-focused coalition government in power at the time as key to accelerating the take-up of cloud services within Whitehall and the rest of the public sector.
Encouraging the public sector – as a whole – to ramp up their use of cloud (while winding down their reliance on on-premise datacentres) would bring cost-savings and scalability benefits, it was claimed.
Additionally, the online marketplace-like nature of G-Cloud was designed to give SMEs and large enterprises the same degree of visibility to public sector IT buyers. Meanwhile, its insistence on two-year contract terms would safeguard users against getting locked into costly IT contracts that lasted way longer than the technology they were buying would remain good value for money.
Revoking the revolution
The impact the cloud-first policy has had on the IT buying habits of central government should not be underestimated, and can be keenly felt when flicking through the revamped Digital Marketplace sales pages.
Of the £4.57bn of cloud purchases that have been made via the G-Cloud procurement framework since its launch in 2012, £3.7bn worth of them have been made by central government departments.
The fact Whitehall is mandated to think cloud-first is always the first thing IT market watchers point to when asked why the wider public sector has been so (comparatively) late to the G-Cloud party.
Public sector IT chiefs often cite the policy as being instrumental in helping secure buy-in from their senior leadership teams for any digital transformation plans they are plotting.
But, according to the government procurement chiefs at the Crown Commercial Service (CCS) and the digital transformation whizz-kids at the Government Digital Service (GDS), the policy is now under review, and set for a revamp.
In what way remains to be seen, although – in a statement to Computer Weekly – CCS suggested the changes will be heavily slanted towards supporting the growing appetite within the public sector for hybrid cloud deployments.
The trouble with curbing cloud-first behaviours
The obvious concern in all this is that, if the cloud-first mandate is revoked completely, central government IT chiefs might start falling back into bad procurement habits, whereby cloud becomes an afterthought and on-premise rules supreme again.
Maybe that is an extreme projection, but there are signs elsewhere that some of the behaviours that G-Cloud, in particular, was introduced to curb could be starting to surface again.
One only has to look at how the percentage of deals awarded to SMEs via G-Cloud has started to slide of late, which has fed speculation that a new oligopoly of big tech suppliers is forming and will – in time – dominate the government IT procurement landscape.
Where G-Cloud is concerned, there are also rumblings of discontent among suppliers who populate the framework that it is becoming increasingly side-lined for a number of reasons.
There are semi-regular grumbles from suppliers that suggestions they have made to CCS or GDS about changes they would like to see to the framework are being ignored, or not acted on as quickly as they would like.
Putting users first
Some of these are to do with making the framework less onerous and admin-heavy for SMEs to use, while others are geared towards making the whole cloud purchasing experience easier for buyers overall.
Either way, suppliers fear this perceived lack of action has prompted buyers to take matters into their own hands by setting up cloud procurement frameworks of their own because G-Cloud is no longer meeting their needs.
And that’s an argument that is likely to get louder following the news that CCS is considering launching an additional cloud hosting and services framework, where contracts of up to five years in length could be up for grabs.
A lot of innovation can happen over the course of five years, which leads to the logical assumption that any organisation entering into a contract that long might find itself at something of a technological disadvantage as time goes on.
While the public cloud community has stopped publicly making such a song and dance about price cuts, the fact is the cost of using these services continues to go down over time for users because of economies of scale.
However, if an organisation is locked into a five-year contract, will it necessarily feel the benefit of that? Or will it be committed to paying the same price it did at the start of the contract all the way through? If so, in what universe would that represent good value for money?
A lot can change between the consultation and go-live stages of any framework, but there are concerns that this is another sign the government is intent on falling back into its old ways of working where procurement is concerned.
Government cloud comes of age
Another way of looking at all this, though, is as a sign that the cloud-first policy and G-Cloud have served their purpose. Together they have conspired to make public sector buyers feel so comfortable and confident with using cloud that they feel ready to go it alone and launch frameworks of their own.
Or, as their cloud strategies have matured, it has become apparent that for some workloads a two-year contract term works fine, but there are others where a longer-term deal might be a better, more convenient fit.
It is not out of the realms of possibility. It is worth noting the shift from public cloud-first to a policy that accommodates hybrid deployments is in keeping with the messaging a lot of the major cloud providers are putting out now, which is very different to what it was back in 2012-2013.
Around that time, Amazon Web Services (AWS) was of the view that enterprises would want to close down their datacentres, and move all their applications and workloads to the public cloud.
The company still toes a similar line today, but there is an admission in there now that some enterprises will need to operate a hybrid cloud model and retain some of their IT assets on-premise for some time to come. And the forthcoming change to the government’s cloud-first policy might simply be its way of acknowledging the same trend.
In this guest post, Henrik Nilsson, vice president for Europe, Middle East and Africa (EMEA) at machine learning-based IT cost optimisation software supplier Apptio, offers enterprises some advice on what they can do to stop cloud sprawl in its tracks and keep costs down.
On the surface, it seems there are plenty of reasons for businesses to jump head-first into the cloud: agile working practices, the ability to scale resources, and a boost to the resiliency of their infrastructure, for example.
However, enthusiasm for the newest technology creates a tendency for business leaders to make investments without analysis to support strategic decision-making. Cloud is not immune.
Before long, cloud costs can escalate out of control. Usage can go through the roof, or business value from cloud-based applications can plummet, and accountability is replaced by a “Wild West” approach to resource use, whereby whoever needs it first gets to use it.
In this type of scenario, CIOs should take a step back and consider how to harness the power of the cloud to align with the wider objectives of the business.
Managing cloud sprawl can be the hardest part of aligning cloud usage to a business strategy. Cloud sprawl is the unintentional proliferation of spending on cloud, and is often caused by a lack of visibility into resources and communication between business units. Different departments across organisations want to take advantage of the benefits of cloud, but need to understand how this impacts budgets and the goals of the business.
To successfully master the multiple layers of cloud costs, IT and finance leaders need to see the full picture of their expenditure. They need to utilise data, drive transparent communication, and continuously optimise to stop cloud sprawl, achieve predictable spending, and build an informed cloud migration strategy.
Strategic cloud use
Using consumption, usage and cost data to make cloud purchasing decisions is the first step to stopping cloud sprawl at its root. Migration decisions should not be based on assumptions or emotion.
The cost to migrate workloads can be very high, so businesses need to understand not just the cost to move, but also how moving to the cloud will impact costs for networking, storage, risk management and security, and labour or retraining of an existing IT team. They also need to evaluate the total value achieved by this migration and align decisions with the strategic needs of the business.
A key driver of cloud sprawl is the assumption that cloud is the solution to any given business need. Not every department needs to migrate every part of its operation. In some instances, on-premises might be the right decision. A hybrid approach is considered by many to be the best balance – one survey suggested that a 70/30 split of cloud to on-prem was the ideal mix. This enables certain mission-critical applications to remain in place, while the majority of computing power is moved to the public cloud.
Visibility through division
When cloud is carved up among stakeholders (from marketing to human resources to IT itself) it can be hard to get a clear picture of usage and costs. Multiple clouds are used for different needs, and over time usage creeps up on a ‘just in case’ basis, even where demand isn’t present.
To get a handle on this, the ever-expanding cloud demands of a business need to be calculated and then amalgamated into one single source of truth. A cohesive cloud approach is necessary across departments, or there is no hope of maximising the potential benefits.
Ideally, there needs to be a centralised system of record, whereby all cloud data (as well as its cost) can be viewed in a transparent format, and clearly articulated to any part of the business. Without this, cloud usage becomes siloed between departments or spread out across various applications and software, as well as compute power and storage. This makes it nearly impossible to have a clear picture of how much is being paid for and used – or equally, how much is needed.
Once a strong base has been established to visualise cloud usage, and different departments can make informed investment and purchasing decisions, optimisation is key. This may be something as simple as looking at whether a particular instance would be more efficiently run under a pay-as-you-use model versus a reserved spend, or calculating the value that has been gained from migrating depreciated servers to the cloud. This can then inform similar future decisions.
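The pay-as-you-use versus reserved trade-off comes down to utilisation, and can be sketched with simple arithmetic. The hourly rates below are hypothetical placeholders, not any provider’s real prices:

```python
# Sketch: when does a reserved commitment beat pay-as-you-use?
# All rates here are illustrative assumptions, not real provider pricing.

def monthly_cost(hourly_rate: float, hours_billed: int) -> float:
    """Cost of one instance for the number of hours it is billed."""
    return hourly_rate * hours_billed

ON_DEMAND_RATE = 0.10  # hypothetical pay-as-you-use rate per hour
RESERVED_RATE = 0.06   # hypothetical effective rate under a reserved commitment
HOURS_IN_MONTH = 730

# A reserved instance is typically billed for every hour of the term,
# whether or not it is used, so low-utilisation workloads favour on-demand.
dev_on_demand = monthly_cost(ON_DEMAND_RATE, 160)             # used ~160h/month
dev_reserved = monthly_cost(RESERVED_RATE, HOURS_IN_MONTH)    # billed all month
prod_on_demand = monthly_cost(ON_DEMAND_RATE, HOURS_IN_MONTH)
prod_reserved = monthly_cost(RESERVED_RATE, HOURS_IN_MONTH)

print(f"Dev box: on-demand {dev_on_demand:.2f} vs reserved {dev_reserved:.2f}")
print(f"Prod box: on-demand {prod_on_demand:.2f} vs reserved {prod_reserved:.2f}")
```

At these illustrative rates, the lightly used development box is cheaper on demand, while the always-on production server is cheaper reserved – exactly the kind of per-instance decision a single source of truth for usage data makes possible.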
Optimising in this way ensures that cloud usage isn’t allowed to spiral out of control. Cloud is a necessary modern technology to fuel innovation, but businesses need to rein in wasted spend and resources to eliminate cloud sprawl and ensure the right cloud investment decisions are being made to support broader business strategies.
The news that Myspace lost 12 years of user-generated content during a botched server migration has prompted a lot of debate this week, which Caroline Donnelly picks over here.
When details of Myspace’s server migration snafu broke free from Reddit, where the incident has been discussed and known about for the best part of a year, the overriding reaction on Twitter was one of disbelief.
In no particular order, there was the shock at the size of the data loss, which equates to 12 years of lost uploads (or 50 million missing songs), and the fact these all disappeared because Myspace didn’t back the files up before embarking on its “server migration”. Or its backup procedures failed.
There was also a fair amount of surprise expressed that Myspace is still a thing that exists online, which might explain why it took so long for the rest of the internet to realise what had gone down there.
And now they have, the response online has been (predictably) snarky. Myspace, for its part, issued a brief statement apologising for any inconvenience caused by its large-scale data deletion, but that’s been the extent of its public response to the whole thing.
That in itself is quite interesting. The fact a company can lose 12 years of user data, and shrug it off so nonchalantly. Obviously it would be a completely different state of affairs if it was medical records or financial data the company had accidentally scrubbed from its servers.
Digital legacy destruction
What the situation does serve to highlight, though, is just how precarious our digital legacies are. In amongst all the nostalgia-laden Twitter jokes from people who used Myspace back when it was at the height of its social networking power, there was also a smattering of genuinely distraught posts from people dismayed at what they had lost.
Individuals who had been prolific Myspace songwriters over the years lost sizeable dumps of content. In many cases, these people had trusted Myspace at its word when it said it was working on restoring access to their files, as complaints about playback issues on the site first started to surface back in early 2018.
These are people who have spent a long time curating content that made it possible for Myspace to double-down on its efforts to become a music streaming site, long after the people who first flocked to the site for social networking had hot-footed it to Facebook and Twitter.
I guess the incident should act as a timely reminder that uploading your data to a social networking site is not the same as backing it up, and if you don’t have other copies of this content stored somewhere else, that’s on you.
It’s not exactly helpful, though, and it doesn’t make the situation any less gutting for the people whose data has been lost forever.
There also seems to be an attitude pervading all this that because it’s just “creative content” that has gone, what’s the harm? I mean, can’t you just make more?
It’s a curious take that highlights – perhaps – how little value society places on creative content that’s made freely available online, while also ignoring the time and effort it takes for people to make this stuff. A lot of it is also of its time, making it impossible for its makers to recreate.
Analysing Myspace’s response
There has been speculation that Myspace’s nonchalant attitude to losing so much of its musical content could be down to the fact the deletion was more strategic than accidental.
This is a view put forward by ex-Kickstarter CTO Andy Baio, who said in a Twitter post that the firm might be blaming the data loss on a botched server migration because it sounds better than admitting it “can’t be bothered with the effort and cost of migrating and hosting 50 million old MP3s”.
And there could be something in that. It is, perhaps, telling that it has taken so long for details of the data loss to be made public, given Myspace users claim they first started seeing notifications about the botched server migration around eight months ago.
That was about five or six months after reports of problems accessing old songs and video content began to circulate on the web forum Reddit, with Myspace claiming – at the time – that it was in the throes of a “maintenance project” that might cause content playback issues.
Meanwhile, the site’s news pages do not appear to have been updated since 2015, its Twitter feed last shared an update in 2017, and Computer Weekly’s requests for comment and clarification have been met with radio silence.
The site looks pretty much in the throes of a prolonged and staggered wind-down, which – in turn – shows the dangers of assuming the web entities we entrust our content to today will still be around tomorrow.
In this guest post, Rob Greenwood, CTO at Manchester-based cloud and DevOps consultancy, Steamhaus, sets out why the emergence of Amazon’s managed Kubernetes service is such good news for the enterprise.
In just four years since its conception, Kubernetes has become the de facto tool for deploying and managing containers within public cloud environments at scale. It has won near universal backing from all major cloud players and more than half of the Fortune 100 have already adopted it too.
While alternatives exist, namely in the form of Docker Swarm and Amazon ECS, Kubernetes is by far the most dominant. Adoption has been so pervasive that, even in private cloud environments, the likes of Mesosphere and VMware have fully embraced the technology.
Perhaps the most significant development, though, was AWS’s general release of its managed Kubernetes service (EKS) in June 2018. Amazon is well known for listening to its customers, and the desire to see it provide a managed service for Kubernetes was palpable – it was one of the most requested features in AWS history.
Despite developing its own alternative in ECS, AWS decided to go with the will of the crowd on this one and offer full support. This was a wise move when you consider that 63% of Kubernetes workloads were on AWS when Amazon announced EKS (although by the time it was made generally available, this was said to have fallen to 57%).
Not having this container service for Kubernetes in place was also creating a few headaches for its users. As Deepak Singh, director at AWS container services, explained: “While AWS is a popular place to run Kubernetes, there’s still a lot of manual configuration that customers need to manage their Kubernetes clusters… This all requires a good deal of operational expertise and effort, and customers asked us to make this easier.”
The benefits of curing the Kubernetes management issues
If truth be told, this created a lot of work for teams like my own – which you might think sounds like a good thing. But building and managing Kubernetes clusters is not really the way we would like to use a client’s budget, especially if an alternative exists.
With a managed service now available, we can focus on optimising the overall architecture, migrating applications and enabling our clients to adopt DevOps methodology instead.
Because Kubernetes is a multi-cloud platform, organisations could, in theory, have just moved their Kubernetes clusters to another public cloud provider that supports the technology. But that would have involved even more time and expense and, perhaps more importantly, would have meant moving away from many of the features that AWS offers.
With AWS now offering to take care of the build and management side of Kubernetes, however, those difficult decisions don’t need to be made. With all the major public cloud platforms now supporting the technology, the providers will take care of the build and management. This means improvements will be made to ensure we have better integration, scaling and availability. As time goes on, this will only get better and, from the client’s perspective, the assumption is that this will also lead to significant cost savings later down the line.
You could reflect on the current situation and say it’s not good for one technology to be so dominant, but I’m really struggling to see the negatives. It is cheaper for organisations, AWS retains its customer base, and cloud architects get to focus on the job they’ve been brought in to do.
Of course, as an industry we can’t rule out the possibility that new innovations may come along and change the game once again. But, for the time being, the debate is over: we’re working with Kubernetes. And this should end some needless back and forth, allowing us to focus on better ways to introduce new features, scale and secure public cloud infrastructure.
Enterprise cloud users may have written off the spat between gaming companies Improbable and Unity as nothing more than industry in-fighting, but there are some cautionary tales emerging from the dispute they would do well to take note of, argues Caroline Donnelly.
The bun fight that has broken out over the last day or so between UK-based gaming company Improbable and its long-time technology partner Unity has (most likely) gone unnoticed by the majority of enterprise cloud users.
After all, what does a terms and conditions (T&Cs)-related dispute between two gaming firms have to do with them?
But, within the claims and counterclaims being bandied about online by both parties (and other assorted gaming industry stakeholders) are some important points that should give enterprises in the throes of a cloud migration some degree of pause.
In amongst all the bickering (which we will get on to shortly) there are cautionary notes to be found about why it is so important for enterprises not to rush blindly into the cloud, and to take steps to protect themselves from undue risk. To ensure they won’t be left high and dry if, for example, their preferred cloud partner should alter their service terms without due warning.
From a provider standpoint too, the case also highlights a problem lots of tech firms run into, where their products end up being used in ways they hadn’t quite planned on that might – in turn – negatively affect their commercial interests or corporate ethics.
Improbable vs. Unity: The enterprise angle
For those who haven’t been following (or simply have no clue who either of these two firms are), Improbable is the creator of a game development platform called SpatialOS that some gaming firms pair with Unity’s graphics engine to render their creations and bring them to life.
But, since Thursday 10 January 2019, the two companies have been embroiled in an online war of words. This started with the emergence of a blog by Improbable that claimed any game built using the Unity engine and SpatialOS is now in breach of Unity’s recently tweaked T&Cs.
This could have serious ramifications for developers who have games in production or under development that make use of the combined Unity-Improbable platform, the blog post warned.
“This is an action by Unity that has immediately done harm to projects across the industry, including those of extremely vulnerable or small-scale developers and damaged major projects in development over many years.”
Concerns were raised on social media, in response to the blog, that this might lead to some in-production games being pulled, the release dates of those in development being massively delayed, and some development studios going under as a result.
Unity later responded with a blog post of its own, where it moved to address some of these concerns by declaring that any existing projects that rely on the combined might of Unity and Improbable to run will be unaffected by the dispute.
When users go rogue
The post also goes on to call into question Improbable’s take on the situation, before accusing the firm of making “unauthorised and improper use” of Unity’s technology and branding while developing, selling and marketing its own wares, which is why it is allegedly in breach of Unity’s T&Cs.
“If you want to run your Unity-based game-server, on your own servers, or a cloud provider that [gives] you instances to run your own server for your game, you are covered by our end-user license agreement [EULA],” the Unity blog post continued.
“However, if a third-party service wants to run the Unity Runtime in the cloud with their additional software development kit [SDK], we consider this a platform. In these cases, we require the service to be an approved Unity platform partner.
“These partnerships enable broad and robust platform support so developers can be successful. We enter into these partnerships all the time. This kind of partnership is what we have continuously worked towards with Improbable,” the blog post concludes.
Now, the story does not stop there. Since Unity’s blog post went live, Improbable has released a follow-up post. It has also separately announced a collaboration with Unity rival Epic Games that will see the pair establish a $25m fund to help developers migrate projects “left in limbo” by the Unity dispute. There will undoubtedly be more to come before the week is out.
Calls for a cloud code of conduct
While all that plays out, though, there is an acknowledgement made in the second blog post from Improbable about how the overall growth in online gaming platforms is putting the livelihoods of developers in a precarious position, and increasingly at the mercy of others.
Namely the ever-broadening ecosystem of platform providers they have to rely upon to get their games out there. And there are definite parallels to be drawn here for enterprises that are in the throes of building out their cloud-based software and infrastructure footprints too.
“As we move towards more online, more complex, more rapidly-evolving worlds, we will become increasingly interdependent on a plethora of platforms that will end up having enormous power over developers. The games we want to make are too hard and too expensive to make alone,” the blog post reads.
“In the near future, as more and more people transition from entertainment to earning a real income playing games, a platform going down or changing its Terms of Service could have devastating repercussions on a scale much worse than today.”
The company then goes on to make a case for the creation of a “code of conduct” that would offer developers a degree of protection in disputes like this, by laying down some rules about what is (and what is not) permissible behaviour for suppliers within the ecosystem to indulge in.
There are similar efforts afoot within the enterprise cloud space, led by various governing bodies and trade associations. Yet reports of providers introducing unexpected price hikes or shutting down services with little notice still occur. So one wonders if a renewed, more coordinated push on this front might be worthwhile within the B2B space as well.
In this guest post, DevOps consultant and researcher Dr Nicole Forsgren tells enterprises why execution is key to building high-performing technology teams within their organisations.
I often meet with enterprise executives and leadership teams, and they all want to know what it takes to make a high-performing technology team.
They’ve read my book Accelerate: The Science of Lean Software and DevOps and my latest research in the Accelerate: State of DevOps 2018 Report (both co-authored with Jez Humble and Gene Kim), but they always ask: “If I could narrow it down to just one piece of advice to really give an organisation an edge, and help them become a high performer, what would it be?”
My first (and right) answer is that there is no one piece of advice that will apply to every organisation, since each is different, with their own unique challenges.
The key to success is to identify the things, such as technology, process or culture, currently holding the organisation back, and work to improve those until they are no longer a constraint. And then repeat the process.
This model of continuous improvement is, in effect, a strategy in itself, and it is the most effective way to accelerate an organisational or technology transformation.
However, in looking over this year’s research and checking back in with some of my leadership teams, I realised that one piece of advice would be applicable to everyone, and that is: “High performance is available to everyone, but only if they are willing to put in the work; it all comes down to execution.”
Yes, that’s right – basic execution. While revolutionary ideas and superior strategy are important and unique to your business, without the ability to execute and deliver these to market, your amazing ideas will remain trapped.
Either trapped in a slide deck or in your engineering team, without ever reaching the customer where they can truly make a difference.
If you still aren’t convinced, let’s look at some examples: cloud computing and database change management.
Cloud computing: Done right
Moving to the cloud is often an important initiative for companies today. It is also often something technical executives talk about with frustration: an important initiative that hasn’t delivered the promised value. Why is that? Lack of execution.
In the 2018 Accelerate State of DevOps Report, we asked respondents – almost 1,900 technical experts – if their products or applications were in the cloud. Later in the survey, we asked for details about the products and applications they supported. Specifically, these covered five technical characteristics of their cloud usage:
- On-demand self-service. Consumers can provision computing resources as needed, automatically, without any human interaction required.
- Broad network access. Capabilities are widely available and can be accessed through heterogeneous platforms (e.g., mobile phones, tablets, laptops, and workstations).
- Resource pooling. Provider resources are pooled in a multi-tenant model, with physical and virtual resources dynamically assigned and reassigned on-demand. The customer generally has no direct control over the exact location of provided resources, but may specify location at a higher level of abstraction (e.g., country, state, or datacentre).
- Rapid elasticity. Capabilities can be elastically provisioned and released to rapidly scale outward or inward commensurate with demand. Consumer capabilities available for provisioning appear to be unlimited and can be appropriated in any quantity at any time.
- Measured service. Cloud systems automatically control and optimise resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported for transparency.
The research revealed 22% of respondents (who said they were in the cloud) also said they were following all five of these essential practices. Why is this important? Because these practices are what define cloud computing, according to the National Institute of Standards and Technology’s (NIST) own definition.
And so, it follows, if teams are not executing on all five of them, they aren’t actually doing cloud. This is not necessarily the fault of the technical teams, but of the leadership teams, who set initiatives for the organisation and define targets.
Now, why does this matter? We found teams that met all five characteristics of cloud computing were 23 times more likely to be elite performers. This means they are able to develop and deliver software with speed, stability, and reliability — getting their ideas to their end users.
So, if you want to reap the benefits of the cloud and leverage it to really drive value to your customers, you have to execute.
Database: A hidden constraint
Database changes are often a major source of risk and delay when performing deployments so in this year’s research we also investigated the role of database change management.
Our analysis found that integrating database work into the software delivery process positively contributed to continuous delivery, and was a significant predictor of delivering software with speed, stability, and reliability.
Again, these are key to getting ideas and features (or even just security patches) to end users. But what do we mean by database change management? There are four key practices included in doing this right:
- Communication. Communicate upcoming database and schema changes to developers, testers, and the people who maintain your database.
- Including teams. This goes a step beyond discussion to actively including the teams involved in software delivery in discussions and designs. Involving all players is what DevOps is all about, and it will make these changes more successful.
- Comprehensive configuration management. This means including your database changes as scripts in version control and managing them just as you do your other application code changes.
- Visibility. Make sure everyone in the technical organisation has visibility into the progress of pending database changes.
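The “comprehensive configuration management” practice above can be sketched in a few lines: migration scripts live in version control next to the application code, and a small runner applies any that have not yet been recorded. This is a minimal illustration using SQLite and hypothetical migration names, not any particular migration tool:

```python
import sqlite3

# Hypothetical migration scripts, ordered exactly as they would be
# committed to version control alongside the application code.
MIGRATIONS = [
    ("001_create_users",
     "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"),
    ("002_add_user_name",
     "ALTER TABLE users ADD COLUMN name TEXT"),
]

def migrate(conn):
    """Apply any migrations not yet recorded, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT id FROM schema_migrations")}
    for mig_id, sql in MIGRATIONS:
        if mig_id in applied:
            continue  # already live; skipping makes reruns safe
        conn.execute(sql)
        conn.execute("INSERT INTO schema_migrations (id) VALUES (?)", (mig_id,))
        conn.commit()
        # Visibility: everyone can see which change just landed.
        print(f"applied {mig_id}")

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: already-applied changes are skipped
```

Because the scripts are ordinary files under version control, they get the same review, testing, and rollback discipline as any other code change.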
To restate a point from above, when teams follow these practices, database changes don’t slow software teams down or cause problems when they perform code deployments.
These practices may look similar to, well, DevOps – integrating functionality throughout the pipeline and shifting left – and that’s exactly what they are.
The great news about this is we can take things we have learned about integrating and shifting left on other parts of our work (like IT operations and security) and apply them to the database.
Our analysis this year also found that database change management is equally important for teams at every stage of their journey, whether they were low, medium, or high performers.
We see this in our conversations across the industry: data is a hard problem, one that eventually needs to be tackled – and tackled with the correct practices.
Particularly in data- and transaction-heavy environments, teams and organisations can leverage their data to drive value to their customers – but only when they execute on the key practices necessary.
Execution: Both simple and hard
For anyone who skips to the bottom of an article looking for the summary, here you go: execute on the basics and don’t skip steps. I acknowledge this is often difficult, but the data is definitive: teams and organisations that don’t fully execute don’t realise the benefits. The great news is this: excellence is available to everyone. Good luck!
In this guest post, Jens Struckmeier, founder and CTO of cloud service provider Cloud&Heat Technologies, shares his views on the ecological and technological challenges caused by Europe’s booming datacentre market.
People have become accustomed to many digital conveniences, thanks to the emergence of Netflix, Spotify, Alexa and YouTube. What many may not know is that every crime series they stream online generates data streams that also take a toll on the environment through their energy consumption.
Most of the data generated in this way flows through the cloud, which is made up of high-performance datacentres that relieve local devices of most of the computing work. These clouds now account for almost 90 percent of global data traffic. More and better cloud services also mean more, and more powerful, datacentres. These in turn consume a lot of energy and generate a lot of waste heat – not a good development from either an ecological or an economic point of view.
Information and communication technology (ICT), with its datacentres, now accounts for three to four percent of the world’s electric power consumption, estimates the London research department of Savills. Furthermore, from a global perspective, datacentres are thought to consume more than 416 terawatt hours (TWh) of electricity a year – about as much as the whole of France. And that’s just the tip of the iceberg.
The Internet of Things, autonomous vehicles, the growing use of artificial intelligence (AI), the digitalisation of public administration, and the fully automated networked factories of industry 4.0 are likely to generate data floods far beyond today’s levels.
According to Cisco, global data traffic will almost double to 278 exabytes per month by 2021. This means that even more datacentres will be created or upgraded, with a trend towards particularly large, so-called hyperscale datacentres with thousands of servers.
Why datacentre location matters
In competitive terms, the rural regions of Northern Europe benefit from these trends. In cool regions such as Scandinavia, power is available in large quantities and at low cost.
Additionally, the trend towards hyperscale datacentres means the demand for decentralised datacentres near locations where the processed data is also needed is growing. If the next cloud is physically only a few miles away, there are only short delays between action and reaction, and the risk of failure is also reduced.
With a volume of over £20 billion, the UK is Europe’s largest target market for datacentre equipment. When all construction and infrastructure expenses are added together, Savills estimates that 41% of all datacentre investments in the UK were made between 2007 and 2017.
The Knowledge Transfer Network (KTN) estimates annual growth at around 8%. The UK currently has 412 facilities of various sizes, followed by Germany with 405 and France with 259. For comparison: 2,176 datacentres are concentrated in the home of the internet economy, the US.
In a comparison of European metropolises, London is the capital of datacentres, with twice the computing capacity of Frankfurt, the number two in the market. According to Savills, however, that pole position could change after Brexit. Either way, London is likely to remain a magnet as an important financial and technology hub, benefiting from existing ecosystems and major hubs such as Slough.
Environmental pressure on the rise
However, as the European leader in terms of location, the UK also has to deal particularly intensively with the energy and environmental challenges in this sector.
Overall, the larger datacentres consume around 2.47 terawatt hours a year, claims techUK. This corresponds to about 0.76 percent of total energy production in the UK. In London alone, capacity with a power consumption of 384 megawatts was installed in 2016, which corresponds roughly to the output of a medium-sized gas-fired power plant such as the Lynn Power Station in Norfolk.
In view of the strong ecological and economic challenges associated with this development, around 130 datacentre operators have joined the Climate Change Agreement (CCA) for datacentres.
In this agreement, they voluntarily commit themselves to improving the power usage effectiveness (PUE) of their facilities by 15 percent by 2023.
PUE is the ratio between the total energy a facility draws and the energy consumed by the computing infrastructure itself. In the UK, the average PUE is around 1.9: for every unit of energy the IT equipment uses, almost another full unit goes on cooling and other overheads.
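As a quick worked example (the kilowatt-hour figures below are illustrative only), the PUE calculation is straightforward:

```python
# PUE = total facility energy / IT equipment energy.
# At the UK average of ~1.9, nearly as much energy again goes on
# cooling and other overheads as on the IT equipment itself.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_load = 1000.0   # kWh drawn by servers, storage and network gear
overhead = 900.0   # kWh drawn by cooling, UPS losses, lighting, etc.

print(pue(it_load + overhead, it_load))  # → 1.9
```

An ideal facility would approach a PUE of 1.0, meaning virtually all energy drawn goes to the IT equipment rather than overhead.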
Smarter power use
There are several ways to use this energy more intelligently: while older, smaller datacentres are often air- or water-cooled, resource-conscious operators rely on thermally more effective hot water-cooling systems.
The computer racks are cooled not with mechanically chilled water or air, but with water at around 40 degrees Celsius, which the cooling process warms by about five degrees. In the northern parts of Europe, where the outside temperature hardly ever rises above 40 degrees, the water can then quite simply be cooled back down to the inlet temperature outdoors.
Because the water or air does not have to be chilled mechanically, the energy savings are significant: compared to the cold-water approach, hot water cooling consumes up to 77 percent less electricity, depending on the selected inlet temperature and system technology.
The benefits of datacentre heat reuse
Modern technological approaches make it possible to utilise waste heat from datacentres that previously had to be cooled away at considerable cost. Engineers link the cooling circuits of the racks directly with the heating systems of office or residential buildings.
Power guzzlers can thus become heating and power stations for cities, which in future will not only be climate-neutral but also climate-positive. This not only saves energy and money, but also significantly reduces carbon dioxide emissions.
If all datacentres in the UK were to use this technology, up to 530,000 tons of carbon dioxide could be saved annually. This would correspond to 7,500 football pitches full of trees or a forest the size of Ipswich.
Green datacentres and Sadiq Khan’s environmental goals
Many market analysts and scientists are convinced that these solutions will also have a political resonance, as they open fascinating new scope for local solutions and regional providers.
Demand for such local solutions is driven by internet law and data protection regulations, but also by technological necessity. It is foreseeable, for example, that concepts such as edge cloud computing and fog computing, along with the latency constraints mentioned above, will soon see mobile and local micro-supercomputers take over some of the tasks of traditional centralised datacentres.
Market participants would be well advised to prepare for all these trends in good time. Digitalisation cannot be stopped – instead, the task is to actively shape a sustainable, green digital future.
The special role of datacentres in this digital change is becoming more and more relevant for society in general.
The ambitious goals recently formulated by London Mayor Sadiq Khan to reduce environmental pollution and carbon dioxide emissions will be a real challenge for operators.
By 2020, for example, all of the city’s real estate is to use only renewable energy. By 2050, London is to become a city that no longer emits any additional carbon dioxide. Datacentre operators will inevitably have to rely on sustainable, energy-efficient technologies.
In this guest post, Neil Briscoe, CTO of hybrid cloud platform provider 6point6 Cloud Gateway, sets about busting some common cloud myths for enterprise IT buyers.
The cloud provides endless opportunity for businesses to become more agile, efficient and – ultimately – more profitable. However, the hype around cloud has made firms so desperate for a piece of the action that they have become blind to the fact it might not be the right move for every company.
Many firms get sucked in by the promised benefits of the cloud without truly understanding what the cloud even is. I mean, would you buy a car without knowing how to drive it?
The eight fallacies of distributed computing have guided many leaders toward making the right decisions for their business when adopting new systems. However, as the cloud continues to evolve, it can be hard to keep up.
With this in mind, here are some of the most commonly recurring myths about cloud that need busting, so businesses can make sound decisions about what is right for them.
Myth one – You have to go all-in on cloud
As with any hyped innovation, cloud is being positioned by many as a ‘fix-all’ solution, somewhere businesses can migrate all their systems to in one fell swoop.
In reality, many services are at different stages of their lifecycle and not suitable for full migration, and running legacy systems is no bad thing.
It’s important that each system is fit for purpose, so shop around and see exactly which one fits with your strategic roadmap. By trying to fit a square peg in a round hole, you open yourself up to all sorts of security and operational issues.
Myth two – Cloud is cheaper than on-premise
Datacentres are notoriously expensive but the hidden costs in maintaining a cloud infrastructure can be deceptively high. Don’t get distracted by the desirably low initial CAPEX and onboarding costs for migrating to the cloud, as the ongoing OPEX can shoot you in the foot long-term.
Public cloud service providers like AWS are now much more transparent, with Total Cost of Ownership (TCO) calculators whose numbers are quite accurate as long as you have discipline. This in turn reduces the need for enterprises to make large capital investments and means they only pay for what they need and use.
Having a long-term plan and understanding where, why and how the cloud will benefit your business will, in turn, reduce your costs. Migrating the whole datacentre wholesale will not.
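To make the CAPEX/OPEX trade-off concrete, here is a small sketch with entirely hypothetical figures: a large up-front on-premise build with modest running costs against a pay-as-you-go cloud bill. The crossover behaviour, not the individual numbers, is the point:

```python
# All figures are hypothetical, for illustration only.
def cumulative_cost(capex: int, monthly_opex: int, months: int) -> int:
    """Total spend after a given number of months."""
    return capex + monthly_opex * months

ON_PREM = (500_000, 5_000)   # big up-front build, low running costs
CLOUD = (0, 15_000)          # no up-front spend, higher monthly bill

for month in (0, 12, 24, 36, 48, 60):
    on_prem = cumulative_cost(*ON_PREM, month)
    cloud = cumulative_cost(*CLOUD, month)
    cheaper = "cloud" if cloud < on_prem else "on-prem"
    print(f"month {month:2d}: on-prem £{on_prem:,} vs cloud £{cloud:,} ({cheaper} cheaper)")
```

With these particular numbers the cloud bill overtakes the on-premise build around month 50 – which is exactly why “cheaper” depends on your time horizon and your spending discipline, not on the platform alone.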
Myth three – Cloud is more secure than on-premise
On-premise datacentres are straightforward: you have a door to go in and out of, you can have a reasonable grasp of where your external perimeter is, and you can then secure it. In the cloud, there are infinite doors and windows for a malicious actor to exploit if they are not secured properly.
Cloud providers give you the tools to secure your data and infrastructure – probably better than you can with on-premise datacentre tools – as long as they are used correctly.
It’s very easy to build a service and make it completely public-facing without having any resilience or security at all. Don’t assume that all cloud providers will create such high levels of security in the architecture “by default”. Sometimes you must dig deep and truly understand what is happening under the hood.
Myth four – One cloud provider is all you need
Don’t be afraid to go multi-cloud. Each system is different, and you have the power to choose the best-of-breed system for all workloads, even if this means using more than one provider.
As cloud is relatively new, people are still experimenting and figuring out what suits them best. By going multi-cloud, enterprises aren’t restricted to one particular operator and can get best-for-the-job services without sacrificing on agility. Never sacrifice on agility. You can have your cake and eat it.
Myth five – It is better to build, not buy
Building a private network and connectivity platform in the cloud sounds desirable: you build the best platform to serve your requirements without answering to an external vendor. However, building your own network isn’t as easy as one, two, three – and once you’ve built it, maintaining it is the hardest part.
Talent is in high demand, so keeping it in house to continue developing and improving your network is an ongoing challenge, and ensuring continuity is a task in itself.
By “buying”, enterprises can remove the headache of connectivity, security and network agility in a potentially complex architecture, and focus on the digital transformation itself without draining resources on what is becoming a commodity IT item.
In closing, while cloud isn’t a new innovation, enterprises and suppliers alike are continuously developing and understanding how it can benefit business.
Enterprises need to de-sensitise themselves from the hype and investigate exactly how and why the cloud will improve efficiency, competitiveness and reduce costs.
Cloud is changing every day and everyone’s experiences are different, but keeping your eyes and ears open to both the numerous benefits and the threats it can create will ensure that enterprises use the cloud effectively.
In this guest post, Chris Hodson, chief information security officer for Europe, Middle East and Africa (EMEA) at internet security firm Zscaler, takes a closer look at why cloud security remains such an enduring barrier to adoption for so many enterprises.
Cloud computing is growing in use across all industries. Multi-tenant solutions often provide cost savings whilst supporting digital transformation initiatives, but concerns about the security of cloud architectures persist.
Some enterprises are still wary of using cloud because of security concerns, even though the perceived risks are rarely unique to the cloud. Indeed, my own research shows that more than half of the most pertinent vulnerabilities associated with cloud usage (as identified by the European Union Agency for Network and Information Security (ENISA)) would be present in any contemporary technology environment.
Weighing up the cloud security risk
To establish the risks, benefits and suitability of cloud, we need to define the various cloud deployment, operational and service models in use today and in the future.
A good analogy is that clouds are like roads; they facilitate getting to your destination, which could be a network location, application or a development environment. No-one would enforce a single, rigid set of rules and regulations for all roads – many factors come into play, whether it’s volume of traffic, the likelihood of an accident, safety measures, or requirements for cameras.
If all roads carried a 30 mile per hour limit, you might reduce fatal collisions, but motorways would cease to be efficient. Equally, if you applied a 70 mile per hour limit to a pedestrian precinct, unnecessary risks would be introduced. Context is very important – imperative, in fact.
The same goes for cloud computing: to assess cloud risk, it is vital that we first define what cloud means. As adoption continues to grow, though, such an explicit delineation between cloud and on-premise will become less and less necessary.
Is the world of commodity computing displacing traditional datacentre models to such an extent that soon all computing will be elastic, distributed and based on virtualisation? On the whole, I believe this is true. Organisations may have specific legacy, regulatory or performance requirements for retaining certain applications closer to home, but these will likely become the exception, not the rule.
Consumers and businesses continue to benefit from the convenience and cost savings associated with multi-tenant, cloud-based services. Service-based, shared solutions are pervasive in all industry verticals and the cloud/non-cloud delineation is not a suitable method of performing risk assessment.
Does cloud introduce new cloud security risks?
According to the International Organisation for Standardisation (ISO), risk is “the effect of uncertainty on objectives”. So, does cloud introduce new forms of risk that didn’t exist in previous computing ecosystems? It is important that we understand how many of these risks are genuinely unique to the cloud and a result of the intrinsic nature of cloud architecture.
Taking an ethological or sociological approach to risk perception, German psychologist Gerd Gigerenzer asserts that people tend to fear what are called “dread risks”: low-probability, high-consequence events with a primitive, overt impact. In other words, we feel safer with our servers in our own datacentre, even though we would likely be better served leaving security to those with cyber security as their core business.
The cloud is no more or less secure than on-premise technical architecture per se. There are entire application ecosystems running in public cloud that have a defence-in-depth set of security capabilities. Equally, there are a plethora of solutions that are deployed with default configurations and patch management issues.
Identifying key cloud security risks
ENISA, which provides the most comprehensive and well-constructed decomposition of what it considers the most pertinent vulnerabilities associated with cloud usage, breaks them into three areas:
- Policy and organisational – this includes vendor lock-in, governance, compliance, reputation and supply chain failures.
- Technical – this covers resource exhaustion, isolation failure, malicious insiders, interface compromise, data interception, data leakage, insecure data deletion, distributed denial of service (DDoS) and loss of encryption keys.
- Legal – such as subpoena and e-discovery, changes of jurisdiction, data protection risks and licensing risks.
The fact is, however, that most of these vulnerabilities are not unique to the cloud. Instead, they are the result of a need for process change as opposed to any technical vulnerability. The threat actors in an on-premise and public cloud ecosystem are broadly similar.
As the saying goes, an organisation is only as strong as its weakest link. Whilst it is prudent to acknowledge the threats and vulnerabilities associated with public cloud computing, there are myriad risks to confidentiality, integrity and availability across all enterprise environments.
Ultimately, when it comes to the cloud, it’s all about contextualising risk. Businesses tend to think immediately of high-profile attacks, such as Spectre and Meltdown, but the chances of this type of attack happening are extremely low.
Organisations undoubtedly need to assess the risks and make necessary changes to ensure they are compliant when making the move to the cloud, but it is wrong to assume that the cloud brings more vulnerabilities – in many situations, public cloud adoption can improve a company’s security posture.
In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS, advises enterprises to start small to achieve big change in their organisations with cloud.
The success of Elon Musk’s Falcon Heavy rocket (and its subsequent return to earth) wowed the world. Imprinted on everyone’s memory, thanks to the inspired addition of the red Tesla Roadster as its payload blasted out Bowie’s Life on Mars on repeat, the mission demonstrates the “start small, think big, learn fast” ethos evangelised by that other great American innovator, Steve Jobs.
And this “start small, think big” ethos can equally be applied to cloud-based transformation projects.
Making the right decisions at the right time is key. Musk understands this, expounding the need to focus on evidence-based decision making to come up with a new idea or solve a problem.
While thinking big about rocket innovation, he committed to three attempts, setting strict boundaries and budgets for each phase. This meant he didn’t waste cash or resources on something that wouldn’t work.
Coming back down to earth, IT teams tasked with moving workloads to the cloud (rather than putting payloads into space) can learn a lot from this approach to innovation.
For organisations not born in the cloud, the decision to bring off-premise technologies into the mix throws up some tough questions and challenges. As well as the obvious technical considerations, there are other hoops to jump through from a cost, risk, compliance, and regulatory point of view, to ensuring you have suitably qualified people in place.
Lay the groundwork for cloud
It is quite common for highly-regulated businesses with complex infrastructure to enter a state of analysis paralysis at the early stages of a cloud transformation due to the sheer scale and difficulty of the task ahead.
Instead of pausing and getting agreement on a “start small” strategy, they feel compelled to bite off more than they can chew and “go large”.
At this point, enterprises often go into overdrive to establish the business case, scope the project, and formulate the approach all in one fell swoop. But this is likely to result in the cloud programme tumbling back to earth with a bang.
It is simply impossible on day one to plan and build a cloud platform that will be capable of accepting every flavour of application, security posture and workload type.
As with any major transformation project, the cultural and organisational changes will take considerable time and effort. Getting your priorities straight at the outset and addressing your people, process, and technology issues is critical. This involves getting closer to your business teams, being champions for change, and up-skilling your workforce.
A shift in cloud culture
Moving to the cloud often heralds a shift in company culture, and it’s important to consider up front how the operating model will adapt and what impact this will have across the business.
Leadership needs to prepare for the shift away from following traditional waterfall software development processes to embracing agile and DevOps methodologies that will help the business make full use of a cloud environment, while speeding up the pace of digital transformation.
Cloud ushers in a new way of working that cuts across enterprises’ traditional set-up of infrastructure teams specialising in compute, network, storage and so on. Care must be taken to ensure these individuals are fully engaged, supported, trained and incentivised to ensure a successful outcome.
Start small to achieve big things
Starting with a small pathfinder project is a good strategy, as it allows you to lay the foundations for the accelerated adoption of cloud downstream, as well as for migration at scale – assuming this is the chosen strategy.
Suitable pathfinder candidates might be an application currently hosted on-premise that can be migrated to the cloud, or a new application/service that is being developed in-house to run in the cloud from day one.
Once the pathfinder project is agreed upon, the race is on to assemble a dream team of experts to deliver a Minimum Viable Cloud (MVC) capability that can support the activity; this team will also establish the core of your fledgling cloud Centre of Excellence (CoE).
Once built, the MVC can be extended to support more complex and demanding applications that require more robust security measures.
The CoE team will also be on hand to support the construction of the business case for a larger migration. This includes assessing suitable application migration candidates, and grouping them together into virtual buckets, based on their suitability for cloud.
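The “virtual buckets” idea can be sketched as a simple triage function. The bucket names, attributes, and rules below are purely illustrative assumptions, not a formal migration methodology:

```python
# Hypothetical triage: sort candidate applications into migration
# buckets based on a few simple attributes.
def bucket(app: dict) -> str:
    if app["regulated_data"] and not app["cloud_approved"]:
        return "retain on-premise"   # compliance blocks migration for now
    if app["end_of_life"]:
        return "retire"              # not worth migrating at all
    if app["stateless"]:
        return "rehost"              # lift-and-shift in the first wave
    return "replatform"              # needs rework before migration

apps = [
    {"name": "intranet", "regulated_data": False, "cloud_approved": True,
     "end_of_life": False, "stateless": True},
    {"name": "ledger", "regulated_data": True, "cloud_approved": False,
     "end_of_life": False, "stateless": False},
]

for app in apps:
    print(app["name"], "->", bucket(app))
```

Even a crude triage like this gives the CoE a defensible ordering for the migration backlog, and the "rehost" bucket is where the quick wins described below tend to come from.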
These quick wins will help to convert any naysayers and secure stakeholder buy-in across the business ahead of a broader cloud adoption exercise. They are also a powerful way to get the various infrastructure teams on side, providing as they do a great platform for re-skilling and opening up fresh career opportunities to boot.
In summary, taking a scientific approach to your cloud journey and moving back and forth between thinking big and moving things forward in small, manageable steps will help enable, de-risk and ultimately accelerate a successful mass migration.
As both the space race and the cloud race heat up, it’s good to remember the wise words of the late great Steve Jobs: “Start small, think big. Don’t worry about too many things at once. Take a handful of simple things to begin with, and then progress to more complex ones. Think about not just tomorrow, but the future. Put a ding in the universe.”