Ahead in the Clouds


June 12, 2019  8:30 AM

Cloud migration: A step-by-step guide

Caroline Donnelly

In this guest post, Paul Mercina, director of product management at datacentre hardware and maintenance provider Park Place Technologies, offers a step-by-step guide on how, what and when to move to the cloud.

The benefits of using the cloud are numerous, but the process of migrating a company’s IT systems off-premise (while simultaneously ensuring ‘business as usual’ for staff, customers and the supply chain) is not without its challenges.

While investing in the cloud will result in less on-site hardware and fewer applications for IT managers to manage, this may not necessarily translate into less work to do.

Cloud computing requires a significant amount of oversight to ensure suppliers are meeting service level agreements, budgets are being adhered to, and cloud sprawl is kept to a minimum.

This vital work requires a different skill set, so you will need to consider upskilling and retraining staff to manage their evolving roles.

Developing a robust cloud migration strategy alongside this work will be a must, and there are a few things to bear in mind when seeking to create one.

Cloud migration: Preparation is everything

As the global cloud market matures, CIOs are increasingly presenting compelling business cases for cloud adoption. Moving all your IT systems to the cloud in one go may have strong appeal, but it is rarely realistic. Not everything can or should be moved, and you will also need to consider the order of migration and the impact on the business and its staff.

Considering the unique needs of your organisation will be critical to developing a plan that unlocks the benefits of the cloud without compromising security, daily business activities or existing legacy systems, and without wasting budget.
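
One practical way to start, purely as an illustration, is to triage the application estate into rough migration waves. The sketch below is a minimal example with made-up application names and criteria; a real assessment would weigh many more factors, such as dependencies, licensing and data gravity.

```python
# Illustrative only: a simple way to triage an application inventory into
# migration waves. The criteria, weights and app names are hypothetical.
APPS = [
    {"name": "intranet-wiki",   "cloud_ready": True,  "business_critical": False, "compliance_bound": False},
    {"name": "erp-core",        "cloud_ready": False, "business_critical": True,  "compliance_bound": True},
    {"name": "customer-portal", "cloud_ready": True,  "business_critical": True,  "compliance_bound": False},
]

def migration_wave(app: dict) -> str:
    """Place an application into a rough migration wave."""
    if app["compliance_bound"] or not app["cloud_ready"]:
        return "retain on-premise for now"
    if app["business_critical"]:
        return "wave 2: migrate once pilots prove the approach"
    return "wave 1: low-risk early migration"

for app in APPS:
    print(f'{app["name"]}: {migration_wave(app)}')
```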

Many applications and services are still not optimised for virtual environments, let alone the cloud. Regardless of how ambitious a company’s cloud strategy is, it’s likely you will have a significant datacentre footprint remaining to account for important data and applications.

Supporting these systems can be an ongoing challenge, particularly as organisations devote more attention, budget and resources to the cloud.

Cloud migration: The importance of interim planning

Mapping a cloud migration strategy against long-, mid- and short-term goals can be helpful. The long-term plan may be to move 80% of your applications and data storage to the cloud; however, in the short term you will need to consider how you will maintain the accessibility and security of existing data, hardware and applications while the migration takes place.

Third party suppliers can help maintain legacy systems and hardware during the transition to ease disruption and ensure business continuity.

In line with this, cloud migration will inevitably involve the retirement of some hardware. From a security perspective, it is imperative to ensure any data stored on that equipment is securely erased or destroyed, to avoid exposing your organisation to the risk of data breaches.

Many organisations underestimate hard drive-related security risks or assume incorrectly that routine software management methods provide adequate protection.

Cloud migration: Meeting in the middle

Moving to the cloud often creates integration challenges, leaving IT managers to find ways to successfully marry up on-premise hardware with cloud-hosted systems.

In many cases, this involves making sure the network can handle smooth data transmissions between various information sources. However, getting cloud and non-cloud systems to work with one another can be incredibly difficult, involving complex projects that are not only difficult to manage, but also complicated by having fewer resources for the internal facility.

Cloud migration: Budget management challenges

With more budget being transferred off site for cloud systems and other IT outsourcing services, many IT managers are left with less to spend on their on-site IT infrastructure.

The financial pressure mounts as more corporate datacentres need to take on cloud attributes to keep up with broad technology strategies. Finding ways to improve IT cost efficiency is vital to addressing internal datacentre maintenance challenges.

Cloud migration: Post-project cost management

Following a cloud migration, retained legacy IT systems will continue to age, so it is worth investigating whether enlisting the help of a third-party maintenance provider might be of use.

A third-party maintenance provider can give IT managers the services they need, in many cases, for almost half the cost. The end result is that IT teams can have more resources to spend on the internal datacentre and are better prepared to support the systems that are still running in the on-site environment.

While hardware maintenance plans may not solve every problem, they provide a consistent source of fiscal and operational relief that makes it easier for IT teams to manage their data storage issues as they arise.

May 28, 2019  10:58 AM

The people vs. Amazon: Weighing the risks of its stance on facial recognition and climate change

Caroline Donnelly

Amazon’s annual shareholder meeting appears to have highlighted a disconnect between what its staff and senior management think its stance on facial recognition tech and climate change should be.

When in need of a steer on what the business priorities are for any publicly-listed technology supplier, the annual shareholders’ meeting is usually a good place to start.

The topics up for discussion usually provide bystanders with some insight into what issues are top of mind for the board of directors, and the people they represent: the shareholders.

From that point of view, the events that went down at Amazon’s shareholder meeting on Wednesday 22 May 2019 paint a (potentially) eye-opening picture of the retail giant’s future direction of travel.

Of the 11 proposals up for discussion at the event, there were a couple of hot button issues. Both were raised by shareholders, with the backing of the company’s staff.

The first relates to whether Amazon should be selling its facial recognition software, Rekognition, to government agencies. This is on the back of long-held concerns that technologies of this kind could be put to use in harmful ways that infringe on people’s privacy rights and civil liberties.

Shareholders, with the support of 450 Amazon staff, asked meeting participants to vote on whether sales of the software should be temporarily halted, until independent verification confirms it does not contribute to “actual or potential” violations of people’s privacy and civil rights.

The second, equally contentious, discussion point relates to climate change. Or, more specifically, what Amazon is doing to prepare for and prevent it, and to drive down its own consumption of fossil fuels. Shareholders (along with 7,600 Amazon employees) asked the company in this proposal to urgently create a public report on the aforementioned topics.

Just say no

And in response Amazon’s shareholders and board of directors voted no. They voted no to a halt on sales of Rekognition. And no to providing shareholders, staff and the wider world with an enhanced view of what it’s doing to tackle climate change.

The company is not expected to publish a complete breakdown of the voting percentages on these issues for a while, so it is impossible to say how close to the 50% threshold these proposals were to winning approval at the meeting.

On the whole, though, the results are not exactly surprising. In the pre-meeting proxy statement, Amazon’s board of directors said it would advise shareholders to reject all of the proposals put up for discussion, not just those pertaining to facial recognition sales and climate change.

Now, imagine for a minute you are a shareholder or you work at Amazon, and you’ve made an effort to make your displeasure about how the company is operating known. And, particularly where the facial recognition proposal is concerned, you get effectively told you are worrying about nothing.

In the proxy statement, where Amazon gives an account as to why it is rejecting the proposal, it says the company has never received a single report about Rekognition being used in a harmful manner, while acknowledging there is potential for any technology – not just facial recognition – to be misused.

Therefore: “We do not believe that the potential for customers to misuse results generated by Amazon Rekognition should prevent us from making that technology available to our customers.”

Commercially motivated votes?

A cynically-minded person might wonder if its stance on this matter is commercially motivated, given two of its biggest competitors, Microsoft and Google, are taking a more measured approach to how, and to whom, they market their facial recognition technology.

Microsoft, for example, turned down a request to sell its technology to a Californian law enforcement agency because it had been predominantly trained on images of white males, meaning there is a risk women and minority groups could be disproportionately targeted when it is used to carry out facial scans.

Google, meanwhile, publicly declared back in December 2018 that it would not be selling its facial recognition APIs, over concerns about how the technology could be abused.

Incidentally, Amazon’s shareholder concerns about the implications of selling Rekognition to government agencies were recently echoed in an open letter signed by various Silicon Valley AI experts. They include representatives from Microsoft, Google, and Facebook, who also called on the firm to halt sales of the technology to that sector.

Against that backdrop, it could be surmised that Amazon is trying to make the most of the market opportunity for its technology until the regulatory landscape catches up and slows things down. Or, perhaps, it is just supremely confident in the technology’s abilities and lack of bias. Who can say?

Climate change: DENIED

Meanwhile, the company’s very public rejection of its shareholder and staff-backed climate change proposal seems, at least to this outsider, a potentially dicey move.

The company’s defence on this front is that it appreciates the very real threat climate change poses to its operations and the wider world, and how its own operations may contribute towards that. To this end, it has already implemented a number of measures to support its transformation into a carbon-neutral, renewably-powered and more environmentally-friendly company.

But, where the general public is concerned, how much awareness is there about that? Amazon has a website where it publishes details of its progress towards becoming a more sustainable entity, sure, but it’s also very publicly just turned down an opportunity to be even more transparent on that.

Climate change is an issue of global importance, and one people want to see Amazon stand up and take a lead on, and even some of its own staff don’t think it is doing a good enough job right now. For proof of that, one only has to cast a glance at some of the blog posts the Amazon Employees for Climate Justice group has shared on the blogging platform, Medium, of late.

Amazon vs. Google: A tale of two approaches

From a competitive standpoint, its rejection of these proposals could end up being something of an Achilles heel for Amazon when it comes to retaining its lead in the public cloud market in the years to come.

When you speak to Google customers about why they decided to go with them over some of the other runners and riders in the public cloud, the search giant’s stance on sustainability regularly crops up as a reason. Not in every conversation, admittedly, but certainly in enough of them for it to be declared a trend.

From an energy efficiency and environmental standpoint, it has been operating as a carbon neutral company for more than a decade, and claims to be the world’s largest purchaser of renewable energy.

It is also worth noting that Google responded to a staff revolt of its own last year by exiting the race for the highly controversial $10bn cloud contract the US Department of Defense is currently entertaining bids for.

Amazon and Microsoft are the last ones standing in the battle for that deal, after Google dropped out on the grounds it couldn’t square its involvement in the contract with its own corporate stance on ethical AI use.

Reference is made within the proxy statement to the possibility that, given the level of staff indignation over some of these issues, the company could run into staff and talent retention problems at a later date.

And that is a genuine threat. For many individuals, working for a company whose beliefs align with their own is very important, and when you are working in a fast-growing, competitive industry where skills shortages are apparent, finding somewhere new to work where that is possible isn’t such an impossible dream.

And with more than 8,000 of its employees coming together to campaign on both these issues, that could potentially pave the way for a sizeable brain drain in the future if they don’t see the change in behaviour they want from Amazon in the months and years to come.


May 9, 2019  3:39 PM

Has the UK government’s cloud-first policy served its purpose?

Caroline Donnelly

The government has confirmed its long-standing public cloud-first policy is under review, and that it is seeking to launch an alternative procurement framework to G-Cloud. But why?

It is hard not to read too much into all the change occurring in the public sector cloud procurement space at the moment, and all too easy to assume the worst.

First of all there is the news, exclusively broken by Computer Weekly, that the government’s long-standing cloud-first policy is under review, six years after it was first introduced.

For the uninitiated, the policy effectively mandates that all central government departments should take a public cloud-first approach on any new technology purchases. No such mandate exists for the rest of the public sector, but they are strongly advised to do the same.

The policy itself was ushered in around the same sort of time as the G-Cloud procurement framework launched. Both were hailed by the austerity-focused coalition government in power at the time as key to accelerating the take-up of cloud services within Whitehall and the rest of the public sector.

Encouraging the public sector – as a whole – to ramp up their use of cloud (while winding down their reliance on on-premise datacentres) would bring cost-savings and scalability benefits, it was claimed.

Additionally, the online marketplace-like nature of G-Cloud was designed to give SMEs and large enterprises the same degree of visibility to public sector IT buyers. Meanwhile, its insistence on two-year contract terms would safeguard users against getting locked into costly IT contracts that lasted way longer than the technology they were buying would remain good value for money.

Revoking the revolution

The impact the cloud-first policy has had on the IT buying habits of central government should not be underestimated, and can be keenly felt when flicking through the revamped Digital Marketplace sales pages.

Of the £4.57bn of cloud purchases that have been made via the G-Cloud procurement framework since its launch in 2012, £3.7bn worth have been made by central government departments.

The fact Whitehall is mandated to think cloud-first is always the first thing IT market watchers point to when asked to give an account of why the wider public sector has been so (comparatively) late to the G-Cloud party.

Public sector IT chiefs often cite the policy as being instrumental in helping secure buy-in from their senior leadership teams for any digital transformation plans they are plotting.

But, according to the government procurement chiefs at the Crown Commercial Service (CCS) and the digital transformation whizz-kids at the Government Digital Service (GDS), the policy is now under review, and set for a revamp.

In what way remains to be seen. Although – in a statement to Computer Weekly – CCS suggested the changes will be heavily slanted towards supporting the growing appetite within the public sector for hybrid cloud deployments.

The trouble with curbing cloud-first behaviours

The obvious concern in all this is that, if the cloud-first mandate is revoked completely, central government IT chiefs might start falling back into bad procurement habits, whereby cloud becomes an afterthought and on-premise rules supreme again.

Maybe that is an extreme projection, but there are signs elsewhere that some of the behaviours that G-Cloud, in particular, was introduced to curb could be starting to surface again.

One only has to look at how the percentage of deals being awarded to SMEs via G-Cloud has started to slide of late, which has fed speculation that a new oligopoly of big tech suppliers is starting to form, one that will – in time – dominate the government IT procurement landscape.

Where G-Cloud is concerned, there are also rumblings of discontent among suppliers who populate the framework that it is becoming increasingly side-lined for a number of reasons.

There are semi-regular grumbles from suppliers that suggestions they have made to CCS or GDS, about changes they would like to see made to the framework, are being ignored, or not acted on as quickly as they would like.

Putting users first

Some of these suggestions are to do with making the framework less onerous and admin-heavy for SMEs to use, while others are geared towards making the whole cloud purchasing experience easier for buyers overall.

Either way, suppliers fear this perceived lack of action has prompted buyers to take matters into their own hands by setting up cloud procurement frameworks of their own because G-Cloud is no longer meeting their needs.

And that’s an argument that is likely to get louder following the news that CCS is considering launching an additional cloud hosting and services framework, where contracts of up to 5 years in length could be up for grabs.

A lot of innovation can happen over the course of five years, which leads to the logical assumption that any organisation entering into a contract that long might find itself at something of a technological disadvantage as time goes on.

While the public cloud community has stopped publicly making such a song and dance about price cuts, the fact is the cost of using these services continues to go down over time for users because of economies of scale.

However, if an organisation is locked into a five-year contract, will it necessarily feel the benefit of that? Or will it be committed to paying the same price it did at the start of the contract all the way through? If so, in what universe would that represent good value for money?

A lot can change between the consultation and go-live stages of any framework, but there are concerns that this is another sign the government is intent on falling back into its old ways of working where procurement is concerned.

Government cloud comes of age

Although, another way of looking at all this is as a sign that the cloud-first policy and G-Cloud have served their purpose. Together they have conspired to make public sector buyers feel so comfortable and confident with using cloud that they feel ready to go it alone and launch frameworks of their own.

Or, as their cloud strategies have matured, it has become apparent that for some workloads a two-year contract term works fine, but there are others where a longer-term deal might be a better, more convenient fit.

It is not out of the realms of possibility. It is worth noting the shift from public cloud-first to a policy that accommodates hybrid deployments is in keeping with the messaging a lot of the major cloud providers are putting out now, which is very different to what it was back in 2012-2013.

Around that time, Amazon Web Services (AWS) was of the view that enterprises would want to close down their datacentres, and move all their applications and workloads to the public cloud.

The company still toes a similar line today, but there is an admission in there now that some enterprises will need to operate a hybrid cloud model and retain some of their IT assets on-premise for some time to come. And the forthcoming change to the government’s cloud-first policy might simply be its way of acknowledging the same trend.


April 5, 2019  8:41 AM

Curbing cloud sprawl to keep IT costs down

Caroline Donnelly

In this guest post, Henrik Nilsson, vice president for Europe, Middle East and Africa (EMEA) at machine learning-based IT cost optimisation software supplier Apptio, offers enterprises some advice on what they can do to stop cloud sprawl in its tracks and keep costs down.

On the surface, it seems there are plenty of reasons for businesses to jump head-first into the cloud: agile working practices, the ability to scale resources, and a boost to the resiliency of their infrastructure, for example.

However, enthusiasm for the newest technology creates a tendency for business leaders to make investments without analysis to support strategic decision-making. Cloud is not immune.

Before long, cloud costs can escalate out of control. Usage can go through the roof, or business value from cloud-based applications can plummet, and accountability is replaced by a “Wild West” approach to resource use, whereby whoever needs a resource first gets to use it.

In this type of scenario, CIOs should take a step back and consider how to harness the power of the cloud to align with the wider objectives of the business.

Managing cloud sprawl can be the hardest part of aligning cloud usage to a business strategy. Cloud sprawl is the unintentional proliferation of spending on cloud, and is often caused by a lack of visibility into resources and communication between business units. Different departments across organisations want to take advantage of the benefits of cloud, but need to understand how this impacts budgets and the goals of the business.

To successfully master the multiple layers of cloud costs, IT and finance leaders need to see the full picture of their expenditure. They need to utilise data, drive transparent communication, and continuously optimise to stop cloud sprawl, achieve predictable spending, and build an informed cloud migration strategy.

Strategic cloud use

Using consumption, usage and cost data to make cloud purchasing decisions is the first step to stopping cloud sprawl at its root. Migration decisions should not be based on assumptions or emotion.

The cost to migrate workloads can be very high, so businesses need to understand not just the cost to move, but also how moving to the cloud will impact costs for networking, storage, risk management and security, and labour or retraining of an existing IT team. They also need to evaluate the total value achieved by this migration and align decisions with the strategic needs of the business.

A key driver of cloud sprawl is the assumption that cloud is the solution to any given business need. Not every department needs to migrate every part of its operation. In some instances, on-premises might be the right decision. A hybrid approach is considered by many to be the best balance – one survey suggested that a 70/30 split of cloud to on-prem was the ideal mix. This enables certain mission-critical applications to remain, while the majority of computing power may be moved to the public cloud.

Visibility through division

When cloud is carved up among stakeholders (from marketing to human resources to IT itself) it can be hard to get a clear picture of usage and costs. Multiple clouds are used for different needs, and over time usage creeps up on a ‘just in case’ basis, even where demand isn’t present.

To get a handle on this, the ever-expanding cloud demands of a business need to be calculated and then amalgamated into one single source of truth. A cohesive cloud approach is necessary across departments, or there is no hope of maximising the potential benefits.

Ideally, there needs to be a centralised system of record, whereby all cloud data (as well as its cost) can be viewed in a transparent format, and clearly articulated to any part of the business. Without this, cloud usage becomes siloed between departments or spread out across various applications and software, as well as compute power and storage. This makes it nearly impossible to have a clear picture of how much is being paid for and used – or equally, how much is needed.
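
As a rough illustration of what that single source of truth can look like at its simplest, the sketch below rolls hypothetical billing line items up by a department tag. The records and tag names are assumptions, standing in for whatever a real billing export or cost-management tool would provide.

```python
# Illustrative only: rolling scattered cloud billing line items up into one
# view per department. The records and tag names are hypothetical stand-ins.
from collections import defaultdict

line_items = [
    {"service": "compute",   "cost": 1200.0, "tags": {"department": "marketing"}},
    {"service": "storage",   "cost": 310.5,  "tags": {"department": "hr"}},
    {"service": "compute",   "cost": 980.0,  "tags": {"department": "marketing"}},
    {"service": "analytics", "cost": 450.0,  "tags": {}},  # untagged spend
]

spend_by_department = defaultdict(float)
for item in line_items:
    owner = item["tags"].get("department", "untagged")
    spend_by_department[owner] += item["cost"]

for department, total in sorted(spend_by_department.items()):
    print(f"{department}: £{total:,.2f}")
```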

Once a strong base has been established to visualise cloud usage, and different departments can make informed investment and purchasing decisions, optimisation is key. This may be something as simple as looking at whether a particular instance would be more efficiently run under a pay-as-you-use model versus a reserved spend, or calculating the value that has been gained from migrating depreciated servers to the cloud. This can then inform similar future decisions.
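
The pay-as-you-use versus reserved question largely comes down to simple arithmetic on utilisation. The sketch below is illustrative only, with made-up rates; real pricing should come from the provider.

```python
# Illustrative only: comparing on-demand (pay-as-you-use) pricing with a
# reserved commitment for a single instance. The rates are placeholders.
HOURS_PER_MONTH = 730
on_demand_rate = 0.10        # £ per hour, billed only while running
reserved_monthly_cost = 45.0 # £ per month, billed whether used or not

def monthly_cost_on_demand(hours_used: float) -> float:
    return hours_used * on_demand_rate

# Find the utilisation level at which the reserved commitment starts to win.
break_even_hours = reserved_monthly_cost / on_demand_rate
print(f"Reserved is cheaper above {break_even_hours:.0f} hours "
      f"({break_even_hours / HOURS_PER_MONTH:.0%} utilisation) per month")
```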

Optimising in this way ensures that cloud usage isn’t allowed to spiral out of control. Cloud is a necessary modern technology to fuel innovation, but businesses need to rein in wasted spend and resources to eliminate cloud sprawl and ensure the right cloud investment decisions are being made to support broader business strategies.


March 20, 2019  12:32 PM

What the Myspace data loss debacle tells us about how the internet values creative content

Caroline Donnelly

The news that Myspace lost 12 years of user-generated content during a botched server migration has prompted a lot of debate this week, which Caroline Donnelly picks over here. 

When details of Myspace’s server migration snafu broke free from Reddit, where the incident has been discussed and known about for the best part of a year, the overriding reaction on Twitter was one of disbelief.

In no particular order, there was the shock at the size of the data loss, which equates to 12 years of lost uploads (or 50 million missing songs), and the fact these all disappeared because Myspace didn’t back the files up before embarking on its “server migration”. Or its backup procedures failed.

There was also a fair amount of surprise expressed that Myspace is still a thing that exists online, which might explain why it took so long for the rest of the internet to realise what had gone down there.

And now that they have, the response online has been (predictably) snarky. Myspace, for its part, issued a brief statement apologising for any inconvenience caused by its large-scale data deletion, but that has been the extent of its public response to the whole thing.

That in itself is quite interesting. The fact a company can lose 12 years of user data, and shrug it off so nonchalantly. Obviously it would be a completely different state of affairs if it was medical records or financial data the company had accidentally scrubbed from its servers.

Digital legacy destruction

What the situation does serve to highlight, though, is how precarious our digital legacies are. In amongst all the nostalgia-laden Twitter jokes from people who used Myspace back when it was at the height of its social networking power, there was also a smattering of genuinely distraught posts from people dismayed at what they had lost.

Individuals who had been prolific Myspace songwriters over the years lost sizeable collections of content. In many cases, these people had trusted Myspace at its word when it said it was working on restoring access to their files, as complaints about playback issues on the site first started to surface back in early 2018.

These are people who have spent a long time curating content that made it possible for Myspace to double-down on its efforts to become a music streaming site, long after the people who first flocked to the site for social networking had hot-footed it to Facebook and Twitter.

I guess the incident should act as a timely reminder that uploading your data to a social networking site is not the same as backing it up, and if you don’t have other copies of this content stored somewhere else, that’s on you.

It’s not exactly helpful, though, and it doesn’t make the situation any less gutting for the people whose data has been lost forever.

There also seems to be an attitude pervading all this that because it’s just “creative content” that has gone, what’s the harm? I mean, can’t you just make more?

It’s a curious take that highlights – perhaps – how little value society places on creative content that’s made freely available online, while also ignoring the time and effort it takes for people to make this stuff. A lot of it is also of its time, making it impossible for its makers to recreate.

Analysing Myspace’s response

There has been speculation that Myspace’s nonchalant attitude to losing so much of its musical content could be down to the fact the deletion was more strategic than accidental.

This is a view put forward by ex-Kickstarter CTO Andy Baio, who said in a Twitter post that the firm might be blaming the data loss on a botched server migration because it sounds better than admitting it “can’t be bothered with the effort and cost of migrating and hosting 50 million old MP3s”.

And there could be something in that. It is, perhaps, telling that it has taken so long for details of the data loss to be made public, given Myspace users claim they first started seeing notifications about the botched server migration around eight months ago.

That was about five or six months after reports of problems accessing old songs and video content began to circulate on the web forum, Reddit, with Myspace claiming – at the time – that it was in the throes of a “maintenance project” that might cause content playback issues.

Meanwhile, the site’s news pages do not appear to have been updated since 2015, its Twitter feed last shared an update in 2017, and Computer Weekly’s requests for comment and clarification have been met with radio silence.

The site looks to be in the throes of a prolonged and staggered wind-down, which – in turn – shows the dangers of assuming the web entities we entrust our content to today will still be around tomorrow.


March 1, 2019  3:28 PM

Kubernetes management made easier for enterprises

Caroline Donnelly

In this guest post, Rob Greenwood, CTO at Manchester-based cloud and DevOps consultancy, Steamhaus, sets out why the emergence of Amazon’s managed Kubernetes service is such good news for the enterprise. 

In just four years since its conception, Kubernetes has become the de facto tool for deploying and managing containers within public cloud environments at scale. It has won near universal backing from all major cloud players and more than half of the Fortune 100 have already adopted it too.

While alternatives exist, namely in the form of Docker Swarm and Amazon ECS, Kubernetes is by far the most dominant. Adoption has been so pervasive that, even in private cloud environments, the likes of Mesosphere and VMware have fully embraced the technology.

Perhaps the most significant development, though, was AWS’s general release of its managed Kubernetes service (EKS) in June 2018. Amazon is well known for listening to its customers, and the desire to see it provide a managed service for Kubernetes was palpable – it was one of the most requested features in AWS history.

Despite developing its own alternative in ECS, AWS decided to go with the will of the crowd on this one and offer full support. This was a wise move when you consider that 63% of Kubernetes workloads were running on AWS when Amazon announced EKS (although by the time it was made generally available this was said to have fallen to 57%).

Not having this managed service for Kubernetes in place was also creating a few headaches for its users. As Deepak Singh, director of AWS container services, explained: “While AWS is a popular place to run Kubernetes, there’s still a lot of manual configuration that customers need to manage their Kubernetes clusters… This all requires a good deal of operational expertise and effort, and customers asked us to make this easier.”
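
To give a sense of what “managed” means in practice, here is a minimal sketch that queries EKS clusters rather than building or operating the control plane. It assumes boto3 is installed and AWS credentials are configured; the region is an arbitrary placeholder.

```python
# Illustrative only: with a managed service such as EKS, the control plane is
# something you query rather than build and patch yourself.
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    # AWS operates the masters; we just read back status, version and endpoint.
    print(f'{name}: status={cluster["status"]}, '
          f'kubernetes={cluster["version"]}, endpoint={cluster["endpoint"]}')
```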

The benefits of curing the Kubernetes management issues

If truth be told, this created a lot of work for teams like my own – which you might think sounds like a good thing. But building and managing Kubernetes clusters is not really the way we would like to use a client’s budget, especially if an alternative exists.

With a managed service now available, we can focus on optimising the overall architecture, migrating applications and enabling our clients to adopt DevOps methodology instead.

Because Kubernetes is a multi-cloud platform, organisations could, in theory, have just moved their clusters to another public cloud provider that supports it. But that would have involved even more time and expense and, perhaps more importantly, would have meant moving away from many of the features that AWS offers.

With AWS now offering to take care of the build and management side of Kubernetes, however, those difficult decisions don’t need to be made. With all the major public cloud platforms now supporting the technology, the providers will take care of the build and management. This means improvements will be made to ensure better integration, scaling and availability. As time goes on, this will only get better and, from the client’s perspective, the assumption is that this will also lead to significant cost savings further down the line.

You could reflect on the current situation and say it’s not good for one technology to be so dominant, but I’m really struggling to see the negatives. It is cheaper for organisations, AWS retains its customer base, and cloud architects get to focus on the job they’ve been brought in to do.

Of course, as an industry we can’t rule out the possibility that new innovations may come along and change the game once again. But, for the time being, the debate is over: we’re working with Kubernetes. And this should end some needless back and forth, allowing us to focus on better ways to introduce new features, scale and secure public cloud infrastructure.


January 11, 2019  4:11 PM

Improbable vs. Unity: Why enterprise cloud users should take notice

Caroline Donnelly
cloud, Unity

Enterprise cloud users may have written off the spat between gaming companies Improbable and Unity as nothing more than industry in-fighting, but there are some cautionary tales emerging from the dispute they would do well to take note of, argues Caroline Donnelly.

The bun fight that has broken out over the last day or so between UK-based gaming company Improbable and its long-time technology partner Unity has (most likely) gone unnoticed by the majority of enterprise cloud users.

After all, what does a terms and conditions (T&Cs)-related dispute between two gaming firms have to do with them?

But, within the claims and counter claims being bandied about online by both parties (and other assorted gaming industry stakeholders) are some important points that should give enterprises, in the throes of a cloud migration, some degree of pause.

In amongst all the bickering (which we will get on to shortly) there are cautionary notes to be found about why it is so important for enterprises not to rush blindly into the cloud, and to take steps to protect themselves from undue risk. To ensure they won’t be left high and dry if, for example, their preferred cloud partner should alter their service terms without due warning.

From a provider standpoint too, the case also highlights a problem lots of tech firms run into, where their products end up being used in ways they hadn’t quite planned on, which might – in turn – negatively affect their commercial interests or corporate ethics.

Improbable vs. Unity: The enterprise angle

For those who haven’t been following (or simply have no clue who either of these two firms are), Improbable is the creator of a game development platform called SpatialOS that some gaming firms pair with Unity’s graphics engine to render their creations and bring them to life.

But, since Thursday 10 January 2019, the two companies have been embroiled in an online war of words. This started with the emergence of a blog by Improbable that claimed any game built using the Unity engine and SpatialOS is now in breach of Unity’s recently tweaked T&Cs.

This could have serious ramifications for developers who have games in production or under development that make use of the combined Unity-Improbable platform, the blog post warned.

“This is an action by Unity that has immediately done harm to projects across the industry, including those of extremely vulnerable or small-scale developers and damaged major projects in development over many years.”

Concerns were raised, in response to the blog, on social media that this might lead to some in-production games being pulled, the release dates of those in development being massively delayed, and some development studios going under as a result.

Unity later responded with a blog post of its own, where it moved to address some of these concerns by declaring that any existing projects that rely on the combined might of Unity and Improbable to run will be unaffected by the dispute.

When users go rogue

The post also goes on to call into question Improbable’s take on the situation, before accusing the firm of making “unauthorised and improper use” of Unity’s technology and branding while developing, selling and marketing its own wares, which is why it is allegedly in breach of Unity’s T&Cs.

“If you want to run your Unity-based game-server, on your own servers, or a cloud provider that [gives] you instances to run your own server for your game, you are covered by our end-user license agreement [EULA],” the Unity blog post continued.

“However, if a third-party service wants to run the Unity Runtime in the cloud with their additional software development kit [SDK], we consider this a platform. In these cases, we require the service to be an approved Unity platform partner.

“These partnerships enable broad and robust platform support so developers can be successful. We enter into these partnerships all the time. This kind of partnership is what we have continuously worked towards with Improbable,” the blog post concludes.

Now, the story does not stop there. Since Unity’s blog post went live, Improbable has released a follow-up post. It has also separately announced a collaboration with Unity rival Epic Games that will see the pair establish a $25m fund to help support developers that need to migrate their projects “left in limbo” by the Unity dispute. There will undoubtedly be more to come before the week is out.

Calls for a cloud code of conduct

While all that plays out, though, there is an acknowledgement made in the second blog post from Improbable about how the overall growth in online gaming platforms is putting the livelihoods of developers in a precarious position, and increasingly at the mercy of others.

Namely the ever-broadening ecosystem of platform providers they have to rely upon to get their games out there. And there are definite parallels to be drawn here for enterprises that are in the throes of building out their cloud-based software and infrastructure footprints too.

“As we move towards more online, more complex, more rapidly-evolving worlds, we will become increasingly interdependent on a plethora of platforms that will end up having enormous power over developers. The games we want to make are too hard and too expensive to make alone,” the blog post reads.

“In the near future, as more and more people transition from entertainment to earning a real income playing games, a platform going down or changing its Terms of Service could have devastating repercussions on a scale much worse than today.”

The company then goes on to make a case for the creation of a “code of conduct” that would offer developers a degree of protection in disputes like this, by laying down some rules about what is (and what is not) permissible behaviour for suppliers within the ecosystem to indulge in.

There are similar efforts afoot within the enterprise cloud space focused on this, led by various governing bodies and trade associations. Yet reports of providers introducing unexpected price hikes or shutting down services with little notice still occur. So one wonders if a renewed, more coordinated push on this front might be worthwhile within the B2B space as well.


November 26, 2018  8:47 AM

Enterprise DevOps: How to build a high-performing technology team

Caroline Donnelly
cloud, Database, DevOps

In this guest post, DevOps consultant and researcher Dr. Nicole Forsgren, PhD, tells enterprises why execution is key to building high-performing technology teams within their organisations.

I often meet with enterprise executives and leadership teams, and they all want to know what it takes to make a high-performing technology team.

They’ve read my book Accelerate: The Science of Lean Software and DevOps and my latest research in the Accelerate: State of DevOps 2018 Report (both co-authored with Jez Humble and Gene Kim), but they always ask: “If I could narrow it down to just one piece of advice to really give an organisation an edge, and help them become a high performer, what would it be?”

My first (and right) answer is that there is no one piece of advice that will apply to every organisation, since each is different, with their own unique challenges.

The key to success is to identify the things, such as technology, process or culture, currently holding the organisation back, and work to improve those until they are no longer a constraint. And then repeat the process.

This model of continuous improvement – identifying the current constraint, addressing it, and repeating – is in effect a strategy in itself, and the most effective way to accelerate an organisational or technology transformation.

However, in looking over this year’s research and checking back in with some of my leadership teams, I realised that one piece of advice would be applicable to everyone, and that is: “High performance is available to everyone, but only if they are willing to put in the work; it all comes down to execution.”

Yes, that’s right – basic execution. While revolutionary ideas and superior strategy are important and unique to your business, without the ability to execute and deliver these to market, your amazing ideas will remain trapped.

Either trapped in a slide deck or in your engineering team, without ever reaching the customer where they can truly make a difference.

If you still aren’t convinced, let’s look at some examples: cloud computing and database change management.

Cloud computing: Done right

Moving to the cloud is often an important initiative for companies today. It is also often something technical executives talk about with frustration: an important initiative that hasn’t delivered the promised value. Why is that? Lack of execution.

In the 2018 Accelerate State of DevOps Report, we asked respondents – almost 1,900 technical experts – if their products or applications were in the cloud. Later in the survey, we asked for details about the products and applications they supported. Specifically, these questions covered five technical characteristics regarding their cloud usage:

  • On-demand self-service. Consumers can provision computing resources as needed, automatically, without any human interaction required.
  • Broad network access. Capabilities are widely available and can be accessed through heterogeneous platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. Provider resources are pooled in a multi-tenant model, with physical and virtual resources dynamically assigned and reassigned on-demand. The customer generally has no direct control over the exact location of provided resources, but may specify location at a higher level of abstraction (e.g., country, state, or datacentre).
  • Rapid elasticity. Capabilities can be elastically provisioned and released to rapidly scale outward or inward commensurate with demand. Consumer capabilities available for provisioning appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimise resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported for transparency.

The research revealed 22% of respondents (who said they were in the cloud) also said they were following all five of these essential practices. Why is this important? Because these practices are what make cloud computing, according to the National Institute of Standards and Technology’s (NIST) own definition.

And so, it follows, if teams are not executing on all five of them, they aren’t actually doing cloud. This is not necessarily the fault of the technical teams, but the leadership team, who set initiatives for the organisation and define targets.

Now, why does this matter? We found teams that met all five characteristics of cloud computing were 23 times more likely to be elite performers. This means they are able to develop and deliver software with speed, stability, and reliability — getting their ideas to their end users.

So, if you want to reap the benefits of the cloud and leverage it to really drive value to your customers, you have to execute.

Database: A hidden constraint

Database changes are often a major source of risk and delay when performing deployments so in this year’s research we also investigated the role of database change management.

Our analysis found that integrating database work into the software delivery process positively contributed to continuous delivery, and was a significant predictor of delivering software with speed, stability, and reliability.

Again, these are key to getting ideas and features (or even just security patches) to end users. But what do we mean by database change management? There are four key practices included in doing this right:

  • Communication. Discuss upcoming database and schema changes with developers, testers, and the people that maintain your database.
  • Including teams. This goes a step beyond discussion to really including the teams involved in software delivery in discussions and designs. Involving all players is what DevOps is all about, and it will make these changes more successful.
  • Comprehensive configuration management. This means including your database changes as scripts in version control and managing them just as you do your other application code changes.
  • Visibility. Making sure everyone in the technical organisation has visibility to the progress of pending database changes is important.

To restate a point from above, when teams follow these practices, database changes don’t slow software teams down or cause problems when they perform code deployments.
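
As a rough illustration of “database changes as scripts in version control”, the sketch below applies migrations in order and records what has already run in a tracking table. It is a minimal, assumed example using SQLite as a stand-in for whichever database and migration tooling a team actually uses.

```python
# Illustrative only: a bare-bones take on version-controlled schema changes.
# Migration files would normally live in the repo alongside application code;
# here they are inline strings, and sqlite3 stands in for the real database.
import sqlite3

MIGRATIONS = [
    ("001_create_customers", "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email",        "ALTER TABLE customers ADD COLUMN email TEXT"),
]

def apply_migrations(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT id FROM schema_migrations")}
    for migration_id, statement in MIGRATIONS:
        if migration_id in applied:
            continue  # already deployed; visible to everyone via the tracking table
        conn.execute(statement)
        conn.execute("INSERT INTO schema_migrations (id) VALUES (?)", (migration_id,))
        conn.commit()
        print(f"applied {migration_id}")

apply_migrations(sqlite3.connect(":memory:"))
```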

These may look similar to, well, DevOps: integrating functionality throughout the pipeline and shifting left, and that’s exactly what it is.

The great news about this is we can take things we have learned about integrating and shifting left on other parts of our work (like IT operations and security) and apply them to the database.

Our analysis this year also found that database change management is equally important for teams at every stage of their journey, whether they were low, medium, or high performers.

We see this in our conversations in the industry: data is a hard problem and one that eventually needs to be tackled… and one that needs to be tackled with correct practices.

Particularly in data- and transaction-heavy environments, teams and organisations can leverage their data to drive value to their customers – but only when they execute on the key practices necessary.

Execution: Both simple and hard

For anyone who skips to the bottom of an article looking for the summary, here you go: execute on the basics and don’t skip steps. I acknowledge this is often difficult, but the data is definitive: teams and organisations that don’t fully execute don’t realise the benefits. The great news is this: excellence is available to everyone. Good luck!


November 23, 2018  11:43 AM

Going green: Boosting the sustainability of Europe’s booming datacentre market

Caroline Donnelly
Colocation, datacentre

In this guest post, Jens Struckmeier, founder and CTO of cloud service provider Cloud&Heat Technologies, shares his views on the ecological and technological challenges caused by Europe’s booming datacentre market.

People have become accustomed to many digital conveniences, thanks to the emergence of Netflix, Spotify, Alexa and YouTube. What many may not know is that with every crime series they stream online, they generate data streams that also take a toll on the environment because of their energy consumption.

Most of the data generated in this way flows through the cloud, which is made up of high-performance datacentres that relieve local devices of most of the computing work. These clouds now account for almost 90 percent of global data traffic. More and better cloud services also mean more, and more powerful, datacentres. These in turn consume a lot of energy and generate a lot of waste heat – not a good development from an ecological or economic point of view.

Information and communication technology (ICT), with its datacentres, now accounts for three to four percent of the world’s electric power consumption, estimates the London research department of Savills. Furthermore, from a global perspective, it is thought they consume more than 416 terawatt hours (TWh) of electricity a year – about as much as the whole of France. And that’s just the tip of the iceberg.

The Internet of Things, autonomous vehicles, the growing use of artificial intelligence (AI), the digitalisation of public administration, and the fully automated networked factories of industry 4.0 are likely to generate data floods far beyond today’s levels.

According to Cisco, global data traffic will almost double to 278 exabytes per month by 2021. This means that even more datacentres will be created or upgraded, with a trend towards particularly large, so-called hyperscale datacentres with thousands of servers.

Why datacentre location matters

In competitive terms, the rural regions of Northern Europe benefit from these trends. In cool regions such as Scandinavia, power is available in large quantities and at low cost.

Additionally, the trend towards hyperscale datacentres means demand for decentralised datacentres, located near where the processed data is needed, is also growing. If the next cloud is physically only a few miles away, there are only short delays between action and reaction, and the risk of failure is also reduced.

With a volume of over £20 billion, the UK is Europe’s largest target market for datacentre equipment. When all construction and infrastructure expenses are added together, Savills estimates that 41% of all investments in the UK were made between 2007 and 2017.

The Knowledge Transfer Network (KTN) estimates annual growth at around 8%. The UK currently has 412 facilities of different sizes, followed by Germany with 405 and France with 259. For comparison: 2,176 datacentres are concentrated in the home country of the internet economy, the US.

In a comparison of European metropolises, London is the capital of datacentres. The location has twice the computing capacity of Frankfurt, the number two in the market. According to Savills, however, the pole position could change after Brexit. Either way, London is likely to remain a magnet as an important financial and technology hub, with the location benefiting from existing ecosystems and major hubs such as Slough.

Environmental pressure on the rise

However, as the European leader in terms of location, the UK also has to deal particularly intensively with the energy and environmental challenges in this sector.

Overall, the larger datacentres consume around 2.47 terawatt hours a year, claims techUK. This corresponds to about 0.76 percent of total energy production in the UK. In London alone, capacities with a power consumption of 384 megawatts were installed in 2016, which corresponds roughly to the output of a medium-sized gas-fired power plant such as the Lynn Power Station in Norfolk.

In view of the strong ecological and economic challenges associated with this development, around 130 data centre operators have joined the Climate Change Agreement (CCA) for datacentres.

In this agreement, they voluntarily commit themselves to improving the power usage effectiveness (PUE) of their facilities by 15 percent by 2023.

PUE is the ratio between the total energy consumed by a facility and the energy consumed by the IT infrastructure itself. In the UK, the average PUE is around 1.9: almost as much energy goes on cooling and other overheads as on actually running the IT equipment.
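
To make the arithmetic explicit, a quick sketch (with placeholder figures) shows how a PUE of 1.9 breaks down:

```python
# Illustrative only: PUE is total facility energy divided by IT equipment
# energy, so the figures below are placeholders to show the arithmetic.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

it_load = 1_000.0   # kWh consumed by servers, storage and network
overhead = 900.0    # kWh consumed by cooling, power distribution, etc.
print(f"PUE = {pue(it_load + overhead, it_load):.2f}")  # 1.90, the UK average cited above
```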

Smarter power use

There are several ways to use this energy more intelligently: while older, smaller datacentres are often air- or water-cooled, resource-conscious operators rely on thermally more effective hot water-cooling systems.

The computer racks are not cooled with mechanically chilled water or air, but with water that is, for example, 40 degrees warm, which the cooling process makes around five degrees warmer. In the northern parts of Europe, where the outside temperature hardly ever rises above 40 degrees, the water can then quite simply be cooled back down to the inlet temperature outside.

Because the water or air does not have to be cooled down mechanically, the energy savings are significant: compared to the cold-water version, hot water cooling consumes up to 77 percent less electricity, depending on the selected inlet temperature and system technology.

The benefits of datacentre heat reuse

Modern technological approaches make it possible to utilise waste heat from datacentres that has previously been cooled away in a costly and time-consuming process. Engineers link the cooling circuits of the racks directly with the heating systems of office or residential buildings.

Power guzzlers can thus become heating and power stations for cities, which in future will not only be climate-neutral but also climate-positive. This not only saves energy and money, but also significantly reduces carbon dioxide emissions.

If all datacentres in the UK were to use this technology, up to 530,000 tons of carbon dioxide could be saved annually. This would correspond to 7,500 football pitches full of trees or a forest the size of Ipswich. 

Green datacentres and Sadiq Khan’s environmental goals

Many market analysts and scientists are convinced that these solutions will also have a political resonance, as they open fascinating new scope for local solutions and regional providers.

This is ensured by internet law and data protection regulations, but also by technological necessities. It is foreseeable, for example, that concepts such as edge cloud computing, fog computing and the latency constraints mentioned above will soon see mobile and local micro supercomputers take over some of the tasks of traditional, centralised computing centres.

Market participants would be well advised to prepare for all these trends in good time. Digitalisation cannot be stopped – instead, it is necessary to actively shape a sustainable, green digital future.

The special role of datacentres in this digital change is becoming more and more relevant for society in general.

The ambitious goals recently formulated by London Mayor Sadiq Khan to reduce environmental pollution and carbon dioxide emissions will be a real challenge for operators.

By 2020, for example, all real estate is to use only renewable energy. By 2050, London is to become a city that no longer emits any additional carbon dioxide. Datacentre operators will inevitably have to rely on sustainable, energy-efficient technologies.


November 9, 2018  2:18 PM

Common cloud myths, and how to bust them

Caroline Donnelly
cloud, Enterprise, Public Cloud

In this guest post, Neil Briscoe, CTO of hybrid cloud platform provider 6point6 Cloud Gateway, sets about busting some common cloud myths for enterprise IT buyers

The cloud provides endless opportunity for businesses to become more agile, efficient and – ultimately – more profitable. However, the hype around cloud has made firms so desperate for a piece of the action that they’ve become blind to the fact it might not be the right move for every company.

Many firms get sucked into the benefits of the cloud, without truly understanding what the cloud even is. I mean, would you buy a car without knowing how to drive it?

The eight fallacies of distributed computing have guided many leaders towards making the right decisions for their businesses when adopting new systems. However, as the cloud continues to evolve, it can be hard to keep up.

With this in mind, here are some of the most commonly recurring myths about cloud that need to be busted, so businesses can make sound decisions on what is right for them.

Myth one – You have to go all-in on cloud

As with any hyped innovation, cloud is being positioned by many as a ‘fix all solution’ and somewhere that businesses can migrate all their systems to in one swoop.

In reality, many services are at different stages of their life cycle and not suitable for full migration, and running legacy systems is no bad thing.

It’s important that each system is fit for purpose, so shop around and see exactly which one fits with your strategic roadmap. By trying to fit a square peg in a round hole, you open yourself up to all sorts of security and operational issues.

Myth two – Cloud is cheaper than on-premise

Datacentres are notoriously expensive but the hidden costs in maintaining a cloud infrastructure can be deceptively high. Don’t get distracted by the desirably low initial CAPEX and onboarding costs for migrating to the cloud, as the ongoing OPEX can shoot you in the foot long-term.

Public cloud service providers like AWS are now being much more transparent with their Total Cost of Ownership (TCO) calculators, and the numbers are quite accurate as long as you have discipline. This in turn is reducing the need for enterprises to make large capital expenditures, and means they only pay for what they need and use.

Having a long-term plan and understanding where, why and how the cloud will benefit your business will – in turn – reduce your costs. Migrating the whole datacentre will not.
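
To make the CAPEX-versus-OPEX point concrete, the sketch below runs a crude three-year comparison. Every figure is a made-up placeholder; a provider’s TCO calculator is the place to get real numbers.

```python
# Illustrative only: a crude three-year comparison of amortised on-premise
# spend versus a cloud pay-as-you-go bill. All figures are placeholders.
YEARS = 3

on_prem_capex = 150_000.0          # hardware bought up front
on_prem_opex_per_year = 40_000.0   # power, space, support contracts
cloud_opex_per_month = 7_000.0     # consumption-based bill

on_prem_total = on_prem_capex + on_prem_opex_per_year * YEARS
cloud_total = cloud_opex_per_month * 12 * YEARS

print(f"On-premise over {YEARS} years: £{on_prem_total:,.0f}")
print(f"Cloud over {YEARS} years:      £{cloud_total:,.0f}")
```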

Myth three – Cloud is more secure than on-premise

On-premise datacentres are clear-cut: you have a door to go in and out of, you can have a reasonable grasp of where your external perimeter is, and you can then secure it. In the cloud, there are infinite doors and windows for a malicious actor to exploit if they are not secured properly.

Cloud providers give you the tools to secure your data and infrastructure probably better than you can with on-premise datacentre tools, as long as they’re used correctly.

It’s very easy to build a service and make it completely public-facing without having any resilience or security at all. Don’t assume that all cloud providers will create such high levels of security in the architecture “by default”. Sometimes you must dig deep and truly understand what is happening under the hood.

Myth four – One cloud provider is all you need

Don’t be afraid to go multi-cloud. Each system is different, and you have the power to choose the best-of-breed system for all workloads, even if this means using more than one provider.

As cloud is relatively new, people are still experimenting and figuring out what suits them best. By going multi-cloud, enterprises aren’t restricted to one particular operator and can get best-for-the-job services without sacrificing on agility. Never sacrifice on agility. You can have your cake and eat it.

Myth five – It is better to build, not buy

Building a private network and connectivity platform in the cloud sounds desirable; you build the best platform to serve your requirements without answering to an external vendor. However, building your own network isn’t as easy as 1, 2, 3 and, once you’ve built it, maintaining it is the hardest step.

Talent is in high demand, so keeping it in-house to carry on developing and improving your network, without letting it walk out of the door, is an ongoing challenge – as is ensuring continuity.

By “buying”, it is possible to remove the headache of managing the connectivity, security and network agility of a potentially complex architecture, allowing enterprises to focus on the actual digital transformation itself without draining resources on what is becoming a commodity IT item.

In closing, while cloud isn’t a new innovation, enterprises and suppliers alike are continuously developing and understanding how it can benefit business.

Enterprises need to de-sensitise themselves from the hype and investigate exactly how and why the cloud will improve efficiency, competitiveness and reduce costs.

Cloud is changing every day and everyone’s experiences are different, but keeping your eyes and ears open to both the numerous benefits and the threats it can create will ensure that enterprises use the cloud effectively.

