Ahead in the Clouds


August 13, 2019  8:45 AM

Multi-cloud: Reaping the benefits

Caroline Donnelly

In this guest post, Will Grannis, founder and director of the Google Office of the CTO, sets out how adopting a multi-cloud strategy can help enterprises navigate IT challenges around value, risk and cost.

Innovation in the public cloud continues at a staggeringly fast pace. Cloud service providers (CSPs) are continually bringing new services to market, offering a greater depth of choice to enterprise customers. The conventional approach of picking a single cloud vendor and ‘locking in’ to its infrastructure looks far less effective in the context of such rapid change.

Multi-cloud, however, has emerged as a means of capitalising on the freedom of choice the public cloud offers, not to mention the differentiated technology each supplier in this space can provide. While the meaning of multi-cloud appears self-evident, in reality it’s more about shifting the role of the organisation’s IT function.

IT needs to shift its focus towards consuming ‘as-a-service’ and having the tools in place to architect across different cloud platforms, and then re-architect over time as business requirements change. In essence, a multi-cloud strategy is a service-first strategy.

Multi-cloud means service-first

For most businesses, this as-a-service approach has become the norm. According to Gartner, by 2021, more than 75% of midsize and large organisations will have adopted a multi-cloud and/or hybrid IT strategy.

The advantages of having a multi-cloud strategy are closely aligned with businesses’ broader IT goals of revenue acceleration, improved agility and time to market, as well as cost reduction. Reduced supplier lock-in, increased scope for cost optimisation, greater resilience and more geographic options all serve to provide a stronger basis for operating applications and workloads at scale.

A greater choice of geographical and virtual locations is important for enterprises that have strict requirements about where data can be stored and processed. The EU’s General Data Protection Regulation (GDPR), for example, included the ‘right to data portability’, specifying that personal (e.g. employee or customer) data must be easily transferred from location to location.

In fact, TechUK’s Cloud2020 Vision Report recommends that organisations ingrain data portability and interoperability into their systems. This means that as more options come online, they are then able to re-evaluate their decisions as necessary.

The aim of a multi-cloud strategy should be to build and maintain capabilities to assess an IT workload and decide where to place it, balancing out a variety of factors. Considerations for placement include how workload location can help IT maximise the value delivered to the business, risks associated with a particular deployment choice, costs of each placement and how well each choice fits with its surrounding architecture.

Multi-cloud as the route to innovation

Adopting multi-cloud means the enterprise is able to embrace the new technologies offered by each CSP it uses. Not only does this create cross-cloud resilience, it also gives the ability to run workloads in specific locations that are not offered by every CSP.

Furthermore, it ensures organisations can take advantage of the latest offerings around emerging technologies like machine learning and the Internet of Things.

This level of flexibility must also be supported by interoperability built into the fabric of the multi-cloud’s design. Using a containerised approach, based on open source technology such as Kubernetes, enterprises can ensure that they are able to fluidly move data between cloud platforms and back into their private environments, in order to make compliance and best-fit service provisioning effortless.
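To make that portability concrete, here is a minimal sketch using the official Kubernetes Python client. The kubeconfig context names are hypothetical stand-ins for clusters running on different clouds and on-premise; the point is simply that one deployment definition can be applied, unchanged, to each of them:

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: one per cluster, wherever it runs.
CONTEXTS = ["gke-europe", "eks-europe", "on-prem-lab"]

# A single, cloud-agnostic deployment definition.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web",
                    image="nginx:1.17",
                    ports=[client.V1ContainerPort(container_port=80)],
                ),
            ]),
        ),
    ),
)

# Apply the identical manifest to every cluster, regardless of provider.
for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=deployment)
    print(f"Deployed hello-web to {ctx}")
```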

Having this level of interoperability brings significant efficiency benefits. In a single-cloud architecture, IT teams have to write applications specifically for whichever cloud platform they are using.

Organisations are then forced to maintain several teams of developers with siloed skill sets that cannot be applied to more than one technology stack.

Write once, run anywhere with containers

With containers, they have the cross-platform compatibility to write once and run anywhere. This is an approach supported by Google Cloud’s Anthos multi-cloud management platform.

Anthos is an open source-based technology, incorporating Kubernetes, Istio and Knative, that lets teams build and manage applications in their existing on-premises environments or in the public cloud of their choice.

As the use of multiple cloud services within a single enterprise IT environment becomes the reality, enabling public and private clouds to work in harmony needs to be an organisational priority.

Finding a right-fit environment for each workload means IT can accelerate application development with performance, cost and resilience at its foundation, helping unlock the true range of possibilities a multi-cloud world has to offer.

August 7, 2019  3:00 PM

Digging into the cloud security arguments of the Capital One data breach

Caroline Donnelly

In this guest post, Dob Todorov, CEO and chief cloud officer, HeleCloud, sets out why it is wrong to declare cloud not fit for business use in the wake of the Capital One data breach.

In two separate statements last week, Capital One announced a data breach involving a large amount of customer data, including credit card information and personal information, stored on the Amazon Web Services (AWS) Simple Storage Service (S3).

The data breach, understandably, sparked a flurry of media interest, resulting in a broad set of statements and interpretations from many different individuals. Yet one claim that gained an unfair amount of momentum in the wake of the Capital One data breach was that it shows cloud is not secure enough to support businesses.

Here it is important to separate fact from opinion. The truth is, security incidents (including data and policy breaches) happen every day, both in the cloud and conventional datacentre environments. While some get detected and reported on, like the recent Capital One or British Airways breaches, most incidents, unfortunately, go unnoticed and unaccounted for.

The cloud provides businesses with an unprecedented level of security through the visibility, auditability and access control it offers across all infrastructure components and cloud-native applications.

Cloud is secure and fit for business

With more than fifty industry and regulatory certifications and accreditations, the AWS public cloud platform is worth singling out for its security standards – far beyond those offered by any on-premise solution. So, what went wrong for Capital One?

Such a powerful and natively secure platform still requires organisations to architect solutions on it, configure it and manage it securely.  While the security of the cloud is the responsibility of platform providers like AWS, security in the cloud is the responsibility of the users.

For example, based on the available information, it appears that while Capital One had encrypted its data within an AWS S3 bucket using the default settings to protect confidentiality, it made the false assumption that this would protect the personal information against any type of unauthorised access.

What’s more, it appears that specific AWS S3 access control policies were not configured properly, allowing either anonymous access from the internet or access via an application with wider-than-required permissions. Having the right access policies in place is crucial to protecting resources on AWS S3.
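By way of a general illustration of what ‘security in the cloud’ involves – and not a reconstruction of Capital One’s actual environment – the sketch below uses boto3 to block all public access on a hypothetical S3 bucket and to enforce default encryption with a customer-managed KMS key (the bucket name and key alias are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-customer-data"  # hypothetical bucket name

# Block every form of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enforce default server-side encryption with a customer-managed KMS key,
# rather than relying on the default settings alone.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/customer-data",  # hypothetical key alias
            }
        }]
    },
)
```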

Diary of the Capital One data breach

While the data breach was uncovered by Capital One in July 2019, the data was first accessed by an unauthorised source four months earlier, in March 2019. Given the visibility that AWS provides, including real-time monitoring, Capital One should have detected and contained the incident in real time. Such capabilities do not exist outside the AWS platform, so having them available and not using them is unacceptable.

In a statement released by the firm, Capital One recognised the limitations of its specific implementation and confirmed the mistakes and omissions made were down to how it had configured the AWS platform.

Such mistakes would have led to a similar outcome even if its systems were in conventional datacentres. The firm stated: “This type of vulnerability is not specific to the cloud. The elements of infrastructure involved are common to both cloud and on-premises datacentre environments.”

Rooting out the weakest link

When it comes to security, businesses are only as strong as the weakest link in the system. The root cause can vary from the configuration of technologies, to processes that support insecure practices, to something as simple as human error due to a lack of knowledge, skills or experience.

Of course, there are also cases where humans have deliberately caused incidents for economic gain. However, whether in the cloud or on-premises, all aspects must be taken into consideration and no area left ignored or underinvested in when architecting for system security.

Five steps to a secure cloud

To ensure this does not happen, there are five security and compliance principles that experts insist every business should follow. Firstly, all security and compliance efforts require a holistic approach spanning people, processes and technologies.

Each of these three areas plays a significant part in the processing and protection of data, thus all three must be considered when evaluating your level of security.

This brings us on to the second principle: minimise human impact. Whether deliberate or accidental, most security breaches within businesses are caused by humans, so businesses should invest in more automation to improve the security of their systems.

Thirdly, detection is extremely important. Not knowing that an incident has occurred does not eliminate its impact on the business or customers. Sadly, many remain unaware of the incidents that have taken place within their businesses.
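Putting the automation and detection principles together, here is a minimal, hedged sketch (using boto3, purely as an illustration) of the kind of automated check that flags any S3 bucket in an account which does not have every public access block enabled:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        # Fetch the bucket's public access block settings, if any exist.
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        locked_down = all(conf.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            locked_down = False  # no block configured at all
        else:
            raise
    if not locked_down:
        # In a real deployment this would raise an alert, not just print.
        print(f"WARNING: bucket {name} does not fully block public access")
```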

The fourth principle seems simple but is often overlooked: security is an ongoing requirement, one that requires maintenance for the whole life span of a system. Lastly, encryption is not always the answer to data protection. Encryption helps to solve a very specific problem: confidentiality. Simply encrypting data, regardless of the algorithms and keys used, does not necessarily make the system more secure.

Cloud is both a very powerful and secure IT delivery platform. To ensure that they are benefiting from these capabilities, businesses must configure their chosen cloud platform to their needs.

Despite cloud skills in the UK still not being at the level they need to be, many specialist partners exist to give businesses the knowledge and experience needed to make the cloud the most secure place to run their workloads.

 


August 6, 2019  11:09 AM

Hype vs. reality: Why some organisations are opting to de-cloud

Caroline Donnelly

In this guest post, Justin Day, CEO of hybrid cloud connectivity platform provider Cloud Gateway, shares his thoughts on why some enterprises are choosing to pull back from off-premise life and de-cloud their IT.

The cloud promised to transform businesses, bringing speed, security and scale instantaneously. However, the hype around cloud and its benefits has led businesses to become blind to their requirements, moulding their needs to fit the cloud rather than the cloud to fit their needs.

In some cases this has led to organisations migrating a whole business structure to the cloud, which has caused many to suffer big losses.

Thankfully, company leaders are starting to open their eyes to a hybrid approach; understanding where the cloud can best support them and what elements are better suited to staying on-premise.

As such, many enterprises are now moving away from the cloud – also known as de-clouding (or cloud repatriation as some have coined it) – to create a cloud strategy that fits their business objectives and goals.

Costing up the cloud

Service providers shouted from the rooftops about cloud’s cost savings and benefits compared to owning and operating datacentres. And, while it’s true that (typically) on-premise datacentres are expensive, those costs are at least clear, in comparison to the hidden costs of the cloud.

Cloud providers mask high ongoing operating expenditure (OPEX) behind desirably low on-boarding costs and minimal capital expenditure (CAPEX). This is where enterprises get caught out, as it’s not migrating the whole datacentre that will reduce business costs, but using what you need in both the datacentre and the cloud.
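A back-of-the-envelope comparison illustrates the point. The figures below are entirely hypothetical, but they show how a low up-front cloud bill can still overtake an amortised datacentre investment once the ongoing OPEX is counted over several years:

```python
# Hypothetical figures, for illustration only.
DC_CAPEX = 400_000             # up-front datacentre hardware spend ($)
DC_OPEX_PER_YEAR = 60_000      # power, space, maintenance ($/year)
DC_LIFETIME_YEARS = 5          # amortisation period

CLOUD_ONBOARDING = 20_000      # migration and setup ($)
CLOUD_OPEX_PER_YEAR = 150_000  # ongoing service charges ($/year)

def cumulative_cost(upfront: float, opex_per_year: float, years: int) -> float:
    """Total spend after a given number of years."""
    return upfront + opex_per_year * years

for year in range(1, DC_LIFETIME_YEARS + 1):
    dc = cumulative_cost(DC_CAPEX, DC_OPEX_PER_YEAR, year)
    cloud = cumulative_cost(CLOUD_ONBOARDING, CLOUD_OPEX_PER_YEAR, year)
    cheaper = "cloud" if cloud < dc else "datacentre"
    print(f"Year {year}: datacentre ${dc:,.0f} vs cloud ${cloud:,.0f} "
          f"-> {cheaper} cheaper so far")
```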

For data in the cloud, it’s imperative that all enterprises are implementing a sturdy cloud governance programme.

Businesses are becoming more savvy about understanding what they need, and Dropbox is a perfect example of this: in the two years after it ‘de-clouded’ from AWS and moved into its own datacentres, the company cut $74.6m from its operational expenses.

Multi-cloud for extra security

Having all business services in the cloud also opens up a range of security issues, as not every cloud service will provide the right security to protect every aspect of the business. With such varying security requirements across a business, one cloud provider won’t be sufficient, and a multi-cloud approach is best for securing sensitive data.

In contrast, a datacentre provides clear security processes: it is easy to see where the door opens and where the door closes. With security risks rife, companies are, rightly, increasingly conscious of their security measures.

Going on-premise, where a company can retain control of its data, provides the assurance that both customer and company data is protected.

The term agility has lost its true meaning in recent years, with many cloud providers promising an agile network they cannot deliver. Many lock enterprises into their network and don’t allow for any flexibility, which can be costly and inefficient.

Having a hybrid approach of both on-premise and in the cloud will give companies the ability and confidence to react quickly to customer needs.

De-cloud to de-risk

Furthermore, a hybrid approach reduces and manages risk by avoiding complete downtime if one operator goes down. It’s no longer acceptable to leave a customer waiting three days for a solution or a new service, so housing services both on-premise and in the cloud is hugely beneficial for keeping customers happy.

Migrating to the cloud is often perceived as easy, cheap and efficient. However, with ever-increasing traffic from users, there’s only so much pressure the internet can withstand, and the complexities caused by internet failure can be very damaging for businesses if not mitigated properly.

The future is absolutely a hybrid model and companies need to take advantage of both cloud and on-premise offerings to make sure they can best serve their customers as well as protect their business in both monetary value and security.


July 24, 2019  12:45 PM

Why the government’s cloud-first policy review should be applauded

Caroline Donnelly

In this guest post, HPE UK MD Marc Waters sets out why the government’s decision to place its cloud-first policy under review makes sense, as the hybrid IT consumption model continues to take hold in the enterprise.

It was good to read that the Crown Commercial Service (CCS) and Government Digital Service (GDS) are reviewing the public cloud-first policy that has guided public sector technology procurement since its introduction in 2013.

The policy has been a success in advancing cloud adoption and the provision of digital public services. However, it is now widely accepted that a ‘one size fits all’ approach is not only too simplistic, it is also too restrictive and expensive.

It is also worth noting the move mirrors a decision by the US government at the start of this year to move its Federal Cloud Computing strategy from ‘Cloud First’ to ‘Cloud Smart’.

Ultimately it is about using the right tools for the right workload and having the ability to flex your technology mix. For the public sector, achieving the right cloud mix will help deliver better and more efficient citizen services.

The Hewlett Packard Enterprise (HPE) belief that the future of enterprise IT is hybrid is now widely accepted. Organisations are looking to blend their use of public cloud with the security, control and cost benefits of a private cloud, and the same holds true for the public sector.

Solving data challenges in the cloud-first era

Another factor to consider as the UK government looks toward the future is the importance, impact and incredible growth of data. Future planning requires smart thinking about how data is captured, stored and analysed.

As we all become increasingly mobile and connected, valuable data is being created at the edge. The edge is outside the datacentre and is where digital interaction happens.

Authorities need to be able to manage ‘hot’ data at the edge and use it to make instant, automated decisions, such as easing congestion through smart motorways or using facial recognition to identify high-risk individuals.

This is in addition to managing ‘cold’ data which is used to analyse patterns and predict future trends, pertaining – for example – to the provision of healthcare and social housing services.

Different data sets benefit from different, connected data solutions. In a hybrid cloud environment, for example, transaction processing can be managed in the public cloud while data is stored privately, ensuring control and avoiding the charges levied to move data into and out of a public cloud.

Retaining government data in a private cloud also removes the lock-in created by the charges public cloud providers levy to take back control of that data, which gives more flexibility and options.

Combining a public and private cloud strategy enables customers to demand cloud value for money, not just now but on an ongoing basis. So, as a technologist and a taxpayer, I see the review by CCS as a hugely positive step forward for the UK government.

Speaking at the Times CEO Summit in London recently Baroness Martha Lane-Fox, who helped to establish the Government Digital Service during the coalition government, noted the digitisation of the State was a ‘job half done’. If that’s the case, at HPE we stand ready to work with the public sector to get the other half done.


June 12, 2019  8:30 AM

Cloud migration: A step-by-step guide

Caroline Donnelly

In this guest post, Paul Mercina, director of product management at datacentre hardware and maintenance provider ParkPlace Technologies, offers a step-by-step guide on how, what and when to move to the cloud.

 The benefits of using cloud are multiple, but the process of migrating a company’s IT systems off-premise (while simultaneously ensuring ‘business as usual’ for staff, customers and the supply chain) is not without its challenges.

While investing in the cloud will result in less on-site hardware and fewer applications for IT managers to manage, this may not necessarily translate into less work to do.

Cloud computing demands a significant amount of oversight to ensure suppliers are meeting service level agreements, budgets are being kept to, and cloud sprawl is kept to a minimum.

This vital work requires a different skill set, so you will need to consider upskilling and retraining staff to manage their evolving roles.

Developing a robust cloud migration strategy alongside this work will be a must, and there are a few things to bear in mind when seeking to create one.

Cloud migration: Preparation is everything

As the global cloud market matures, CIOs are increasingly presenting compelling business cases for cloud adoption. Moving all your IT systems to the cloud in one go may have strong appeal, but it is rarely realistic. Not everything can or should be moved, and you will also need to consider the order of migration and the impact on the business and its staff.

Considering the unique needs of your organisation will be critical to developing a plan that unlocks the benefits of the cloud without compromising security, daily business activities, existing legacy systems or wasting budget.

Many applications and services are still not optimised for virtual environments, let alone the cloud. Regardless of how ambitious a company’s cloud strategy is, it’s likely you will have a significant datacentre footprint remaining to account for important data and applications.

Supporting these systems can be an ongoing challenge, particularly as organisations place more importance, budget and resource into the cloud.

Cloud migration: The importance of interim planning

Mapping a cloud migration strategy against long-, mid- and short-term goals can be helpful. The long-term plan may be to move 80% of your applications and data storage to the cloud; however, in the short term you will need to consider how you will maintain the accessibility and security of existing data, hardware and applications while the migration takes place.

Third party suppliers can help maintain legacy systems and hardware during the transition to ease disruption and ensure business continuity.

In line with this, cloud migration will inevitably involve the retirement of some hardware. From a security perspective, it’s imperative to ensure any data stored on that hardware is securely erased, to avoid exposing your organisation to the risk of data breaches.

Many organisations underestimate hard drive-related security risks or assume incorrectly that routine software management methods provide adequate protection.

Cloud migration: Meeting in the middle

Moving to the cloud often creates integration challenges, leaving IT managers to find ways to successfully marry up on-premise hardware with cloud-hosted systems.

In many cases, this involves making sure the network can handle smooth data transmissions between various information sources. However, getting cloud and non-cloud systems to work with one another can be incredibly difficult, involving complex projects that are not only difficult to manage, but also complicated by having fewer resources for the internal facility.

Cloud migration: Budget management challenges

With more budget being transferred off site for cloud systems and other IT outsourcing services, many IT managers are left with less to spend on their on-site IT infrastructure.

The financial pressure mounts as more corporate datacentres need to take on cloud attributes to keep up with broad technology strategies. Finding ways to improve IT cost efficiency is vital to addressing internal data centre maintenance challenges.

Cloud migration: Post-project cost management

Following a cloud migration, retained legacy IT systems age, so it is worth investigating if enlisting the help of a third-party maintenance provider might be of use.

A third-party maintenance provider can give IT managers the services they need, in many cases, for almost half the cost. The end result is that IT teams can have more resources to spend on the internal datacentre and are better prepared to support the systems that are still running in the on-site environment.

While hardware maintenance plans may not solve every problem, they provide a consistent source of fiscal and operational relief that makes it easier for IT teams to manage their data storage issues as they arise.


May 28, 2019  10:58 AM

The people vs. Amazon: Weighing the risks of its stance on facial recognition and climate change

Caroline Donnelly

Amazon’s annual shareholder meeting appears to have highlighted a disconnect between what its staff and senior management think its stance on facial recognition tech and climate change should be.

When in need of a steer on what the business priorities are for any publicly-listed technology supplier, the annual shareholders’ meeting is usually a good place to start.

The topics up for discussion usually provide bystanders with some insight into what issues are top of mind for the board of directors, and the people they represent: the shareholders.

From that point of view, the events that went down at Amazon’s shareholder meeting on Wednesday 22 May 2019 paint a (potentially) eye-opening picture of the retail giant’s future direction of travel.

Of the 11 proposals up for discussion at the event, there were a couple of hot button issues. Both were raised by shareholders, with the backing of the company’s staff.

The first relates to whether Amazon should be selling its facial recognition software, Rekognition, to government agencies. This is on the back of long-held concerns that technologies of this kind could be put to use in harmful ways that infringe on people’s privacy rights and civil liberties.

Shareholders, with the support of 450 Amazon staff, asked meeting participants to vote on whether sales of the software should be temporarily halted until independent verification confirms it does not contribute to “actual or potential” violations of people’s privacy and civil rights.

The second, equally contentious, discussion point relates to climate change. Or, more specifically, what Amazon is doing to prepare for and prevent it, and to drive down its own consumption of fossil fuels. Shareholders (along with 7,600 Amazon employees) asked the company in this proposal to urgently create a public report on the aforementioned topics.

Just say no

And in response Amazon’s shareholders and board of directors voted no. They voted no to a halt on sales of Rekognition. And no to providing shareholders, staff and the wider world with an enhanced view of what it’s doing to tackle climate change.

The company is not expected to publish a complete breakdown of the voting percentages on these issues for a while, so it is impossible to say how close to the 50% threshold these proposals were to winning approval at the meeting.

On the whole, though, the results are not exactly surprising. In the pre-meeting proxy statement, Amazon’s board of directors said it would advise shareholders to reject all of the proposals put up for discussion, not just those pertaining to facial recognition sales and climate change.

Now, imagine for a minute you are a shareholder or you work at Amazon, and you’ve made an effort to make your displeasure about how the company is operating known. And, particularly where the facial recognition proposal is concerned, you get effectively told you are worrying about nothing.

In the proxy statement, where Amazon gives an account as to why it is rejecting the proposal, it says the company has never received a single report about Rekognition being used in a harmful manner, while acknowledging there is potential for any technology – not just facial recognition – to be misused.

Therefore: “We do not believe that the potential for customers to misuse results generated by Amazon Rekognition should prevent us from making that technology available to our customers.”

Commercially motivated votes?

A cynically-minded person might wonder if its stance on this matter is commercially motivated, given two of its biggest competitors, Microsoft and Google, are taking a more measured approach to how, and to whom, they market their facial recognition technology.

Microsoft, for example, turned down a request to sell its technology to a Californian law enforcement agency because it had been predominantly trained on images of white males, meaning there was a risk that women and minority groups could be disproportionately targeted when it was used to carry out facial scans.

Google, meanwhile, publicly declared back in December 2018 that it would not be selling its facial recognition APIs, over concerns about how the technology could be abused.

Incidentally, Amazon’s shareholder concerns about the implications of selling Rekognition to government agencies were recently echoed in an open letter signed by various Silicon Valley AI experts, including representatives from Microsoft, Google and Facebook, who also called on the firm to halt sales of the technology to that sector.

Against that backdrop, it could be surmised that Amazon is trying to make the most of the market opportunity for its technology until the regulatory landscape catches up and slows things down? Or, perhaps, it is just supremely confident in the technology’s abilities and lack of bias?  Who can say?

Climate change: DENIED

Meanwhile, the company’s very public rejection of its shareholder and staff-backed climate change proposal seems, at least to this outsider, a potentially dicey move.

The company’s defence on this front is that it appreciates the very real threat climate change poses to its operations and the wider world, and how its own operations may contribute towards that. To this end, it has already implemented a number of measures to support its transformation into a carbon-neutral, renewably powered and more environmentally friendly company.

But, where the general public is concerned, how much awareness is there about that? Amazon has a website where it publishes details of its progress towards becoming a more sustainable entity, sure, but it’s also very publicly just turned down an opportunity to be even more transparent on that.

Climate change is an issue of global importance, and one people want to see Amazon stand up and take a lead on, and even some of its own staff don’t think it’s doing a good enough job right now. For proof of that, one only has to cast a glance at some of the blog posts the Amazon Employees for Climate Justice group has shared on the blogging platform Medium of late.

Amazon vs. Google: A tale of two approaches

From a competitive standpoint, its rejection of these proposals could end up being something of an Achilles heel for Amazon when it comes to retaining its lead in the public cloud market in the years to come.

When you speak to Google customers about why they decided to go with the search giant over some of the other runners and riders in the public cloud, its stance on sustainability regularly crops up as a reason. Not in every conversation, admittedly, but certainly in enough of them for it to be declared a trend.

From an energy efficiency and environmental standpoint, it has been operating as a carbon neutral company for more than a decade, and claims to be the world’s largest purchaser of renewable energy.

It is also worth noting that Google responded to a staff revolt of its own last year by exiting the race for the highly controversial $10bn cloud contract the US Department of Defense is currently entertaining bids for.

Amazon and Microsoft are the last ones standing in the battle for that deal, after Google dropped out on the grounds it couldn’t square its involvement in the contract with its own corporate stance on ethical AI use.

There is a reference made within the proxy statement that, given the level of staff indignation over some of these issues, the company could run into staff and talent retention problems at a later date.

And that is a genuine threat. For many individuals, working for a company whose beliefs align with their own is very important, and in a fast-growing, competitive industry where skills shortages are apparent, finding somewhere new to work where that is possible isn’t such an impossible dream.

And with more than 8,000 of its employees coming together to campaign on both these issues, that could potentially pave the way for a sizeable brain drain in the future if they don’t see the change in behaviour they want from Amazon in the months and years to come.


May 9, 2019  3:39 PM

Has the UK government’s cloud-first policy served its purpose?

Caroline Donnelly

The government has confirmed its long-standing public cloud-first policy is under review, and that it is seeking to launch an alternative procurement framework to G-Cloud. But why?

It is hard not to read too much into all the change occurring in the public sector cloud procurement space at the moment, and all too easy to assume the worst.

First of all there is the news, exclusively broken by Computer Weekly, that the government’s long-standing cloud-first policy is under review, six years after it was first introduced.

For the uninitiated, the policy effectively mandates that all central government departments should take a public cloud-first approach on any new technology purchases. No such mandate exists for the rest of the public sector, but they are strongly advised to do the same.

The policy itself was ushered in around the same sort of time as the G-Cloud procurement framework launched. Both were hailed by the austerity-focused coalition government in power at the time as key to accelerating the take-up of cloud services within Whitehall and the rest of the public sector.

Encouraging the public sector – as a whole – to ramp up their use of cloud (while winding down their reliance on on-premise datacentres) would bring cost-savings and scalability benefits, it was claimed.

Additionally, the online marketplace-like nature of G-Cloud was designed to give SMEs and large enterprises the same degree of visibility to public sector IT buyers. Meanwhile, its insistence on two-year contract terms was designed to safeguard users against getting locked into costly IT contracts that lasted far longer than the technology they were buying would remain good value for money.

Revoking the revolution

The impact the cloud-first policy has had on the IT buying habits of central government should not be underestimated, and can be keenly felt when flicking through the revamped Digital Marketplace sales pages.

Of the £4.57bn of cloud purchases that have been made via the G-Cloud procurement framework since its launch in 2012, £3.7bn worth have been made by central government departments.

The fact Whitehall is mandated to think cloud-first is always the first thing IT market watchers point to when asked to explain why the wider public sector has been so (comparatively) late to the G-Cloud party.

Public sector IT chiefs often cite the policy as being instrumental in helping secure buy-in from their senior leadership teams for any digital transformation plans they are plotting.

But, according to the government procurement chiefs at the Crown Commercial Service (CCS) and the digital transformation whizz-kids at the Government Digital Service (GDS), the policy is now under review, and set for a revamp.

In what way remains to be seen. Although – in a statement to Computer Weekly – CCS suggested the changes will be heavily slanted towards supporting the growing appetite within the public sector for hybrid cloud deployments.

The trouble with curbing cloud-first behaviours

The obvious concern in all this is that, if the cloud-first mandate is revoked completely, central government IT chiefs might start falling back into bad procurement habits, whereby cloud becomes an afterthought and on-premise rules supreme again.

Maybe that is an extreme projection, but there are signs elsewhere that some of the behaviours that G-Cloud, in particular, was introduced to curb could be starting to surface again.

One only has to look at how the percentage of deals being awarded to SMEs via G-Cloud has started to slide of late, which has fed speculation that a new oligopoly of big tech suppliers is starting to form that will, in time, dominate the government IT procurement landscape.

Where G-Cloud is concerned, there are also rumblings of discontent among the suppliers that populate the framework, who feel it is becoming increasingly side-lined for a number of reasons.

There are semi-regular grumbles from suppliers that suggestions they have made to CCS or GDS about changes they would like to see made to the framework are being ignored, or not acted on as quickly as they would like.

Putting users first

Some of these are to do with making the framework less onerous and admin-heavy for SMEs to use, while others are geared towards making the whole cloud purchasing experience easier for buyers overall.

Either way, suppliers fear this perceived lack of action has prompted buyers to take matters into their own hands by setting up cloud procurement frameworks of their own because G-Cloud is no longer meeting their needs.

And that’s an argument that is likely to get louder following the news that CCS is considering launching an additional cloud hosting and services framework, where contracts of up to five years in length could be up for grabs.

A lot of innovation can happen over the course of five years, which leads to the logical assumption that any organisation entering into a contract that long might find itself at something of a technological disadvantage as time goes on.

While the public cloud community has stopped publicly making such a song and dance about price cuts, the fact is the cost of using these services continues to go down over time for users because of economies of scale.

However, if an organisation is locked into a five-year contract, will it necessarily feel the benefit of that? Or will it be committed to paying the same price it did at the start of the contract all the way through? If so, in what universe would that represent good value for money?

A lot can change between the consultation and go-live stages of any framework, but there are concerns that this is another sign the government is intent on falling back into its old ways of working where procurement is concerned.

Government cloud comes of age

Although, another way of looking at all this is as a sign that the cloud-first policy and G-Cloud have served their purpose. Together they have made public sector buyers feel so comfortable and confident with using cloud that they feel ready to go it alone and launch frameworks of their own.

Or, as their cloud strategies have matured, it has become apparent that for some workloads a two-year contract term works fine, but there are others where a longer-term deal might be a better, more convenient fit.

It is not out of the realms of possibility. It is worth noting the shift from public cloud-first to a policy that accommodates hybrid deployments is in keeping with the messaging a lot of the major cloud providers are putting out now, which is very different to what it was back in 2012-2013.

Around that time, Amazon Web Services (AWS) was of the view that enterprises will want to close down their datacentres, and move all their applications and workloads to the public cloud.

The company still toes a similar line today, but there is an admission in there now that some enterprises will need to operate a hybrid cloud model and retain some of their IT assets on-premise for some time to come. And the forthcoming change to the government’s cloud-first policy might simply be its way of acknowledging the same trend.


April 5, 2019  8:41 AM

Curbing cloud sprawl to keep IT costs down

Caroline Donnelly

In this guest post, Henrik Nilsson, vice president for Europe, Middle East and Africa (EMEA) at machine learning-based IT cost optimisation software supplier Apptio, offers enterprises some advice on what they can do to stop cloud sprawl in its tracks and keep costs down.

On the surface, it seems there are plenty of reasons for businesses to jump head-first into the cloud: agile working practices, the ability to scale resources and a boost to the resiliency of their infrastructure, for example.

However, enthusiasm for the newest technology creates a tendency for business leaders to make investments without analysis to support strategic decision-making. Cloud is not immune.

Before long, cloud costs can escalate out of control. Usage can go through the roof, or business value from cloud-based applications can plummet, and accountability is replaced by a “Wild West” approach to resource use, whereby whoever needs it first gets to use it.

In this type of scenario, CIOs should take a step back and consider how to harness the power of the cloud to align with the wider objectives of the business.

Managing cloud sprawl can be the hardest part of aligning cloud usage to a business strategy. Cloud sprawl is the unintentional proliferation of spending on cloud, and is often caused by a lack of visibility into resources and communication between business units. Different departments across organisations want to take advantage of the benefits of cloud, but need to understand how this impacts budgets and the goals of the business.

To successfully master the multiple layers of cloud costs, IT and finance leaders need to see the full picture of their expenditure. They need to utilise data, drive transparent communication, and continuously optimise to stop cloud sprawl, achieve predictable spending, and build an informed cloud migration strategy.

Strategic cloud use

Using consumption, usage and cost data to make cloud purchasing decisions is the first step to stopping cloud sprawl at its root. Migration decisions should not be based on assumptions or emotion.

The cost to migrate workloads can be very high, so businesses need to understand not just the cost to move, but also how moving to the cloud will impact costs for networking, storage, risk management and security, and labour or retraining of an existing IT team. They also need to evaluate the total value achieved by this migration and align decisions with the strategic needs of the business.

A key driver of cloud sprawl is the assumption that cloud is the solution to any given business need. Not every department needs to migrate every part of its operation. In some instances, on-premises might be the right decision. A hybrid approach is considered by many to be the best balance – one survey suggested that a 70/30 split of cloud to on-prem was the ideal mix. This enables certain mission-critical applications to remain, while the majority of computing power may be moved to the public cloud.

Visibility through division

When cloud is carved up among stakeholders (from marketing to human resources to IT itself) it can be hard to get a clear picture of usage and costs. Multiple clouds are used for different needs, and over time usage creeps up on a ‘just in case’ basis, even where demand isn’t present.

To get a handle on this, the ever-expanding cloud demands of a business need to be calculated and then amalgamated into one single source of truth. A cohesive cloud approach is necessary across departments, or there is no hope of maximising the potential benefits.

Ideally, there needs to be a centralised system of record, whereby all cloud data (as well as its cost) can be viewed in a transparent format, and clearly articulated to any part of the business. Without this, cloud usage becomes siloed between departments or spread out across various applications and software, as well as compute power and storage. This makes it nearly impossible to have a clear picture of how much is being paid for and used – or equally, how much is needed.
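One way to start building that single source of truth – sketched below on the assumption that resources carry a ‘department’ cost-allocation tag – is to pull spend grouped by that tag from the AWS Cost Explorer API. The tag key and dates are hypothetical, and the same idea applies to the equivalent billing APIs of other providers:

```python
import boto3

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

# Hypothetical reporting window and tag key.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-03-01", "End": "2019-04-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],
)

# One row per department: a first cut at a shared, transparent view of spend.
for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        department = group["Keys"][0]  # e.g. "department$marketing"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{department}: ${amount:,.2f}")
```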

Once a strong base has been established to visualise cloud usage, and different departments can make informed investment and purchasing decisions, optimisation is key. This may be something as simple as looking at whether a particular instance would be more efficiently run under a pay-as-you-use model versus a reserved spend, or calculating the value gained from migrating depreciated servers to the cloud. This can then inform similar future decisions.
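As a simple illustration of that first check, the sketch below compares the annual cost of a single instance under hypothetical pay-as-you-use and one-year reserved rates at different utilisation levels; the break-even point indicates which model fits a given workload:

```python
# Hypothetical hourly rates, for illustration only.
ON_DEMAND_RATE = 0.10   # $ per instance-hour, pay-as-you-use
RESERVED_RATE = 0.06    # $ per instance-hour, one-year commitment
HOURS_PER_YEAR = 24 * 365

def annual_cost(rate: float, utilisation: float) -> float:
    """Cost of running one instance for a year at a given utilisation (0.0-1.0)."""
    return rate * HOURS_PER_YEAR * utilisation

for utilisation in (0.25, 0.50, 0.75, 1.00):
    on_demand = annual_cost(ON_DEMAND_RATE, utilisation)
    # A reservation is paid for every hour, whether the instance is busy or not.
    reserved = annual_cost(RESERVED_RATE, 1.0)
    better = "reserved" if reserved < on_demand else "pay-as-you-use"
    print(f"{utilisation:.0%} utilisation: on-demand ${on_demand:,.2f} "
          f"vs reserved ${reserved:,.2f} -> {better}")
```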

Optimising in this way ensures that cloud usage isn’t allowed to spiral out of control. Cloud is a necessary modern technology to fuel innovation, but businesses need to rein in wasted spend and resources to eliminate cloud sprawl and ensure the right cloud investment decisions are being made to support broader business strategies.


March 20, 2019  12:32 PM

What the Myspace data loss debacle tells us about how the internet values creative content

Caroline Donnelly

The news that Myspace lost 12 years of user-generated content during a botched server migration has prompted a lot of debate this week, which Caroline Donnelly picks over here. 

When details of Myspace’s server migration snafu broke free from Reddit, where the incident had been discussed and known about for the best part of a year, the overriding reaction on Twitter was one of disbelief.

In no particular order, there was the shock at the size of the data loss, which equates to 12 years of lost uploads (or 50 million missing songs), and the fact these all disappeared because Myspace didn’t back the files up before embarking on its “server migration”. Or its backup procedures failed.

There was also a fair amount of surprise expressed that Myspace is still a thing that exists online, which might explain why it took so long for the rest of the internet to realise what had gone down there.

And now they have, the response online has been (predictably) snarky. Myspace, for its part, issued a brief statement apologising for any inconvenience caused by its large-scale data deletion, but that’s been the extent of its public response to the whole thing.

That in itself is quite interesting. The fact a company can lose 12 years of user data, and shrug it off so nonchalantly. Obviously it would be a completely different state of affairs if it was medical records or financial data the company had accidentally scrubbed from its servers.

Digital legacy destruction

What the situation does serve to highlight, though, is how precarious our digital legacies are. In amongst all the nostalgia-laden Twitter jokes from people who used Myspace back when it was at the height of its social networking power, there was also a smattering of genuinely distraught posts from people dismayed at what they had lost.

These were individuals who had been prolific Myspace songwriters over the years and had lost sizeable dumps of content. In many cases, these people had trusted Myspace at its word when it said it was working on restoring access to their files, as complaints about playback issues on the site first started to surface back in early 2018.

These are people who have spent a long time curating content that made it possible for Myspace to double-down on its efforts to become a music streaming site, long after the people who first flocked to the site for social networking had hot-footed it to Facebook and Twitter.

I guess the incident should act as a timely reminder that uploading your data to a social networking site is not the same as backing it up, and if you don’t have other copies of this content stored somewhere else, that’s on you.

It’s not exactly helpful, though, and it doesn’t make the situation any less gutting for the people whose data has been lost forever.

There also seems to be an attitude pervading all this that because it’s just “creative content” that has gone, what’s the harm? I mean, can’t you just make more?

It’s a curious take that highlights – perhaps – how little value society places on creative content that’s made freely available online, while also ignoring the time and effort it takes for people to make this stuff. A lot of it is also of its time, making it impossible for its makers to recreate it.

Analysing Myspace’s response

There has been speculation that Myspace’s nonchalant attitude to losing so much of its musical content could be down to the fact the deletion was more strategic than accidental.

This is a view put forward by ex-Kickstarter CTO Andy Baio, who said in a Twitter post that the firm might be blaming the data loss on a botched server migration because it sounds better than admitting it “can’t be bothered with the effort and cost of migrating and hosting 50 million old MP3s”.

And there could be something in that. It is, perhaps, telling that it has taken so long for details of the data loss to be made public, given Myspace users claim they first started seeing notifications about the botched server migration around eight months ago.

That was about five or six months after reports of problems accessing old songs and video content began to circulate on the web forum, Reddit, with Myspace claiming – at the time – that it was in the throes of a “maintenance project” that might cause content playback issues.

Meanwhile, the site’s news pages do not appear to have been updated since 2015, its Twitter feed last shared an update in 2017, and Computer Weekly’s requests for comment and clarification have been met with radio silence.

The site looks to be in the throes of a prolonged and staggered wind-down, which – in turn – shows the dangers of assuming the web entities we entrust our content to today will still be around tomorrow.


March 1, 2019  3:28 PM

Kubernetes management made easier for enterprises

Caroline Donnelly

In this guest post, Rob Greenwood, CTO at Manchester-based cloud and DevOps consultancy, Steamhaus, sets out why the emergence of Amazon’s managed Kubernetes service is such good news for the enterprise. 

In just four years since its conception, Kubernetes has become the de facto tool for deploying and managing containers within public cloud environments at scale. It has won near universal backing from all major cloud players and more than half of the Fortune 100 have already adopted it too.

While alternatives exist, namely in the form of Docker Swarm and Amazon ECS, Kubernetes is by far the most dominant. Adoption has been so pervasive that, even in private cloud environments, the likes of Mesosphere and VMware have fully embraced the technology.

Perhaps the most significant development, though, was AWS’s general release of its managed Kubernetes service (EKS) in June 2018. Amazon is well known for listening to its customers, and the desire to see it provide a managed service for Kubernetes was palpable – it was one of the most requested features in AWS history.

Despite developing its own alternative in ECS, AWS decided to go with the will of the crowd on this one and offer full support. This was a wise move when you consider that 63% of Kubernetes workloads were running on AWS when Amazon announced EKS (although by the time it was made generally available, this was said to have fallen to 57%).

Not having this container service for Kubernetes in place was also creating a few headaches for its users. As Deepak Singh, director at AWS container services, explained: “While AWS is a popular place to run Kubernetes, there’s still a lot of manual configuration that customers need to manage their Kubernetes clusters… This all requires a good deal of operational expertise and effort, and customers asked us to make this easier.”

The benefits of curing the Kubernetes management issues

If truth be told, this created a lot of work for teams like my own – which you might think sounds like a good thing. But building and managing Kubernetes clusters is not really the way we would like to use a client’s budget, especially if an alternative exists.

With a managed service now available, we can focus on optimising the overall architecture, migrating applications and enabling our clients to adopt DevOps methodology instead.

Kubernetes being a multi-cloud platform, organisations could, in theory, have just moved their clusters to another public cloud provider that supports it. But that would have involved even more time and expense and, perhaps more importantly, would have meant moving away from many of the features that AWS offers.

With AWS now offering to take care of the build and management side of Kubernetes, however, those difficult decisions don’t need to be made. With all the major public cloud platforms supporting the technology, the providers themselves handle the build and management, which means improvements will keep coming in integration, scaling and availability. As time goes on, this will only get better and, from the client’s perspective, the assumption is that it will also lead to significant cost savings further down the line.
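To give a flavour of what ‘managed’ means in practice, the sketch below uses boto3 to ask AWS to stand up and operate an EKS control plane with a single API call. The IAM role, subnets, security group and version shown are placeholders rather than a recommended configuration:

```python
import time
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

# One API call asks AWS to build and operate the Kubernetes control plane.
# The role ARN, subnets, security group and version below are placeholders.
eks.create_cluster(
    name="demo-cluster",
    version="1.13",
    roleArn="arn:aws:iam::111111111111:role/eks-service-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
    },
)

# AWS runs the masters, etcd, upgrades and failover; we simply wait for ACTIVE,
# then point kubectl (or the Kubernetes API client) at the managed endpoint.
while True:
    status = eks.describe_cluster(name="demo-cluster")["cluster"]["status"]
    print("Cluster status:", status)
    if status != "CREATING":
        break
    time.sleep(30)
```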

You could reflect on the current situation and say it’s not good for one technology to be so dominant, but I’m really struggling to see the negatives. It is cheaper for organisations, AWS retains its customer base and cloud architects get to focus on the job they’ve been brought in to do.

Of course, as an industry we can’t rule out the possibility that new innovations may come along and change the game once again. But, for the time being, the debate is over: we’re working with Kubernetes. And this should end some needless back and forth, allowing us to focus on better ways to introduce new features, scale and secure public cloud infrastructure.

