Ahead in the Clouds


June 22, 2018  3:23 PM

Why next-generation IaaS is likely to be open source

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Chip Childers, CTO of open source platform-as-a-service Cloud Foundry, makes the case for why the future of public cloud and IaaS won’t be proprietary.

Once upon a time, ‘Infrastructure-as-a-Service’ basically meant services provided by the likes of Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform (GCP), but things are changing as more open source offerings enter the fray.

This is partly down to Kubernetes which, alongside Docker and others, has done much to popularise container technology and has ushered in a period of explosive innovation in the ‘container platform’ space. It is here that Kubernetes stands out, and today it could hold the key to the future of IaaS.

A history of cloud computing and IaaS

History in technology, as in everything else, matters. And so does context. A year in tech can seem like a lifetime, and it’s really quite incredible to think back to how things have changed in just a few short years since the dawn of IaaS.

Back then the technology industry was trying to deal with the inexorable rise of AWS, and the growing risk of a monopoly emerging in the world of infrastructure provision.

In a bid to counteract Amazon’s head start, hosting providers started to make the move from traditional hosting services to cloud (or cloud-like) services. We also began to see the emergence of cloud-like automation platforms that could potentially be used by both enterprise and service providers.

Open source projects such as OpenStack touted the potential of a ‘free and open cloud’, and standards bodies began to develop common cloud provider APIs.

As a follow-on to this, API abstraction libraries started to emerge, with the aim of making things easier for developers who did not want to rely on just a few key cloud providers.

It was around this time that many of the cloud’s blue-sky thinkers first began to envisage the age of commoditised computing. Theories were posited that claimed we were just around the corner from providers trading capacity with each other and regular price arbitrage.

Those predictions proved premature. At that time, and in the years since, computing capacity simply wasn’t ready to become a commodity that providers could pass between each other – the implementation differences were too great.

That was then – but things are certainly looking different today. Although we still have some major implementation differences between cloud service providers, including the types of capabilities they’re offering, we’re seeing the way forward to an eventual commoditisation of computing infrastructure.

While even the basic compute services remain different enough to avoid categorisation as a commodity, this no longer seems to matter in the way that it once did.

That change has largely come about because of the ‘managed Kubernetes clusters’ now offered by most public cloud providers.

The shift has also been happening on the private cloud side, with many software vendors adopting either a ‘Kubernetes-first’ architecture or a ‘with Kubernetes’ product offering.

As Kubernetes continues its seemingly unstoppable move towards ubiquity, Linux containers now look likely to become the currency of commodified compute.

There are still implementation differences of course, with cloud providers differing in the details of how they offer Kubernetes services, but the path towards consistency now seems a lot clearer than it did a decade ago.
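To make that consistency concrete, here is a minimal sketch using the official Kubernetes Python client (the kubeconfig context names are hypothetical, and this is an illustration rather than anything from a provider's documentation). The point is that the very same Deployment object can be applied, unchanged, to any conformant managed cluster, whichever provider runs it.

```python
# A rough sketch: apply one Deployment to managed Kubernetes clusters from
# different providers. The context names ("gke-prod", "eks-prod") are
# placeholders for whatever clusters appear in your kubeconfig.
from kubernetes import client, config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.15")]
            ),
        ),
    ),
)

for context in ["gke-prod", "eks-prod"]:  # one kubeconfig context per provider
    api = client.AppsV1Api(api_client=config.new_client_from_config(context=context))
    api.create_namespaced_deployment(namespace="default", body=deployment)
```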

This more consistent approach to compute now looks inevitable – and it is the likely future of IaaS, made possible by the open source approach of Kubernetes.

June 18, 2018  11:24 AM

SaaS vs IaaS vs on-premise: What if moving to the cloud is not the answer?

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Richard Blanford, managing director of G-cloud-listed IT services provider Fordway, advises companies to weigh up the pros and cons of using SaaS and IaaS before going all-in on the cloud.

For many organisations, moving to the cloud makes complete business sense. If I set up a business now, I would run all my IT in the cloud.

But that doesn’t mean a ‘cloud for everything’ policy will work for every company. Most of us are not starting from scratch or working with a small number of relatively straightforward applications. Therefore, we need to consider carefully whether all our applications can be transferred efficiently and cost-effectively to the cloud.

The first step should always be to look for an appropriate SaaS service. This should provide:

  1. A suitable application that can be configured (where and if necessary) and data imported to provide comparable or better functionality to existing applications at a suitable price, paid on a metered basis, ideally per user/per month. It will ideally offer tools and communities so you can benefit from the experiences of those who’ve already implemented it.
  2. A supporting infrastructure which is no longer your responsibility, with an appropriate, fit-for-purpose SLA that meets your business and operational requirements.
  3. The ability to easily and cost-effectively access, import and export data to other applications for analysis and business reporting.

Once you’ve identified such a service, and confirmed it offers the right resilience at the right cost, you can simply consume it, whilst monitoring to ensure you receive what you’ve signed up for.

The best analogy for SaaS billing models is turning on a tap to obtain water, rather than going to a well to collect it and then readying it for consumption through purification. Good SaaS provides what you need when you need it, and you’re billed for what you use.

Assessing the cloud-use cases

Cloud is also cost-effective for non-live environments where you pay as you use. This includes disaster recovery (DR), where all servers can be held in a suspended state without incurring charges until needed, and test and development environments, where you only pay when your code runs.

All you need to provide is management. Just be aware that different cloud providers’ Platform as a Service (PaaS) offerings have different APIs, so there’s some element of provider lock-in that may come into play.

It’s more difficult to find appropriate SaaS offerings for niche applications and those that need to be customised to align with business processes. Many application providers are developing their own SaaS strategy, but these typically only support the latest version of the software, and many cannot accept customised applications or third-party add-ons.

This can be a particular problem for local authorities and the NHS, who use highly customised applications for services such as parking management, waste collection and medication prescription and management.

We’ve investigated many ‘SaaS’ offers for our customers, and all too often the vendor will simply park and maintain a dedicated version of the user’s software on a public cloud service while charging a premium price.

SaaS vs. IaaS

If SaaS is not an option, there is also IaaS to consider. You simply move your application (as-is or with minor enhancements) to operate from a cloud provider’s infrastructure. This frees you from the need to own, operate and manage the infrastructure hosting it, although you need to retain licences and support from the application provider.

There are two provisos with IaaS. First, each provider has different design rules, and you need to work through their menu of choices to find the right solution for your organisation. This requires a thorough understanding of your environment, such as how many fixed and mobile IP addresses are needed, whose DNS service will be used, how much data will go in and out of the cloud etc.

Think of it as choosing a separate hub, spokes and rim for a bicycle wheel rather than simply buying a complete wheel.

The devil’s in the detail

Many organisations don’t specify their IT to this level of detail, as once they’ve bought their infrastructure they use all the capacity available. In a cloud environment, however, everything is metered, and – unless the service can be specified extremely tightly – it may not work out cheaper than in-house provision. For example, you can reduce costs by reserving instances, but you are then locked in for one or three years, with a significant cancellation charge. A similar issue arises with spot instances, which can be shut down with no notice, so they aren’t suitable for business-critical services.
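As a rough illustration of why the sums matter, the back-of-an-envelope arithmetic below compares an always-on instance bought on demand with a reserved one that is retired early. All rates and charges are invented placeholders, not real provider pricing.

```python
# Back-of-an-envelope cost comparison for a single always-on instance.
# All rates and charges below are illustrative, not real provider pricing.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.10        # $/hour, hypothetical
reserved_rate = 0.065        # $/hour effective with a 1-year commitment, hypothetical
cancellation_charge = 250.0  # flat fee for exiting the reservation early, hypothetical

print(f"Full year on demand: ${on_demand_rate * HOURS_PER_YEAR:,.0f}")
print(f"Full year reserved:  ${reserved_rate * HOURS_PER_YEAR:,.0f}")

# If the workload is retired early, the reservation can end up dearer than
# simply paying on demand for the hours actually used.
months_used = 5
hours_used = HOURS_PER_YEAR * months_used / 12
paid_reserved = reserved_rate * hours_used + cancellation_charge
paid_on_demand = on_demand_rate * hours_used
print(f"After {months_used} months: reserved ${paid_reserved:,.0f} vs on demand ${paid_on_demand:,.0f}")
```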

Secondly, the cloud provider only provides hosting, including host and hypervisor patching and proactive infrastructure security monitoring. Any other patching (plus resilience, back-up, security and application support and maintenance inside the instance) needs to be provided in-house or by contracting third parties. Any scaling up or down has to be done using the cloud provider’s tools, and this quickly becomes complex when most organisations have on average 40 applications. In short, managing IaaS is effectively a full-time job.

Much of this complexity can be hidden by using a managed IaaS service, where a provider handles everything from operating system provision and authentication to patching and back-up, and charges for an agreed number of instances per month. Managed IaaS services effectively offer your legacy application back to you as SaaS.

This complexity should not deter you if you are determined to transfer your applications to cloud. However, it is important to go in with your eyes open, or to find an expert to go in with you. Alternatively, if SaaS is not available and IaaS sounds like too much work at present, there is a solution: configure your IT as a private cloud. You can then continue to run it in-house with everything in place to move it to SaaS when a suitable solution becomes available.


June 8, 2018  9:32 AM

With big data comes big responsibility

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Darren Watkins, managing director at colocation provider VIRTUS Data Centres, explains how organisations can take back control of their big data.

When the story broke in March that 50 million Facebook profiles were harvested for British research firm Cambridge Analytica in a major breach, questions about the misuse of personal data once again hit the headlines.

There has been a barrage of promises in response from European politicians and tech executives, vowing to do their best to tighten up controls over our data and even introduce new laws to punish blatant malpractice.

Facebook itself has been contrite, with public apologies from CEO Mark Zuckerberg and most recently the announcement of a bounty program which will reward people who find cases of data abuse on its platforms.

Using big data for good

Incidents like this undoubtedly fuel public wariness about how commercial organisations use their data, but – on the flip side – those in the technology industry know that data capture can be of enormous benefit.

From improving healthcare to powering shopping, travel and even how we fall in love, ‘big data’ is all around us and it’s here to stay. Indeed the Open Data Institute’s (ODI) recent YouGov poll of British adults revealed nearly half of people said they would share their own data – without restrictions – about their background and health preferences if it helped advance academic understanding of areas such as medicine or psychology.

However, for any organisation that operates with data there is a fine line to tread between innovation and responsibility.

There’s a big move at the moment for companies to embrace the ethos of responsible innovation. For some, this means creating products and services designed to meet humanitarian needs. Facebook’s partnership with UNICEF to better map disaster areas is a great example of this. For others it means everyone in the IT industry should move away from looking at their work from a purely technical point of view and ask how their developments may affect end-users and society as a whole.

When it comes to data applications, responsible innovation is a commitment to balancing the need to deliver compelling, engaging products and services with the need to make sure data is stored, processed, managed and used properly. This process starts far away from the headlines, or the CEO statements.

Preparation is key

To avoid falling victim to breaches or scandals, companies must ensure they have the right ‘building blocks’ in place. And this starts with data security.

Simple hacking is where big data shows big weakness, thanks to the millions of people whose personal details can be put at risk by any single security breach. The scope of the problem has grown considerably in a short time. It wasn’t too long ago that a few thousand data sets being put at risk by a hack was a major problem. But in 2016, Yahoo confirmed it had failed to secure the real names, dates of birth and telephone numbers of 500 million people. That’s data loss on an unimaginable scale, and for the public, that’s scary stuff.

This, together with the computing power needed for big data applications, puts increasing pressure on organisations’ IT and data centre strategies, and this is the challenge which most urgently needs to be overcome. Indeed, it’s not an exaggeration to say that data centre strategy could be crucial to big data’s ultimate success or failure.

For even the biggest organisations, the cost of having (and maintaining) a wholly-owned datacentre can be prohibitively high. But security concerns can mean a wholesale move to cheap, standardised public cloud platforms – where security may not be as advanced – isn’t an option either.

Instead, the savviest firms are turning to colocation options for their data storage needs, recognising that moving into a shared environment means that IT can more easily expand and grow, without compromising security or performance.

It’s by starting here, in the ‘weeds’ of the data centre, that companies can ensure they’ve got firm control over their biggest asset – the data they have and how they use it.

As the public grows more wary of data breaches, pressure will increasingly be brought to bear (and already is) on the business community to pay more attention to securing, storing and using data in the right way. Countermeasures that used to be optional are in the process of becoming standard, and increased pressure is being put on companies’ IT systems and processes. For us, it’s in the datacentre where companies can take firm control – avoid the scandals and make sure that innovation is done right.


May 14, 2018  12:28 PM

Kubecon 2018: The rise and rise of Kubernetes

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Jon Topper, CTO of DevOps and cloud infrastructure consultancy The Scale Factory, on how the growing maturity of Kubernetes dominated this year’s Kubecon.

KubeCon + CloudNativeCon Europe, a three day, multi-track conference run by the Cloud Native Computing Foundation (CNCF), took place in Copenhagen earlier this month, welcoming over 4,300 adopters and technologists from leading open source and cloud native communities.

The increase in popularity of container orchestrator software Kubernetes was a defining theme of this year’s show, as it moves beyond being an early-adopter technology to one that end-user organisations are putting into production in anger.

What is Kubernetes?

Kubernetes provides automation tooling for deploying, scaling, and managing containerised applications. It’s based on technology used internally by Google, solving a number of operational challenges that may have previously required manual intervention or home-grown tools.

This year the Kubernetes project graduated from the CNCF incubator, demonstrating that it is now ready for adoption beyond the early adopter communities where it has seen most use thus far.

Many of the conference sessions at the show reinforced the maturity message, with content relating to grown-up considerations such as security and compliance, as well as keynotes covering some interesting real-life use cases.

We heard from engineers at CERN, who run 210 Kubernetes clusters on 320,000 cores so that 3,300 users can process particle data from the Large Hadron Collider and other experiments.

Through the use of cluster federation, they can scale their processing out to multiple public clouds to deal with periods of high demand. Using Kubernetes to solve these problems means they can spend more time on physics and data processing than on worrying about distributed systems.

This kind of benefit was reiterated in a demonstration by David Aronchick and Vishnu Kannan from Google, who showed off Kubeflow.

This project provides primitives to make it easy to build machine learning (ML) workflows on top of Kubernetes. Their goal is to make it possible for people to train and interact with ML models without having to understand how to build and deploy the code themselves.

In a hallway conversation at the show with a member of the Kubernetes Apps Special Interest Group (sig-apps), I learned there are teams across the ecosystem working on providing higher order tooling on top of Kubernetes to make application deployment of all kinds much easier.

It will eventually be the case that many platform users won’t interact directly with APIs or command line tools at all.

Commodity computing

This ongoing commodification of underlying infrastructure is a trend that Simon Wardley spoke about in his Friday keynote. He showed how – over time – things that we’ve historically had to build ourselves (such as datacentres) have become commoditised.

But spending time and energy on building a datacentre doesn’t give us a strategic advantage, so it makes sense to buy that service as a product from someone who specialises in such things.

Of course, this is the Amazon Web Services (AWS) model. These days we create managed databases with its RDS product instead of building our own MySQL clusters.

At an “Introducing Amazon EKS” session, AWS employees described how their upcoming Kubernetes-as-a-Service product will work.

Amazon is fully bought into the Kubernetes ecosystem and will provide an entirely upstream-compatible deployment of Kubernetes, taking care of operating the control servers for you. The release date for this product was described, with some hand-waving, as “soon”.

In working group discussions and on the hallway track, it sounded as though “soon” might be further away than some of us might like – there are still a number of known issues with running upstream Kubernetes on AWS that will need to be solved.

When the product was announced at AWS re:Invent last year Amazon boasted (based on the results of a CNCF survey) that 63% of Kubernetes workloads were deployed on AWS.

At this conference, they came with new figures stating that the number had dropped to 57% – could that be because both Google Cloud and Microsoft’s Azure already offer such a service?

Wardley concluded his keynote by suggesting that containerisation is just part of a much bigger picture where serverless wins out. Maybe AWS is just spending more time on their serverless platform Lambda than on their Kubernetes play?

Regardless of where we end up, it’s certainly an exciting time to be working in the cloud native landscape. I left the conference having increased my knowledge about a lot of new things, and with a sense that there’s still more to learn. It’s clear the cloud native approach to application delivery is here to stay – and that we’ll be helping many businesses on this journey in the years to come.


May 3, 2018  10:37 AM

Remote control: Making virtual desktops and applications work in the enterprise

Caroline Donnelly Profile: Caroline Donnelly
VDI

In this guest post, Jack Zubarev, president of application virtualisation software provider Parallels, sets out what IT departments can do to get the most out of their virtual desktop deployments.

The advent of cloud computing and ubiquitous, fast internet connectivity means office desktops with locally installed applications are becoming a thing of the past for many employees.

Remote application and desktop delivery can provide an easy way to manage, distribute and maintain business applications. Virtualised applications run on a server, while end users view and interact with them over a network via a remote display protocol. Remote applications can also be completely integrated with the user’s desktop so that they appear and behave like local applications.

Application delivery is more vital than ever, but can also be challenging. End-users want consistent performance with a seamless experience, while IT is focused on efficient and effective management and security at a reasonable cost.

Some specific application delivery challenges include: delivering reliable, responsive applications over a variety of network connections; allowing bring your own device (BYOD) users to securely access applications and data from anywhere, at any time, on any device; managing and delivering legacy applications centrally, alongside modern applications, on the same device; and ensuring application and data security on devices that remotely access virtual resources, while maintaining regulatory compliance and privacy.

The benefit of virtual desktops

Remote application and desktop delivery provides many tangible business benefits, including performance improvements and a reduction in downtime; it can also help simplify management and update tasks, and – of course – reduce costs.

The centralised management system offered by application delivery enables you to effectively monitor and manage the entire infrastructure from a single dashboard. Even when installing new components or configuring a multisite environment, there’s no need to log in to other remote servers.

Managing everything centrally gives you more control, and reduced hardware means fewer people are needed to manage it. You don’t have to deal with updates, patches, and other maintenance problems. This simplified IT infrastructure makes IT jobs easier.

Another benefit of application delivery is that you can deliver any Windows application to any remote device. For instance, legacy ERP can be remotely published to iPads, Android tablets, or even Chromebooks. It provides a seamless and consistent end-user experience across all devices.

As applications are installed on the application delivery server and remotely published to client devices, businesses can save significant amounts on hardware and software purchases, as well as licensing and operational costs.

Finding the right system

Look for a system that is easy to deploy and use. After all, the goal is to simplify the management and application delivery to employees – not to make it more complex with another layer.

Ideally, it should work in any environment (on-premise, public cloud, private cloud or hybrid) and feature pre-configured components and a straightforward set-up process, so installation is not too labour-intensive.

The best solutions can cost effectively transform any Windows application into a cloud application that is accessible from web browsers as well as any device, including Android, iPad, iPhone, Mac, Windows, Chromebook, and Raspberry Pi.

A clientless workspace is also important, so users can access published applications and virtual desktops using any HTML5-compatible browser, such as Chrome, Firefox, Internet Explorer (and Edge), or Safari.

Your application and desktop delivery solution should also provide support for Citrix, VMware and Microsoft Hyper-V, so sysadmins can build a VDI solution, using a wide range of technologies, to suit their organisations’ technology and cost requirements.


May 1, 2018  8:52 AM

DevOps top 5 tips: Securing management buy-in

Caroline Donnelly Profile: Caroline Donnelly
cloud, datacentre, DevOps

In this guest post, Pavan Belagatti, a DevOps influencer working at automation software provider Shippable, shares his top five tips for securing senior management buy-in for your team’s agile ambitions.

With the ever-changing technology landscape and growing market competition, being able to adapt is important to give your organisation a competitive advantage and – ultimately – succeed.

While you might have a flawless product, customer requirements are constantly evolving, and if you don’t listen to what customers want, there’s a problem: they’ll simply find a provider that does.

From waterfall to DevOps

Change is the only constant, and it applies to every organisation. In response, the software industry has evolved from following the waterfall model of software development to adopting DevOps methodologies, where the emphasis is on shorter release cycles, on-going integration, and continuous delivery.

And while there are many successful case studies on how companies are applying DevOps to their software delivery and deployment methods, there are many that have yet to jump on the bandwagon.

They may be afraid to change and adapt, but they risk seeing their growth stagnate if they resist. If you are a software engineer in such a company, what can you do to make senior management change their minds?

Here are five steps that you can use to persuade your manager and management to adopt a DevOps mindset.

1.  Offer some wider reading on DevOps

Compile a reading list of articles about corporations that have benefited from adopting DevOps, including industry-specific examples, and links to research reports detailing the gains made by organisations that have gone down this route.

2. Solve a small, but meaningful problem with DevOps

Find a place where you think your software development is visibly lacking and try to improve it. If you have demonstrable proof employing agile-like methodologies helped fix it, senior management might be more inclined to start experimenting with it on other projects.

In this vein, try not to talk about abstract goals (“we need to optimise the release cycle”). Instead, create a process with a continuous integration (CI) tool and show them how it works in practice. For instance, if the CI tool automatically tests the application for errors on every push, and the time to release to production has shortened from two hours to 15 minutes, make that known.

3. Measure the success

Find a way (beforehand) to measure the effect of your efforts – using KPIs, for example – and perform these measurements. That will guide the organisation’s efforts and provide concrete evidence to convince your peers and managers that DevOps is the way to go.
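To give a feel for how little tooling this needs (a minimal sketch with invented data, not a prescribed toolset), two commonly cited DevOps KPIs – deployment frequency and lead time for changes – can be calculated from nothing more than merge and deployment timestamps:

```python
# A minimal sketch of two common DevOps KPIs computed from release records.
# The timestamps below are invented purely for illustration.
from datetime import datetime, timedelta

# (change merged, change deployed to production) pairs for one service
releases = [
    (datetime(2018, 4, 2, 10, 0), datetime(2018, 4, 2, 14, 30)),
    (datetime(2018, 4, 9, 9, 15), datetime(2018, 4, 9, 11, 0)),
    (datetime(2018, 4, 16, 16, 0), datetime(2018, 4, 17, 9, 45)),
]

# Lead time for changes: how long a change waits between merge and production
lead_times = [deployed - merged for merged, deployed in releases]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deployments per week over the observed window
window_days = (releases[-1][1] - releases[0][1]).days or 1
deploys_per_week = len(releases) / (window_days / 7)

print(f"Average lead time: {avg_lead_time}")
print(f"Deployment frequency: {deploys_per_week:.1f} per week")
```

Tracking the same two numbers before and after a DevOps pilot gives management a concrete, comparable measure of progress.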

4. Pinpoint business processes that can be improved with DevOps

Model how they can be enhanced and calculate a return on investment based on this versus the “legacy” way of doing things. Start by implementing some practices and creating a small success; once that’s done, it should be easier to convince managers to go further.

5. Set out a strategy for introducing DevOps to your organisation

This should feature concrete suggestions to help you create a small but convincing implementation plan that sets out the benefits of adopting this approach to software development and deployment.

Ask your manager where they think the software project in question is heading and let them know you are keen on pushing the DevOps agenda, alongside carrying out your day-to-day duties. And prepare a presentation for the rest of the organisation, sharing what you think the business has to gain from embracing DevOps.


April 10, 2018  9:04 AM

Using cloud to help public transport operators put passengers first

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Craig Stewart, vice president of product management at cloud integration provider SnapLogic, spells out why the more traditional transport operators need to take a leaf out of their cloud-native competitors’ books to stay relevant and in-touch with changing customer demands. 

The phrase, “you wait ages for a bus, and then three come along at once,” might be a cliché, but it speaks volumes about the inherent difficulties that come with running a mass transportation network.

A late train here, a spot of traffic there, and the delicate balance between all the moving parts collapses. As a result, we miss our connecting train, wait 20 minutes for a bus, and then our flight gets cancelled.

In days gone by, these kinds of issues were just something most travellers had to put up with, as they had no other option for getting around. And so, with little external pressure, there was no need for innovation, and the back-end data management systems underpinning the transport network stagnated.

Things have changed, however. Disruptors like Uber have changed the transportation game and prompted customers to revise up their expectations.

As such, they want real-time travel status updates, alternative route suggestions if things do go awry, and the ability to talk to staff who have a full view of the situation, rather than people who are as clueless as the travellers themselves.

As it stands, the information infrastructure of UK mass transport networks is not fit for the task, and in dire need of renovation.

Imitate to innovate

Imitation is the sincerest form of flattery, and for traditional transport operators, competing in the new mobility landscape means taking note of what the disruptors are up to.

They need to offer the flexibility that customers now expect, and move away from the rigid approaches to timetables and scheduling of the past.

The transport jargon for this transition is mobility-as-a-service (MaaS). It’s certainly not a reality for traditional transport operators currently, but it’s a hot industry topic that is set to be a key area of focus for the public transport industry over the next few years.

Achieving a new MaaS model will require an acceleration in both technology and mindset, particularly when it comes to better understanding of the needs and expectations of the customer.

Some transport operators have already made encouraging first steps in this direction. Transport for London (TfL), for example, has its Unified API, which allows third-party developers to access its data to create value-added apps and services.

TfL’s website even goes as far as to state: “We encourage software developers to use this data to present customer travel information in innovative ways”.
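To give a flavour of what that openness looks like in practice, here is a minimal sketch (an illustration, not TfL sample code) that pulls live arrival predictions from the Unified API at api.tfl.gov.uk. The choice of line is arbitrary, the field names reflect the public feed at the time of writing, and anything beyond light use requires registering for an application key.

```python
# A rough sketch of consuming TfL's Unified API (https://api.tfl.gov.uk).
# Field names reflect the public arrivals feed; register an app key for
# anything beyond light experimentation.
import requests

line = "victoria"  # arbitrary example line
resp = requests.get(f"https://api.tfl.gov.uk/Line/{line}/Arrivals", timeout=10)
resp.raise_for_status()

# Print the next few predicted arrivals, soonest first
for prediction in sorted(resp.json(), key=lambda p: p["timeToStation"])[:5]:
    minutes = prediction["timeToStation"] // 60
    print(f'{prediction["stationName"]}: {prediction["destinationName"]} in {minutes} min')
```

It is exactly this kind of open, well-documented data feed that lets third parties build the value-added apps and services described here.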

This attitude, however, is currently the exception rather than the rule, and transport operators need to do more like this to make the traveller’s experience as convenient as possible.

This is something retailers have been striving for with their customer-focused omnichannel initiatives for many years. A similar approach, combining robust CRM systems with big data platforms such as Azure and analytics tools, may help transport providers match service provision to customer needs.

Allowing transport operators to more efficiently manage and operate their assets may also help deliver cost-savings for an industry with historically low margins.

Data sharing for customer caring

As alluded to above, local public transport is a shared endeavour, involving multiple local authorities and various bus and rail operators all working together across a region.

In their quest for a more flexible service structure, the use and sharing of customer data will be paramount.

It’s all well and good if one operator has an accurate, detailed portrait of customers and their needs, and shifts its operations accordingly, but this data has to be available across the length of a person’s journey, even if it crosses other operators’ services.

The sharing of data securely across departments and with partners is a key stepping stone in shifting transport’s perception of passengers from mere users to customers.

Although responsibility for a passenger may end when they leave the operator’s service, ensuring consistent quality across the full journey is imperative for the shifting business models of the traditional transport sector.

Cloud as a stepping stone to customer-centric care

Essentially, what public transport operators require is a comprehensive digital transformation initiative that changes how they manage operations and approach their customers.

With so many new systems to deploy and so much data to integrate, there’s no real alternative to using cloud to achieve the flexibility and scalability needed to make these changes within a set time frame.

What’s more, only the cloud will ensure future technology advances, particularly around machine learning and the Internet of Things, can be quickly deployed to keep tomorrow’s innovations on track, and the disruptors at bay.

Like many industries before it, transport is guilty of growing complacent in its seemingly privileged position. Now it needs to start treating its passengers more like people and less like cargo, and shape its business for them, rather than the other way around.


April 4, 2018  2:59 PM

Multi-cloud: What enterprises need to know

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS, takes a look at what enterprises need to bear in mind before they take the plunge on multi-cloud.

There’s no doubt that enterprise interest in multi-cloud deployments is on the rise, as organisations look for ways to run their applications and workloads in the cloud environments where it makes most sense to do so.

When enterprises move workloads on demand, they can achieve cost advantages, avoid supplier lock-in, support fiduciary responsibility, and reduce risk, but there is a flip side to this coin.

Enterprises in the single-cloud camp counter that multi-cloud introduces unnecessary complexity, increasing management overhead and diluting the value of cloud to the lowest common denominator of commodity infrastructure.

In short, customers taking this approach fail to fully exploit all the advantages the cloud model can provide.

So who’s right?

Multi-cloud to avoid vendor lock-in

One of the most popular arguments we hear in favour of adopting a multi-cloud strategy is to avoid vendor lock-in.  After all, who wants to be tied to one supplier who notionally holds all the cards in the relationship when cloud has made it possible to run workloads and applications anywhere?

In my view, this is a partially-flawed argument, given that large enterprises have always been tied into inflexible multi-year contracts with many of the established hardware and software suppliers.

For some enterprises, multi-cloud means using different providers to take advantage of the best bits of each of their respective platforms.  For others, multi-cloud is about using commodity compute and storage services in an agnostic fashion.

Anyone who opts for full-fat multi-cloud will have access to an extensive choice of capabilities from each supplier, but this comes at a price.

Only if they limit their cloud adoption to the Infrastructure-as-a-Service (IaaS) level will they unlock the benefits that come from having access to near-infinite, on-demand compute and storage capability, including reduced costs.

But this skimmed-milk version of multi-cloud increases the burden of managing numerous suppliers and technology stacks, limiting the ability to increase pace and agility and hampering the ability to unlock the truly transformative gains of cloud.

Managing a multi-cloud

Many enterprises underestimate the management and technical overheads – plus skillsets – involved in supporting a multi-cloud strategy.  Managing infrastructure, budgets, technology deployments and security across multiple vendors quickly progresses from a minor irritation to a full-blown migraine.

While there is a plethora of tools and platforms designed to address this, there is no single silver bullet to magic away the pain.

As well as considering the in-house skills sets required for a multi-cloud model, another factor is the location of your data.  With the increasing focus on aggregating data and analysing it in real-time, the proximity of data impacts any decision on the use of single cloud vs multi-cloud vs hybrid.

The costs associated with data ingress and egress between on-premise and multiple cloud providers need to be carefully assessed and controlled.

Best of both

But don’t panic, there is another option. It is possible to secure the best of both worlds by opting for an agnostic approach with a single cloud provider and designing your own ‘get out of jail free’ card that makes it easy to move to one of its competitors. With a clear exit strategy in place at the outset, you can take advantage of the full capabilities of a single vendor and avoid significant costs.

The exit strategy needs to address the use of higher value services. For example, if you use Lambda or Kinesis in the Amazon Web Services cloud, what options would you have if you decide to move to Microsoft Azure, Google Cloud Platform or even back on-premise?

Always consider the application architecture: if you aim for a loosely coupled, stateless design, it will be easier to port applications.

Assuming you extend the use of cloud resources beyond commodity IaaS into higher-level PaaS services, such as AWS’s Lambda, you will develop functions as reactions to events.

This pattern, although labelled ‘serverless’, can be replicated in a non-serverless environment by making these functions available as microservices.

If you decide to migrate away from a single provider in future, you can use Google Cloud Functions or a combination of microservices and containerisation, for example.
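As a rough sketch of what that loose coupling can look like in practice (the pricing function and its names are invented for illustration, and Flask simply stands in for whichever microservice framework you prefer), the business logic lives in one provider-neutral function, with only thin wrappers tying it to AWS Lambda or to a container-hosted HTTP service:

```python
# A hedged sketch of keeping business logic provider-neutral. Only the thin
# wrappers below know anything about AWS Lambda or HTTP; swapping provider
# means rewriting a wrapper, not the logic.
import json

from flask import Flask, jsonify, request


def price_order(order: dict) -> dict:
    """Provider-neutral business logic (invented example)."""
    total = sum(item["qty"] * item["unit_price"] for item in order["items"])
    return {"order_id": order["order_id"], "total": round(total, 2)}


# Wrapper 1: an AWS Lambda entry point, assuming an API Gateway-style event
def lambda_handler(event, context):
    result = price_order(json.loads(event["body"]))
    return {"statusCode": 200, "body": json.dumps(result)}


# Wrapper 2: the same logic exposed as a containerised microservice
app = Flask(__name__)


@app.route("/price", methods=["POST"])
def price():
    return jsonify(price_order(request.get_json()))
```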

By maintaining a clear understanding of which proprietary services are being consumed, you will make it easier to re-host elsewhere while complying with regulations.

As with most enterprise decisions, there’s no clear-cut right or wrong answer on which leap into the cloud to take. The right approach for your organisation will reflect the need to balance the ambition to adopt new technology and increase the pace of change, with the desire to manage risk appropriately.


March 28, 2018  3:17 PM

Better cloud management: the Ops team vs. machines

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Steve Lowe, CTO of student accommodation finder Student.com, weighs up the risks and benefits of relying on machines to carry out Ops jobs within cloud infrastructures.

The “if it ain’t broke, don’t fix it” mantra is firmly ingrained in the minds of most engineers, and – in practice – it works when your resource is fixed and there is no upside to scaling down.

However, when you are paying by the hour (as is often the case with cloud) or by the second, scaling down can make a huge difference.

Assembling a team of engineers that are able to make quick, risk-based decisions using all the information at their disposal is not an easy feat at all.

Worryingly, within modern-day teams, there is often a tendency to think things will work themselves out in a few minutes. Or, if something is working fine, the best approach is to let it run and be safe than scale it down.

People vs. machines

In a high-pressure situation, even some of the best decision makers and quickest thinkers can be hard-pressed to come up with a viable solution within the required period of time.

This is where the emotionless machine wins hands down every time. Wider ranges of data sets can be joined together and analysed by machines that can use that data to make scaling decisions.

On top of that, these decisions are made in a fraction of the time a team of engineers would need. As such, if you tell your machine to follow your workloads, it will do just that.

Another benefit of relying on emotionless machines is their automation and reliability, meaning workloads can be easily repeated and delivered consistently to the required standard.

Here comes Kubernetes

As enticing as all this may sound, especially to organisations wanting to scale-up, the devil is in the implementation. For a number of years, a significant amount of work was required to implement auto-scaling clusters in a sensible way. This was especially the case with more complex systems.

The solution? Kubernetes. As long as your software is already running in containers, Kubernetes makes implementation much simpler and smoother. And autoscaling a Kubernetes cluster based on cluster metrics is a relatively straightforward task.

Kubernetes takes care of keeping services alive and load balancing the containers across the compute environment. And, finally, enterprises get the elastically-scaling compute resource they always dreamed of.
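As an indication of how little is involved (a sketch only – the deployment name and thresholds are made up), pod-level autoscaling, which in turn drives how many nodes the cluster needs, comes down to a few lines with the official Kubernetes Python client:

```python
# A minimal sketch of CPU-based autoscaling using the official Kubernetes
# Python client. The deployment name and thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods above 70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

With a cluster autoscaler running alongside, the node count then follows the pod count up and down, which is the elastic behaviour described above.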

What to do with the Ops crew?

Now that the machines have helped the organisation free up all this time, the question of what to do with the people in the Ops team needs answering.

Well, with cyber attacks increasing in both number and sophistication, there’s never been a better time to move free resources into security. With a little training, the Ops team are perfectly positioned to focus on making sure systems are secure and robust enough to withstand attacks.

With the addition of Ops people with practical hands-on experience, the organisation will be better positioned to tackle future problems. From maintaining and testing patches to running penetration testing, the Ops people will add direct value to your company by keeping it safe, while the machines take care of the rest.


March 23, 2018  11:31 AM

Cloud computing: Past, present and future

Caroline Donnelly Profile: Caroline Donnelly
adoption, cloud, Virtualisation

In this guest post, Susan Bowen, vice president and general manager at managed service provider Cogeco Peer 1, takes a look at the history of cloud computing, and where enterprises are going with the technology.

The late 1960s were optimistic days for technology advancements, reflected headily in the release of Stanley Kubrick’s iconic 2001: A Space Odyssey and its infamously intelligent computer H.A.L. 9000.

Around this time the concept of an intergalactic computing network was first mooted by a certain J.C.R. Licklider, who headed up the Information Processing Techniques Office at the Pentagon’s Advanced Research Projects Agency (ARPA).

Charged with putting the US on the front foot, technologically, against the Soviets, ARPA was a hotbed of forward-thinking that pushed technology developments well beyond their existing limits.

Within this context, Licklider envisioned a global network that connected governments, institutions, corporations and individuals. He foresaw a future in which programs and data could be accessed from anywhere. Sounds a bit like cloud computing, doesn’t it?

This vision actually inspired the development of ARPANET, an early packet switching network, and the first to implement the protocol suite TCP/IP. In short, this was the technical foundation of the internet as we now know it.

It also laid the foundation for grid computing, an early forerunner of cloud, which linked together geographically dispersed computers to create a loosely coupled network. In turn this led to the development of utility computing which is closer to what Licklider originally envisioned.

It’s also closer to what we think of as the cloud today, with a service provider owning, operating and managing the computing infrastructure and resources, which are made available to users on an on-demand, pay-as-you-go basis.

From virtualisation to cloud

Dovetailing with this is the development of virtualisation which, along with increased bandwidth, has been a significant driving force for cloud computing.

In fact, virtualisation ultimately led to the emergence of operations like Salesforce.com in 1999, which pioneered the concept of delivering enterprise applications via a simple website.

This paved the way for a wide range of software firms to deliver applications over the internet. But it was the emergence of Amazon Web Services in 2002 that really set the cloud ball rolling, with its cloud-based services including storage and compute.

The launch of Amazon’s Elastic Compute Cloud (EC2) several years later was the first widely accessible cloud computing infrastructure service, allowing small companies to rent compute capacity on which to run their own applications.

It was a godsend for many small businesses and start-ups, helping them eschew costly in-house infrastructure and get to market quickly.

When cloud is not the answer

Of course, cloud went through a ‘hype’ phase and certainly there were unrealistic expectations and a tendency to see the cloud as a panacea that could solve everything.

This inevitably led to poor decision making when crafting a cloud adoption strategy, often characterised by a lack of understanding and under investment.

This led a few enterprises to adopt an “all-in” approach to cloud, only to pull back as their migration plans progressed, with few achieving their original objectives.

Today, expectations are tempered by reality; in a sense, the cloud has been demystified and its potential better understood.

For instance, business discussions have moved from price points to business outcomes.

Enterprises are also looking at the cloud on an application by application basis, considering what works best as well as assessing performance benchmarks, network requirements, their appetite for risk and whether cloud suppliers are  their competitors or not.

In short, they understand that the agility and flexibility of the cloud definitely make it a powerful business tool, but they want to understand the nuts and bolts too.

The evolution of cloud computing

Nearly every major enterprise has virtualised its server estate, which is a big driver for cloud adoption.

At the same time the cloud business model has evolved into three categories, each with varying entry costs and advantages: infrastructure-as-a-service, software-as-a-service, and platform-as-a-service. Within this context private cloud, colocation and hosted services remain firmly part of the mix today.

While cloud has its benefits, enterprises are running into issues, as their migrations continue.

For instance, the high cost of some software-as-a-service offerings has caught some enterprises out, while the risk of lock-in with infrastructure-as-a-service providers is also a concern for some. As the first flush of cloud computing dissipates, enterprises are becoming more cautious about being tied into a service.

The scene is now set for the future of cloud, with a focus on hybrid and multi-cloud deployments. In five years’ time, terminology like hybrid IT, multi-cloud and managed services will be a thing of the past.

It will be a given, and there won’t just be one cloud solution or one data solution; it will be about how clouds connect and about maximising how networks are used.

Hybrid solutions should mean that workloads will automatically move to the most optimised and cost-effective environment, based on performance needs, security, data residency, application workload characteristics, end-user demand and traffic.

This could be in the public cloud, or private cloud, or on-premise, or a mix of all of them. It could be an internal corporate network or the public internet, but end-users won’t know, and they won’t care either.

Making this happen will require application architectures to be refactored, shifting from the classic three-tier model to event-driven designs.

Cloud providers are already pushing for these patterns to be widely adopted for cloud-native applications. And as enterprises evolve to adapt, hyperscale public clouds and service providers are reaching back into on-premise datacentres.

This trend will continue over the next five to ten years, as the consumption-based model of cloud grows in uptake.

While we wouldn’t call this the intergalactic computing network as ARPA’s Licklider originally envisioned, it’s certainly moving closer to a global network that will connect governments, institutions, corporations and individuals.

