Ahead in the Clouds


January 9, 2018  4:09 PM

Meltdown and Spectre: Making a case for greater public cloud use

Caroline Donnelly
Amazon, Cloud Security, Google, Meltdown, Microsoft, Public Cloud

The Meltdown and Spectre CPU vulnerabilities constitute the greatest test yet of the public cloud provider community’s data security claims, says Caroline Donnelly, and give enterprise IT departments plenty to get their teeth into.

When details of the Meltdown and Spectre processor flaws emerged last week, the threat they posed to public cloud users was the stuff of enterprise IT department nightmares.

According to the researchers who uncovered the bugs, both could be used by malicious individuals to extract and read data stored in the memory of applications. And – in the case of the cloud – they could make it possible for customers on the same platform to access each other’s data.

The big hitters of the public cloud community (Amazon Web Services, Google and Microsoft, to name a few) mobilised quickly in the face of this theoretical threat – to patch their systems and assure users.

First mover advantage on Meltdown

Google, whose Project Zero team are among the research groups credited with bringing Meltdown and Spectre to the world’s attention, said its efforts to mitigate the threats began as far back as June 2017.

It is anyone’s guess how much warning Amazon and Microsoft were given to prep their defences, but both confirmed their systems were in the throes of being patched (or already had been) within hours of the flaws first becoming public knowledge.

That’s a speed of response few (if any) private-enterprise datacentre operators could ever hope to achieve. It is also fair to assume not many would have been afforded the luxury of a seven-month head start on addressing these vulnerabilities either.

Amazon, Google and Microsoft have all previously made the point that few enterprises can afford to spend as much as they do on cloud security.

For that reason, they claim data stored in their clouds is better protected than the information enterprises leave languishing in private datacentres.

It’s a declaration that is difficult to argue against, particularly when you consider the high calibre of info-security staff the tech giants tend to attract too.

What the response of the Big Three to Meltdown and Spectre shows the enterprise (particularly the firms still holding out on cloud) is that Amazon, Google and Microsoft’s cloud security claims are not all talk.

These companies had their patches created, tested and deployed while most enterprise datacentre operators were probably still trying to tell Meltdown and Spectre apart.

Monitoring the Meltdown

Outside of the cloud, enterprise IT departments still need to make sure their on-premise servers, PCs, and Apple Mac devices are patched and protected from Meltdown and Spectre.

“Unfortunately for IT staff, this is not a ‘one and done’ solution, as there are knock-on impacts from patching the vulnerabilities,” Craig Lodzinski, developing technologies lead at IT infrastructure provider Softcat, tells Ahead in the Clouds (AitC).

Indeed, industry estimates suggest the first Meltdown patch, for example, could degrade CPU performance by as much as 30%.

“I’d envisage the ripple effect will continue for months, with IT staff having to tune, redeploy resources and assess performance, as well as responding to myriad tickets concerning application and hardware performance, both real and psychosomatic,” he added.
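A first step in that tuning work is knowing which hosts actually report kernel-level mitigations. On Linux, patched kernels (mainline 4.15 onwards, plus distribution backports) expose the status under /sys/devices/system/cpu/vulnerabilities. The Python sketch below, which assumes such a kernel, simply reads those files on a single host:

```python
# Minimal sketch: report Meltdown/Spectre mitigation status on one Linux host.
# Assumes a kernel that exposes /sys/devices/system/cpu/vulnerabilities
# (mainline 4.15+, or a distribution kernel with the fixes backported).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_report() -> dict:
    """Return {vulnerability_name: kernel_status_string} for this host."""
    if not VULN_DIR.is_dir():
        raise RuntimeError("Kernel does not expose vulnerability status; "
                           "it is probably unpatched or too old.")
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in mitigation_report().items():
        ok = status.startswith("Mitigation") or status == "Not affected"
        print(f"[{'OK ' if ok else 'WARN'}] {name}: {status}")
```

Run across an estate via existing configuration management or monitoring tooling, a report like this gives IT teams a quick view of where patching has, and has not, landed.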

For cloud users, any unexpected change to their organisation’s CPU performance and usage patterns is concerning, particularly if it could cause the cost of their cloud bills to rise.

Some users may decide a bigger cloud bill is a small price to pay for the peace of mind that comes from knowing their data is protected from Meltdown and Spectre. Others, however, may feel it should be considered a cost of doing business by the provider, and covered by them accordingly.

In a statement to AitC, Amazon said no “meaningful” impact on CPU performance had been observed since its patching efforts commenced.

“There may end up being cases that are workload or OS specific that experience more of a performance impact. In those isolated cases, we will work with customers to mitigate any impact,” the AWS spokesperson added.

Patching things up

Google also claims to have seen a “negligible” impact on the performance of the workloads running on its cloud infrastructure, and described the patching process in a blog post as being “uneventful”.

Even so, the search giant is urging users to treat online reports of performance degradation with a pinch of salt.

“We designed and tested our mitigations for this issue to have minimal performance impact, and the rollout has been uneventful,” it added.

That may hold true for the time being, but Lodzinski suggests the fallout from Meltdown and Spectre will be playing out in the enterprise for a long time to come.

“With no attacks in the wild (at time of writing), what the future holds is anyone’s guess,” he said. “What we do know is that good patching and strong cyber-security practices by IT teams are the cornerstone in mounting a strong defence,” he added.

January 4, 2018  3:32 PM

OpenStack vs the rest of the public cloud: Where next from here?

Caroline Donnelly
AWS, Google, Microsoft, OpenStack, Private Cloud, Public Cloud

In this guest post, Rob Greenwood, technical director at Manchester-based cloud and DevOps consultancy Steamhaus, weighs up the pros and cons of using OpenStack over other public cloud platforms.

Just over a year ago, Cisco confirmed the closure of its OpenStack-based public cloud. Shortly after, several smaller datacentres and cloud providers whose architecture relied on OpenStack closed down, fell into administration or downsized their use of it.

These types of announcements always trigger predictions that OpenStack is on its way out, despite the fact it still powers more than 60 public cloud datacentres around the world.

Regardless of whether you’re a champion of the platform or not, there are some attractive OpenStack alternatives available on the market right now, namely Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS), all of which have gained a significant share of the public cloud market since OpenStack arrived on the scene.

Each of these platforms has its pros and cons, particularly when it comes to issues such as vendor lock-in, scalability and technical support.

Locked in for life?

The proprietary vendor vs open source debate has raged for some time, with OpenStack advocates claiming their platform offers greater freedom than using any of its public cloud contemporaries.

Personally, I think people get too hung up on the vendor lock-in issue, because, in reality, you’ll always be tied to something – whether that’s a vendor or not. For example, your platform will often be tied to a piece of code regardless of where it’s hosted. And whenever you want to move from one platform to another, there will always be some work required in that migration.

Regardless of your preference, however, you should always be tracking the way the market is moving and be prepared to respond accordingly.

The OpenStack ease of use issue

While you do get flexibility with OpenStack, you’re also dealing with lots of moving parts which require skilled engineers who understand its inner workings. Unfortunately, these people are few and far between.

Unless organisations have a highly-skilled engineering team in place, it can quickly become a minefield. It requires a lot of effort to keep stable and debug, and the learning curve is steep.

With that in mind, organisations need to decide if investing significant amounts of capital into building out and maintaining their own kit is something they want to do, especially when compared to the opex public cloud model.

Where next for OpenStack?

Over the last few years, Kubernetes has quickly become the established orchestration tool for managing containers. That layer of abstraction means we can control our containers regardless of the environment they are hosted in – be that in Google, AWS, Azure, on-premise or a combination of all four.
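As a rough illustration of that portability, the sketch below uses the official Kubernetes Python client: the same few lines list workloads whether the kubeconfig context points at a cluster in Google, AWS, Azure or an on-premise lab. The context names are hypothetical placeholders.

```python
# Sketch: the same client code works against any conformant Kubernetes cluster,
# wherever it is hosted; only the kubeconfig context changes.
# The context names below are hypothetical placeholders.
from kubernetes import client, config

for context in ["gke-prod", "eks-prod", "aks-prod", "on-prem-lab"]:
    config.load_kube_config(context=context)   # point at a different cluster
    apps = client.AppsV1Api()
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{context}: {len(deployments.items)} deployments")
```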

Serverless offerings, such as AWS’ Lambda, are also making it easier to deploy and scale applications without having to worry about the supporting infrastructure.
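To give a sense of how little the developer has to supply, a basic Lambda function in Python is nothing more than a handler like the one below; provisioning, scaling and patching the servers that run it is Amazon’s problem. The payload shape here is invented for illustration.

```python
# Minimal AWS Lambda handler (Python runtime). AWS provisions, scales and
# patches the underlying servers; the developer supplies only this function.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (API Gateway request, S3 event, etc.);
    # the 'name' field here is just an illustrative example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```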

Both these solutions are changing the cloud landscape and reducing the reasons you would maintain your own OpenStack environment.

The momentum certainly seems to be moving in the direction of public cloud. That said, I’m not joining the doomsayers – those individuals who seem to relish the demise of any technology.

I don’t believe this is the beginning of the end for OpenStack, and I’ve no doubt it will find its own niche. The Global Passport, an OpenStack initiative designed to bring disparate OpenStack clouds together, may help it carve out a new path.

But, eight years have passed since OpenStack was hailed as a revolution, so it’s time to acknowledge the market has changed — and appealing alternatives exist.


December 14, 2017  10:51 AM

G-Cloud 10 delayed: The pros and cons for public sector cloud users

Caroline Donnelly
Cloud Computing, G-Cloud, Public sector

In this guest post, Rob Anderson, principal analyst for central government at market watcher GlobalData, takes a look at the implications of delaying the launch of G-Cloud 10 by up to a year.

Making in-roads into the public sector can be a precarious task for IT providers, and it does not get any easier when the procurement rules of engagement seem to keep changing.

So when the Crown Commercial Service (CCS) recently confirmed the ninth iteration of G-Cloud could remain in place for up to 12 months, it was understandable that suppliers raised concerns.

According to CCS, the extension will give it more time to radically transform the framework for the benefit of both users and suppliers.

Similar extensions were announced in October for both Digital Outcomes and Specialists 2 (DOS 2) and Cyber Security Services 2 (Cyber 2).

Yet eighteen months ago, suppliers were told CCS (alongside the Government Digital Service) was undertaking a new discovery process and that the G-Cloud 9 design would go ‘back to basics’ to deliver improvements for all.

A number of tweaks were made, including the removal of one lot, the elimination of overlapping iterations and an increase in the management levy. These changes, however, were not enough to stimulate significant growth in throughput.

CCS claims, with some degree of credibility, the frameworks have helped drive up cloud adoption and expanded the pool of SME suppliers public sector organisations can buy from, but more of the same will not further increase the volume or value of transactions.

To that end, delaying the rollout of G-Cloud 10 is a step that should be welcomed.

G-Cloud 10 delays: The pros and cons

Where some will see the hiatus as a benefit (if it results in a more attractive and flexible route to market), others will be unhappy with the change in G-Cloud release cadence. In a market accustomed to regular Digital Marketplace updates, a year’s delay may seem like an eternity.

There is also a danger that delaying G-Cloud 10 will slow the momentum of SMEs building their businesses to reach the 33% market share the government says it wants.

Of greater concern for CCS and suppliers is the continuing antipathy of the wider public sector towards the Digital Marketplace as a whole; around 90% of spend has been by Whitehall departments and metropolitan councils.

To be recognised as a true aggregator of commodity IT products and services, the buying agency needs to attract more buyers from a broad cross-section of public service delivery organisations.

CCS is also under pressure from its Cabinet Office paymasters, who are still awaiting the magnitude of savings long-since promised from the rolling review of frameworks.

Yet the unit continues to adopt a Field of Dreams “build it and they will come” mindset, rather than listening to user needs and adapting accordingly.

One must hope CCS uses the breathing space to address those needs, creating an enhanced Digital Marketplace and a firm foundation on which the broader Crown Marketplace can be built.

Whatever the future holds for CCS, the outcome will have a profound effect on both public sector buyers and suppliers.


December 13, 2017  10:32 AM

Combining AI, automation & cloud is key to improving workplace productivity

Caroline Donnelly
ai, Automation, cloud, Machine learning, SAP

In this guest post, Darren Roos, president of SAP ERP Cloud, explains why automation, AI, machine learning and cloud are key to improving workplace productivity.

If you take productivity at its most basic – as a measure of how much output you get for how much you put in – then it follows that to increase it you must remove as much unnecessary strain as possible from the input cycle.

Over the past decade, cloud computing has established itself as a key tool for helping companies streamline workflows and, by allowing multiple users to work remotely on the same data at once, boost productivity.

But, just as productivity is effectively a measure of cause and effect, there’s been a negative – or at least stultifying – impact as a result of this enthusiastic uptake of cloud technology too.

Managing the process of digitisation through the cloud, particularly when done at increasingly large scale, has become unsustainably complex. Processes become jumbled, admin-heavy and beset by unavoidable human error.

The somewhat ironic result is of course a slowdown in productivity. There’s a growing sense, however, that more can be done with this technology – and that we’re currently looking at just the tip of the cloud computing iceberg.

AI, automation and cloud?

Automation, artificial intelligence and machine learning have dominated the tech press over the past year, whether for beating humans at board games or stirring up fears of job replacement.

They’re actually already helping to streamline people’s daily lives in more ways than they might realise – whether at a one-click online checkout or in sending auto-generated email responses. When automation is applied well, the user won’t notice its presence at all.

And there’s no reason why the same logic shouldn’t apply to how organisations are managing their resources. In fact, it stands to reason that they absolutely should be looking at where automation can be applied.

Automation, bolstered by AI and machine learning, presents an opportunity to minimise the menial, repetitive, administrative tasks that can so often eat up an employee’s day. The result is more free time that can be put to use tackling more complex, thoughtful tasks that require actual intelligence.

Manual tasks have been automated since the industrial revolution, as humans find new roles and ways of working that are built on automation. While automation isn’t new, its scale and intelligence is.

What might you achieve with your day if you didn’t need to mindlessly wade through sales order data, capacity planning outlines, or cumbersome financial operations? The opportunity is immense, and it all comes back to productivity as people get more done during their work hours – and at a smaller overall cost to the business.

Combining the cloud

Beyond the mitigation of the immediate productivity issue, though, there are further benefits to be reaped from the integration of AI and cloud-based services.

Real-time insights, and the ability of computers to learn from them over time, open up the possibility of AI-informed service improvements.

Analysing enormous amounts of data generated on a daily basis, noticing patterns, and responding to them with suggestions or automatic adjustments is bread and butter for the AI and machine learning programs of today.

And they’re ripe for integration with the flexible, agile business approaches that have become commonplace through widespread cloud adoption. In fact, you might go as far as to argue that the two make ideal – if not essential – bedfellows for this exact reason.

In a hyper-connected world, it’s beyond the realms of acceptability for organisations to fall back on established, static practices.

Conventional wisdom indicates that failing to keep up with the pace of technological change will lead to both irrelevance and a decline to the point of non-existence. In fact, mere irrelevance may soon be considered getting off lightly.

It’s a Catch-22 situation: meeting increased, perhaps more ambitious, productivity targets will be key to survival. But to do that, employees need to be able to make the most of the time they spend at work.

The technology industry, at its heart, is driven by a desire to solve problems. I can’t think of one more valuable – or surmountable – for it to take on today.


December 8, 2017  12:00 PM

Cloud bursting: Examining the enterprise use cases and benefits

Caroline Donnelly
cloud, Cloud bursting, IaaS, Infrastructure, SaaS

In this guest blog post, Naser Ali, head of solutions marketing for EMEA at Hitachi Vantara, explores the use cases and benefits for using cloud bursting in the enterprise.

When NASA needed to process petabytes of data from its Orbiting Carbon Observatory 2 (OCO-2), it expected to wait 100 days and spend $200,000 on new datacentre hardware to make it work.

Instead it used the Amazon Web Services (AWS) cloud to achieve the same thing for $7,000 in less than six days, allowing NASA engineers to gain deeper insights into the Earth’s carbon uptake.

This ephemeral use of cloud computing, known as ‘cloud bursting’, offers the scientific community an effective, affordable solution for processing high-volume data spikes. But can it be applied in the more earthly business world? Indeed it can. In fact, we’re working with two large global banks that are doing just that.

Banks are the perfect use case for cloud bursting because they process large volumes of data about things like interest rate returns and risk weighted assets. This data isn’t personal, but requires frequent short bursts of compute power to meet compliance reporting requirements.

Most banks can’t justify the capital expense of investing in on-premise capacity to process data that only comes in intermittently knowing that hardware will mostly remain idle. On the other hand, running a 24/7 technology stack in the cloud racks up too much operating expense. Cloud bursting not only strikes the perfect balance between storage capacity and cost, it gives data scientists a space to ‘play’ and gain new insights.

Cloud bursting in practice

Put simply, the basic steps data science teams take when cloud bursting are: move data into a cloud, spin up some computing power, spin up some storage, process a tranche of data, shut it down and take the results home. But how does this look in practice? One banking customer’s cloud bursting approach follows these steps:

  • First it uses our data integration platform to automatically ship and load data to object storage in an AWS cloud, processing it in batches.
  • Next it runs a script to fire up Carte, which is a simple web server for remotely executing data transformations (converting data from one format or structure into another) using Amazon Elastic Map Reduce (EMR) in Hadoop or the Amazon Redshift data warehouse.
  • Once the data is cleaned, transformed and loaded into the temporary cloud, the data scientists can then “play” with the data by running ‘what-if’ scenarios on things like risk and interest rate return rates.

The greatest benefit of cloud bursting is agility, but there are hard cost savings as well. This bank had been spending 1 million on hardware and software for its risk reporting Hadoop cluster. We took a subset of its data, 12 million rows, ran it in AWS for $2.20 and in Google for 50 cents. So you can process a pretty large chunk of data for very little cost and no upfront investment.

Another cost advantage is the ability to take advantage of ‘spot pricing’ on cloud capacity. This type of dynamic pricing saves customers money by letting them run some applications only when spot prices fall below a specified price point. It also allows users to quickly run large jobs by outbidding other customers for available capacity.
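As a rough illustration of that ‘spin up, process, shut down’ pattern (not the bank’s actual pipeline), the boto3 sketch below stages a batch of data in S3 and launches a transient EMR cluster that terminates itself once its single step finishes. Bucket names, script paths and instance types are placeholders.

```python
# Rough sketch of a cloud-burst run with boto3: stage data in S3, then launch a
# transient EMR cluster that runs one step and terminates itself.
# Bucket names, script paths and instance types are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
emr = boto3.client("emr", region_name="eu-west-1")

# 1. Ship the batch of data to object storage.
s3.upload_file("risk_batch.csv", "example-burst-bucket", "input/risk_batch.csv")

# 2. Fire up a short-lived cluster; it shuts down when the step completes.
response = emr.run_job_flow(
    Name="risk-reporting-burst",
    ReleaseLabel="emr-5.11.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m4.xlarge",
        "SlaveInstanceType": "m4.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,   # auto-terminate after the work
    },
    Steps=[{
        "Name": "transform-and-score",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-burst-bucket/jobs/what_if.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster started:", response["JobFlowId"])
```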

Critical success factors

Although I’ve used banks as an example, cloud bursting for data science is suitable for any industry with similar requirements. Here’s what our experience tells us that organisations need to be successful with this approach:

  • Agile data integration – choose a data integration tool that doesn’t need to be installed on every Hadoop cluster within the cloud. This is the key capability that allows you to spin up a cloud and process data in a matter of hours. Conversely, tools that require you to install software on multiple Hadoop nodes cancel out all the agility advantages of cloud bursting.
  • Strong DevOps – organisations that are good at DevOps, particularly continuous integration, will have a smoother ride when it comes to preparing data to be processed in cloud bursts.
  • Open standards environment – it’s very important that organisations abstract the data processing from the cloud environment and don’t get locked into a single cloud vendor. Open standards and open frameworks ensure that everything is moveable if necessary.
  • Plan for repatriation – what happens if an organisation is successful with cloud bursting but later needs to add secure data to its analysis? In this case the whole compute process and data would need to be repatriated back on premise. This requires having an analytics platform that supports a hybrid computing environment and ideally supports containers like Docker, which make it fast and easy to repatriate processes and data. This is essential in financial services, where regulatory compliance mandates that systems are in place to prepare for this potential situation.

One giant leap

Following in NASA’s lunar footsteps, banks’ first cloud bursting efforts have been encouraging, achieving multimillion-pound savings annually. These have largely come from reduced total cost of ownership and increased agility.

Of course the ‘giant leap’ organisations make by applying such a practical and cost-effective way to process data is that they minimise their exposure to financial risk.

The Financial Conduct Authority (FCA) has mandated that banks significantly raise their cash reserves and understand their risk exposures to avoid repeating the scenario that brought down Lehman Brothers and led to the RBS bail-out.

In a typical scenario where a bank has investments in many different financial institutions and companies, it takes just one entity to go bust and take down the others. In our era of economic turbulence, this ability to play with data in a temporary space and game different scenarios could affect a company’s very survival.


December 4, 2017  8:34 AM

AWS Re:Invent 2017 reviewed: The four product announcements tech teams need to know about

Caroline Donnelly
Amazon, AWS, Cloud Computing

In this guest post, Jon Topper, CTO of DevOps consultancy, The Scale Factory, flags his four favourite announcements from Amazon Web Services’ (AWS) week-long Re:Invent partner and customer conference, which took place last week in Las Vegas.

AWS Re:Invent has just wrapped up for 2017. Held in Las Vegas for over 40,000 people, the conference is an impressive piece of event management at scale, bringing together customers, partners and Amazon employees for five days of keynotes, workshops and networking.

I attended re:Invent 2016, and gave myself repetitive strain injury trying to live-tweet the keynotes, barely keeping up with the pace of new service and feature announcements.

This year’s show saw the cloud giant debut a total of 70 new products and services. I’ve picked out four of my favourites below, and discuss what they mean for enterprise technology teams.

Introducing Inter-region VPC peering

At the start of November, AWS announced it would now be possible for users of its Direct Connect service to route traffic to almost every AWS datacentre region on the planet from a single Direct Connect circuit.

On the back of this, I predicted other region restrictions would be lifted, and AWS came good on that expectation this week when it announced support for peering two virtual private clouds (VPCs) across regions at Re:Invent.

VPC peering is a mechanism by which two separate private clouds are linked together so traffic can pass between them. We use it to link staging and production networks to a central shared management network, for example, but other use cases include allowing vendors to join their networks with clients’ to enable a private exchange of traffic between them.

Until now, when working with customers who require a presence in multiple regions, we have had to build and configure VPN networking infrastructure to support it, which also needs monitoring, patching and so forth.

With inter-region VPC peering, all that goes away: we’ll be able just to configure a relationship between two VPCs in different regions, and Amazon will take care of the networking for us, handling both security and availability themselves.
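In practice, setting up an inter-region peering should reduce to a handful of API calls. The boto3 sketch below shows the general shape (the VPC IDs and regions are placeholders): request the peering from one region, accept it in the other, then add routes on both sides.

```python
# Sketch: inter-region VPC peering with boto3. The VPC IDs and regions below
# are hypothetical placeholders.
import boto3

ec2_ireland = boto3.client("ec2", region_name="eu-west-1")
ec2_virginia = boto3.client("ec2", region_name="us-east-1")

# Request the peering from the eu-west-1 side, pointing at a VPC in us-east-1.
peering = ec2_ireland.create_vpc_peering_connection(
    VpcId="vpc-0aaa111122223333a",        # requester VPC (eu-west-1)
    PeerVpcId="vpc-0bbb444455556666b",    # accepter VPC  (us-east-1)
    PeerRegion="us-east-1",
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept it from the other region; AWS then handles the cross-region link.
ec2_virginia.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route tables on both sides still need entries pointing at the peering
# connection (ec2.create_route with VpcPeeringConnectionId=...).
```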

GuardDuty makes its debut at Re:Invent

AWS also debuted a new threat detection service for its public cloud offering, called GuardDuty, which customers can use to monitor traffic flow and API logs across their accounts.

This lets users establish a baseline for “normal” behaviour within their infrastructure, and watch for security anomalies. These are reported with a severity rating, and remediation for certain types of events can be automated using existing AWS tools.
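Getting started is deliberately lightweight. As a hedged sketch, enabling a detector and pulling back its findings with boto3 looks something like this:

```python
# Sketch: enable GuardDuty in an account and list any findings it has raised.
import boto3

guardduty = boto3.client("guardduty", region_name="eu-west-1")

# One detector per account per region; enabling it starts the analysis of
# VPC flow logs, CloudTrail events and DNS logs.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Pull back finding IDs, then the findings themselves (each carries a
# severity score and details of the suspicious activity).
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for f in findings["Findings"]:
        print(f["Severity"], f["Type"], f["Title"])
```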

Last year, AWS announced Shield, a managed DDoS protection service made available, for free, to all AWS customers, with CTO Werner Vogels acknowledging that this is something Amazon should have provided a long time ago.

AWS employees often say that security is job zero, and that if they don’t get security right, then there’s no point doing anything else. It’s no surprise therefore that we’re seeing more security focused product releases this year.

AWS GuardDuty is a welcome announcement for both customers and systems integrators. The incumbent vendors in this space offer clumsy solutions, based on past generations of on-premise hardware appliances.

These had the luxury of connecting to a network tap port where they could passively observe and report on traffic as it went by, without impacting on the network or host performance.

Since network taps aren’t available in the cloud, suppliers have had to resort to host-based agents that capture and forward packets to virtual appliances, affecting host performance and bandwidth bills.

AWS GuardDuty lives in the fabric of the cloud itself, and other vendors will find it hard to compete with this level of access.

It’s likely that over time, existing security vendors will pivot their business model further towards becoming AWS partners, adding value to Amazon services rather than providing their own – a move we’ve seen from traditional hosting providers such as Claranet and Rackspace over the years.

EKS: Kubernetes container support on AWS

In the last three years, Kubernetes has become the de facto industry standard for container orchestration, a major industry hot topic, and an important consideration in the running of microservices architectures.

This open source project was created by engineers at Google, who based their solution on their experiences of operating production workloads at the search giant.

Google has offered a hosted Kubernetes solution for some time as part of their public cloud offering, with Microsoft adding support for Azure earlier this year.

At Re:Invent 2017, AWS announced their managed Kubernetes offering, EKS (short for Amazon Elastic Container Service for Kubernetes).

While this announcement shows AWS playing catch-up against the other providers, research by the Cloud Native Computing Foundation (CNCF) shows that 63% of Kubernetes workloads were already deployed to the AWS cloud, by people who were prepared to build and operate the orchestration software themselves.

EKS will now make this much easier, with Amazon taking care of the Kubernetes master cluster as a service; keeping it available, patched and appropriately scaled.

Like its parent service, ECS, this is a “bring your own node” solution: users will need to provide, manage and scale their own running cluster of worker instances.

EKS will take care of scheduling workloads onto them, and provide integration with other Amazon services such as those provided for identity management and load balancing.
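Once EKS is generally available, standing up the managed control plane should come down to a single API call along these lines. This is a hedged boto3 sketch: the role ARN, subnet and security group IDs are placeholders, and the worker fleet is still provisioned separately by the user.

```python
# Sketch: creating an EKS control plane with boto3. Amazon runs and scales the
# Kubernetes masters; worker nodes are provisioned separately by the user.
# The role ARN, subnet and security group IDs are hypothetical placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

cluster = eks.create_cluster(
    name="prod-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksServiceRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
        "securityGroupIds": ["sg-0ccc3333"],
    },
)
print(cluster["cluster"]["status"])   # typically 'CREATING' at first

# A "bring your own node" worker fleet then joins the cluster, for example an
# EC2 Auto Scaling group running an EKS-optimised AMI.
```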

Expanding the container proposition with Fargate

Alongside EKS, CEO Andy Jassy announced another new container service: AWS Fargate. Potentially much more game-changing, Fargate users won’t need to provide their own worker fleet – these too will be managed entirely by AWS, elevating the container as a first class primitive on the Amazon platform, on a par with EC2 instances. Initially supporting just ECS, Fargate will offer support for Kubernetes via EKS during 2018.
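The practical difference shows up in the task definition: a Fargate task declares its own CPU and memory, and no EC2 container instances are registered at all. A hedged boto3 sketch, with the family name, image, roles and sizing invented for illustration:

```python
# Sketch: an ECS task definition targeting Fargate; no EC2 container instances
# are needed, AWS supplies the capacity. Family name, image, role ARN and
# sizing are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

ecs.register_task_definition(
    family="orders-api",
    requiresCompatibilities=["FARGATE"],   # run on AWS-managed capacity
    networkMode="awsvpc",                  # required network mode for Fargate
    cpu="256",                             # 0.25 vCPU
    memory="512",                          # MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "orders-api",
        "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders-api:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)
```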

It’s an exciting time for AWS users, who can now adopt the latest in container scheduling technology without the challenges of operating the ecosystem, leaving enterprise tech teams free to spend more of their valuable time on generating business value.


November 17, 2017  2:31 PM

Cloud AI in the enterprise: Making the security case

Caroline Donnelly
Artificial intelligence, cloud, Cloud Security

In this guest post, Ross Brewer, managing director and vice president for Europe, Middle East and Africa at cybersecurity software supplier LogRhythm, makes the enterprise case for using artificial intelligence (AI) in the fight against cybercrime.

Organisations face a growing number of increasingly complex and ever-evolving threats – and the most dangerous threats are often the hardest to uncover.

Take the insider threat or stolen credentials, for example. We’ve seen many high-profile attacks stem from the unauthorised use of legitimate user credentials, which can be extremely difficult to expose.

Organisations are under growing pressure to detect and mitigate threats like these as soon as they arise, and this is only going to increase when the much talked about General Data Protection Regulation (GDPR) comes into force on 25 May 2018.

Security teams have a critically important job here. They need to be able to protect company data, often without time, money or resources on their side. This means they simply cannot afford to spend time on extensive manual threat-hunting exercises or deploying and managing multiple, disparate security products.

Cloud AI for efficiency

The perimeter-based model of yesterday is insufficient for the mammoth task of protecting a company’s assets. Instead, we are starting to see a shift towards automation and the application of cloud-based Artificial Intelligence (AI), which is fast becoming critical in the fight against modern cyber threats.

In fact, a recent IDC report predicted that the AI software market would grow at a CAGR of over 39% by 2021, whilst separate research from the analyst firm stated that the future of AI requires the cloud as a foundation, with enterprise ‘cloud-first’ strategies becoming more prevalent over the same period.

The cloud is, without doubt, transforming security by enabling easy and rapid customer adoption, saving time and money, and providing companies with access to a class of AI-enabled analytics that are not otherwise technically practical or affordable to deploy on-premise.

Plug-and-play implementation lets security teams focus on their mission instead of spending valuable time implementing and maintaining a new tool.

What’s more, when deployed in the cloud, AI can benefit from collective intelligence and a broader perspective. Imagine incorporating real-world insight into specific threats in real time. This will advance the ability of AI-powered analytics to detect even the stealthiest or previously unknown threats more quickly, and with greater accuracy than ever before.

Using cloud AI to detect unseen threats

By combining a wide array of behavioural models to characterise shifts in how users interact with the IT environment, cloud-based AI technology is helping organisations pursue user-based threats, including signatureless and hidden threats.

Applying cloud-based AI throughout the threat lifecycle will automate and enhance entire categories of work, as well as enable increasingly faster and more effective detection of real threats. Take analytics, for example. Hackers are constantly evolving their tactics and techniques to evade existing protective and defensive measures, targeting new and existing vulnerabilities and unleashing attack methods that have never been seen before.

Cloud AI is beginning to play an important role in detecting these emerging threats. The technology is proactive and predictive, without the need for security and IT personnel to configure and tune systems, automatically learning what is normal and evolving to register even the most subtle changes in events and behaviour models that suggest a breach might be occurring.
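LogRhythm’s own analytics are proprietary, but the underlying idea of learning a baseline of normal behaviour and flagging deviations can be illustrated with a small scikit-learn sketch; the per-user features and thresholds below are invented purely for illustration.

```python
# Illustrative sketch only (not LogRhythm's implementation): learn a baseline
# of "normal" user activity and flag sessions that deviate from it.
# Feature choices and the contamination rate are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, mb_uploaded, distinct_hosts_touched, off_hours_ratio]
baseline = np.array([
    [3, 12.0, 2, 0.05],
    [4, 10.5, 3, 0.02],
    [2,  8.0, 2, 0.00],
    [5, 15.0, 4, 0.10],
    [3, 11.0, 2, 0.04],
])

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

# New sessions: one resembles the baseline, one looks like misused credentials
# (a burst of off-hours activity touching many hosts).
new_sessions = np.array([
    [4, 11.0, 3, 0.03],
    [40, 900.0, 25, 0.95],
])
for session, verdict in zip(new_sessions, model.predict(new_sessions)):
    print(session, "anomalous" if verdict == -1 else "normal")
```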

Cloud-based AI essentially helps security analysts cut through the noise and detect serious threats earlier in their lifecycle so that they can immediately be neutralised. It provides rapid time-to-value through cloud delivery, and promises to eliminate or augment a considerable number of time-consuming manual threat detection and response exercises. This allows security teams to drive greater efficiency by focusing on the higher-value activities that require direct human touch.


November 13, 2017  7:48 AM

Cloud in healthcare: What is holding up adoption?

Caroline Donnelly
cloud, Cloud adoption, NHS, Public sector

In this guest post, Darren Turner, general manager at healthcare-focused hosting provider Carelink, takes a look at why the healthcare sector has been slow to adopt cloud, and how NHS Digital’s burgeoning Health and Social Care Network (HSCN) could help speed things up.

With the NHS facing an estimated funding gap of £30bn by 2020, there is immense pressure on the health service to increase operational efficiency and cut costs.

Against this backdrop, one could make the argument that ramping up its use of cloud could help the NHS achieve some of these savings. However, research suggests – compared to other public sector organisations – the health service has been slow to adopt off-premise technologies.

So why has healthcare been slower than other sectors to embrace cloud?

Part of the reason can be traced back to the perceived security risks, particularly when it comes to public cloud providers, and constraints around location and sharing of Patient Identifiable Data (PID).

There is also, I think, a lack of trust in cloud performance and resilience, as well as concerns around the physical location of hardware. Furthermore, there is a shortage of the right skills, of willingness to embrace the necessary cultural shift, and of budget to cover the cost of migration.

Perhaps it’s the government’s cloud-first policy that presents the biggest hurdle. Introduced in May 2013, it advises all central government departments to prioritise the purchasing of cloud technologies over on-premise software and hardware during the procurement process.

Outside of central government, no mandate for adopting a cloud-first policy exists for local authorities or public sector healthcare providers, for example. Instead, they are merely “strongly advised” to adopt a similar thought process during procurements.

Even so, healthcare providers may feel under pressure to put everything in the cloud, or believe that public cloud is the only option, but that needn’t be the case. Instead, they should work with a trusted, technology-agnostic supplier to identify the best solution, or combination of solutions, for their organisation’s specific needs.

Hybrid services for healthcare providers

While there are no hard and fast rules, at present we find a common approach among healthcare providers is to favour a hybrid architecture, where hardware servers hosting legacy or resource hungry applications are mixed with virtual machines running less intensive services on the cloud.

With a hybrid approach, organisations can realise the efficiencies of virtualisation, through increased utilisation of compute resources, while being able to more closely control the availability of those resources across the estate.

For larger estates, particularly those with high storage volumes, the cost of a private cloud platform can compare favourably to the cost of a hyperscale public cloud.

Healthcare providers should, therefore, work with their network and infrastructure supplier to explore this cost comparison to ensure they get the best value for their money.

Whether opting for private or public cloud, multi-cloud or a hybrid offering, when it comes to entrusting a supplier with an incredibly valuable and irreplaceable asset – data – healthcare providers need to be sure it’s secure.

Buyers should seek accredited suppliers with a proven track record in providing secure solutions and protecting mission critical environments, ideally in a healthcare environment.

Driving healthcare cloud adoption

The rollout of NHS Digital’s Health and Social Care Network (HSCN), the data network for health and social care organisations that replaces N3, could spur cloud adoption in some parts of the healthcare sector, as it could make services easier to provision.

Health and social care organisations will increasingly be able to access a full range of HSCN compliant network connectivity and cloud services from one supplier, simplifying the procurement process.

With assurance that HSCN obligations and standards have been met, this will likely drive greater adoption of cloud services in the sector.

Indeed, we’re seeing cloud providers – ranging from the SME to hyperscale – setting up healthcare divisions and actively seeking suppliers to deliver HSCN connectivity.

Furthermore, HSCN could actually be the catalyst for driving cloud adoption as multi-agency collaboration develops, paving the way for healthcare organisations to deliver a more joined-up health and social care experience for the general public.


October 30, 2017  3:08 PM

Halloween horrors: When a datacentre migration takes a dark turn

Caroline Donnelly
Cloud migration, Cloud Security, datacentre, Troubleshooting

In this guest post, Drew Nielsen, chief trust officer at cloud-based backup provider Druva, shares a scary and hairy tale about how shortfalls in a company’s upkeep and maintenance schedule can result in some nasty surprises when datacentre migration time arrives.

Halloween is a time when scary stories are told and frightful fancies are shared. For IT professionals, it is no different. From stories about IT budgets getting hacked to pieces to gross violations of information security, and cautionary tales about how poor planning can lead to data disaster. Those of us who have worked in the technology sector long enough all have stories to chill the server room.

My own story involves a datacentre migration project. The CIO at the time had seen a set of servers fitted with additional blue LEDs, rather than the usual green and red ones. This seemed enough to hypnotise him into replacing a lot of ageing server infrastructure with new-fangled web server appliances. The units were bought, and the project was handed over for us to execute.

Now, some of you may already be getting a creeping sense of dread, based on not being able to specify the machines yourselves. Others of you may think that we were worrying unnecessarily and this is a fine approach to take. However, let’s continue the story…

Sizing up the datacentre migration

The migration involved moving more than 3,000 websites from the old equipment to the new infrastructure. With the amount of hardware required at the time, this was a sizeable undertaking.

Rather like redeveloping an ancient burial ground, we started by looking at the networking to connect up all these new machines. This is where the fear really started to take hold, as there were multiple networking infrastructures in place. Alongside Ethernet, there was Token Ring, HIPPI, ISDN and SPI all in place. Some of the networking and cables made up live networks, while some were cables left in place but unconnected. However, all five networks were not alone.

As in any large building, certain small mammals had seen fit to make their homes in the networking tunnels. With so much of the cabling left in place, these creatures had lived – and more importantly, died – within this cosy, warm environment. And the warmth had led to more than one sticky situation developing. What had originally been a blank slate for a great migration project rapidly became messy, slimy and convoluted. Finding skulls, desiccated husks and things in the cabling led to an aura of eldritch horror.

Our plucky team persevered and eventually put in place a new network, ready for servers to be racked. With the distinct aroma of dead rat still lingering, it was time to complete the job, but one further, scary surprise was still to come.

When the number of server appliances had been calculated, the right number of machines had been ordered, but this did not match the physical volume of machines that would fit in the racks. Consequently, it became impossible to fit the UPS devices into the racks alongside all the servers. Cue more frustration and an awful lot of curse words being uttered.

Avoiding your own datacentre migration disaster

So what can you learn from this tale of terror? Firstly, keep your datacentre planning up-to-date. This means keeping accurate lists of what you have, what you had, and how old kit was disposed of. Unless you like dealing with a tide of rodent corpses, working with facilities management to keep the whole environment clear is a must too.

Secondly, planned downtime can lead to more problems and these should be considered in your disaster recovery strategy. Assessing your data management processes in these circumstances might not be a top priority, compared to big unexpected errors or ransomware attacks, but a datacentre migration can always throw up unexpected situations.

A datacentre migration should not include points of no return early in the process. Equally, knowing how to get back to a “known good state” can be hugely valuable.

Thirdly, it can be worth looking at how much of this you do need to run yourself. The advent of public cloud and mobile working means that more data than ever is held outside the business. Your data management and disaster recovery strategy will have to evolve to keep up, or it will become a shambling zombie in its own right.


October 17, 2017  12:11 PM

Discussing DevOps: A toast to the T-shaped engineers

Caroline Donnelly
cloud, DevOps, Digital transformation

In this guest post, Steve Lowe, CTO of student accommodation finder Student.com, introduces the concept of T-shaped engineers and the value they can bring to teams tasked with delivering DevOps-led digital transformation.

In an increasingly complicated world, businesses are constantly looking at how they deliver safe, stable products to customers faster. They do this using automation to create repeatable processes, paving the way for iterative improvements throughout the software development lifecycle.

If you read articles on DevOps or agility, you will come across references to security working more closely with Dev or Ops, with Ops working more closely with Dev, or cross functional teams.

To me these are all the same thing. And the logical evolution is they will become the same team: a team of engineers with enough knowledge to solve problems end-to-end and invest themselves in the outcome. To achieve this, a cross-functional team is nice, but a team of T-shaped engineers is better.

Making the case for T-shaped engineers

It’s important to understand all of your engineers are already T-shaped. They will have different skills and experiences, including some not quite related to their primary role. This is certainly the case if they have previously worked at a startup.

Equally, if you take a look at your ‘go-to’ team members, they probably don’t just specialise in one technology, but also possess a good breadth of knowledge about other areas too.

There is a reason for this. When you have a good breadth of knowledge it allows you to solve problems in the best place, leading to simpler solutions that are easier to support.

Now imagine instead of having a few ‘go-to’ team members, you’ve developed your whole team into T-shaped engineers. Engineers who can use a multitude of skills and technologies to solve a problem, and aren’t constrained by the idea of “this is what I know, therefore this is how I’ll solve it”.

By developing T-shaped engineers, you end up with a better and more resilient team ‒ where holidays and sickness have less impact as someone can always pick up the work ‒ and (usually) an easier to maintain and manage technical solution.

Tapping up T-shaped talent

The real challenge, of course, is finding a large enough pool of T-shaped engineers. For some reason, in a world where solutions are almost always built using multiple technologies, we have developed extreme niches for our engineers.

And while that drives great depth of knowledge on a subject, most problems no longer require that depth of knowledge to solve ‒ or the only reason that depth of knowledge is required is because your engineers don’t know a simpler solution using different technology.

The only solution to this challenge that I’ve found so far is to ‘grow your own’. Find people willing to learn multiple technologies, make it the primary requirement for your hiring, encourage cross learning, and support your team with goals that reward developing a breadth of knowledge.

Scale and right size the team

But let’s suppose I want a bigger team so I can go even faster: how do I scale when my team is cross-functional?

First, it’s important to make sure having a bigger team will actually make you go faster. If you have a small code base, more engineers might actually just get in each other’s way and you’ll have to work on better prioritisation to make the best use of your team.

Assuming you can bring in more engineers to make a difference, dividing and managing your team requires microservice architecture. If your system is engineered as a group of microservices, then you can separate your microservices into functional areas and then build teams that own functional areas.

There are several benefits around not dividing engineers by technical expertise. First, your team is still focused on an end-to-end solution. There is no more ‘Dev Complete’ ‒ it’s either ready for your customers or it needs more work.

Second, as long as you keep your interfaces between functional areas strong and backwardly compatible, different teams are free to solve their problems with the best technology for the purpose.

This gives nice separation and makes it easier to empower teams and reduce the coordination overhead. It also gives you a good answer as to how many engineers you need to run, develop and maintain your product.

They are simply a function of the number of functional areas that make sense for your product. For the smallest number of engineers, it’s the minimum number of functional areas that makes sense (usually one).

If the question is how to go faster and how many engineers you need, it comes down to efficient team size and how many sensible functional areas you can divide your software into.

Why the timing is right

To understand why the timing is now right, and why this wasn’t always good practice, we need to step back and look at the overall picture.

Cloud services and APIs are everywhere, and lots of us are now integrated with a number of services. Infrastructure as Code is a reality and open source is now widely accepted as a must-have building block in most technology situations.

This combination means, to build a modern piece of software, depth of knowledge of any given technology is less important than breadth of knowledge in most cases.

Lots of the complexity is abstracted away, and building most of your solution will involve integrating a mix of open source and third-party services in new and interesting ways to solve a specific problem.

This allows engineers to focus on building a breadth of knowledge rather than a deep focus on one specific technology, so they can solve your problems efficiently.

