Ahead in the Clouds


January 17, 2018  4:07 PM

Cloud anarchy in the UK: Here’s how to beat it

Caroline Donnelly
cloud, Cloud adoption, Shadow IT

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation, shares his thoughts on what enterprises can do to address the anarchy in their cloud deployments.

Just over 40 years ago the Sex Pistols released their first single, Anarchy in the UK. Today we are experiencing anarchy of a different kind in some of the UK’s largest businesses.

Putting production workloads into the cloud has, on the face of it, never seemed so easy. The accessibility and ease of consumption of cloud-based services has unleashed a rush to the cloud by many large enterprises looking to take on their nimbler ‘born in the cloud’ competitors.

But there is growing evidence that many enterprises are struggling with cloud anarchy because they are not fully in control of their journey off-premise. And, without a comprehensive roadmap in place for enterprise-wide cloud adoption, cloud chaos ensues.

Shadow IT-induced anarchy

In a cloud equivalent of the Wild West, shadow IT is a major cause of cloud anarchy facing enterprises today.

It’s not unusual for employees to become frustrated by the IT department’s seemingly slow progress and subscribe to a SaaS offering themselves without considering the impact this decision will have on the rest of the business.

Other problems arise when boards rush to embrace cloud without having defined a comprehensive vision and strategy that takes into account existing business processes, employees’ skills, company culture, and legacy IT infrastructure.

While a cloud-first approach might get off to a cracking start, without that clear company-wide vision and strategy, it is destined to lose momentum fast.

The chaotic environments resulting from these ad-hoc approaches have far-reaching consequences for an organisation’s corporate governance, purchasing, and IT service integration processes.

Good cloud governance

Where governance is concerned, it is unlikely there will be full visibility of what cloud services are being consumed where, and whether appropriate controls and standards are being met.

This problem is exacerbated in highly-regulated industries, such as financial services, where organisations are required to demonstrate they are: mitigating risk, managing IT security appropriately, managing audits and suppliers effectively, and putting appropriate controls in place to ensure compliance with regulations around data sovereignty and privacy such as the EU GDPR.

Financial services firms also need to demonstrate they are managing material outsource risks effectively, in order to comply with FCA regulations.

The uncontrolled purchase and use of SaaS or PaaS services without the appropriate level of IT engagement will also throw up a whole raft of integration, visibility and support headaches.

‘Technology snowflakes’ are another cause for concern. These occur when the same problem is being solved in different ways by different teams, which leads to IT support inefficiencies and additional costs.

Enterprises need to factor in some of the other financial implications of cloud anarchy too. These include a fragmented procurement process that makes it difficult to cut the best deal, as well as questions over how teams consuming their own cloud services manage their budgets in the context of consumption-based services.

Embracing a cloud-shaped future

With a clear cloud strategy underpinned by appropriate controls, everyone will have the tools they need to innovate faster. The final piece in the puzzle is to ensure employees are fully engaged, and have the skills required to take advantage of this new approach and tools.

This requires building a company culture that embraces the cloud in a structured way, and promptly plugging any skills gaps in your employees’ knowledge.

With the Sex Pistols’ anthem still ringing in my ears, it occurs to me that Johnny Rotten was half right when he screamed the immortal lines: “Don’t know what I want, but I know how to get it”.

With cloud adoption, it’s important that everyone within the business pogos to the same tune – and that there is agreement up front on what is required.

Without a strong cloud vision and strategy, it’s impossible to know where you’re heading, how you’re going to get there, and when you’ve arrived.

January 15, 2018  11:43 AM

The art of finding (and fixing) cloud faults

Caroline Donnelly
Cloud outages, Latency, performance

In this guest post, Ron Vermeulen, go-to-market manager for north-west Europe at IT services provider, Comparex, runs through the process of finding and fixing cloud faults

There is no doubt that cloud computing offers huge benefits to organisations, but CIOs must accept and manage the potential barriers to realising its value.

Service faults and latency issues can prove problematic, for example, when the application in question is business-critical. They can also cost organisations time and money, and have a negative impact on the end-user experience.

Pinpointing where a performance issue occurs in the first place can also be a challenge.

When on-premise IT infrastructure was de rigueur, it was far easier for organisations to find the source of the problem, which could be down to a misbehaving server in the datacentre, for instance.

It’s not so simple today, because ‘your’ public cloud server is now in someone else’s facility, and the difficulty is compounded because the glitch could be closer to home, rather than the fault of the service provider.

A cloud service might be performing fine, but a network problem could be causing issues at ‘home’. A managed service can often help to lessen this headache by identifying, on behalf of the organisation, where the problem lies in the first place.

Finding cloud faults and fixing them

Fixing the fault is the next hurdle. If the problem is with a supplier’s services (rather than in-house) then another complication is added.

Different service level agreements (SLAs) for fixing a fault are in place with every cloud supplier, and managing the various terms and conditions is a mammoth task.

SLAs governing ‘time-to-repair’ can vary greatly – up to 30 hours in some cases. For a business-critical application this is an unacceptable timeframe.

Organisations can pay for a higher level of SLA to guarantee a rapid fix time, but this is rarely factored into their initial cloud costs. As such, organisations can end up paying more than expected just to keep the lights on.

The flexibility and agility of cloud still make it the first choice for lots of organisations, but when it comes to management, many IT teams have essentially relinquished control over support and maintenance.

It is critical organisations retain visibility across their IT infrastructure and ensure individual SLAs meet the specific needs of their organisation.

Take back control of cloud

Regaining control of a cloud deployment can be achieved by adding an overarching management layer that offers visibility, or engaging a managed service to help implement this.

This means, rather than relying on a vendor to analyse a support ticket, the analysis can begin at home.

Pinpointing an issue can be done in as little as 30 minutes using tools and services available today. A sophisticated management layer or service can also spot a problem before it happens, offering an even greater level of control because issues can be fixed proactively.

This level of visibility into cloud is a ‘must have’, not just a ‘nice to have’ – particularly as convoluted IT infrastructures become commonplace.

The shift to multi- and hybrid-cloud installations, pointed out by Gartner, is one example of this increasing complexity. The cloud ‘stack’ no longer just encompasses software, infrastructure and platform services, but can be made up of six interlocking layers.

Ultimately, ‘out of sight, out of mind’ is not a viable approach to cloud. Ensuring seamless performance and round the clock availability can only be achieved by retaining visibility and control.


January 11, 2018  12:38 PM

Choosing the right public cloud provider: A checklist

Caroline Donnelly

In this guest post, Gordon Grosse, director of technical services at IT support and datacentre managed service provider MCSA, offers advice on how to choose the right public cloud provider for your business.

It is common knowledge that moving to the cloud can cut business costs, streamline workflows, eliminate the need for hardware and reduce requirements for in-house IT personnel.

The cloud can also make it easier for people to work remotely, collaborate, back up their data and ensure business continuity.

But, with such a wide variety of cloud services and applications for businesses to choose from, how should companies set about sourcing ones that are right for them?

Part of the process should include looking beyond what the public cloud giants (Amazon, Google, and Microsoft) have to offer, because there are a whole host of lesser known names whose technologies may fit the bill.

Similarly, companies are under no obligation to use the services of a single provider. As such, they should consider adopting a hybrid or multi-cloud approach. This may prove beneficial from a cost, manageability, and business continuity perspective, for example.

Ways to pay in public cloud

Companies must also bear in mind that there are a number of ways to pay for cloud services, and these agreements need to take account of the fluctuations in your business.

Some cloud providers demand large upfront costs or make customers pay a premium should they exceed certain data limits.

The pricing scheme should be pay-as-you-go from the outset, with the ability to add services as needed, and can be charged hourly, monthly, bi-annually or annually, depending on the vendor.

It is also worth requesting a copy of the service level agreement (SLA) to check what the pricing is based on, in case there are additional costs for adding or deleting files, in the case of cloud storage deployments.

Security assessments

Cloud security is a focus for all IT and business managers when weighing up the pros and cons of using public cloud. Before leaping in, it is advisable to get up to speed with the latest guidance on data security and governance and what measures are in place should an incident or outage occur.

The location and security of the provider’s datacentres, where your company’s information will be stored, is also a critical consideration for many organisations.

In addition to finding out how they would deal with malicious attacks, it’s important to know how a cloud provider protects its datacentres – and therefore your company data – from natural disasters, including fires, floods, earthquakes and storms.

Public cloud provider assurances

Any provider needs to offer technical support around the clock. You need to ascertain what their average response and resolution times are, and whether you’ll be speaking to knowledgeable engineers or customer service reps.

A formal agreement with stated SLAs is strongly advised, as it ensures you have full visibility of the risks to your business. It should also set out what measures are available to ensure business continuity, in the event of a system problem or failure.


January 9, 2018  4:09 PM

Meltdown and Spectre: Making a case for greater public cloud use

Caroline Donnelly
Amazon, Cloud Security, Google, Meltdown, Microsoft, Public Cloud

The Meltdown and Spectre CPU vulnerabilities constitute the greatest test yet of the public cloud provider community’s data security claims, says Caroline Donnelly, while providing enterprise IT departments with plenty to get their teeth into.

When details of the Meltdown and Spectre processor flaws emerged last week, the threat they posed to public cloud users was the stuff of enterprise IT department nightmares.

According to the researchers who uncovered the bugs, both could be used by malicious individuals to extract and read data stored in the memory of applications. And – in the case of the cloud – make it possible for customers on the same platform to access each other’s data.

The big hitters of the public cloud community (Amazon Web Services, Google and Microsoft, to name a few) mobilised quickly in the face of this theoretical threat – to patch their systems and assure users.

First mover advantage on Meltdown

Google, whose Project Zero team are among the research groups credited with bringing Meltdown and Spectre to the world’s attention, said its efforts to mitigate the threats began as far back as June 2017.

It is anyone’s guess how much warning Amazon and Microsoft were given to prep their defences, but both confirmed their systems were in the throes of being patched (or already had been) within hours of the flaws first becoming public knowledge.

That’s a speed of response few (if any) private-enterprise datacentre operators could ever hope to achieve. It is also fair to assume not many would have been afforded the luxury of a seven-month head start on addressing these vulnerabilities either.

Amazon, Google and Microsoft have all previously made the point that few enterprises can afford to spend as much as they do on cloud security.

For that reason, they claim data stored in their clouds is better protected than the information enterprises leave languishing in private datacentres.

It’s a declaration that is difficult to argue against, particularly when you consider the high calibre of info-security staff the tech giants tend to attract too.

What the response of the Big Three to Meltdown and Spectre shows the enterprise (particularly the firms still holding out on cloud) is that Amazon, Google and Microsoft’s cloud security claims are not all talk.

These companies had their patches created, tested and deployed by the time most enterprise datacentre operators were still probably trying to tell Meltdown and Spectre apart.

Monitoring the Meltdown

Outside of the cloud, enterprise IT departments still need to make sure their on-premise servers, PCs, and Apple Mac devices are patched and protected from Meltdown and Spectre.

“Unfortunately for IT staff, this is not a ‘one and done’ solution, as there are knock-on impacts from patching the vulnerabilities,” Craig Lodzinski, developing technologies lead at IT infrastructure provider Softcat, tells Ahead in the Clouds (AitC).

Indeed, industry estimates suggest the first Meltdown patch, for example, could degrade CPU performance by as much as 30%.

“I’d envisage the ripple effect will continue for months, with IT staff having to tune, redeploy resources and assess performance, as well as responding to myriad tickets concerning application and hardware performance, both real and psychosomatic,” he added.

For cloud users, any unexpected change to their organisation’s CPU performance and usage patterns is concerning, particularly if it could cause the cost of their cloud bills to rise.

Some users may decide a bigger cloud bill is a small price to pay for the peace of mind that comes from knowing their data is protected from Meltdown and Spectre. Others, however, may feel it should be considered a cost of doing business by the provider, and covered by them accordingly.

In a statement to AitC, Amazon said no “meaningful” impact on CPU performance had been observed since its patching efforts commenced.

“There may end up being cases that are workload or OS specific that experience more of a performance impact. In those isolated cases, we will work with customers to mitigate any impact,” the AWS spokesperson added.

Patching things up

Google also claims to have seen a “negligible” impact on the performance of the workloads running on its cloud infrastructure, and described the patching process in a blog post as being “uneventful”.

Even so, the search giant is urging users to treat any online reports of performance degradation with a pinch of salt.

“We designed and tested our mitigations for this issue to have minimal performance impact, and the rollout has been uneventful,” it added.

That may only hold true for the time being, though, with Lodzinski suggesting the fallout from Meltdown and Spectre will be playing out in the enterprise for a long time to come.

“With no attacks in the wild (at time of writing), what the future holds is anyone’s guess,” he said. “What we do know is that good patching and strong cyber-security practices by IT teams are the cornerstone in mounting a strong defence,” he added.


January 4, 2018  3:32 PM

OpenStack vs the rest of the public cloud: Where next from here?

Caroline Donnelly
AWS, Google, Microsoft, OpenStack, Private Cloud, Public Cloud

In this guest post, Rob Greenwood, technical director at Manchester-based cloud and DevOps consultancy Steamhaus weighs up the pros and cons of using OpenStack over other public cloud platforms.

Just over a year ago, Cisco confirmed the closure of its OpenStack-based public cloud. Shortly after, several smaller datacentres and cloud providers, whose architecture relied on it, closed down, fell into administration or downsized their use of it.

These types of announcements always trigger predictions that OpenStack is on its way out, despite the fact it still powers more than 60 public cloud datacentres around the world.

Regardless of whether you’re a champion of the platform or not, there are some attractive OpenStack alternatives available on the market right now. Namely, Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS), which have all gained a significant share of the public cloud market since OpenStack arrived on the scene.

Each of these platforms has its pros and cons, particularly when it comes to issues such as vendor lock-in, scalability and technical support.

Locked in for life?

The proprietary vendor vs open source debate has raged for some time, with OpenStack advocates claiming their platform offers greater freedom than using any of its public cloud contemporaries.

Personally, I think people get too hung up on the vendor lock-in issue, because, in reality, you’ll always be tied to something – whether that’s a vendor or not. For example, your platform will often be tied to a piece of code regardless of where it’s hosted. And whenever you want to move from one platform to another, there will always be some work required in that migration.

Regardless of your preference however, you should always be tracking the way the market is moving and be prepared to respond accordingly.

The OpenStack ease of use issue

While you do get flexibility with OpenStack, you’re also dealing with lots of moving parts which require skilled engineers who understand its inner workings. Unfortunately, these people are few and far between.

Unless organisations have a highly-skilled engineering team in place, it can quickly become a minefield. It requires a lot of effort to keep stable and debug, and the learning curve is steep.

With that in mind, organisations need to decide if investing significant amounts of capital into building out and maintaining their own kit is something they want to do, especially when compared to the opex public cloud model.

Where next for OpenStack?

Over the last few years, Kubernetes has quickly become the established orchestration tool for managing containers. That layer of abstraction means we can control our containers regardless of the environment they are hosted in – be that in Google, AWS, Azure, on-premise or a combination of all four.

Serverless offerings, such as AWS’ Lambda, are also making it easier to deploy and scale applications without having to worry about the supporting infrastructure.

Both these solutions are changing the cloud landscape and reducing the reasons you would maintain your own OpenStack environment.

The momentum certainly seems to be moving in the direction of public cloud. That said, I’m not joining the doomsayers – those individuals who seem to relish the demise of any technology.

I don’t believe this is the beginning of the end for OpenStack, and I’ve no doubt it will find its own niche. The Global Passport, an OpenStack initiative designed to bring disparate OpenStack clouds together, may help it carve out a new path.

But, eight years have passed since OpenStack was hailed as a revolution, so it’s time to acknowledge the market has changed — and appealing alternatives exist.


December 14, 2017  10:51 AM

G-Cloud 10 delayed: The pros and cons for public sector cloud users

Caroline Donnelly
Cloud Computing, G-Cloud, Public sector

In this guest post, Rob Anderson, principal analyst for central government at market watcher GlobalData, takes a look at the implications of delaying the launch of G-Cloud 10 by up to a year.

Making in-roads into the public sector can be a precarious task for IT providers, and does not get any easier when the procurement rules of engagement seem to keep changing.

So when the Crown Commercial Service (CCS) recently confirmed the ninth iteration of G-Cloud could remain in place for up to 12 months, it was understandable that suppliers raised concerns.

According to CCS, the extension will give it more time to radically transform the framework for the benefit of both users and suppliers.

Similar extensions were announced in October for both Digital Outcomes and Specialists 2 (DOS 2) and Cyber Security Services 2 (Cyber 2).

Yet eighteen months ago, suppliers were told CCS (alongside the Government Digital Service) was undertaking a new discovery process, and that the G-Cloud 9 design would go ‘back to basics’ to deliver improvements for all.

A number of tweaks were made, including the removal of one lot, the elimination of overlapping iterations and an increase in the management levy. These changes, however, were not enough to stimulate significant growth in throughput.

CCS claims, with some degree of credibility, the frameworks have helped drive up cloud adoption and expanded the pool of SME suppliers public sector organisations can buy from, but more of the same will not further increase the volume or value of transactions.

To that end, delaying the rollout of G-Cloud 10 is a step that should be welcomed.

G-Cloud 10 delays: The pros and cons

Where some will see the hiatus as a benefit (if it results in a more attractive and flexible route to market), others will be unhappy with the change in G-Cloud release cadence. In a market accustomed to regular Digital Marketplace updates, a year’s delay may seem like an eternity.

There is also a danger that delaying G-Cloud 10 will slow the momentum of SMEs building their businesses to reach the 33% market share the government professes to want.

Of greater concern for CCS and suppliers is the continuing antipathy of the wider public sector towards the Digital Marketplace as a whole; around 90% of spend has come from Whitehall departments and metropolitan councils.

To be recognised as a true aggregator of commodity IT products and services, the buying agency needs to attract more buyers from a broad cross-section of public service delivery organisations.

CCS is also under pressure from its Cabinet Office paymasters, who are still awaiting the scale of savings long since promised from the rolling review of frameworks.

Yet the unit continues to adopt a Field of Dreams “build it and they will come” mindset, rather than listening to user needs and adapting accordingly.

One must hope the CCS uses the breathing space to address those needs to create an enhanced Digital Marketplace and a firm foundation for the broader Crown Marketplace to be built upon.

Whatever the future holds for CCS, the outcome will have a profound effect on both public sector buyers and suppliers.


December 13, 2017  10:32 AM

Combining AI, automation & cloud is key to improving workplace productivity

Caroline Donnelly
ai, Automation, cloud, Machine learning, SAP

In this guest post, Darren Roos, president of SAP ERP Cloud, explains why automation, AI, machine learning and cloud are key to improving workplace productivity.

If you take productivity at its most basic – as a measure of how much output you get for how much you put in – then it follows that to increase it you must remove as much unnecessary strain as possible from the input cycle.

Over the past decade, cloud computing has established itself as a key tool for helping companies streamline workflows and, by allowing multiple users to work remotely on the same data at once, boost productivity.

But, just as productivity is effectively a measure of cause and effect, there’s been a negative – or at least stultifying – impact as a result of this enthusiastic uptake of cloud technology too.

Managing the process of digitisation through the cloud, particularly when done at increasingly large scale, has become unsustainably complex. Processes become jumbled, admin-heavy, and beset by unavoidable human error.

The somewhat ironic result is of course a slowdown in productivity. There’s a growing sense, however, that more can be done with this technology – and that we’re currently looking at just the tip of the cloud computing iceberg.

AI, automation and cloud?

Automation, artificial intelligence and machine learning have dominated the tech press over the past year, whether for beating humans at board games or stirring up fears of job replacement.

They’re actually already helping to streamline people’s daily lives in more ways than they might realise – whether at a one-click online checkout or in sending auto-generated email responses. When automation is applied well, the user won’t notice its presence at all.

And there’s no reason why the same logic shouldn’t apply to how organisations are managing their resources. In fact, it stands to reason that they absolutely should be looking at where automation can be applied.

Automation, bolstered by AI and machine learning, presents an opportunity to minimise the menial, repetitive, administrative tasks that can so often eat up an employee’s day. The result is more free time that can be put to use tackling more complex, thoughtful tasks that require actual intelligence.

Manual tasks have been automated since the industrial revolution, as humans find new roles and ways of working that are built on automation. While automation isn’t new, its scale and intelligence are.

What might you achieve with your day if you didn’t need to mindlessly wade through sales order data, capacity planning outlines, or cumbersome financial operations? The opportunity is immense, and it all comes back to productivity as people get more done during their work hours – and at a smaller overall cost to the business.

Combining the cloud

Beyond the mitigation of the immediate productivity issue, though, there are further benefits to be reaped from the integration of AI and cloud-based services.

Real-time insights, and the ability of computers to learn from them over time, open up the possibility of AI-informed service improvements.

Analysing enormous amounts of data generated on a daily basis, noticing patterns, and responding to them with suggestions or automatic adjustments is bread and butter for the AI and machine learning programs of today.

And they’re ripe for integration with the flexible, agile business approaches that have become commonplace through widespread cloud adoption. In fact, you might go as far as to argue that the two make ideal – if not essential – bedfellows for this exact reason.

In a hyper-connected world, it’s beyond the realms of acceptability for organisations to fall back on established, static practices.

Conventional wisdom indicates that failing to keep up with the pace of technological change will lead to both irrelevance and a decline to the point of non-existence. In fact, mere irrelevance may soon be considered getting off lightly.

It’s a Catch-22 situation: meeting increased, perhaps more ambitious, productivity targets will be key to survival. But to do that, employees need to be able to make the most of the time they spend at work.

The technology industry, at its heart, is driven by a desire to solve problems. I can’t think of one more valuable – or surmountable – for it to take on today.


December 8, 2017  12:00 PM

Cloud bursting: Examining the enterprise use cases and benefits

Caroline Donnelly
cloud, Cloud bursting, IaaS, Infrastructure, SaaS

In this guest blog post, Naser Ali, head of solutions marketing for EMEA at Hitachi Vantara, explores the use cases and benefits for using cloud bursting in the enterprise.

When NASA needed to process petabytes of data from its Orbiting Carbon Observatory 2 (OCO-2), it expected to wait 100 days and spend $200,000 on new datacentre hardware to make it work.

Instead it used the Amazon Web Services (AWS) cloud to achieve the same thing for $7,000 in less than six days, allowing NASA engineers to gain deeper insights into the Earth’s carbon uptake.

This ephemeral use of cloud computing, known as ‘cloud bursting’, offers the scientific community an effective, affordable solution for processing high-volume data spikes. But can it be applied in the more earthly business world? Indeed it can. In fact, we’re working with two large global banks that are doing just that.

Banks are the perfect use case for cloud bursting because they process large volumes of data about things like interest rate returns and risk weighted assets. This data isn’t personal, but requires frequent short bursts of compute power to meet compliance reporting requirements.

Most banks can’t justify the capital expense of investing in on-premise capacity to process data that only comes in intermittently knowing that hardware will mostly remain idle. On the other hand, running a 24/7 technology stack in the cloud racks up too much operating expense. Cloud bursting not only strikes the perfect balance between storage capacity and cost, it gives data scientists a space to ‘play’ and gain new insights.

Cloud bursting in practice

Put simply, the basic steps data science teams take when cloud bursting are: move data into a cloud, spin up some computing power, spin up some storage, process a tranche of data, shut it down and take the results home. But how does this look in practice? One banking customer’s cloud bursting approach follows these steps:

  • First it uses our data integration platform to automatically ship and load data to object storage in an AWS cloud, processing it in batches.
  • Next it runs a script to fire up Carte, which is a simple web server for remotely executing data transformations (converting data from one format or structure into another) using Amazon Elastic Map Reduce (EMR) in Hadoop or the Amazon Redshift data warehouse.
  • Once the data is cleaned, transformed and loaded into the temporary cloud, the data scientists can then “play” with the data by running ‘what-if’ scenarios on things like risk and interest rate return rates.

The greatest benefit of cloud bursting is agility, but there are hard cost savings as well. This bank had been spending 1 million on hardware and software for its risk reporting Hadoop cluster. We took a subset of its data, 12 million rows, and ran it in AWS for $2.20 and in Google for 50 cents. So you can process a pretty large chunk of data for very little cost and no upfront investment.

Another cost advantage is the ability to take advantage of ‘spot pricing’ on cloud capacity. This type of dynamic pricing saves customers money by letting them run some applications only when spot prices fall below a specified price point. It also allows users to quickly run large jobs by outbidding other customers for available capacity.
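To make the ‘spin up, process, shut down’ pattern more concrete, here is a minimal sketch, in Python with boto3, of a transient, self-terminating EMR cluster that processes a batch of data staged in object storage and bids for spot capacity. The cluster name, instance counts, bid price and S3 paths are illustrative placeholders, not details of the bank’s actual pipeline.

# Illustrative only: a transient EMR cluster that processes one batch of data
# from S3 on spot instances and terminates itself when the last step finishes.
import boto3

emr = boto3.client("emr", region_name="eu-west-1")

response = emr.run_job_flow(
    Name="risk-reporting-burst",
    ReleaseLabel="emr-5.11.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m4.large", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "SPOT",
             "BidPrice": "0.10",  # only run while spot capacity is this cheap
             "InstanceType": "m4.xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # shut down after the last step
    },
    Steps=[{
        "Name": "what-if-scenarios",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/risk_model.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Started transient cluster:", response["JobFlowId"])

Because the cluster is configured to terminate once its steps complete, the organisation only pays for the minutes of compute it actually uses.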

Critical success factors

Although I’ve used banks as an example, cloud bursting for data science is suitable for any industry with similar requirements. Here’s what our experience tells us that organisations need to be successful with this approach:

  • Agile data integration – choose a data integration tool that doesn’t need to be installed on every Hadoop cluster within the cloud. This is the key capability that allows you to spin up a cloud and process data in a matter of hours. Conversely, tools that require you to install software on multiple Hadoop nodes cancel out all the agility advantages of cloud bursting.
  • Strong DevOps – organisations that are good at DevOps, particularly continuous integration, will have a smoother ride when it comes to preparing data to be processed in cloud bursts.
  • Open standards environment – it’s very important that organisations abstract the data processing from the cloud environment and don’t get locked into a single cloud vendor. Open standards and open frameworks ensure that everything is moveable if necessary.
  • Plan for repatriation – what happens if an organisation is successful with cloud bursting but later needs to add secure data to its analysis? In this case the whole compute process and data would need to be repatriated back on premise. This requires having an analytics platform that supports a hybrid computing environment and ideally supports containers like Docker, which make it fast and easy to repatriate processes and data. This is essential in financial services, where regulatory compliance mandates that systems are in place to prepare for this potential situation.

One giant leap

Following in NASA’s lunar footsteps, banks’ first cloud bursting efforts have been encouraging, achieving multimillion-pound savings annually. These have largely come from reduced total cost of ownership and increased agility.

Of course the ‘giant leap’ organisations make by applying such a practical and cost-effective way to process data is that they minimise their exposure to financial risk.

The Financial Conduct Authority (FCA) has mandated that banks significantly raise their cash reserves and understand their risk exposures to avoid repeating the scenario that brought down Lehman Brothers and led to the RBS bail-out.

In a typical scenario where a bank has investments in many different financial institutions and companies, it takes just one entity to go bust and take down the others. In our era of economic turbulence, this ability to play with data in a temporary space and game different scenarios could affect a company’s very survival.


December 4, 2017  8:34 AM

AWS Re:Invent 2017 reviewed: The four product announcements tech teams need to know about

Caroline Donnelly
Amazon, AWS, Cloud Computing

In this guest post, Jon Topper, CTO of DevOps consultancy, The Scale Factory, flags his four favourite announcements from Amazon Web Services’ (AWS) week-long Re:Invent partner and customer conference, which took place last week in Las Vegas.

AWS Re:Invent has just wrapped up for 2017. Held in Las Vegas for over 40,000 people, the conference is an impressive piece of event management at scale, bringing together customers, partners and Amazon employees for five days of keynotes, workshops and networking.

I attended re:Invent 2016, and gave myself repetitive strain injury trying to live-tweet the keynotes, barely keeping up with the pace of new service and feature announcements.

This year’s show saw the cloud giant debut a total of 70 new products and services. I’ve picked out four of my favourites, and discuss below what they mean for enterprise technology teams.

Introducing Inter-region VPC peering

At the start of November, AWS announced it would now be possible for users of its Direct Connect service to route traffic to almost every AWS datacentre region on the planet from a single Direct Connect circuit.

On the back of this, I predicted other region restrictions would be lifted, and AWS came good on that expectation this week when it announced support for peering two virtual private clouds (VPCs) across regions at Re:Invent.

VPC peering is a mechanism by which two separate private clouds are linked together so traffic can pass between them. We use it to link staging and production networks to a central shared management network, for example, but other use cases include allowing vendors to join their networks with clients’ to enable a private exchange of traffic between them.

Until now, when working with customers who require a presence in multiple regions, we have had to build and configure VPN networking infrastructure to support it, which also needs monitoring, patching and so forth.

With inter-region VPC peering, all that goes away: we’ll be able just to configure a relationship between two VPCs in different regions, and Amazon will take care of the networking for us, handling both security and availability themselves.
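As a rough illustration of how little is involved, here is a sketch of requesting and accepting an inter-region peering connection with Python and boto3. The VPC IDs and regions are placeholders; in practice you would also add routes to each VPC’s route tables so traffic can flow across the connection.

# Illustrative sketch of inter-region VPC peering; no VPN appliances to build,
# monitor or patch, as AWS handles the networking in between.
import boto3

ec2_london = boto3.client("ec2", region_name="eu-west-2")
ec2_ireland = boto3.client("ec2", region_name="eu-west-1")

# Request a peering connection from a VPC in London to a VPC in Ireland
peering = ec2_london.create_vpc_peering_connection(
    VpcId="vpc-11111111",       # requester VPC (eu-west-2), placeholder ID
    PeerVpcId="vpc-22222222",   # accepter VPC (eu-west-1), placeholder ID
    PeerRegion="eu-west-1",
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Cross-region propagation can take a few seconds, so wait before accepting
ec2_ireland.get_waiter("vpc_peering_connection_exists").wait(
    VpcPeeringConnectionIds=[peering_id]
)
ec2_ireland.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)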

GuardDuty makes its debut at Re:Invent

AWS also debuted a new threat detection service for its public cloud offering, called GuardDuty, which customers can use to monitor traffic flow and API logs across their accounts.

This lets users establish a baseline for “normal” behaviour within their infrastructure, and watch for security anomalies. These are reported with a severity rating, and remediation for certain types of events can be automated using existing AWS tools.
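For a flavour of what that looks like in practice, here is a hedged sketch, using Python and boto3, of switching a detector on and pulling back high-severity findings. The severity threshold and the choice to simply print findings (rather than wire them into automated remediation via CloudWatch Events) are illustrative decisions of mine, not AWS defaults.

# Illustrative only: enable GuardDuty and review any high-severity findings.
import boto3

guardduty = boto3.client("guardduty", region_name="eu-west-1")

# One detector per account/region; it baselines VPC flow logs, DNS logs and
# CloudTrail activity, and raises findings scored by severity
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},  # high severity only
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for finding in findings["Findings"]:
        print(finding["Type"], finding["Severity"], finding["Title"])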

Last year, AWS announced Shield, a managed DDoS protection service made available, for free, to all AWS customers, with CTO Werner Vogels acknowledging this was something Amazon should have provided a long time ago.

AWS employees often say that security is job zero, and that if they don’t get security right, then there’s no point doing anything else. It’s no surprise therefore that we’re seeing more security focused product releases this year.

AWS GuardDuty is a welcome announcement for both customers and systems integrators. The incumbent vendors in this space offer clumsy solutions, based on past generations of on-premise hardware appliances.

These had the luxury of connecting to a network tap port where they could passively observe and report on traffic as it went by, without impacting on the network or host performance.

Since network taps aren’t available in the cloud, suppliers have had to resort to host-based agents that capture and forward packets to virtual appliances, affecting host performance and bandwidth bills.

AWS GuardDuty lives in the fabric of the cloud itself, and other vendors will find it hard to compete with this level of access.

It’s likely that over time, existing security vendors will pivot their business model further towards becoming AWS partners, adding value to Amazon services rather than providing their own – a move we’ve seen from traditional hosting providers such as Claranet and Rackspace over the years.

EKS: Kubernetes container support on AWS

In the last three years, Kubernetes has become the de facto industry standard for container orchestration, a major industry hot topic, and an important consideration in the running of microservices architectures.

This open source project was created by engineers at Google, who based their solution on their experiences of operating production workloads at the search giant.

Google has offered a hosted Kubernetes solution for some time as part of their public cloud offering, with Microsoft adding support for Azure earlier this year.

At Re:Invent 2017, AWS announced its managed Kubernetes offering, EKS (short for Amazon Elastic Container Service for Kubernetes).

While this announcement shows AWS playing catch-up against the other providers, research by the Cloud Native Computing Foundation (CNCF) shows that 63% of Kubernetes workloads were already deployed to the AWS cloud, by people who were prepared to build and operate the orchestration software themselves.

EKS will now make this much easier, with Amazon taking care of the Kubernetes master cluster as a service; keeping it available, patched and appropriately scaled.

Like its parent service, ECS, this is a “bring your own node” solution: users will need to provide, manage and scale their own running cluster of worker instances.

EKS will take care of scheduling workloads onto them, and provide integration with other Amazon services such as those provided for identity management and load balancing.
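The service was still in preview at the time of writing, but based on how the EKS API later surfaced in boto3, standing up a managed control plane looks roughly like the sketch below. The role ARN, subnets and security group are placeholders, and worker nodes still have to be provided and registered separately.

# A sketch of creating an EKS control plane; AWS keeps the Kubernetes masters
# available, patched and scaled, while worker instances remain your problem.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

cluster = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
    },
)
print(cluster["cluster"]["status"])  # e.g. CREATING

# Once the cluster is ACTIVE, you register your own worker instances and
# point kubectl at the new API endpoint.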

Expanding the container proposition with Fargate

Alongside EKS, CEO Andy Jassy announced another new container service: AWS Fargate. Potentially much more game-changing, Fargate means users won’t need to provide their own worker fleet – these too will be managed entirely by AWS, elevating the container to a first-class primitive on the Amazon platform, on a par with EC2 instances. Initially supporting just ECS, Fargate will offer support for Kubernetes via EKS during 2018.
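A hedged sketch of what that means day to day: launching a container through the ECS API with the Fargate launch type, and no EC2 worker fleet to size, patch or scale. The cluster name, task definition and network IDs below are placeholders.

# Illustrative only: run a container on Fargate; AWS provisions the capacity.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",            # no EC2 instances to manage
    taskDefinition="web-app:1",      # a task definition registered beforehand
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"],
            "securityGroups": ["sg-cccc3333"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])  # e.g. PROVISIONING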

It’s an exciting time for AWS users, who can now adopt the latest in container scheduling technology without the challenges of operating the ecosystem, leaving enterprise tech teams free to spend more of their valuable time on generating business value.


November 17, 2017  2:31 PM

Cloud AI in the enterprise: Making the security case

Caroline Donnelly
Artificial intelligence, cloud, Cloud Security

In this guest post, Ross Brewer, managing director and vice president for Europe, Middle East and Africa at cybersecurity software supplier LogRhythm, makes the enterprise case for using artificial intelligence (AI) in the fight against cybercrime.

Organisations face a growing number of increasingly complex and ever-evolving threats – and the most dangerous threats are often the hardest to uncover.

Take the insider threat or stolen credentials, for example. We’ve seen many high-profile attacks stem from the unauthorised use of legitimate user credentials, which can be extremely difficult to expose.

Organisations are under growing pressure to detect and mitigate threats like these as soon as they arise, and this is only going to increase when the much talked about General Data Protection Regulation (GDPR) comes into force on 25 May 2018.

Security teams have a critically important job here. They need to be able to protect company data, often without time, money or resources on their side. This means they simply cannot afford to spend time on extensive manual threat-hunting exercises or deploying and managing multiple, disparate security products.

Cloud AI for efficiency

The perimeter-based model of yesterday is insufficient for the mammoth task of protecting a company’s assets. Instead, we are starting to see a shift towards automation and the application of cloud-based Artificial Intelligence (AI), which is fast becoming critical in the fight against modern cyber threats.

In fact, a recent IDC report predicted the AI software market would grow at a CAGR of over 39% through to 2021, whilst separate research from the analyst firm stated that the future of AI requires the cloud as a foundation, with enterprise ‘cloud-first’ strategies becoming more prevalent over the same period.

The cloud is, without doubt, transforming security by enabling easy and rapid customer adoption, saving time and money, and providing companies with access to a class of AI-enabled analytics that are not otherwise technically practical or affordable to deploy on-premise.

Plug-and-play implementation lets security teams focus on their mission instead of spending valuable time implementing and maintaining a new tool.

What’s more, when deployed in the cloud, AI can benefit from collective intelligence and a broader perspective, maximising its effectiveness. Imagine incorporating real-world insight into specific threats in real-time. This will advance the ability of AI-powered analytics to detect even the stealthiest or previously unknown threats more quickly, and with greater accuracy than ever before.

Using cloud AI to detect unseen threats

By combining a wide array of behavioural models to characterise shifts in how users interact with the IT environment, cloud-based AI technology is helping organisations pursue user-based threats, including signatureless and hidden threats.

Applying cloud-based AI throughout the threat lifecycle will automate and enhance entire categories of work, as well as enable increasingly faster and more effective detection of real threats. Take analytics, for example. Hackers are constantly evolving their tactics and techniques to evade existing protective and defensive measures, targeting new and existing vulnerabilities and unleashing attack methods that have never been seen before.

Cloud AI is beginning to play an important role in detecting these emerging threats. The technology is proactive and predictive, without the need for security and IT personnel to configure and tune systems, automatically learning what is normal and evolving to register even the most subtle changes in events and behaviour models that suggest a breach might be occurring.
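To illustrate the underlying baselining idea in the simplest possible terms, here is a toy sketch in Python that learns each user’s ‘normal’ level of hourly activity and flags sharp deviations from it. It is only a thumbnail of the concept; real cloud AI platforms combine many such behavioural models at far greater scale, and this is not a description of any vendor’s implementation.

# Toy behavioural baselining: learn per-user "normal" hourly activity, then
# flag hours that deviate sharply from that learned baseline.
from statistics import mean, stdev

def build_baseline(history):
    """history maps user -> list of hourly event counts observed so far."""
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def flag_anomalies(baseline, current, threshold=3.0):
    """Return users whose current activity sits more than `threshold`
    standard deviations above their own baseline."""
    anomalies = []
    for user, count in current.items():
        avg, sd = baseline.get(user, (0.0, 1.0))
        score = (count - avg) / (sd or 1.0)
        if score > threshold:
            anomalies.append((user, round(score, 1)))
    return anomalies

history = {"alice": [12, 9, 14, 11, 10, 13], "bob": [3, 4, 2, 5, 3, 4]}
current = {"alice": 13, "bob": 41}  # bob's credentials may be being misused

print(flag_anomalies(build_baseline(history), current))  # [('bob', 35.8)]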

Cloud-based AI essentially helps security analysts cut through the noise and detect serious threats earlier in their lifecycle so that they can immediately be neutralised. It provides rapid time-to-value through cloud delivery, and promises to eliminate or augment a considerable number of time-consuming manual threat detection and response exercises. This allows security teams to drive greater efficiency by focusing on the higher-value activities that require direct human touch.

