Ahead in the Clouds

January 29, 2018  10:54 AM

Serverless vs. Microservices: What you need to know for cloud

Caroline Donnelly Profile: Caroline Donnelly
cloud, Microservices

In this guest post Neil Turvin, CEO at nearshore software development company Godel Technologies, shares his thoughts on how serverless computing is gaining ground in the delivery of cloud computing

Although still in its infancy, serverless computing shares some of the characteristics of microservices, but is very different in the way it delivers cloud computing. There are pros and cons to both, but serverless is becoming an increasingly attractive option for a number of reasons.

Pricing is the first differentiator. Serverless is pay-as-you-go, making it an attractive option for applications with infrequent requests or for startup organisations. It also reduces operational costs, because the infrastructure and virtual machines are handled by the service provider rather than managed directly on-premise or through IaaS or PaaS.

Scalability is also outstanding on serverless architecture – by its nature it can handle spike loads more easily, and scaling is managed automatically, quickly and transparently. The only caveat is the limit on the number of requests that can be processed, which makes it unsuitable for high-load systems.

That brings me to architectural complexities – which can be a mixed bag.

Serverless is even more granular than microservices, breaking functionality down into much finer-grained units. On the flip-side, that also creates much more complexity. It also has a number of other restrictions, such as limits on the number of requests and on operation duration, and support for fewer programming languages.

Its components are also often cloud-provider specific, which makes them problematic to change. Stateless by design, serverless must manage state strictly outside its functions – meaning no more in-memory cache.

Furthermore, serverless functions can also act as scheduled jobs, event handlers etc, and not just as services.
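To make that concrete, here is a minimal sketch of a stateless function, assuming an AWS Lambda-style Python handler and a DynamoDB table (the table name and attributes are hypothetical). Every invocation reads its state from the external store and writes it back, and the same handler could just as easily be wired to a queue event or a schedule as to an HTTP request.

    import boto3  # assumed AWS SDK; table name and attributes below are hypothetical

    TABLE = boto3.resource("dynamodb").Table("visit-counters")  # state lives outside the function

    def handler(event, context):
        """Stateless event handler: nothing held in memory survives between
        invocations, so state is read from and written back to an external store."""
        user_id = event["user_id"]
        item = TABLE.get_item(Key={"user_id": user_id}).get("Item") or {"user_id": user_id, "visits": 0}
        item["visits"] += 1
        TABLE.put_item(Item=item)
        return {"user_id": user_id, "visits": int(item["visits"])}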

Sorting through the serverless complexity

The level of granularity you get with serverless can also affect tooling and frameworks. This is because the higher the granularity, the more complicated integration testing becomes, making it more difficult to debug, troubleshoot and test.

The microservices approach, by comparison, is mature and well supported by tools and processes.

Time to market also comes into play. Due to the lightweight programming model and operational ease of a serverless architecture, time to market is greatly reduced for new features, which is a key driver for many businesses.

It also means prototypes can be quickly created for Internet of Things (IoT) solutions using Function as a Service (FaaS) for data processing to demonstrate a new technology to investors or clients.

Although microservices still provides a solid approach to service oriented architecture (SOA), serverless is gaining ground in event-based architecture and clearly has advantages in terms of reduced time to market, flexible pricing and reduced operational costs.

It’s unlikely for now that serverless will or should be the approach for every system – but watch this space as it matures. For now the best solution is a combination of both architectural approaches to help deliver and take advantage of the benefits the cloud brings to you and your customers.

January 24, 2018  3:48 PM

Unblocking the database bottleneck in enterprise DevOps deployments

Caroline Donnelly Profile: Caroline Donnelly
Automation, cloud, Database, DBA, DevOps

In this guest post, DevOps consultant and researcher Nicole Forsgren, PhD, shares her advice on what enterprises can do to overcome the database problem when scaling up their DevOps endeavours.  

High performing DevOps teams that have transformed their software delivery processes can deploy code faster, more reliably, and with higher quality than their low-performing peers.

They do this by tackling technical challenges with effective and efficient automation, adopting processes drawn from the lean and agile canon, and fostering a culture that prioritises empathy and information flow.

A key to these successful transformations is fast feedback loops, and integrating key stakeholders early in the development and delivery process.

In the earliest iterations of DevOps, the first key stakeholder was IT operations: taking feedback about maintainability and scalability of your code. You could frame it this way: learn why IT operations so often put up barriers to code deploys, address that feedback sooner in the pipeline, and continue to work together.

Once dev and ops are humming along smoothly, many teams find they still hit bottlenecks in their deployments, particularly as they scale. And the trend I’m hearing more and more often is that this bottleneck is happening at the database.

Unfortunately this is not a challenge you can turn a blind-eye to. It’s not going away anytime soon and the tooling you are using to automate the build and deployment of your apps will not solve this problem.

No matter how fast you can get your application releases going, it’s more than likely they’ll be waiting for changes to the database before they can be shipped. There’s no time like the present to bring data into DevOps as the next move toward shifting left.

The devil’s in the database

Teams and organisations are humming along and delivering code. Things start to get more exciting as applications start to scale. Under the covers, however, the database is a shared resource across dev, test, and production.

In addition, the database release process is usually manual, making it a carefully orchestrated process that is slow, risky, and prone to errors. The database administrators (DBAs) guard this process carefully, and with good reason.

Early on, performance is acceptable, but as your application continues to grow and scale, this database release process gets more difficult. Or perhaps your application is already at scale, and you are shipping code faster, when suddenly you find the application code outpacing your database schema changes. “Brute force” only works so long before your DBAs are burning out and just can’t keep up.

When faced with more requests for changes to the database than are feasible or safe to do, the DBAs respond by protecting their resource and saying “no.” Suddenly, database changes can take weeks, while the competing software releases are using continuous delivery practices and pushing to production daily or even hourly.

Addressing the issue

Solving the database constraint in DevOps takes a few forms, and includes culture and tools.

Let’s start with culture. You’ll want to start shifting your database work upstream into the development phase and find problems before they get into production. This is similar to the “shift left” emphasis we’ve seen in other critical areas that are often left to the end of the delivery pipeline, like security.

To truly shift data left, allow your engineers to follow the same process they use today for the app. Check your change scripts into source control, run automated builds, get notified when database changes ‘break the build’, and provide your engineers with the tooling they need to find and fix database problems long before they become an issue in production.
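As a rough illustration, a build step for database changes can be as small as the sketch below. It assumes change scripts live in a migrations/ directory under source control, and it uses sqlite3 purely as a stand-in for whatever disposable test database the pipeline provisions; a non-zero exit code is what ‘breaks the build’.

    import pathlib
    import sqlite3
    import sys

    # Apply every checked-in change script to a throwaway test database and fail
    # the build if any of them errors out. sqlite3 stands in for the real engine.
    def apply_change_scripts(script_dir="migrations"):
        db = sqlite3.connect(":memory:")
        for script in sorted(pathlib.Path(script_dir).glob("*.sql")):
            try:
                db.executescript(script.read_text())
            except sqlite3.Error as err:
                print(f"Database change '{script.name}' broke the build: {err}")
                return 1
        print("All database change scripts applied cleanly.")
        return 0

    if __name__ == "__main__":
        sys.exit(apply_change_scripts())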

As a database professional, give your engineering teams the tools they need to treat database code just like app code. This will equip them to know when the database is the problem, and teach them where and how to look for it. This work is important and can be shared upstream. These practices elevate DBAs to valuable consultants, not downstream constraints.

This may feel foreign at first, as the role of the DBA expands from building tools to collaborating and consulting across whole teams of people.

Where they were once protective and working on technology alone or with only other DBAs, they are now open and consultative with others. Where their work with the database was once the last stop in the chain of application release automation, this work is (hopefully) triggered much earlier in the pipeline.

Done right, this approach frees the DBA team from reviewing every change in every change script, so they can spend their valuable time on more important tasks such as patching and upgrading, performance tuning, data security, and capacity planning.

Expanding the reach of a DBA

Engineering teams usually start with just a few DBAs, so you have to scale the person — and this is done with technology and automation.

This helps teams and organisations work faster, protect their data, and increase productivity. Just as with any other part of the DevOps process, automation speeds up our work: As the application and the databases start to scale, DBAs will find themselves needing to scale their services.

Deploying 100 database servers is much faster (and more effective) with scripts compared to doing it manually.
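As a hedged sketch of what ‘with scripts’ can mean in practice, the loop below provisions a fleet of database instances on AWS with boto3; the identifiers, instance sizes and credentials are placeholders, and in a real pipeline the password would come from a secrets store.

    import boto3  # assumed AWS SDK; identifiers, sizes and credentials are placeholders

    rds = boto3.client("rds")

    # The same definition applied 100 times from a script: faster and far less
    # error-prone than building each database server by hand.
    for i in range(100):
        rds.create_db_instance(
            DBInstanceIdentifier=f"reporting-db-{i:03d}",
            DBInstanceClass="db.m4.large",
            Engine="postgres",
            AllocatedStorage=100,
            MasterUsername="dbadmin",
            MasterUserPassword="change-me",  # placeholder; use a secrets store in practice
        )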

Rolling out database schema changes is a high-risk move; using tooling and technology helps mitigate this risk in three ways. First, by introducing and testing these changes faster in the application development pipeline, you discover errors sooner, allowing teams to find and fix changes before they hit production.

Second, using automation provides traceable steps and verification if any steps need to be repeated or reversed. Third, automation increases productivity by creating repeatable processes that allow you to manage production and database schema migration.
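The sketch below shows one hedged way to get that traceability and reversibility: each schema change is a pair of ‘up’ and ‘down’ scripts, and every applied change is recorded in a version table so steps can be verified, repeated or rolled back. The directory layout and naming are assumptions, and sqlite3 again stands in for the production database.

    import pathlib
    import sqlite3

    # Traceable, reversible schema changes: every applied migration is recorded in
    # a version table, and each migration ships a paired "down" script so a step
    # can be verified, repeated or reversed.
    def migrate(db, direction="up", script_dir="migrations"):
        db.execute("CREATE TABLE IF NOT EXISTS schema_version "
                   "(name TEXT PRIMARY KEY, applied_at TEXT)")
        applied = {row[0] for row in db.execute("SELECT name FROM schema_version")}
        scripts = sorted(pathlib.Path(script_dir).glob(f"*.{direction}.sql"),
                         reverse=(direction == "down"))
        for script in scripts:  # e.g. 001_add_orders.up.sql / 001_add_orders.down.sql
            name = script.name.rsplit(".", 2)[0]
            if direction == "up" and name not in applied:
                db.executescript(script.read_text())
                db.execute("INSERT INTO schema_version VALUES (?, datetime('now'))", (name,))
            elif direction == "down" and name in applied:
                db.executescript(script.read_text())
                db.execute("DELETE FROM schema_version WHERE name = ?", (name,))
        db.commit()

    if __name__ == "__main__":
        migrate(sqlite3.connect("app.db"), direction="up")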

When you’re developing and delivering code with speed and stability, but you start to hit a database constraint, don’t panic. Just apply the same DevOps principles you used for software: focus on bridging culture with your database team, improving and streamlining process with your database team, and leveraging a smart investment in tooling and automation.

Author acknowledgements

Many thanks to Silvia Botros and Darren Pedroza for sharing their thoughts and experiences with databases in DevOps transformations. I would also like to thank Camille Fournier for reading an early draft of this post.

Also, for more detail, I suggest you check out Database Reliability Engineering, by Laine Campbell and Charity Majors. It is essential reading for those digging into databases and database administration in DevOps today.


January 17, 2018  4:07 PM

Cloud anarchy in the UK: Here’s how to beat it

Caroline Donnelly Profile: Caroline Donnelly
cloud, Cloud adoption, Shadow IT

In this guest post, Allan Brearley, cloud practice lead at IT services consultancy ECS and Tesco Bank’s former head of transformation shares his thoughts on what enterprises can do to address the anarchy in their cloud deployments

Just over 40 years ago the Sex Pistols released their first single, Anarchy in the UK. Today we are experiencing anarchy of a different kind in some of the UK’s largest businesses.

Putting production workloads into the cloud has, on the face of it, never seemed so easy. The accessibility and ease of consumption of cloud-based services has unleashed a rush to the cloud by many large enterprises looking to take on their nimbler ‘born in the cloud’ competitors.

But there is growing evidence that many enterprises are struggling with cloud anarchy because they are not fully in control of their journey off-premise. And, without a comprehensive roadmap in place for enterprise-wide cloud adoption, cloud chaos ensues.

Shadow IT-induced anarchy

In a cloud equivalent of the Wild West, shadow IT is a major cause of cloud anarchy facing enterprises today.

It’s not unusual for employees to become frustrated by the IT department’s seemingly slow progress and subscribe to a SaaS offering themselves without considering the impact this decision will have on the rest of the business.

Other problems arise when boards rush to embrace cloud without having defined a comprehensive vision and strategy that takes into account existing business processes, employees’ skills, company culture, and legacy IT infrastructure.

While a cloud-first approach might get off to a cracking start, without that clear company-wide vision and strategy, it is destined to lose momentum fast.

The chaotic environments resulting from these ad-hoc approaches have far-reaching consequences for an organisation’s corporate governance, purchasing, and IT service integration processes.

Good cloud governance

Where governance is concerned, it is unlikely there will be full visibility of what cloud services are being consumed where, and whether appropriate controls and standards are being met.

This problem is exacerbated in highly-regulated industries, such as financial services, where organisations are required to demonstrate they are: mitigating risk, managing IT security appropriately, managing audits and suppliers effectively, and putting appropriate controls in place to ensure compliance with regulations around data sovereignty and privacy, such as the EU GDPR.

Financial services firms also need to demonstrate they are managing material outsource risks effectively, in order to comply with FCA regulations.
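One small, concrete way to start restoring that visibility is to audit the estate against the controls you expect to see. The sketch below assumes an AWS estate and boto3, and flags resources missing a set of hypothetical required tags; the tag names are illustrative policy, not a prescription.

    import boto3  # assumed AWS SDK; the required tags below are hypothetical policy

    REQUIRED_TAGS = {"owner", "cost-centre", "data-classification"}

    # Walk the estate and flag resources that do not carry the tags the control
    # framework demands.
    tagging = boto3.client("resourcegroupstaggingapi")
    paginator = tagging.get_paginator("get_resources")

    for page in paginator.paginate():
        for resource in page["ResourceTagMappingList"]:
            tags = {t["Key"].lower() for t in resource.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{resource['ResourceARN']} is missing tags: {sorted(missing)}")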

The uncontrolled purchase and use of SaaS or PaaS services without the appropriate level of IT engagement will also throw up a whole raft of integration, visibility and support headaches.

‘Technology snowflakes’ are another cause for concern. These occur when the same problem is being solved in different ways by different teams, which leads to IT support inefficiencies and additional costs.

Enterprises need to factor in some of the other financial implications of cloud anarchy too. These include a fragmented procurement process that makes it difficult to cut the best deal, as well as questions over how teams consuming their own cloud services manage their budgets in the context of consumption-based services.

Embracing a cloud-shaped future

With a clear cloud strategy underpinned by appropriate controls, everyone will have the tools they need to innovate faster. The final piece in the puzzle is to ensure employees are fully engaged, and have the skills required to take advantage of this new approach and tools.

This requires building a company culture that embraces the cloud in a structured way, and promptly plugging any skills gaps in your employees’ knowledge.

With the Sex Pistols’ anthem still ringing in my ears, it occurs to me that Johnny Rotten was half right when he screamed the immortal lines: “Don’t know what I want, but I know how to get it”.

With cloud adoption, it’s important that everyone within the business pogos to the same tune – and that there is agreement up front on what is required.

Without a strong cloud vision and strategy, it’s impossible to know where you’re heading, how you’re going to get there, and when you’ve arrived.


January 15, 2018  11:43 AM

The art of finding (and fixing) cloud faults

Caroline Donnelly Profile: Caroline Donnelly
Cloud outages, Latency, performance

In this guest post, Ron Vermeulen, go-to-market manager for north-west Europe at IT services provider, Comparex, runs through the process of finding and fixing cloud faults

There is no doubt that cloud computing offers huge benefits to organisations, but CIOs must accept and manage the potential barriers to realising its value.

Service faults and latency issues can prove problematic, for example, when the application in question is business-critical. They can also cost organisations time and money, and have a negative impact on the end-user experience.

Pinpointing where a performance issue occurs in the first place can also be a challenge.

When on-premise IT infrastructure was de rigueur, it was far easier for organisations to find the source of the problem, which could be down to a misbehaving server in the datacentre, for instance.

It’s not so simple today, because ‘your’ public cloud server is now in someone else’s facility, and the difficulty is compounded because the glitch could be closer to home, rather than the fault of the service provider.

A cloud service might be performing fine, but a network problem could be causing issues at ‘home’. A managed service can often help to lessen this headache by identifying, on behalf of the organisation, where the problem lies in the first place.
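As a rough first-pass triage, it helps to time the stages of a request separately: slow DNS resolution or a slow TCP connection points at the network closer to home, while a quick connection followed by a slow response points at the service itself. The sketch below does exactly that against a hypothetical health-check endpoint.

    import socket
    import time
    import urllib.request

    # Time DNS lookup, TCP connect and a full HTTP request separately to help
    # localise a fault. The endpoint below is a hypothetical health-check URL.
    HOST = "status.example-cloud-service.com"
    URL = "https://status.example-cloud-service.com/health"

    t0 = time.perf_counter()
    addr = socket.gethostbyname(HOST)          # DNS resolution
    t1 = time.perf_counter()
    sock = socket.create_connection((addr, 443), timeout=5)  # TCP connect
    t2 = time.perf_counter()
    sock.close()
    urllib.request.urlopen(URL, timeout=10).read()           # full HTTP request
    t3 = time.perf_counter()

    print(f"DNS lookup:        {(t1 - t0) * 1000:.1f} ms")
    print(f"TCP connect:       {(t2 - t1) * 1000:.1f} ms")
    print(f"Full HTTP request: {(t3 - t2) * 1000:.1f} ms")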

Finding cloud faults and fixing them

Fixing the fault is the next hurdle. If the problem is with a supplier’s services (rather than in-house) then another complication is added.

Different service level agreements (SLAs) for fixing a fault are in place with each cloud supplier, and managing the various terms and conditions is a mammoth task.

SLAs governing ‘time-to-repair’ can vary greatly – up to 30 hours in some cases. For a business-critical application this is an unacceptable timeframe.

Organisations can pay for a higher level of SLA to guarantee a rapid fix time, but this is rarely factored into their initial cloud costs. As such, organisations can end up paying more than expected just to keep the lights on.

The flexibility and agility of cloud still make it the first choice for lots of organisations, but when it comes to management, many IT teams have essentially relinquished control over support and maintenance.

It is critical organisations retain visibility across their IT infrastructure and ensure individual SLAs meet the specific needs of their organisation.

Take back control of cloud

Regaining control of a cloud deployment can be achieved by adding an overarching management layer that offers visibility, or engaging a managed service to help implement this.

This means, rather than relying on a vendor to analyse a support ticket, the analysis can begin at home.

Pinpointing an issue can be done in as little as 30 minutes using tools and services available today. This offers an even greater level of control – a sophisticated management layer or service can actually spot a problem before it happens – so issues can be fixed proactively.

This level of visibility into cloud is a ‘must have’, not just a ‘nice to have’ – particularly as convoluted IT infrastructures become commonplace.

The shift to multi- and hybrid-cloud installations, pointed out by Gartner, is one example of this increasing complexity. The cloud ‘stack’ no longer just encompasses software, infrastructure and platform services, but can be made up of six interlocking layers.

Ultimately, ‘out of sight, out of mind’ is not a viable approach to cloud. Ensuring seamless performance and round the clock availability can only be achieved by retaining visibility and control.


January 11, 2018  12:38 PM

Choosing the right public cloud provider: A checklist

Caroline Donnelly Profile: Caroline Donnelly

In this guest post, Gordon Grosse, director of technical services at IT support and datacentre managed service provider MCSA, offers advice on how to choose the right public cloud provider for your business.

It is common knowledge that moving to the cloud can cut business costs, streamline workflows, eliminate the need for hardware and reduce requirements for in-house IT personnel.

The cloud can also make it easier for people to work remotely, collaborate, backup their data and ensure business continuity.

But, with such a wide variety of cloud services and applications for businesses to choose from, how should companies set about sourcing ones that are right for them?

Part of the process should include looking beyond what the public cloud giants (Amazon, Google, and Microsoft) have to offer, because there are a whole host of lesser known names whose technologies may fit the bill.

Similarly, companies are under no obligation to use the services of a single provider. As such, they should consider adopting a hybrid or multi-cloud approach. This may prove beneficial from a cost, manageability, and business continuity perspective, for example.

Ways to pay in public cloud

Companies must also bear in mind that there are a number of ways to pay for cloud services, and these agreements need to take account of the fluctuations in your business.

Some cloud providers demand large upfront costs or make customers pay a premium should they exceed certain data limits.

The pricing scheme should be pay-as-you-go from the outset, with the ability to add services as needed, and can be charged hourly, monthly, bi-annually or annually, depending on the vendor.

It is also worth requesting a copy of the service level agreement (SLA) to check what the pricing is based on, in case there are additional costs for adding or deleting files, in the case of cloud storage deployments.

Security assessments

Cloud security is a focus for all IT and business managers when weighing up the pros and cons of using public cloud. Before leaping in, it is advisable to get up to speed with the latest guidance on data security and governance and what measures are in place should an incident or outage occur.

The location and security of the provider’s datacentres, where your company’s information will be stored, is also a critical consideration for many organisations.

In addition to finding out how they would deal with malicious attacks, it’s important to know how a cloud provider protects its datacentres – and therefore your company data – from natural disasters, including fires, floods, earthquakes and storms.

Public cloud provider assurances

Any provider needs to offer technical support around the clock, 24/7. You need to ascertain what their average response and resolution time is and whether you’ll be speaking to knowledgeable engineers or customer service reps.

A formal agreement with stated SLAs is strongly advised, as it ensures you have full visibility of the risks to your business. It should also set out what measures are available to ensure business continuity, in the event of a system problem or failure.


January 9, 2018  4:09 PM

Meltdown and Spectre: Making a case for greater public cloud use

Caroline Donnelly Profile: Caroline Donnelly
Amazon, Cloud Security, Google, Meltdown, Microsoft, Public Cloud

The Meltdown and Spectre CPU vulnerabilities constitute the greatest test yet of the public cloud provider community’s data security claims, says Caroline Donnelly, while providing enterprise IT departments with plenty to get their teeth into.

When details of the Meltdown and Spectre processor flaws emerged last week, the threat they posed to public cloud users was the stuff of enterprise IT department nightmares.

According to the researchers who uncovered the bugs, both could be used by malicious individuals to extract and read data stored in the memory of applications. And – in the case of the cloud – make it possible for customers on the same platform to access each other’s data.

The big hitters of the public cloud community (Amazon Web Services, Google and Microsoft, to name a few) mobilised quickly in the face of this theoretical threat – to patch their systems and assure users.

First mover advantage on Meltdown

Google, whose Project Zero team are among the research groups credited with bringing Meltdown and Spectre to the world’s attention, said its efforts to mitigate the threats began as far back as June 2017.

It is anyone’s guess how much warning Amazon and Microsoft were given to prep their defences, but both confirmed their systems were in the throes of being patched (or already were) within hours of the flaws first becoming public knowledge.

That’s a speed of response few (if any) private-enterprise datacentre operators could ever hope to achieve. It is also fair to assume not many would have been afforded the luxury of a seven-month head start on addressing these vulnerabilities either.

Amazon, Google and Microsoft have all previously made the point that few enterprises can afford to spend as much as they do on cloud security.

For that reason, they claim data stored in their clouds is better protected than the information enterprises leave languishing in private datacentres.

It’s a declaration that is difficult to argue against, particularly when you consider the high calibre of info-security staff the tech giants tend to attract too.

What the response of the Big Three to Meltdown and Spectre shows the enterprise (particularly the firms still holding out on cloud) is that Amazon, Google and Microsoft’s cloud security claims are not all talk.

These companies had their patches created, tested and deployed while most enterprise datacentre operators were probably still trying to tell Meltdown and Spectre apart.

Monitoring the Meltdown

Outside of the cloud, enterprise IT departments still need to make sure their on-premise servers, PCs, and Apple Mac devices are patched and protected from Meltdown and Spectre.

“Unfortunately for IT staff, this is not a ‘one and done’ solution, as there are knock-on impacts from patching the vulnerabilities,” Craig Lodzinski, developing technologies lead at IT infrastructure provider Softcat, tells Ahead in the Clouds (AitC).

Indeed, industry estimates suggest the first Meltdown patch, for example, could degrade CPU performance by as much as 30%.

“I’d envisage the ripple effect will continue for months, with IT staff having to tune, redeploy resources and assess performance, as well as responding to myriad tickets concerning application and hardware performance, both real and psychosomatic,” he added.

For cloud users, any unexpected change to their organisation’s CPU performance and usage patterns is concerning, particularly if it could cause the cost of their cloud bills to rise.
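For teams that want to put numbers on that concern rather than guess, comparing an instance’s CPU baseline before and after the patch window is straightforward. The sketch below assumes AWS, boto3 and CloudWatch’s standard EC2 metrics; the instance ID and patch date are placeholders.

    from datetime import datetime, timedelta
    import boto3  # assumed AWS SDK; the instance ID and patch date are placeholders

    cloudwatch = boto3.client("cloudwatch")
    PATCH_DATE = datetime(2018, 1, 4)

    def avg_cpu(start, end):
        """Average CPUUtilization for one instance over a window, hour by hour."""
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        return sum(p["Average"] for p in points) / len(points) if points else float("nan")

    print("Week before patch:", avg_cpu(PATCH_DATE - timedelta(days=7), PATCH_DATE))
    print("Week after patch: ", avg_cpu(PATCH_DATE, PATCH_DATE + timedelta(days=7)))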

Some users may decide a bigger cloud bill is a small price to pay for the peace of mind that comes from knowing their data is protected from Meltdown and Spectre. Others, however, may feel it should be considered a cost of doing business by the provider, and covered by them accordingly.

In a statement to AitC, Amazon said no “meaningful” impact on CPU performance had been observed since its patching efforts commenced.

“There may end up being cases that are workload or OS specific that experience more of a performance impact. In those isolated cases, we will work with customers to mitigate any impact,” the AWS spokesperson added.

Patching things up

Google also claims to have seen a “negligible” impact on the performance of the workloads running on its cloud infrastructure, and described the patching process in a blog post as being “uneventful”.

Even so, the search giant is urging users to treat any online reports of performance degradation other users claim to have seen with a pinch of salt.

“We designed and tested our mitigations for this issue to have minimal performance impact, and the rollout has been uneventful,” it added.

That may only hold true for the time being, though, with Lodzinski suggesting the fallout from Meltdown and Spectre will be playing out in the enterprise for a long time to come.

“With no attacks in the wild (at time of writing), what the future holds is anyone’s guess,” he said. “What we do know is that good patching and strong cyber-security practices by IT teams are the cornerstone in mounting a strong defence,” he added.


January 4, 2018  3:32 PM

OpenStack vs the rest of the public cloud: Where next from here?

Caroline Donnelly Profile: Caroline Donnelly
AWS, Google, Microsoft, OpenStack, Private Cloud, Public Cloud

In this guest post, Rob Greenwood, technical director at Manchester-based cloud and DevOps consultancy Steamhaus weighs up the pros and cons of using OpenStack over other public cloud platforms.

Just over a year ago, Cisco confirmed the closure of its OpenStack-based public cloud. Shortly after, several smaller datacentres and cloud providers, whose architecture relied on it, closed down, fell into administration or downsized their use of it.

These types of announcements always trigger predictions that OpenStack is on its way out, despite the fact it still powers more than 60 public cloud datacentres around the world.

Regardless of whether you’re a champion of the platform or not, there are some attractive OpenStack alternatives available on the market right now – namely Microsoft Azure, Google Cloud Platform and Amazon Web Services (AWS), which have all gained a significant share of the public cloud market since OpenStack arrived on the scene.

Each of these platforms has its pros and cons, particularly when it comes to issues such as vendor lock-in, scalability and technical support.

Locked in for life?

The proprietary vendor vs open source debate has raged for some time, with OpenStack advocates claiming their platform offers greater freedom than using any of its public cloud contemporaries.

Personally, I think people get too hung up on the vendor lock-in issue, because, in reality, you’ll always be tied to something – whether that’s a vendor or not. For example, your platform will often be tied to a piece of code regardless of where it’s hosted. And whenever you want to move from one platform to another, there will always be some work required in that migration.

Regardless of your preference however, you should always be tracking the way the market is moving and be prepared to respond accordingly.

The OpenStack ease of use issue

While you do get flexibility with OpenStack, you’re also dealing with lots of moving parts which require skilled engineers who understand its inner workings. Unfortunately, these people are few and far between.

Unless organisations have a highly-skilled engineering team in place, it can quickly become a minefield. It requires a lot of effort to keep stable and debug, and the learning curve is steep.

With that in mind, organisations need to decide if investing significant amounts of capital into building out and maintaining their own kit is something they want to do, especially when compared to the opex public cloud model.

Where next for OpenStack?

Over the last few years, Kubernetes has quickly become the established orchestration tool for managing containers. That layer of abstraction means we can control our containers regardless of the environment they are hosted in – be that in Google, AWS, Azure, on-premise or a combination of all four.

Serverless offerings, such as AWS’ Lambda, are also making it easier to deploy and scale applications without having to worry about the supporting infrastructure.

Both these solutions are changing the cloud landscape and reducing the reasons you would maintain your own OpenStack environment.

The momentum certainly seems to be moving in the direction of public cloud. That said, I’m not joining the doomsayers – those individuals who seem to relish the demise of any technology.

I don’t believe this is the beginning of the end for OpenStack, and I’ve no doubt it will find its own niche. The Global Passport, an OpenStack initiative designed to bring disparate OpenStack clouds together, may help it carve out a new path.

But, eight years have passed since OpenStack was hailed as a revolution, so it’s time to acknowledge the market has changed — and appealing alternatives exist.


December 14, 2017  10:51 AM

G-Cloud 10 delayed: The pros and cons for public sector cloud users

Caroline Donnelly Profile: Caroline Donnelly
Cloud Computing, G-Cloud, Public sector

In this guest post, Rob Anderson, principal analyst for central government at market watcher GlobalData, takes a look at the implications of delaying the launch of G-Cloud 10 by up to a year.

Making in-roads into the public sector can be a precarious task for IT providers, and does not get any easier when the procurement rules of engagement seem to keep changing.

So when the Crown Commercial Service (CCS) recently confirmed the ninth iteration of G-Cloud could remain in place for up to 12 months, it was understandable that supplier concerns were raised.

According to CCS, the extension will give it more time to radically transform the framework for the benefit of both users and suppliers.

Similar extensions were announced in October for both Digital Outcomes and Specialists 2 (DOS 2) and Cyber Security Services 2 (Cyber 2).

Yet eighteen months ago, suppliers were told that CCS (alongside the Government Digital Service) was undertaking a new discovery process, and that the design of G-Cloud 9 would go ‘back to basics’ to deliver improvements for all.

A number of tweaks were made, including the removal of one lot, the elimination of overlapping iterations and an increase in the management levy. These changes, however, were not enough to stimulate significant growth in throughput.

CCS claims, with some degree of credibility, the frameworks have helped drive up cloud adoption and expanded the pool of SME suppliers public sector organisations can buy from, but more of the same will not further increase the volume or value of transactions.

To that end, delaying the rollout of G-Cloud 10 is a step that should be welcomed.

G-Cloud 10 delays: The pros and cons

Where some will see the hiatus as a benefit (if it results in a more attractive and flexible route to market) others will be unhappy with the change in G-Cloud release cadence. In a market accustomed to regular Digital Marketplace updates, a year’s delay may seem like an eternity.

There is also a danger that delaying G-Cloud 10 will slow the momentum of SMEs building their businesses to reach the 33% market share the government professes to want.

Of greater concern for CCS and suppliers is the continuing antipathy of the wider public sector towards the Digital Marketplace as a whole; around 90% of spend has come from Whitehall departments and metropolitan councils.

To be recognised as a true aggregator of commodity IT products and services, the buying agency needs to attract more buyers from a broad cross-section of public service delivery organisations.

CCS is also under pressure from its Cabinet Office paymasters, who are still awaiting the scale of savings long since promised from the rolling review of frameworks.

Yet the unit continues to adopt a Field of Dreams “build it and they will come” mindset, rather than listening to user needs and adapting accordingly.

One must hope the CCS uses the breathing space to address those needs to create an enhanced Digital Marketplace and a firm foundation for the broader Crown Marketplace to be built upon.

Whatever the future holds for CCS, the outcome will have a profound effect on both public sector buyers and suppliers.


December 13, 2017  10:32 AM

Combining AI, automation & cloud is key to improving workplace productivity

Caroline Donnelly Profile: Caroline Donnelly
ai, Automation, cloud, Machine learning, SAP

In this guest post, Darren Roos, president of SAP ERP Cloud, explains why automation, AI, machine learning and cloud are key to improving workplace productivity.

If you take productivity at its most basic – as a measure of how much output you get for how much you put in – then it follows that to increase it you must remove as much unnecessary strain as possible from the input cycle.

Over the past decade, cloud computing has established itself as a key tool for helping companies streamline workflows. And, by allowing multiple users to work remotely on the same data at once, boost productivity.

But, just as productivity is effectively a measure of cause and effect, there’s been a negative – or at least stultifying – impact as a result of this enthusiastic uptake of cloud technology too.

Managing the process of digitisation through the cloud, particularly when done at increasingly large scale, has become unsustainably complex. Processes become jumbled, admin-heavy, and offset by unavoidable human error.

The somewhat ironic result is of course a slowdown in productivity. There’s a growing sense, however, that more can be done with this technology – and that we’re currently looking at just the tip of the cloud computing iceberg.

AI, automation and cloud?

Automation, artificial intelligence and machine learning have dominated the tech press over the past year, whether for beating humans at board games or stirring up fears of job replacement.

They’re actually already helping to streamline people’s daily lives in more ways than they might realise – whether at a one-click online checkout or in sending auto-generated email responses. When automation is applied well, the user won’t notice its presence at all.

And there’s no reason why the same logic shouldn’t apply to how organisations are managing their resources. In fact, it stands to reason that they absolutely should be looking at where automation can be applied.

Automation, bolstered by AI and machine learning, presents an opportunity to minimise the menial, repetitive, administrative tasks that can so often eat up an employee’s day. The result is more free time that can be put to use tackling more complex, thoughtful tasks that require actual intelligence.

Manual tasks have been automated since the industrial revolution, as humans find new roles and ways of working that are built on automation. While automation isn’t new, its scale and intelligence are.

What might you achieve with your day if you didn’t need to mindlessly wade through sales order data, capacity planning outlines, or cumbersome financial operations? The opportunity is immense, and it all comes back to productivity as people get more done during their work hours – and at a smaller overall cost to the business.

Combining the cloud

Beyond the mitigation of the immediate productivity issue, though, there are further benefits to be reaped from the integration of AI and cloud-based services.

Real-time insights, and the ability of computers to learn from them over time, open up the possibility of AI-informed service improvements.

Analysing enormous amounts of data generated on a daily basis, noticing patterns, and responding to them with suggestions or automatic adjustments is bread and butter for the AI and machine learning programs of today.
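A toy version of that notice-a-pattern, respond-automatically loop is sketched below: flag any reading that drifts well outside its recent baseline, then hand the flag to whatever automated adjustment or suggestion the platform supports. The window and threshold are arbitrary illustrations.

    import statistics

    # Flag readings that sit more than `threshold` standard deviations away from
    # the mean of the previous `window` readings - a minimal pattern detector that
    # an automated response could then act on.
    def flag_anomalies(readings, window=24, threshold=3.0):
        anomalies = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mean, stdev = statistics.mean(baseline), statistics.pstdev(baseline)
            if stdev and abs(readings[i] - mean) > threshold * stdev:
                anomalies.append((i, readings[i]))
        return anomalies

    if __name__ == "__main__":
        hourly_orders = [100, 98, 103, 101, 99, 97, 104, 102, 100, 98, 101, 99,
                         103, 100, 97, 102, 99, 101, 98, 100, 103, 97, 99, 102, 250]
        print(flag_anomalies(hourly_orders))  # the 250 spike gets flagged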

And they’re ripe for integration with the flexible, agile business approaches that have become commonplace through widespread cloud adoption. In fact, you might go as far as to argue that the two make ideal – if not essential – bedfellows for this exact reason.

In a hyper-connected world, it’s beyond the realms of acceptability for organisations to fall back on established, static practices.

Conventional wisdom indicates that failing to keep up with the pace of technological change will lead to both irrelevance and a decline to the point of non-existence. In fact, mere irrelevance may soon be considered getting off lightly.

It’s a Catch-22 situation: meeting increased, perhaps more ambitious, productivity targets will be key to survival. But to do that, employees need to be able to make the most of the time they spend at work.

The technology industry, at its heart, is driven by a desire to solve problems. I can’t think of one more valuable – or surmountable – for it to take on today.


December 8, 2017  12:00 PM

Cloud bursting: Examining the enterprise use cases and benefits

Caroline Donnelly Profile: Caroline Donnelly
cloud, Cloud bursting, IaaS, Infrastructure, SaaS

In this guest blog post, Naser Ali, head of solutions marketing for EMEA at Hitachi Vantara, explores the use cases and benefits for using cloud bursting in the enterprise.

When NASA needed to process petabytes of data from its Orbiting Carbon Observatory 2 (OCO-2), it expected to wait 100 days and spend $200,000 on new datacentre hardware to make it work.

Instead it used the Amazon Web Services (AWS) cloud to achieve the same thing for $7,000 in less than six days, allowing NASA engineers to gain deeper insights into the Earth’s carbon uptake.

This ephemeral use of cloud computing, known as ‘cloud bursting’, offers the scientific community an effective, affordable solution for processing high-volume data spikes. But can it be applied in the more earthly business world? Indeed it can. In fact, we’re working with two large global banks that are doing just that.

Banks are the perfect use case for cloud bursting because they process large volumes of data about things like interest rate returns and risk weighted assets. This data isn’t personal, but requires frequent short bursts of compute power to meet compliance reporting requirements.

Most banks can’t justify the capital expense of investing in on-premise capacity to process data that only comes in intermittently knowing that hardware will mostly remain idle. On the other hand, running a 24/7 technology stack in the cloud racks up too much operating expense. Cloud bursting not only strikes the perfect balance between storage capacity and cost, it gives data scientists a space to ‘play’ and gain new insights.

Cloud bursting in practice

Put simply, the basic steps data science teams take when cloud bursting are: move data into a cloud, spin up some computing power, spin up some storage, process a tranche of data, shut it down and take the results home. But how does this look in practice? One banking customer’s cloud bursting approach follows these steps (a rough code sketch of the pattern appears after the list):

  • First it uses our data integration platform to automatically ship and load data to object storage in an AWS cloud, processing it in batches.
  • Next it runs a script to fire up Carte, which is a simple web server for remotely executing data transformations (converting data from one format or structure into another) using Amazon Elastic Map Reduce (EMR) in Hadoop or the Amazon Redshift data warehouse.
  • Once the data is cleaned, transformed and loaded into the temporary cloud, the data scientists can then “play” with the data by running ‘what-if’ scenarios on things like risk and interest rate return rates.
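Here is a hedged sketch of that burst pattern on AWS with boto3. The bucket, file names, cluster sizes and roles are all placeholders, and the processing step uses a generic spark-submit command as a stand-in for the Carte-driven transformations described above; the key point is that the cluster terminates itself once the step finishes, so nothing is left running afterwards.

    from datetime import datetime
    import boto3  # assumed AWS SDK; bucket, scripts and cluster settings are placeholders

    s3 = boto3.client("s3")
    emr = boto3.client("emr")

    # 1. Ship a tranche of data into object storage.
    s3.upload_file("risk_batch.csv", "burst-staging-bucket", "input/risk_batch.csv")

    # 2. Spin up a short-lived cluster that runs one processing step and then
    #    terminates itself, so nothing is left running (or billing) afterwards.
    emr.run_job_flow(
        Name=f"risk-burst-{datetime.utcnow():%Y%m%d-%H%M}",
        ReleaseLabel="emr-5.11.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m4.large", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m4.xlarge", "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=[{
            "Name": "transform-risk-batch",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://burst-staging-bucket/jobs/transform.py"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )

    # 3. Results land back in the bucket; pull them down and the temporary
    #    environment disappears on its own.
    s3.download_file("burst-staging-bucket", "output/results.csv", "results.csv")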

The greatest benefit of cloud bursting is agility, but there are hard cost savings as well. This bank had been spending 1 million on hardware and software for its risk reporting Hadoop cluster. We took a subset of its data, 12 million rows, and ran it in AWS for $2.20 and in Google for 50 cents. So you can process a pretty large chunk of data for very little cost and with no upfront investment.

Another cost advantage is the ability to take advantage of ‘spot pricing’ on cloud capacity. This type of dynamic pricing saves customers money by letting them run some applications only when spot prices fall below a specified price point. It also allows users to quickly run large jobs by outbidding other customers for available capacity.
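As a small illustration of the spot-pricing point, the check below (assuming AWS and boto3, with a hypothetical instance type and price ceiling) looks at recent spot prices and only gives the go-ahead for a burst job when they fall below the agreed threshold.

    from datetime import datetime, timedelta
    import boto3  # assumed AWS SDK; instance type and price ceiling are hypothetical

    ec2 = boto3.client("ec2")
    PRICE_CEILING = 0.10  # the hourly price the team has agreed it is willing to pay

    # Only kick off the burst when recent spot prices for the chosen instance type
    # have dipped below the agreed ceiling.
    history = ec2.describe_spot_price_history(
        InstanceTypes=["r4.xlarge"],
        ProductDescriptions=["Linux/UNIX"],
        StartTime=datetime.utcnow() - timedelta(hours=1),
    )
    prices = [float(p["SpotPrice"]) for p in history["SpotPriceHistory"]]

    if prices and min(prices) <= PRICE_CEILING:
        print(f"Spot price {min(prices):.4f} is under the ceiling - launch the burst job")
    else:
        print("Spot capacity is too expensive right now - wait for the next window")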

Critical success factors

Although I’ve used banks as an example, cloud bursting for data science is suitable for any industry with similar requirements. Here’s what our experience tells us that organisations need to be successful with this approach:

  • Agile data integration – choose a data integration tool that doesn’t need to be installed on every Hadoop cluster within the cloud. This is the key capability that allows you to spin up a cloud and process data in a matter of hours. Conversely, tools that require you to install software on multiple Hadoop nodes cancel out all the agility advantages of cloud bursting.
  • Strong DevOps – organisations that are good at DevOps, particularly continuous integration, will have a smoother ride when it comes to preparing data to be processed in cloud bursts.
  • Open standards environment – it’s very important that organisations abstract the data processing from the cloud environment and don’t get locked into a single cloud vendor. Open standards and open frameworks ensure that everything is moveable if necessary.
  • Plan for repatriation – what happens if an organisation is successful with cloud bursting but later needs to add secure data to its analysis? In this case the whole compute process and data would need to be repatriated back on premise. This requires having an analytics platform that supports a hybrid computing environment and ideally supports containers like Docker, which make it fast and easy to repatriate processes and data. This is essential in financial services, where regulatory compliance mandates that systems are in place to prepare for this potential situation.

One giant leap

Following in NASA’s lunar footsteps, banks’ first cloud bursting efforts have been encouraging, achieving multimillion-pound savings annually. These have largely come from reduced total cost of ownership and increased agility.

Of course the ‘giant leap’ organisations make by applying such a practical and cost-effective way to process data is that they minimise their exposure to financial risk.

The Financial Conduct Authority (FCA) has mandated that banks significantly raise their cash reserves and understand their risk exposures to avoid repeating the scenario that brought down Lehman Brothers and led to the RBS bail-out.

In a typical scenario where a bank has investments in many different financial institutions and companies, it takes just one entity to go bust and take down the others. In our era of economic turbulence, this ability to play with data in a temporary space and game different scenarios could affect a company’s very survival.

