Ahead in the Clouds


June 8, 2017  9:24 AM

Enterprise DevOps: Is it anything new?

Caroline Donnelly
Agile, Agile software development, Continuous delivery, DevOps

In this guest post, Jon Topper, co-founder and principal consultant of hosted infrastructure provider The Scale Factory, shares his thoughts on how the DevOps movement is maturing as enterprises move to adopt it.

In 2009, a group of like-minded systems administrators gathered in Gent, Belgium to discuss how they could apply the principles of agile software development to their infrastructure and operations work, and – in the process – created the concept of “DevOps”.

Over the course of the intervening years, DevOps has become a global movement, and established itself as a culture of practice intended to align the values of both developers and operations teams around delivering value to customers quickly and with a high level of quality.

The DevOps community understands that, although software and tools play a part in this, doing DevOps successfully is often much more about people than technology.

The emergence of enterprise DevOps

Perhaps inevitably, the term “Enterprise DevOps” has recently emerged in its wake, with new conferences and meetup groups springing up under this banner, and consultancies, recruitment agencies and software vendors all rushing to refer to what they do as Enterprise DevOps too.

Many original DevOps practitioners are sceptical of this trend, seeing it as the co-opting of their community movement by mercenary, sales-led organisations, and some of their scepticism is warranted.

The newcomers, in some cases, have shown themselves to be tone-deaf and have missed the point of the movement entirely. Some more established organisations have just slapped a “DevOps” label on their existing offerings, and show up to meetings in polo shirts instead of suits.

Different challenges at scale

As companies grow, new challenges arise at different scale points. Enterprises with thousands of employees are vast organisms, whose shape has been informed by years of adaptation to changing business environments.

In an SME, it is reasonable to assume the whole technology team knows each other. In enterprises, however, there are likely to be hundreds or thousands of individual contributors, across several offices and in different time zones. A DevOps transformation needs to facilitate better communication between these teams, which can sometimes require reorganisation.

Over time, large businesses accrue layers of process and scar tissue around previous organisational mistakes. These processes govern procurement practices, change control and security practice, and can be incompatible with a modern DevOps and agile mind-set.

A successful DevOps transformation necessitates the questioning and dismantling of those processes where they are no longer adding value.

How is Enterprise DevOps different?

In all honesty, I’m not sold on the idea that Enterprise DevOps is an entirely unique discipline, at least not from an individual contributor perspective. Much of the same mindset and culture of practice is just as relevant for enterprise teams as it is in smaller businesses.

To allow these contributors to succeed in the context of a larger enterprise, substantial structural and process changes are required. Whether the act of making this change is something unique, or just the latest application of organisational change, is up for debate, but the term seems to be here to stay.

How to succeed with Enterprise DevOps

Although Enterprise DevOps is a recent addition to the lexicon, some larger businesses have been doing DevOps for years now, and those that have been successful in their transformations have a number of things in common.

One major success factor is having a high-level executive in place who champions this sort of work. A powerful, trusting business sponsor can be crucial in removing obstacles, and in ensuring transformations are provided with the resources they need.

Successful organisations seem to reorganise by building cross-functional teams aligned to a single product. These teams include project and product management, developers, ops team members, QA and others. They’re jointly responsible for building and operating their software. It should come as no surprise to learn these teams look like miniature start-up businesses inside a wider organisation.

Crucial to the success of these teams is a culture of collaboration and sharing. There’s little point in having multiple teams all trying to do the same thing in myriad different ways, or in all making the same mistakes. Successful teams in organisations like ITV and Hiscox have described their experiences of building a “common platform”. Design patterns and code are shared and reused between teams allowing them to build new platforms quickly, and at a high standard.

The cost of business transformation can be high, but now DevOps is proven to work in the enterprise, can you really afford not to make this change?

May 24, 2017  12:00 PM

All eyes on Etsy as investors push public cloud to cut costs

Caroline Donnelly
Cloud Computing, DevOps, Etsy, Private Cloud, Public Cloud, Strategy

Online marketplace Etsy is under growing investor pressure, following a drop in share price, to cut costs. But will embracing public cloud do the trick?

Etsy is one of a handful of household names enterprises regularly name-check whenever you quiz them about where they drew inspiration from for their DevOps adoption strategy.

The peer-to-peer online marketplace’s engineers have been doing DevOps for more than half a decade, and are well-versed in what it takes to continuously deliver code, while causing minimal disruption to the website and mobile apps they’re trying to iteratively improve.

According to the company’s Code as Craft blog, Etsy engineers are responsible for delivering up to 50 code deploys a day, allowing the organisation to respond rapidly to user requests for new features, or improve the functionality of existing ones.

The fact the company achieves such a ferocious rate of code deploys per day, while running its infrastructure in on-premise datacentres, has marked it out as something of an anomaly within the roll-call of DevOps success stories.

For many enterprise CIOs, DevOps and cloud are intrinsically linked, while the Etsy approach proves it is possible to do the former successfully without the latter.

Investor interference at Etsy

One company investor, Black & White Capital (B&WC), seems less impressed with what Etsy has achieved through its continued use of on-premise technology, though, and has publicly called on its board of directors to start using public cloud instead.

B&WC, which owns a 2% stake in Etsy, made the demand (along with a series of others) in a press statement on 2 May 2017, with the aim of drawing public attention to the decline in shareholder value the company has experienced since it went public in April 2015.

According to B&WC, Etsy shares have lost 33% of their value since the IPO, while firms in the same category – listed on the NASDAQ Internet Index and the S&P North American Technology Sector Indexes – have seen the average price of their shares rise by 38% and 35%, respectively.

Given that Etsy is reportedly the 51st most popular website within the United States, and features around 45 million unique items for sale, B&WC argues the firm should be making more money and returning greater value to shareholders than it currently does.

Part of the problem, claims B&WC’s chief investment officer, Seth Wunder, is the lack of “expense management” and “ill-advised spending” going on at Etsy.

“This has allowed general and administrative expenses to swell to a figure that is more than 55% higher than what peers have spent historically to support a similar level of GMS [gross merchandise sales],” said Wunder, in the press statement.

“The company’s historical pattern of ill-advised spending has completely obfuscated the extremely attractive underlying marketplace business model, which should produce incremental EBITDA margins of greater than 50% with low capital investment requirements.”

Room for improvement?

The statement goes on to list a number of areas of operational improvement, compiled from feedback supplied by product engineers and former employees, that could potentially help cut Etsy’s overall running costs and drive sales.

These include reworking the site’s search optimisation protocols, and introducing marketing features that would open up repeat and cross-sale opportunities for people who use the site.

The press statement also goes on to make the case for freeing up Etsy’s research and development teams by using cloud, because they currently spend too much time keeping the firm’s on-premise infrastructure up and running.

“It is Black & White’s understanding that more than 50% of the approximately 450 people on the R&D team focus on maintaining the company’s costly internal infrastructure,” the statement continued. “A shift to the public cloud would provide long-term cost savings while also establishing a more flexible infrastructure to support future growth.”

Private vs. public cloud costings

Apart from the quotes above, there is no further detail to be found within the private letters B&WC sent to Etsy’s senior management team (and subsequently made public) about why it feels the firm would be better off in the public cloud.

If the investors’ main driver for doing so is simply to cut costs, it should not be assumed that shifting Etsy’s infrastructure off-premise will deliver the level of savings they are hoping for.

Indeed, there is growing evidence that it actually works out cheaper for some organisations to run an on-premise private cloud rather than use public cloud, particularly those whose traffic patterns are relatively easy to predict and rarely subject to unexpected spikes… as is (reportedly) the case with Etsy.

When Computer Weekly quizzed Etsy previously on the rationale behind its use of on-premise datacentres, the company said – based on the predictability of its traffic usage patterns – it makes commercial sense to keep things on-premise rather than use cloud.

It is difficult to say with any conviction what changes may have occurred at Etsy since then to cause its investors to reach a different conclusion, aside from their new-found commitment to cost-cutting.

The October 2016 edition of 451 Research’s Cloud Price Index report, meanwhile, suggests the investors could end up worse off by blindly favouring the use of public cloud over an on-premise, private cloud deployment.

“Commercial private cloud offerings currently offer a lower TCO when labour efficiency is below 400 virtual machines managed per engineer,” the analyst house said, in a blog post. “Past this tipping point, all private cloud options are cheaper than both public cloud and managed private cloud options.”
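
To make the labour-efficiency point concrete, here is a rough, illustrative calculation in Python. Every figure below is an assumption invented for the example, and the model is deliberately crude (infrastructure cost per VM plus a share of each engineer's fully loaded cost); it is not 451 Research's methodology, just a sketch of why the number of VMs managed per engineer moves the private-versus-public comparison.

    # Illustrative only: every figure below is an assumption, not real pricing.
    def private_cost_per_vm_month(infra_cost: float,
                                  engineer_cost_month: float,
                                  vms_per_engineer: int) -> float:
        """Crude per-VM monthly cost of a self-run private cloud: hardware,
        power and facilities, plus a share of an engineer's loaded cost."""
        return infra_cost + engineer_cost_month / vms_per_engineer

    PUBLIC_CLOUD_PER_VM = 70.0   # assumed public cloud price per VM per month
    INFRA_PER_VM = 45.0          # assumed private cloud infrastructure cost per VM
    ENGINEER_PER_MONTH = 10_000  # assumed fully loaded monthly engineer cost

    for efficiency in (100, 200, 400, 800):
        private = private_cost_per_vm_month(INFRA_PER_VM, ENGINEER_PER_MONTH, efficiency)
        winner = "private" if private < PUBLIC_CLOUD_PER_VM else "public"
        print(f"{efficiency:>3} VMs/engineer: private ~${private:.0f}/VM/month -> {winner} cheaper")

With these made-up numbers the balance tips somewhere around a few hundred VMs per engineer, which is the general shape of the argument the analysts are making: labour efficiency, not just hardware price, decides which model is cheaper.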

Take caution with cloud

There is no guarantee moving to public cloud will equate to long-term cost savings, and here’s hoping Etsy’s investors are not labouring under the misguided view that it will.

Within the DevOps community there are also plenty of war stories about how a change in senior management or investor priorities has resulted in companies abandoning their quest to achieve a sustainable and efficient continuous delivery pipeline, negatively affecting their digital delivery ambitions.

Given the investor’s agenda for change at Etsy is heavily weighted towards harnessing new technologies, as well as equipping its board of directors with more technology-minded folks, it would be a shame if that were to come at the expense of its position as one of the poster children for DevOps done right.


May 15, 2017  1:55 PM

How OpenStack is coming of age and tackling its growing pains head-on

Caroline Donnelly
IT trends, OpenStack, Private Cloud, Public Cloud, trends

The OpenStack community of users and contributors are united in their belief in the enterprise-readiness of the open source platform, but accelerating its adoption requires some back-to-basics thinking, finds Ahead In the Clouds.

One of the downsides of growing up is the gradual realisation that, despite our parents’ promises to the contrary, it is actually really hard to be anything we want to be.

You only have to look at how the lofty career ambitions of most small children get revised down with age, as they realise that becoming an astronaut (for example) is actually quite the academic undertaking, and that vacancies in the field are few and far between.

OpenStack appears to be going through a very similar journey of self-discovery and realisation, with many of the discussions at its recent user summit focusing on the need for its community to pare back their ambitions for the open source cloud platform.

This should not be interpreted as a push by the OpenStack Foundation, which oversees the development of the software, to curb the creativity of its contributors, but as an effort to ensure they’re working on meaningful projects and features that users actually want.

Clearing the cloud complexity

OpenStack started out with the goal of allowing enterprise IT departments to use its open source software to manage the pools of compute, storage, and networking resources within their datacentres to build out their private cloud capabilities.

In the six or so years since OpenStack has been going, the community contributing code to the platform has changed massively, with some high-profile vendors down-sizing their involvement (while others have ramped up theirs). Meanwhile, the number of add-ons and features the technology can offer enterprises has ballooned.

This has created a lot of unnecessary complexity for IT directors trying to work out if OpenStack is the right technology to create a private cloud for their business. In turn, they also need to work out which vendor distribution is right for them, what add-ons to include and whether they should do everything themselves or enlist the help of a managed services provider.

As enterprise adoption of the platform has grown, the OpenStack Foundation and its stakeholders now have a wider pool of users to glean insights from, with regard to what features they do and don’t use, which is helping cut some of this complexity.

As such, the Foundation set out plans, during the opening keynote of the Spring 2017 OpenStack Summit, to start culling projects and removing unpopular features from the platform to make OpenStack easier to use, and ensure the 44% year-on-year deployment growth it has reported recently continues apace.

Back-to-basics with OpenStack

Various stakeholders Ahead in the Clouds (AitC) spoke to at the OpenStack Summit said the Foundation’s commitment to getting “back-to-basics” is long overdue, with Canonical founder, Mark Shuttleworth, amongst the staunchest of supporters for this plan.

“We’ve always been seen somewhat contrary inside of the OpenStack Community because when everyone else was saying we should do everything, I was saying we should just do the [core] things and do them well,” he told AitC.

For Canonical, whose Ubuntu OpenStack is used by some of the world’s biggest telcos, media outlets, and financial institutions, this means delivering virtual machines, disks and network, on demand with great economics, said Shuttleworth.

“OpenStack does not need to be everything as a service. It just needs to be infrastructure as a service. Focusing just on that has been very successful for Canonical,” he continued.

“We focused on just the core, and that allows people to consume [the] OpenStack private cloud as cleanly as they can consume public cloud.”

As far as Shuttleworth is concerned, scaling down OpenStack’s ambition is a sign of the endeavour’s growing maturity, and will position the private cloud technology well for future growth.

“They say a mid-life crisis is all about realising you’re not going to be simultaneously a rock star, a Nobel prize winner, a top surfer and a famous poet. That is exactly what is happening with OpenStack,” he said.

“It’s not a bad thing. You can call me OpenStack’s greatest fan, but it’s just I happen to think chunks of it are bulls**t.”

OpenStack comes of age

At just six years old, OpenStack is – perhaps – a little young to be going through a mid-life crisis, but it is certainly at the right stage of life to be experiencing growing pains, as its quest to become the private cloud of choice for enterprise customers rumbles on.

Its success here heavily relies on the ability of the Foundation and its stakeholders to alter some of the negative perceptions enterprises have about private cloud.

Some of these are borne out of end-users unfairly pitting the private cloud and public cloud against each other, Scott Crenshaw, senior vice president and general manager of OpenStack clouds at managed cloud firm Rackspace, told AitC at the Summit.

“We have enough experience in the industry to understand now – at a high level – what platform works best for each application. It’s not the Wild West any more where fear, uncertainty and doubt are driving buying decisions,” he said.

“There is no longer a Pollyanna view that everything goes into public cloud and then we’re all better off. The situation is a lot more nuanced than that.”

Enterprises are gradually coming to the realisation that drawing on public cloud resources to run their applications and workloads is not necessarily cheaper, and – in some situations – can actually work out a whole lot more expensive.

“If you knit all this together, what you see is private cloud and public cloud are not really in competition: they’re very complementary technologies,” he continued.

“It’s going to be horses for courses and it’s a great thing. It’s more economical, it’s more efficient, it’s good for the economy and for the end users.”

OpenStack’s growing pains have seen some of its big-name contributors downsize or simply tweak their involvement with the community, as their bets on the technology have not quite played out how they predicted.

Shuttleworth said this process is something all open source communities go through as they mature and evolve, and OpenStack will end up stronger for it.

“It’s like the internet in 1999. The internet didn’t stop once the dotcom bubble burst, and the same applies here. The need for OpenStack continues and is bigger than ever,” he added.

The latest OpenStack User Survey (published in April 2017) certainly backs the latter point, and it will be interesting to see how the Foundation’s efforts to trim the fat from OpenStack affect the deployment rates reported in next year’s edition and beyond.


April 21, 2017  1:46 PM

Desktop virtualisation dilemmas: Solving the VDI blame game

Caroline Donnelly
desktop virtualisation, Liquidware Labs, VDI

In this guest post, Kevin Cooke, product director at desktop virtualisation software provider Liquidware Labs, explains how CIOs and IT departments can avoid playing the blame game when working out why their VDI projects are not going to plan.

The move to virtual desktops, whether full on-premise virtual desktop infrastructure (VDI) or a managed desktop as a service (DaaS) in the cloud, can be fraught with hidden challenges. They may be technical or political, and can lead to disruption, unmet user expectations and reduced staff productivity.

These challenges or visibility gaps are amplified in larger environments, as there are more fingers in the pie, often combined with distributed technical responsibilities.

Ultimately, the question CIOs and IT directors should be asking is who owns accountability for the user experience?

What good looks like

If delivered properly, the desktop or workspace should offer a consistent and familiar experience—regardless of whether it is delivered via physical PCs, virtualised locally or delivered as a service in the cloud. But who gets the light shined on them when things go astray? Is it the desktop team? Perhaps the infrastructure folks who own the storage, servers and network are to blame? And in the case of DaaS, this demarcation becomes a lot more imprecise.

Don’t play the VDI blame game

The frustration we hear time and time again is about who’s at fault. If VDI or DaaS is the last technology employed, it often gets the blame. And don’t discount people or organisational challenges, where user rebellion or office politics can be at play.

The lack of visibility and understanding of user experience can occur regardless of the delivery approach or platform. For cloud and managed services, there are issues that centre around where lines of accountability should be drawn. And, without a specific user experience SLA, it can be almost impossible to ensure you can measure, enforce and remediate these issues—even if you could draw appropriate lines between IT teams and find the true root cause.

Environmental challenges

While these challenges are not unique to DaaS, they do muddy the waters when attempting to determine accountability. How do you navigate these issues when your team points the finger at the service provider and the cloud folks claim it’s not their issue? I’ll present a number of common challenges we routinely face in the field. Some are related to infrastructure and delivery. Some are simply good practice and tasks that should be applied to any desktop.

In no particular order, I present a list of common visibility challenges that can play a significant role in user experience.

  • Desktops are like cupboards: If you don’t clean them out once in a while they become wildly inefficient, so be sure to reboot your physical, persistent and non-persistent pools, people.
  • We’ve always done it that way: Stop using old-school approaches to managing desktop patches, such as Microsoft System Center Configuration Manager (SCCM), on new-school architectures like DaaS. I understand it’s the way you’ve always done it, but that does not make it correct.
  • User tiering and memory allocation: When moving to DaaS, or moving a physical PC to VDI, it is critical that you understand metrics such as memory and what is consumed by users and user groups. On the one hand, you could under-provision, placing power users into a smaller memory footprint than required. These users will never be happy, as their VMs will constantly page to disk. On the other hand, over-provisioning means wasted resources, resulting in an elusive ROI that will never be realised because you are overpaying for VMs (see the illustrative sizing sketch after this list).
  • Controlling video, audio, keyboard and mouse signals correctly: Poor user experience can sometimes be traced back to the display protocol. Understanding the network, and how your display protocol behaves when constrained, is key to tuning and optimising its performance. Would you be surprised to learn it was your wide area network provider that was to blame?
  • Master desktop image is everything: Pushing the same image to everyone – regardless of what they consume – is just plain wasteful. We worked at length with a customer who had not included the proper PCoIP components in their base image. The images installed and all seemed well on the DaaS platform, but the desktops were not accessible from their thin clients. Understanding what your users need, and building the appropriate image, is very important.
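
To make the user tiering point above concrete, here is a minimal, illustrative sizing sketch in Python. The tier sizes, headroom factor and usage figures are all assumptions made up for the example; the point is simply that observed peak consumption per user group, not a one-size-fits-all image, should drive the memory allocated to each workspace.

    # Illustrative only: tier sizes, headroom and usage figures are assumptions.
    VM_TIERS_GB = [4, 8, 16, 32]   # hypothetical VM memory sizes on offer
    HEADROOM = 1.25                # keep ~25% spare so power users don't page to disk

    def pick_tier(peak_usage_gb: float) -> int:
        """Smallest tier that covers observed peak usage plus headroom."""
        needed = peak_usage_gb * HEADROOM
        for tier in VM_TIERS_GB:
            if tier >= needed:
                return tier
        return VM_TIERS_GB[-1]     # exceeds the largest tier: flag for manual review

    # Hypothetical peak memory use (GB) observed per user group
    observed_peaks = {"task workers": 2.8, "knowledge workers": 5.5, "power users": 13.0}

    for group, peak in observed_peaks.items():
        print(f"{group}: peak {peak} GB -> assign {pick_tier(peak)} GB VM")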

In addition to understanding what users need, CIOs need to get a handle on when they need it. In the spirit of cloud, concurrency and resource utilisation, it is important to understand when users require their workspaces.

Being able to identify workers who are not using their workspace is one side of this exercise, but right-sizing your pool and workspace count is another – whether you are paying by the month, by the CPU cycle, or by the named user. Understanding your actual consumption is key to maximising your ROI.

Could you be the problem?

I often hear seasoned IT organisations moan about VDI and DaaS: how it does not work, or is not delivering on all its promises. I bite my tongue, and simply say VDI may not be to blame.

Many of the issues and challenges noted above are key contributors to a poorly performing DaaS or on-prem VDI platform. With vast experience in helping to diagnose and remediate these complex environments, I can honestly say it is not often the core VMware, Citrix or DaaS platform that is to blame.

More often than not, it is a tangential issue that is the cause. Similarly, before you point the finger at your cloud provider, be sure to understand the contributing and supporting components and how they can affect your overall user experience.


March 30, 2017  3:40 PM

Lock-in: Using cloud-neutral technology to avoid it

Caroline Donnelly
Amazon DynamoDB, cloud, Google Cloud, lock-in, MarkLogic

In this guest post, Gary Bloom, CEO of database software supplier MarkLogic, explains why adopting a cloud-neutral strategy is essential for enterprises to avoid lock-in.

Not so long ago, choosing a single cloud provider seemed a sensible approach. But as the market has matured, enterprises are realising how easy it is to become locked in to a single provider, and the downsides that can bring.

According to market watcher 451 Research, some organisations are mitigating the risk of vendor lock-in by adopting the operating principle of ‘AWS+1’.

The analyst firm believes 2017 will be the year CIOs move to adopt cloud services from Amazon Web Services (AWS) and one other competing provider to ensure they are not locked into a single supplier or location.

This approach offers greater flexibility to match applications, workloads and service requests to their optimal IT configurations.

But even playing two providers off each other might not allay the risk of lock-in, as Snap’s public filing revealed in February this year.

The company, which owns the messaging app Snapchat, plans to spend an eye-watering $2 billion on Google Cloud, its existing provider, as well as a further $1 billion on AWS over the next five years.

In its regulatory filing, Snap exposes the real impact of changing cloud providers with words that should send shivers down the spine of any CIO.

“Any transition of the cloud services currently provided by Google Cloud to another cloud provider would be difficult to implement and will cause us to incur significant time and expense,” the document states.

“If our users or partners are not able to access Snapchat through Google Cloud or encounter difficulties in doing so, we may lose users, partners or advertising revenue.”

This alarming note demonstrates the risk of lock-in even with two cloud providers. Snap is also considering building its own cloud infrastructure but this presents similar risks.

The creep of cloud lock-in

IT departments may be wary of lock-in, but it can happen without them realising it. It starts as soon as someone decides to use the cloud provider’s proprietary APIs to reduce the amount of coding required to launch an application in the cloud.

For example, when developers write software for Amazon’s DynamoDB database they are being locked into AWS.

Although it is unlikely that you will want to move workloads back and forth between clouds, there is a high chance you will want to deploy them in another cloud at some point. At this point, it can prove a lengthy and expensive process to rewrite the application.

The solution is to design cloud applications with cloud neutrality at their core, underpinned by cloud-neutral database technology that works across every cloud provider and on-premise.

With a cloud-neutral approach, you are not tying your fortunes to those of your cloud provider, and you are also able to play vendors off against each other to get a better deal.
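
As a minimal sketch of what that looks like in practice, the Python below contrasts code written directly against DynamoDB’s SDK with the same logic hidden behind a small storage interface. The interface and class names are hypothetical, invented for this example; the boto3 calls are standard AWS SDK usage. The point is that only the adapter needs rewriting if you later switch providers, not every caller.

    from abc import ABC, abstractmethod

    import boto3  # AWS SDK for Python

    # Locked-in style: callers talk to DynamoDB directly, so every caller
    # must change if the application moves to another provider's database.
    def save_order_directly(order: dict) -> None:
        table = boto3.resource("dynamodb").Table("orders")
        table.put_item(Item=order)

    # Cloud-neutral style: callers depend on a small, hypothetical interface;
    # only this adapter is rewritten when the underlying store changes.
    class OrderStore(ABC):
        @abstractmethod
        def save(self, order: dict) -> None: ...

    class DynamoOrderStore(OrderStore):
        def __init__(self, table_name: str) -> None:
            self._table = boto3.resource("dynamodb").Table(table_name)

        def save(self, order: dict) -> None:
            self._table.put_item(Item=order)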

In a sign that cloud neutrality is coming of age, SAP recently announced that it is making its HANA in-memory database available across all the major public cloud platforms, as well as its own private cloud, to give its customers the opportunity to switch providers.

Other enterprise ISVs are likely to follow suit in time, but none of us can predict how the cloud market will evolve long-term and cloud neutrality keeps the door open. So, when an alternative vendor launches a new service or specialty that is more suited to your needs, you can switch with relative ease.

Being cloud neutral is an effective insurance policy in an age when it seems that nobody is immune from the threat of cyber attacks. If your cloud provider has a breach, you want to be able to move to another provider quickly.

CIOs have not forgotten the dark days when datacentre outsourcers held enterprises to ransom and nobody would willingly make the same mistake in the cloud. With cloud neutrality at the heart of your strategy, you can have the peace of mind that comes from future proofing your business.


March 6, 2017  12:54 PM

Amazon S3 outage: Acknowledging the role humans play in keeping the cloud going

Caroline Donnelly
amazon s3, AWS, Cloud outages

The Amazon cloud storage outage provides a neat reminder about the role humans continue to play in the delivery of online services, but – when things go wrong – end-user sympathy for the plight of the engineers involved is often in short supply, writes Caroline Donnelly.

The internet age has massively inflated end-user expectations around the uptime and availability of online services. So much so, when the platforms we rely on to stream music, send emails or collaborate on work projects fall over, consumer patience is often in short supply.

Evidence of this can be found by scanning Twitter during an outage, and seeing what users have to say about the fact a service they need to use is not available when they expect it to be.

Depending on the nature of the service that has gone down, the tone and content of messages can vary considerably from resigned acceptance to all-out fury, with a few snark-filled, meme-laced barbs often thrown in for good measure.

A couple of years ago, Ahead In the Clouds (AITC) sat through a DevOps presentation at the AWS user conference in Las Vegas about A Day in the Life of a Netflix Engineer.

During the session, Dave Hahn, a senior engineer at Netflix, touched upon the histrionic online outbursts its user base are prone to indulging in whenever the streaming service runs into technical difficulties.

“If any of you have ever monitored social media when there is an occasional Netflix outage, you’ll notice some people believe they’re going to die. I want to let you know, we checked and no-one has actually died,” he said.

While Hahn’s comments were made in jest, they serve as a handy reminder that – while it is annoying when services we rely on fall over – it’s usually relatively short-lived and rarely the end of the world.

Prolonged and widespread

The exception to that, of course, is when the downtime is prolonged, as was the case with SSP Worldwide’s two-week service outage in the summer of 2016, or when the failure of one service has far-reaching implications for many others.

The Amazon Web Services (AWS) cloud storage outage on 28 February is an example of the latter, with its multi-hour downtime drawing attention to just how many people rely on its Simple Storage Service (Amazon S3) to underpin their online services and systems.

According to AWS, the cause of the downtime was a typo, generated by an engineer while inputting a command. This in turn contributed to a larger than expected number of servers (hosted within the firm’s US East-1 datacentre region) falling offline.
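
One common guardrail against this class of mistake is to validate inputs so that no single command can take an outsized chunk of capacity offline. The sketch below is purely hypothetical Python, not Amazon’s tooling; the threshold is an assumption, and it simply shows the idea of a mistyped argument failing loudly instead of cascading.

    # Hypothetical guardrail, not Amazon's tooling: cap how much of a fleet
    # a single command can take offline, so a mistyped argument fails loudly
    # instead of cascading into a wider outage.
    MAX_REMOVAL_FRACTION = 0.05   # assumed per-operation safety limit (5%)

    def validate_removal(requested: int, fleet_size: int) -> None:
        limit = int(fleet_size * MAX_REMOVAL_FRACTION)
        if requested > limit:
            raise ValueError(
                f"Refusing to remove {requested} of {fleet_size} hosts; "
                f"per-operation limit is {limit}. Was this a typo?"
            )

    validate_removal(requested=3, fleet_size=1000)     # fine
    # validate_removal(requested=300, fleet_size=1000) # would raise ValueError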

During the course of the downtime, and for several days after, Twitter was full of people making light of the situation, and the fact a humble typo could prove so disruptive to the world’s biggest cloud provider.

It seems it is all too easy to forget, or simply overlook, the critical role humans play in the creation, development and delivery of online services, particularly in light of the column inches regularly devoted to how automation and robotics are changing the way lots of industries operate nowadays.

Whenever an errant server misbehaves, it is still the job of an engineer to respond to the system alert and get to work on solving the problem, possibly with the assistance (but sometimes not) of their colleagues.

If that call comes in the middle of the night, it is the engineer whose sleep gets disrupted or whose personal life gets put on-hold so they are ready to respond to any incidents that may occur on their watch.

Human error in the Amazon S3 outage

In the case of a company the size of Amazon, the pressure to perform and rectify the problem as quickly as possible will be all the greater, given just how many organisations and people depend on its platforms.

Among all of the social media snark about the Amazon S3 outage was a sizeable number of tweets, indexed under the #HugOps hashtag, taking a whole lot more empathetic point of view on the situation and the plight of the people tasked with sorting it.

Rather than point fingers and make jokes, people were using the hashtag to wish the AWS engineering team well, and pass on their support for the engineer whose typo reportedly caused it all.

Someone has even created a GoFundMe page for the engineer concerned, to raise money for – as the post says – all the “alcohol or therapy, or both” the individual will need to get over what occurred.

“This campaign is intended in the most light-hearted and supportive way possible. It’s not easy to be the root cause of an outage, and this was a big one,” the page reads.

A lot of the people making use of the #HugOps hashtag work in IT, and are sympathetic to the plight of the person involved as they’ve probably had first-hand experience of being in a similar situation themselves.

Which is why so many of the posts sporting the hashtag have an air of “there but for the grace of god go I” about them, but – for users – all they see is the inconvenience caused by not having “always-on” access to their favourite services.

As is the case with on-premise systems, sometimes things just fail or don’t perform the way we think they should, and it is time users grew to appreciate and understand this, because internet access is a privilege, not a human right.

And ranting and raving online about why something isn’t behaving the way it should is likely to exacerbate an already god-awful situation for someone, somewhere, tasked with repairing it.

While letting off some online steam might make you feel better, it’s not going to get what’s broken back up and running any quicker.

So next time an outage occurs, spare a thought for the engineers, beavering away behind the scenes trying to get things up and running again, before you go off on an extended rant at the company on social media.

Put yourself in their shoes. If you went to work and made a mistake that tens of thousands of people on the internet shouted at you about, how would that make you feel?


February 20, 2017  1:18 PM

The seven employee types that can hasten (and hinder) enterprise cloud adoption

Caroline Donnelly

In this guest post, Mollie Luckhurst, global head of Customer Success at cloud-based collaboration firm, Huddle, outlines the seven employee types that could make or break an organisation’s shift off-premise.

Enterprise IT departments are now well aware of the benefits that cloud-based services can bring: quickly deployed, scalable technology, at a lower cost.

As their use becomes ubiquitous both inside and outside the office, success relies heavily on one thing: user adoption. Without it, the promised benefits of cloud become either overshadowed or simply evaporate.

When implementing any new workplace technology, successful cloud adoption depends on understanding the personas of the people and users involved. Eagerness to change and adopt will vary according to individual motivations and concerns.

Failing to understand this could undermine cloud adoption schedules and result in project overspend, which is why IT departments should familiarise themselves with the seven most common personas they are likely to encounter during a company’s move to the cloud.

The Champion

The first, and most typical, persona IT teams will encounter is The Champion. Usually an executive-level supporter, the Champion is energetic, enthused and, of course, personally invested in the success of the project. Having suggested the service, Champions will have a strong appreciation of its potential benefits to the business, but may struggle with the practicalities of managing cloud adoption.

Given their vested interest in the success of the deployment, Champions need to see progress being made quickly, and will want to be kept informed at every step for feedback purposes. Accordingly, implementation leaders must take care to alert Champions to the achievement of roll-out milestones and the delivery of benefits.

The Passives

While Passives are the least vocal of all personas, they determine whether the move to cloud has been a success or not – simply through their sheer majority. They’ll comply with training and adopt the solution for the purpose it was intended, but it’ll be slow. They are not super-users, nor your internal advocate, and are motivated by wanting to see their team become more effective – which means meeting KPIs and receiving bonuses.

It’s important IT departments recognise the incentives that spur this persona into adoption, and tailor internal communications accordingly. Failing to address the typical objection to internal change – “what’s in it for me?” – can quickly undermine success.

The Enthusiasts

As their name suggests, Enthusiasts are very vocal about their eagerness to adopt new technologies and improve working practices, and can be readily harnessed to relay best practice to the other, less excited, personas.

Enthusiasts, however, are often walking a fine line. They will start positively, proactively investigating features and functions and accelerating themselves to super-user status faster than most. But if they discover inadequacies, they can quickly lose confidence. And because they were so vocal about their enthusiasm, this reversal can become very poisonous to the move to cloud.

This trap is best avoided by giving them advance access to the rollout roadmap, or by including them on beta tests of new functions. This acknowledges any faults they may identify, while also making them part of the solution and therefore less likely to criticise openly.

The Detractors

This group are resistant to new technologies and are frustratingly vocal about it. A Detractor often starts out as a Passive; a lack of appropriate management and minor concerns can turn them against the entire project.

These concerns may include not being involved in the platform selection, or fears that new, transparent working processes will expose their own inefficiencies.

Detractors need to be tackled positively, and quickly. Their views will be reinforced by a confrontational approach, so it is often best to surround them with Enthusiasts of the same rank or function, who are adapting to the new processes, and enjoying success from doing so.

The Dependents

What they lack in capability, Dependents more than make up for with their eagerness, enthusiasm and positivity towards new technologies. While their dedication helps encourage others to adopt, they consume the most time in training, management and helpdesk queries. Yet, while they are high risk, they can also be high reward. Help a Dependent to ‘master’ the technology, and they quickly become Enthusiasts. But if the change becomes too hard, they can turn into Detractors.

The time drain associated with a Dependent is minimised by offering constant reminders and support, tailored training and even 1-to-1 sessions.

The Laggards

Time-poor employees, who view training as an onerous task and aren’t completely convinced by the need to innovate, could be considered Laggards. They’re not vocal. In fact, they’d rather stay hidden from the IT department so they don’t have to commit to learning how to use the platform. To convince a Laggard otherwise, present evidence of the platform’s necessity – to themselves, the company or the market.

Just as with Detractors, IT teams should surround them with Enthusiasts of similar ranks and roles. But of course, if they are handled poorly, there is a danger they take the very small step to becoming Detractors themselves.

The Sceptics

While happy to go along with training and day-to-day use, Sceptics suffer from a lack of understanding of the tool or a frustration that it negatively impacts their role. They are often also of senior rank and are engrained in existing processes. They are not as vocal or obstructive as Detractors, but still need to be managed carefully as their concerns can quite easily be picked up on by others in their immediate team, given their rank.

To avoid Sceptics coming to the fore, IT teams must ensure the changes to workflows are appropriate and take into consideration every role’s typical processes and need for change. In many cases, this will require that certain roles are given additional permissions, functions and training.

Clearing the cloud adoption confusion

Driving user adoption is not just a simple case of identifying personas and communicating with them appropriately. It’s a delicate balance that – if dealt with poorly – can lead to Laggards and Enthusiasts becoming Detractors, Passives remaining unengaged, or Champions becoming frustrated – ultimately jeopardising user adoption and the move to cloud. But get it right, and adapt your deployment plans according to the personas in your business, and you’ll not only accelerate your ROI goals, but also reduce your own time burden.


December 9, 2016  1:17 PM

Enterprise evolution: How the cloud conversation is changing

Caroline Donnelly
AWS, CIO, cloud

As 2016 draws to a close, Ahead in the Clouds looks at how enterprise attitudes to cloud have changed over the last 12 months.

At the first Amazon Web Services (AWS) Re:Invent in 2012, the cloud giant worried it might not fill the 4,000 seats it had laid on for the event. Fast forward to 2016, and 32,000 people made the pilgrimage to Las Vegas to see what AWS had to say for itself.

And there was certainly no shortage of product announcements, which included performance enhancements to its cloud infrastructure services, off-premise data transfer appliances and the fleshing out of its enterprise-focused artificial intelligence proposition.

On the back of this, its execs predict the company will have – by the end of the year – expanded its cloud portfolio to include more than 1,000 new services and features, which is 300 or so more than in 2015.

Charting the changes

Having attended four out of five of the previous Re:Invents, it is clear it’s not just the size of the crowds the event attracts these days that has grown. The early years featured a developer-heavy crowd, while the customer sessions were dominated by talks from startups, scaleups and a few cloud-first enterprises.

Nowadays, the startup and developer communities are still well-represented, but the number of global enterprises speaking out about their cloud plans has ramped up considerably. Particularly those working in regulated industries and the public sector.

The way the speakers talk about cloud has shifted significantly too, since the early years, when the conversation rarely strayed beyond the cost savings and agility gains replacing on-premise tin with cloud-based infrastructure services can bring.

There was still an undercurrent of that during many of the Re:Invent customer sessions, but where this talk used to dominate the entire discussion, it has now essentially moved to setting the scene for a conversation about how – having laid the groundwork for cloud – enterprises are building on it to support their wider digital transformation plans.

In parallel with this, Amazon (and others) have rapidly expanded their product sets way beyond basic infrastructure, and are building out their value-added service propositions accordingly. Proof of that can be seen in their forays into machine learning, databases and analytics, with customers readily tapping into these tools so they can move faster, innovate and expand into new markets.

Industry-wide cloud evolution

And it’s not just at Re:Invent either. A growing proportion of the case studies that land on AitC’s desk these days no longer focus exclusively on the “lift and shift” part of the cloud migration story, with cost savings and business agility seemingly now just an accepted and assumed benefit of moving off-premise.

The way CIOs frame this part of the work has changed too, in terms of the language they employ to describe the services they use, with many referring to the server, storage and networking portion of cloud as “the basics” or “the boring bits”, as one CIO termed it recently.

Speaking to AitC at Re:Invent, Jeff Barr, chief evangelist at AWS, said how cloud migrations are proceeding nowadays has changed considerably.

In years gone by, it was not uncommon to see enterprises wait until they had moved some workloads to the cloud before they started kicking the tyres of the other services in the firm’s product portfolio, for example.

“We see both those paths being pursued in parallel. They’ll say, let’s use IaaS, and bring things over and take advantage of some of those features, but then in parallel there’s a team saying let’s build cloud native applications and we’re going to use AI and NOSQL databases and we’re going to use a very agile methodology,” he said.

Time trials

Another notable change is the time it takes enterprises to complete their cloud migrations, added Barr, and how their expectations on this point have evolved over time.

“It used to be, when you talked to senior IT, they would talk in terms of years, and that our cloud is a 2-to-4 year kind of a plan. And that, to me, really meant maybe their successor will deal with doing that because it was far too often non-specific,” he said.

“What I’ve been hearing for at least the past two years is that we have the 18-month plan or the 11-month plan. They are very specific in levels of months and the person currently in the job is the one who is going to carry it out.”

And this is all a very good way of gauging how the cloud adoption journey (particularly for the enterprise) is progressing, he said.

“These are great indicators that they’re serious about doing it, and the short duration of these plans says they want to do it now,” said Barr.

“They see immediate actual value now they would like to realise and they are really important shifts we see happening.”

And while enterprises might not be all the way there on cloud yet, as we prepare for the start of 2017, the momentum is almost certain to continue, and – if anything – accelerate.


November 4, 2016  10:56 AM

The datacentre industry’s “identity crisis” and its impact on recruitment

Caroline Donnelly

The datacentre industry operates under a veil of secrecy, which could be having a detrimental impact on its ability to attract new talent, fear market experts.  

When you tell people you write about datacentres for a living, the most common response you get is one of bafflement. Not many people – outside of the IT industry – know what they are, to be honest.

Depending on how much time I have (and how interested they look), I often try to explain, while making the point that datacentres are a deceptively rich and diverse source of stories that cover a whole range of subjects, sometimes beyond the realms of traditional IT reporting.

For instance, I cover everything from planning applications to M&A deals, sustainability, data protection, skills, mechanical engineering, construction, not to mention hardware, software and all the innovation that goes on there.

But, while it means there is no shortage of things to write about, it makes the datacentre industry a difficult one for outsiders to work out.

Clearing the datacentre confusion

This was a point touched upon during a panel debate on skills at the Datacentre Dynamics Europe Zettastructure event in London this week, overseen by Peter Hannaford, chairman of recruitment company Datacenter People.

During the discussion – the core focus of which was on how to entice more people to work in datacentres – Hannaford said the industry is in the midst of an on-going “identity crisis” that may be contributing to its struggle to draw new folks in.

“Are we an industry or are we a sector? We’re an amalgam of construction, electrical, mechanical engineering, and IT,” he said. “That’s a bit of a problem. It’s an identity crisis.”

To emphasise this point, another member of the panel – Jenny Hogan, operations director for EMEA at colocation provider Digital Realty – said she struggles with how to define the industry she works in on questionnaires for precisely this reason.

“If you go on LinkedIn and you tick what industry you work under, there isn’t one for datacentres. There is one for flower arranging, and gymnastics, but there isn’t one for datacentres,” added Hannaford.

Meanwhile, Mariano Cunietti, CTO of ISP Enter, questioned how best to classify an IT-related industry whose biggest source of investment is from property firms.

“The question that was rising in my head was, [are datacentres] a sector of IT or is it a sector of real estate? Because if you think about who the largest investors are in datacentres, it is facilities and real estate,” he added.

While a discussion on this point is long overdue, it also serves to show the sheer variety of roles and range of opportunities that exist within the datacentre world, while emphasising the work the industry needs to do to make people aware of them.

Opening up the discussion

This is ground Ahead In the Clouds has covered before, about a year ago. Back then we made the point that – if the industry is serious about wanting more people to consider a career in datacentres – it needs to start raising awareness within the general population about what they are. And, in turn, really talk up the important role these nondescript server farms play in keeping our (increasingly) digital economy ticking over.

When you consider the size of this multi-billion dollar industry, it almost verges on the bizarre that so few people seem to know it exists.

According to current industry estimates, the datacentre industry could be responsible for gobbling up anywhere between two and five per cent of the world’s energy, putting it on par with the aviation sector in terms of power consumption.

The difference is it’s not uncommon to hear kids say they want to be a pilot who flies jet planes when they get older, but I think you’d be hard pushed to find a single one who dreams about becoming a datacentre engineer.

At least, not right now, but that’s not to say they won’t in future.

Looking ahead

One of the really positive things that came across during the aforementioned panel debate was the understanding within the room that this needs addressing, and the apparent appetite among those there to do something about it.

And, as the session drew to a close, there were already discussions going on within the room about coordinating an industry-wide effort to raise awareness of the datacentre sector within schools and universities, which would definitely be a step in the right direction.

Because, as Ajay Sharman, a regional ambassador lead at tech careers service Stem Learning, eloquently put it during the debate, this is an industry where there are plenty of jobs and the pay ain’t bad, but it is up to the people working in it to make schools, colleges and universities aware of that.

“We are not telling the people who are guiding engineering students through university about our industry enough, because when you talk to academics, they don’t know anything about datacentres,” he said.

“We need to do that much more at all the universities in the UK and Europe, to promote the datacentre as a career path for engineers coming through, because there are lots of jobs there and it pays well. So why wouldn’t you steer your students into that?” Well, quite.


October 4, 2016  2:17 PM

Byte Night 2016: IT community unites to raise money and end youth homelessness

Caroline Donnelly
CIO, Guest Post

Alan Crawford, CIO of City & Guilds, is taking some time out of leading the cloud charge at the vocation training charity to join the thousands of IT workers taking part in Action for Children’s annual charity sleep out event, Byte Night, on Friday 7 October.

In this guest post, the former Action for Children IT director shares his past Byte Night experiences, and explains why he continues to take part year after year.

When I joined Action for Children as IT director in 2013 I knew Byte Night was an event every major IT company got involved with, and I considered it part of my job description to sleep out too.

On my first Byte Night, we heard from a teenager whose relationship with his mother had broken down to such an extent, he ended up spending part of his final A-Level year sofa surfing with friends. But when they were unable to give him somewhere to stay, he began sleeping in barns and public toilets.

It was at this time, thanks to the intervention of his school, that an Action for Children support worker stepped in.

By the time the October 2013 sleep out rolled round, the young man had shelter, was rebuilding the relationship with his family, had passed his A-Levels and started at university. Just thinking about his story gives me goose bumps.

Unfortunately, 80,000 young people each year find themselves homeless in the UK, and it is because of this I’ve agreed to sleep out again on Friday.

Byte Night: What’s in store?

Every Byte Night follows a similar pattern. Participants are treated to a hot meal, and take part in a quiz (or some other fun activities), which are often overseen and supported by a range of celebrities.

For some participants, who include CIOs, IT directors and suppliers, the evening also provides an opportunity to network and swap details, with a view to doing business together at a later date.

In the case of the London sleep out, all this takes place at the offices of global law firm Norton Rose Fulbright, and there will be more than 1,700 people taking part in the event at 10 locations across the UK this year. At the time of writing, the 2016 cohort are on course to raise more than £1m for Action for Children.

Regardless of where the sleep out is taking place, at 11pm all participants head out with their sleeping bags under their arms, ready to spend the night under the stars.

While that may sound a tad whimsical and romantic, the fact is sleep will come in fits and starts, and by daybreak we will all be cold and tired. But, as I trudge up Tooley Street on my way home, my heart will be warmed by memories of the night’s camaraderie and the feeling I’ve spent the evening doing something good and worthwhile.

While Byte Night may only be a few days away, there is still time to get involved and support the cause by agreeing to take part in a sleep out local to you, or by sponsoring someone already taking part.

Thank you for reading, on behalf of Byte Night, Action for Children and the vulnerable young people at risk of homelessness in the UK.


